There is a new term floating around software engineering circles: “Vibe coding.” In its original formulation, the idea was not simply “using AI to help write code.” It meant something more specific and more dangerous: prompting an LLM to generate software, accepting the output, and largely “forget[ting] that the code even exists.” Simon Willison, who has been one of the sharper observers of this phenomenon, drew the distinction explicitly in March and May 2025, warning that AI-assisted development is not the same thing as surrendering accountability for the resulting code. In vibe coding, the user simply prompts an AI tool with a problem to be solved and trusts the AI model to “do the right thing.” It’s a black box creating a black box.
That distinction matters because modern software governance has moved in exactly the opposite direction. Over the last several years, regulators, customers, procurement offices, and security teams have all been demanding greater software transparency, not less. The Software Bill of Materials, or SBOM, is the flagship artifact of that movement. CISA describes SBOMs as a core tool for software component transparency, and both CycloneDX and SPDX have matured into widely used standards for describing what software contains, including components, dependencies, and related metadata. It’s all about knowledge, auditability, and ultimately accountability. It is intended to surface hidden malicious code (including code smuggled in by cutting and pasting old snippets) as well as components carrying known vulnerabilities or weaknesses.
Put bluntly, vibe coding and SBOMs are not merely different practices. They are competing philosophies. One says, “Ship it; the model handled it.” The other says, “Before we trust this artifact, tell us what is in it, where it came from, what depends on it, and what risk it imports.” That is not a stylistic disagreement. It is the difference between improvisation and chain-of-custody. It’s the difference between knowing what’s in the sausage and merely believing (or hoping) that it’s sausage.
The attraction of vibe coding is obvious. It lowers friction. It accelerates prototyping. It allows non-specialists, or even specialists working outside their deepest domain, to produce working software quickly. In low-risk settings, that can be a feature rather than a bug. Willison himself has made clear that rapid, AI-driven prototyping can be perfectly rational for experiments, toys, or disposable internal tools. His warning is not that AI assistance is inherently reckless. It is that the prototype that “mostly works” has an uncanny tendency to become tomorrow’s production system. Face it: we have been winging it when writing code since the 1940s, with real consequences.
And that is where the SBOM problem begins.
An SBOM is supposed to answer a basic but essential question: What exactly is in this software? The NTIA’s earlier minimum-elements work and CISA’s more recent SBOM guidance both frame the issue as one of transparency, vulnerability management, and supply-chain risk reduction. OWASP’s CycloneDX materials go further, positioning SBOMs as operational artifacts that support vulnerability management and broader supply-chain assurance, while SPDX continues to evolve as an open standard for representing software systems and related security references.
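The shape of that answer is easiest to see in a concrete document. The sketch below builds a minimal CycloneDX-style SBOM for a hypothetical service; the component names, versions, and purls are illustrative, not real packages, and a production SBOM would also carry hashes, licenses, and supplier metadata.

```python
import json

# Minimal CycloneDX-style SBOM for a hypothetical service.
# All component names and versions here are made up for illustration.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "metadata": {
        "component": {
            "type": "application",
            "name": "example-service",   # hypothetical application
            "version": "2.3.0",
        }
    },
    "components": [
        {"type": "library", "name": "http-client", "version": "4.7.1",
         "purl": "pkg:npm/http-client@4.7.1"},
        {"type": "library", "name": "json-schema-lite", "version": "0.9.2",
         "purl": "pkg:npm/json-schema-lite@0.9.2"},
    ],
    # The dependency graph is what lets a consumer trace risk, not just list parts.
    "dependencies": [
        {"ref": "pkg:npm/http-client@4.7.1",
         "dependsOn": ["pkg:npm/json-schema-lite@0.9.2"]},
    ],
}

print(json.dumps(sbom, indent=2))
```

Even this toy example shows why the artifact matters: the `components` list answers “what is in it,” while `dependencies` answers “what depends on what,” which is the part vibe coding never bothers to record.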
That is manageable when developers know what they imported, why they imported it, and how the software is assembled. It becomes materially harder when an engineer — or worse, a business user with a coding assistant — prompts a model repeatedly until the code “works,” accepts broad changes wholesale, and never really inspects the dependency graph, license pedigree, provenance, or transitive packages. That is not just a code-quality issue. It is a software-inventory issue. It is a provenance issue. It is an attestation issue. And increasingly, it is a compliance issue. CISA’s 2025 update to the SBOM minimum elements, the continuing federal focus on software transparency, and Linux Foundation commentary tying SBOM usefulness to real-world compliance environments such as the EU Cyber Resilience Act all point in the same direction: organizations are expected to know their software, not merely deploy it.
The central irony is that AI can generate code faster than most organizations can govern it. Vibe coding compresses the time required to create software artifacts. SBOM practice, by contrast, requires disciplined enumeration of those artifacts. If AI turns one developer into five, but the organization’s inventory, review, signing, and dependency-management disciplines remain static, the governance gap does not stay the same. It widens. Rapid generation without corresponding transparency is not innovation; it is liability with a user interface. Indeed, we can use vibe coding to test for vulnerabilities in “known” and verified code. Maybe.
This is why the right comparison is not “vibe coding versus secure coding.” It is “vibe coding versus software accountability.” SBOMs are not magic. They do not tell you whether the code is good. They do not prove that a component is safe. They do not eliminate false positives, incomplete package metadata, or stale vulnerability references. But they do force one healthy discipline: naming the pieces. OWASP’s 2025 software-supply-chain guidance expressly treats centrally generating and managing SBOMs as part of the answer to software supply-chain failure. That is a governance function, not a programming trick.
There is another twist. AI-generated code may make SBOMs more important, but also less reliable if organizations are sloppy about how they are produced. A nominal SBOM generated at build time is only as useful as the completeness of the build process, the reproducibility of the environment, and the fidelity of the package-resolution logic underneath it. If the model injects code snippets directly, suggests vendored packages, rewrites lockfiles, or mixes ecosystem tooling in unusual ways, the resulting SBOM may look authoritative while missing critical context. SPDX’s ongoing evolution beyond traditional software artifacts and CycloneDX’s expansion into broader BOM use cases reflect a growing recognition that inventory must become richer and more expressive as software assembly becomes more complex.
So what should organizations do? First, stop pretending that “AI-assisted development” is a single category. There is a profound difference between a developer using an LLM as a pair programmer while reviewing every diff, and a user pasting errors into a model until a deployment succeeds. Willison’s critique is valuable precisely because it preserves that distinction. Not all AI-assisted programming is vibe coding. But true vibe coding — prompt, accept, deploy, ignore — is fundamentally at odds with modern software governance.
Second, treat SBOM generation as a build-governance control, not a paperwork exercise. An SBOM created after the fact, manually, for procurement theater, is of limited value. An SBOM integrated into CI/CD, tied to reproducible builds, linked to VEX or vulnerability disposition workflows where appropriate, and preserved as an auditable artifact is something else entirely. OWASP’s advisory on SBOM implementation emphasizes automation and real-time monitoring for vulnerability management, which is exactly where mature organizations need to be.
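What “integrated into CI/CD” means in practice is that the inventory is derived from the build environment itself, not reconstructed later for an auditor. The sketch below, using only the Python standard library, enumerates every distribution actually installed at build time and emits a CycloneDX-style component list; a real pipeline would use dedicated SBOM tooling, but the governance principle is the same.

```python
import json
import sys
from importlib.metadata import distributions

# Build-time inventory sketch: list every Python distribution actually
# present in this environment. Because it runs inside the build, the
# output reflects what will ship, not what someone later remembers.
components = []
for dist in distributions():
    name = dist.metadata["Name"]
    version = dist.version
    components.append({
        "type": "library",
        "name": name,
        "version": version,
        "purl": f"pkg:pypi/{name.lower()}@{version}",
    })

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": sorted(components, key=lambda c: c["name"].lower()),
}

# In CI this would be written to an artifact store and signed;
# here we simply print it.
json.dump(sbom, sys.stdout, indent=2)
```

Run as a pipeline step, archived, and signed, this kind of output becomes an auditable artifact rather than procurement theater.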
Third, insist on provenance and review controls around AI-generated code. The question is not merely “what packages are in here?” It is “what did the model cause us to include, change, or trust?” SBOMs answer part of that question, but only part. The rest requires signed commits, protected branches, policy-enforced dependency management, code review, and build attestation. In other words, SBOMs are necessary, but they are not sufficient. They tell you what is present. They do not tell you whether your development culture has abdicated responsibility.
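One concrete way to ask “what did the model cause us to include?” is to diff what the lockfile declares against what the SBOM reports. The sketch below uses deliberately simple formats (pinned `name==version` lines and a minimal component list) with made-up package names; a real check would parse the ecosystem’s actual lockfile and run as a policy gate.

```python
# Governance-check sketch: flag drift between declared dependencies
# and the SBOM's view of what actually shipped.

def parse_lock(lines):
    """Parse pinned 'name==version' lines into a set of (name, version)."""
    pins = set()
    for line in lines:
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, _, version = line.partition("==")
            pins.add((name.strip().lower(), version.strip()))
    return pins

def sbom_pins(sbom):
    """Extract (name, version) pairs from a CycloneDX-style component list."""
    return {(c["name"].lower(), c["version"]) for c in sbom.get("components", [])}

def drift(declared, shipped):
    """Return (undeclared, missing): what shipped without being declared,
    and what was declared but never made it into the SBOM."""
    return shipped - declared, declared - shipped

# Illustrative inputs, not real packages.
lock = ["requests==2.31.0", "urllib3==2.2.1"]
sbom = {"components": [
    {"name": "requests", "version": "2.31.0"},
    {"name": "urllib3", "version": "2.2.1"},
    {"name": "leftover-helper", "version": "0.9.0"},  # injected, never declared
]}

undeclared, missing = drift(parse_lock(lock), sbom_pins(sbom))
print("undeclared in lockfile:", sorted(undeclared))
print("missing from SBOM:", sorted(missing))
# → undeclared in lockfile: [('leftover-helper', '0.9.0')]
# → missing from SBOM: []
```

A nonzero “undeclared” set is exactly the signature of a model quietly vendoring or rewriting dependencies, and it is the kind of finding an SBOM alone surfaces but only review culture can act on.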
That is the larger lesson. Vibe coding is not the enemy because it uses AI. It is the enemy because it tempts organizations to confuse software generation with software understanding. SBOMs, for all their limitations, are a rebuttal to that confusion. They embody a simple but stubborn proposition: If you cannot name what is in your software, you do not control your software.
And in 2026, “the model wrote it” is not a defense. It is an admission.