Security work is often most visible when something goes wrong: a compromised package, a leaked credential, a typosquatted extension, an abused automation token. In those moments, it becomes clear that software infrastructure is not abstract. It is operational, exposed, and trusted far more often than it is inspected.
Open VSX belongs to that category of infrastructure. It is an open source, vendor-neutral extension registry for tools built on the VS Code™ extension API, and it powers a rapidly expanding ecosystem of AI-native IDEs, cloud development environments, and VS Code-compatible platforms, including Amazon’s Kiro, Google’s Antigravity, Cursor, IBM’s Project Bob, VSCodium, Windsurf, Ona (Gitpod), and others.
It is easy to think of an extension registry as a convenience layer around development tools. In reality, it sits much closer to the heart of the software supply chain. Extensions influence how developers write, test, and review code, and increasingly how they interact with AI-assisted workflows in modern development environments. They operate in proximity to source repositories, terminals, secrets, build systems, CI/CD pipelines, and cloud-connected services. From a security point of view, that proximity is decisive.
This is why the Eclipse Foundation has been investing in Open VSX security in a more structured and deliberate way. That includes working with members and partners such as AWS, Google, Cursor, and the Alpha Omega open source cybersecurity project to strengthen the platform as usage continues to grow.
The objective is not security theater, nor to burden publishers with ineffective controls. It is to reduce meaningful risk at the points where it enters the system. That means looking not only at what is published, but also at how it is published, how the platform itself is built and operated, and how quickly suspicious activity can be detected and contained.
That is the right way to think about shared developer infrastructure. One does not secure it by adding a single tool or a single policy, but by tightening the entire chain of trust.
A good place to start is the publication path itself.
For some time, extension ecosystems have depended heavily on post-publication reactions. A problematic extension is reported, investigated, and then removed or restricted. That model is understandable, but it is not sufficient once the ecosystem becomes large enough, and once the consequences of a malicious extension become more serious. In an environment where extensions may touch code, credentials, AI context, and developer workflows, waiting until after publication is not always an acceptable security posture.
So one of the most important changes we have introduced is a move toward pre-publication verification and scanning.
This matters because it changes the default assumption. Rather than treating publication as an open door followed by later review, we are adding controls before content is distributed. In practical terms, this includes similarity checks on extension names and namespaces, which help reduce typosquatting and impersonation. It includes secret scanning, so that extensions containing accidentally packaged credentials or tokens can be caught before they are made available. It also includes malware-oriented scanning, backed by tools and workflows that allow suspicious uploads to be quarantined and reviewed instead of immediately passing through the system.
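To make the first two checks concrete, here is a minimal Python sketch of name-similarity detection and credential scanning. This is illustrative only, not Open VSX's actual implementation: the similarity threshold and the credential patterns are assumptions chosen for the example.

```python
import difflib
import re

# Hypothetical threshold and patterns -- illustrative, not the registry's rules.
SIMILARITY_THRESHOLD = 0.85
SECRET_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),  # GitHub personal access token shape
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key ID shape
]

def looks_like_typosquat(candidate: str, existing_names: list[str]) -> bool:
    """Flag a name suspiciously close to (but not equal to) an existing one."""
    for name in existing_names:
        ratio = difflib.SequenceMatcher(
            None, candidate.lower(), name.lower()
        ).ratio()
        if candidate.lower() != name.lower() and ratio >= SIMILARITY_THRESHOLD:
            return True
    return False

def contains_secret(text: str) -> bool:
    """Scan packaged file content for credential-shaped strings."""
    return any(p.search(text) for p in SECRET_PATTERNS)
```

Real scanners use richer signals (edit distance over homoglyph-normalized names, entropy checks, provider-specific token validation), but the shape of the control is the same: the check runs before the artifact is distributed, not after.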
This represents an important evolution in posture. It is not about assuming that publishers act in bad faith. Most do not. It is about recognizing that modern software supply chains are exposed to both malice and error. Security must account for both.
This is also why the supporting workflows matter as much as the scanners themselves. Detection without triage is noise. Control without review is fragility. We have therefore added the operational pieces needed to make these checks usable in practice: scanning workflows, administrative visibility, and support for asynchronous external scanners where deeper analysis cannot reasonably be performed in-line. That may sound procedural, but this is where many security programs succeed or fail. A control only improves security when it can be operated reliably.
The second area of work has been the integrity of the platform’s own build and release chain.
This receives less public attention, but it is foundational. If one is serious about supply chain security, one cannot focus only on the extension artifact and ignore the automation that produces, releases, and maintains the registry. Attackers are often patient in this regard. They do not always attack the front door. They look for the build script, the workflow token, the overly trusted dependency, and the release credential that lives longer than it should.
That is why we have hardened several parts of the Open VSX release process. Release automation now uses more trusted publishing patterns, reducing reliance on long-lived credentials. GitHub Actions and workflow dependencies have been pinned more carefully. Workflow token usage has been tightened. Continuous security assessment has been added at the repository level. In parallel, we have reduced exposure in the build chain itself, including disabling Yarn scripts by default in places where lifecycle-script execution introduces unnecessary risk.
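A workflow hardened along these lines might look roughly as follows. This is an illustrative fragment, not Open VSX's actual configuration; the commit SHA is a placeholder, not a real release of the action.

```yaml
# Illustrative GitHub Actions hardening -- not the actual Open VSX workflow.
permissions:
  contents: read          # default the workflow token to read-only

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Pin to a full commit SHA rather than a mutable tag like @v4
      - uses: actions/checkout@<full-commit-sha>
      - run: yarn install --immutable
```

Pinning to a full commit SHA means a compromised or retargeted tag cannot silently change the code that runs, and a read-only default token limits what a compromised step can do. On the build side, Yarn Berry supports `enableScripts: false` in `.yarnrc.yml`, which stops dependency lifecycle scripts from executing during install.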
This is not glamorous work. It does not produce a dramatic interface change. But in security, the most effective improvements are often invisible: they reduce risk without demanding constant attention from users. That is a sign of maturity rather than modesty.
A third part of the effort has focused on containment.
No responsible security program assumes prevention is absolute. Credentials are still lost. Tokens are still mishandled. New attack patterns still emerge. The relevant question is therefore not whether incidents are possible, but whether the system is designed to limit their consequences.
Here too, Open VSX has become stronger. We now have the administrative capability to revoke a user’s personal access token when compromise is suspected. Authentication flows and token refresh handling have been improved. We have also moved further toward short-lived infrastructure access patterns, which is simply a better security model than reliance on persistent static credentials.
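The difference between static and short-lived credentials can be sketched in a few lines. This is a simplified illustration with an in-memory store and an assumed one-hour TTL, not Open VSX's token schema or implementation:

```python
import secrets
import time

# Hypothetical TTL and store -- illustrative only.
TOKEN_TTL_SECONDS = 3600
_tokens: dict[str, float] = {}  # token value -> expiry timestamp

def issue_token() -> str:
    """Issue a token that expires on its own, even if never revoked."""
    token = secrets.token_urlsafe(32)
    _tokens[token] = time.time() + TOKEN_TTL_SECONDS
    return token

def revoke_token(token: str) -> None:
    """Administrative revocation when compromise is suspected."""
    _tokens.pop(token, None)

def is_valid(token: str) -> bool:
    expiry = _tokens.get(token)
    return expiry is not None and time.time() < expiry
```

The security value is in the default: a leaked short-lived token expires whether or not anyone notices the leak, while revocation remains available for the cases where someone does.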
Alongside this, we have made smaller but important hardening changes to the application itself: tighter validation of publish-time content size, removal of stack traces from error responses, removal of version details from HTTP headers, and backend corrections to ensure temporary extension files do not remain where they should not. None of these measures, taken alone, would define a security program. Taken together, they reflect something more important: discipline. And discipline is what turns security from aspiration into engineering practice.
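The individual hardening measures above are each small enough to sketch. The following Python fragment is illustrative only: the header names, size cap, and message text are assumptions for the example, not Open VSX's actual configuration.

```python
# Illustrative response hardening and publish-time validation.
MAX_UPLOAD_BYTES = 512 * 1024 * 1024          # hypothetical size cap
SENSITIVE_HEADERS = {"server", "x-powered-by"}

def within_size_limit(content_length: int) -> bool:
    """Reject oversized uploads before any further processing."""
    return 0 < content_length <= MAX_UPLOAD_BYTES

def sanitize_headers(headers: dict[str, str]) -> dict[str, str]:
    """Drop headers that leak server software or version details."""
    return {k: v for k, v in headers.items()
            if k.lower() not in SENSITIVE_HEADERS}

def error_response(exc: Exception) -> dict:
    """Return a generic body; full details belong in server-side logs only."""
    return {"status": 500, "message": "Internal server error"}
```

Each of these closes a small information channel: an attacker probing the service learns neither the framework version nor the shape of internal failures.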
For the operations side of the audience, there is another point worth emphasizing. A registry is not exposed only through what it stores. It is also exposed through how it is used and how it is abused.
This is why service protection and observability have also been part of the work. We replaced static rate limiting with a more dynamic model, which is a more realistic response to abusive traffic and changing usage patterns. We also improved visibility into .vsix download activity, including monitoring intended to make unusual spikes and anomalous behavior more visible.
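A dynamic limiter can be as simple as a token bucket whose refill rate is tightened when monitoring reports anomalous traffic. This is a simplified sketch; the rates and the adjustment policy are illustrative, not the registry's actual tuning:

```python
import time

class AdaptiveRateLimiter:
    """Token bucket whose refill rate can be tightened under abusive load."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate             # tokens added per second
        self.burst = burst           # bucket capacity
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise reject the request."""
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

    def throttle(self, factor: float) -> None:
        """Tighten the limit when monitoring flags anomalous traffic."""
        self.rate *= factor
```

The value of the dynamic model is the feedback loop: the same observability that surfaces unusual download spikes can feed back into `throttle`, so limits respond to actual conditions rather than a fixed guess made at deployment time.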
That is not separate from security. It is part of it. In shared infrastructure, availability and security are tightly connected. A service that cannot remain stable under pressure becomes, very quickly, a security problem as well as an operational one.
Finally, we have started to improve transparency around what Open VSX itself ships. The SBOM work underway is important for a simple reason: one cannot manage dependency risk with confidence without a reliable inventory of components. SBOMs are not a remedy in themselves, but they make vulnerability response more disciplined and give both engineering and operations teams a clearer basis for triage and remediation.
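For readers unfamiliar with the format, a minimal SBOM in CycloneDX's JSON form looks roughly like this. The component shown is hypothetical, chosen only to illustrate the structure:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "example-dependency",
      "version": "1.2.3",
      "purl": "pkg:maven/org.example/example-dependency@1.2.3"
    }
  ]
}
```

The `purl` (package URL) field is what makes the inventory actionable: it identifies each component precisely enough that a new CVE advisory can be matched against the SBOM mechanically rather than by hand.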
When I step back and look at the work completed so far, this is not a series of isolated controls. It is a shift in the security model.
We are moving from a posture that depended too heavily on reaction to one that is more preventive, layered, and operationally grounded. This includes reducing the likelihood of unsafe content being published, strengthening the build and release chain, tightening credential handling, hardening the service itself, and improving the visibility needed to detect and respond to abuse with more precision.
That is the standard Open VSX should meet.
It is important infrastructure in a part of the ecosystem where trust is both necessary and often granted too easily. Our responsibility is to make that trust better deserved. The work already completed is a meaningful step in that direction, and it gives us a firmer foundation for the work that remains.