What’s an OSS Vulnerability Janitor?

What does it take to sweep up the security vulnerabilities the industry has left unpatched or undisclosed?

Jonathan Leitschuh

A while ago, I changed my job title on LinkedIn to OSS Vulnerability Janitor.

It perfectly captures the kind of work I find myself doing over and over again in the open source software industry: quietly cleaning up security messes that shouldn’t still exist, the things that fell through the cracks, got swept under the rug, or were never properly acknowledged in the first place.

I love reading security vulnerability disclosures. I love digging into how software breaks in insecure ways. I enjoy reading the patches to understand how and why they were implemented the way they were. But what I started noticing, over and over again, is this: a lot of OSS vulnerabilities don’t get fixed. Or they do get fixed, but quietly, without a CVE. Or they get patched around downstream, while the root cause in a shared open source dependency stays vulnerable.

So I started doing something about it. I started sweeping up the vulnerabilities.

The CVE system is like this giant machine. Once a vulnerability is assigned a CVE, it enters a pipeline that kicks off a global process of disclosure automation. Security teams take notice. Tools like GitHub’s Dependabot and Mend’s Renovate light up and begin opening pull requests. Vendors start patching. Downstream consumers get alerts. The machine begins to churn.
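One way to appreciate how mechanical this pipeline is: query a public advisory database directly. Here’s a minimal sketch against the OSV.dev API, which aggregates sources like the GitHub Advisory Database; the package name and version below are just placeholders.

```python
import json
import urllib.request

# Query the public OSV.dev vulnerability database for advisories known
# to affect a specific package version (name/version are placeholders).
query = {
    "package": {"name": "pyyaml", "ecosystem": "PyPI"},
    "version": "5.3",
}
req = urllib.request.Request(
    "https://api.osv.dev/v1/query",
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    vulns = json.load(resp).get("vulns", [])

# Each advisory carries the identifiers (CVE, GHSA, ...) that scanners,
# Dependabot-style bots, and dashboards key on. If a vulnerability never
# got an entry here, nothing downstream ever lights up.
for vuln in vulns:
    print(vuln["id"], vuln.get("aliases", []))
```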

But here’s the unfortunate nature of the beast: if a vulnerability never enters the pipeline, it often never gets fixed. It doesn’t show up in dashboards. It doesn’t raise alarms. It doesn’t exist in the industry’s shared mental model of “what we should be worried about.”

If you maintain open source software, do security research, or hunt for vulnerabilities, one of the highest-impact actions you can take to make sure the vulnerabilities you find actually get fixed downstream is getting them assigned a CVE ID.

Getting a CVE assigned is often the critical step in protecting end users. It’s how the industry declares: this matters.

One of the first times this really hit me was when I revisited a 2017 critical remote code execution vulnerability discovered by Orange Tsai in GitHub Enterprise. It was a spectacular exploit chain, and GitHub had rightfully paid a hefty $5,000 bounty for it.

But years later, I went back to look more closely at the individual components of the chain. One of them involved a library called Graphite. I dug into it and realized: the vulnerable behavior was still there in the open source code.
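The behavior in question was a server-side request forgery (SSRF). Stripped to its essentials, the pattern looks something like this minimal, hypothetical sketch (not Graphite’s actual code):

```python
from urllib.request import urlopen

def render_remote(params):
    """Hypothetical handler showing the classic SSRF shape."""
    url = params["url"]  # attacker-controlled value straight from the request
    # The *server* fetches this URL, so an attacker can aim it at hosts only
    # the server can reach: cloud metadata endpoints, localhost admin ports,
    # internal services behind the firewall.
    return urlopen(url).read()
```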

GitHub had addressed the issue in their product, sure. But the underlying OSS library hadn’t been touched.

So I reached out to the maintainers in 2019 and said, hey, this SSRF is still live (link). Their response? “Oh yeah, we don’t even use that part of the code anymore. We can just remove it.”

And they did.

But how many people had unknowingly built software on top of that vulnerable code in the years since? How many more GitHub Enterprises were out there, quietly vulnerable, instead of benefiting from an upstream fix?

This is what I mean by janitorial work.

Another case that stuck with me involved a serious vulnerability in Eclipse Theia. A security researcher, @omespino, hunting in Google’s bug bounty program, discovered a way to escalate from an XSS to full root access in Google Cloud Shell, which happened to be using Eclipse Theia under the hood. The write-up was published October 1, 2020.

Google patched it. And issued a $5,000 bounty.

But the vulnerability? Disclosed without a CVE. No advisory. Crickets from Eclipse.

In February of 2021, a full four months after disclosure, I ended up chasing the vulnerability through GitHub issues, mailing lists, and backchannels at Google, eventually coming close to escalating to the MITRE CVE dispute process.

This was one case where I found myself quite frustrated with both the Google and Eclipse CNA teams. This vulnerability had been exploited in the wild. It had been public since the beginning of October. However, the Eclipse Foundation had a policy of only assigning CVEs if the maintainers requested them explicitly. And the maintainers were unresponsive. The only thing standing between the industry and coordinated disclosure was… bureaucracy.

I care deeply about this work, and seeing bureaucracy get in the way of informing end-users about vulnerabilities in their OSS components drives me a little crazy.

After my pushing, a CVE (CVE-2020-27224) was finally assigned.

Here’s the thing: vulnerability research isn’t always the thrilling, hacker-movie stuff people imagine.

More often, it’s about:

  • Waiting out a 90-day disclosure timeline, and hoping the world doesn’t catch on fire in the meantime
  • Chasing unresponsive maintainers and vendors
  • Submitting a GitHub issue, then sending an email, then another follow-up email
  • Writing and rewriting an advisory to make it clear enough to matter
  • Dealing with the sometimes insane bureaucracy of trying to get a CVE assigned
  • Arguing with a CNA about why a CVE should be assigned

One critical thing I’ve learned is that this work is about closing the loop: making sure vulnerabilities are actually fixed, documented, and known to the broader community.

That work is often thankless. It’s slow. It’s annoying. And it’s absolutely critical.

One of the most frustrating things in security tooling — especially static analysis (SAST) — is how much noise we’ve normalized.

In many cases we’ve built this whole industry around flagging the symptoms of insecure defaults rather than fixing the actual root causes.

We see the same footguns pop up again and again in SAST output because the underlying libraries and APIs were never hardened. Why are we training engineers to duct-tape around dangerous behavior in libraries instead of fixing the damn thing upstream?
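A classic example of the pattern (my choice of illustration here, not a case from any particular alert queue): for years, PyYAML’s yaml.load() defaulted to a loader that could instantiate arbitrary Python objects from untrusted input (CVE-2017-18342). The sketch below contrasts the downstream duct tape with the upstream fix.

```python
import yaml

# Downstream duct tape: every individual caller must remember the safe
# variant, and SAST flags every call site that forgets.
config = yaml.safe_load("retries: 3")

# Upstream fix: the library hardens the dangerous default itself.
# PyYAML 5.1 started warning when yaml.load() was called without an
# explicit Loader, and 6.0 made the argument mandatory.
config = yaml.load("retries: 3", Loader=yaml.SafeLoader)

# The unsafe behavior still exists, but now you have to ask for it by
# name instead of stumbling into it:
# yaml.load(untrusted_input, Loader=yaml.UnsafeLoader)
```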

That’s one of the things I try to do now: follow the alert upstream, figure out why it exists, and patch the real bug.

It’s slow. It’s messy. It’s janitorial.

But every time a dangerous default gets fixed at the source, we eliminate hundreds or thousands of future alerts. We make security better for everyone — quietly, invisibly.

This problem is pervasive, and it’s something I keep running into again and again and again.

That’s why I created unCVEed — a project to document and track publicly disclosed vulnerabilities that likely should have CVEs but don’t. It’s a public triage point, a staging area, and a signal boost for overlooked security issues.

My goal is to raise awareness of vulnerabilities, build pressure, and eventually help push these issues through the CVE system so they can trigger the machinery that leads to actual remediation.

If you know of a vulnerability that hasn’t gotten the attention it deserves — if the system missed something — please consider opening an issue.

So why isn’t this kind of work more common?

Because it’s hard. It’s draining. It’s slow. You need tact and persistence and empathy and sometimes a thick skin.

Most devs are being paid to ship features, not fix industry-endemic issues.

You have to convince maintainers, often volunteers, that a feature they lovingly built is actually dangerous. That an API they designed ten years ago is actively harming people. That the impact of their decision isn’t theoretical: it’s dozens of CVEs, hundreds of security hours wasted downstream, and thousands of confused developers.

You have to have that conversation with kindness, but also conviction.

It’s not glamorous. But if we want the open source ecosystem to actually get better — not just shuffle risk around — we need more people doing this work.

I love open source.

I love that the same project can power a Fortune 500’s flagship product and help a high school student learn how to code.

That kind of scale and accessibility is magical. But it also means we owe it to the ecosystem to make sure it’s safe.

That’s why I keep doing this work — even when it’s exhausting. Even when it feels like yelling into the void.

Because someone needs to.

If this resonates with you — if you also find yourself cleaning up long-forgotten security issues, chasing down unpatched footguns, or yelling into the CVE void — you might be a vulnerability janitor too.

You’re not alone.

I’m involved in the Open Source Security Foundation (OpenSSF), particularly in the Vulnerability Disclosure Working Group (join us on Slack). If this kind of work speaks to you, we’d love to have you join us. Or just shoot me a message. I’d love to hear your story.

We need more janitors.

Because if we don’t clean up the messes, who will?

If you found this post insightful, reach out; I’m always curious to hear about software security, open source, or developer tooling. Happy to hop on a call to connect!

📧 Email: [email protected]
⛓️ LinkedIn: https://www.linkedin.com/in/jonathan-leitschuh/
📅 Schedule Time: https://calendly.com/jlleitschuh

