The funny thing about safety blankets is they can double as stage curtains for security theater.
“When will a cryptography-relevant quantum computer exist?” is a question many technologists are pondering as they stare into crystal balls or entrails.
Two people I admire recently made a public long bet about that question, with a $5000 donation to charity as stakes. The Register even covered this bet.
This public bet came on the heels of Google and Cloudflare both announcing a 2029 timeline for migrating fully to post-quantum cryptography. Meanwhile, a lot of uninteresting conflict has been occurring on mailing lists frequented by people with strong opinions about computer networking protocols, which has resulted in at least a few awkward mea culpa emails.
Last night, I opened an issue on the Fediverse Key Transparency project to discuss two options for post-quantum signatures in the protocol. Predictably, this led to a few folks insisting on discussing hybrids for post-quantum signatures (albeit off of GitHub).
So, let’s have a raw, honest talk about hybrid post-quantum KEMs and hybrid signature schemes.
I’m putting this in a separate section, up front, so that none of what follows confuses anyone about where I stand (because I know some jerk-ass is going to try to misconstrue my words).
I generally prefer hybrid KEMs–not out of any practical concern over ML-KEM’s security (or any other PQ KEMs, generally), but for reasons I’ll explain later in this blog post.
I do NOT prefer hybrid signatures. There is no analogous “harvest now, decrypt later” style attack with signatures.
Okay, are we on the same page? Do not proceed onto the rest of the blog post until you’ve internalized these two statements.
Also, I’m not really worried about quantum computers, but I do see other advantages to adopting post-quantum cryptography even if a quantum computer is never built.
The main arguments to justify hybrid constructions are purely hypothetical: “What if we’re wrong about the security of the newfangled lattice stuff, and we bet the entire house on that?”
When the stakes are high, and you’re dealing with the confidentiality of potentially decades of sensitive communications, hedging your bets against even unlikely failure modes just seems practical and reasonable.
The main risk of a future quantum computer is that it could break the encryption we use now. This is often called “harvest now, decrypt later” (or HNDL, because project managers have a love affair with acronyms).
A hybrid construction (e.g., X-Wing) is an example of one of these hedged bets. In order for X-Wing to be insecure, you need both ML-KEM-768 and Curve25519 to be broken.
You can frame this argument mathematically as Security(X-Wing) >= max(A, B), where A is the security of ML-KEM-768 and B is the security of Curve25519: an attacker has to break both components, so the hybrid is at least as strong as the stronger of the two.
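To make the “break both or break neither” property concrete, here’s a minimal Python sketch of an X-Wing-style combiner. The random bytes are hypothetical stand-ins for the real ML-KEM-768 and X25519 outputs, and the label and byte ordering reflect my reading of the X-Wing spec, so check against the actual document before relying on any of this.

```python
import hashlib
import secrets

# Hypothetical stand-ins for the component KEM outputs; in a real
# implementation these come from ML-KEM-768 and X25519 respectively.
ss_mlkem = secrets.token_bytes(32)   # ML-KEM-768 shared secret
ss_x25519 = secrets.token_bytes(32)  # X25519 shared secret
ct_x25519 = secrets.token_bytes(32)  # X25519 ephemeral public key ("ciphertext")
pk_x25519 = secrets.token_bytes(32)  # X25519 recipient public key

def xwing_combiner(ss_m: bytes, ss_x: bytes, ct_x: bytes, pk_x: bytes) -> bytes:
    """Sketch of an X-Wing-style combiner: hash both shared secrets
    together with the X25519 transcript under a fixed
    domain-separation label. An attacker who recovers only ONE of
    the two shared secrets still cannot compute this digest."""
    label = b"\\.//^\\"  # 6-byte domain-separation label (per my reading of the spec)
    return hashlib.sha3_256(label + ss_m + ss_x + ct_x + pk_x).digest()

shared = xwing_combiner(ss_mlkem, ss_x25519, ct_x25519, pk_x25519)
```

Binding the X25519 ciphertext and public key into the hash is what lets the construction lean on ML-KEM’s robustness properties without separately hashing the (much larger) ML-KEM transcript.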
These are reasonable and convincing arguments to make in the context of discussing a HNDL threat model. But they are not really cryptographic security arguments.
No one who makes this argument has, to my knowledge, raised a specific attack against lattices that doesn’t also break elliptic curves. If they have, I expect their novel cryptanalysis paper to materialize on IACR’s ePrint Archive instead of a mailing list debate.
If you had a quantum computer in 2050, you would not be able to forge classical signatures at any point in time before 2050.
If those signatures were released under a {key, certificate, binary} transparency scheme, the potential for a forgery is even less likely.
Therefore, the same arguments that folks propose in favor of Hybrid KEMs do not carry over to Hybrid Signatures at all.
You either accept that lattices are secure, or you don’t. And you either trust the experts that participated in the international standardization effort, or you don’t.
Some people are earnestly claiming that lattice cryptography is too new to trust! So you must combine it with Curve25519, just to be sure.
The Curve25519 paper was published in 2005. The NTRU patent was filed in 1997. (This was before we had the AES block cipher, for flavor.)

The NIST post-quantum cryptography project was an international standardization effort launched in December 2016. There’s nearly a decade of cryptanalysis research and expert debate on pqc-forum.
Not all years are equal. In 2025, the IACR published nearly five times as many papers as in the year 2005. Anyone tackling novel cryptanalysis approaches today has the benefit of access to the knowledge that led to all the years prior.
The specific ML-KEM and ML-DSA designs might be relatively new, but they’re well-designed algorithms that were well-studied by mathematicians, cryptographers, computer scientists, and unconventional hackers for long enough to be considered well-understood today.

Sometimes, people will point out the attacks against SIKE as some indictment of the standardization process–when that’s actually evidence of the process working correctly.
SIKE was a strange duck to begin with: It was the only submission at the time that belonged to the isogeny class of problems, which was not widely understood by many cryptographers due to it being the newest problem at the table. It wasn’t selected for standardization, but was promoted to the final round of KEMs under consideration. Many cryptography experts were unsure how to feel about it.
And then an obscure math paper from the 1990s broke it with a laptop over a weekend.
Whether a particular piece of technology gets adopted by any community is not actually a technical problem, it’s a political one.
Hybrid KEMs are an easier sell to people who are not cryptography experts than pure post-quantum KEMs, for reasons that have more to do with psychological safety than cryptographic safety.
This isn’t to say that those people are wrong to care about the things they care about!
People have to make the best of the cards they were dealt, and if you don’t have the deep technical background to make a decision about which cryptography configuration to guard against a kind of computer that doesn’t meaningfully exist yet, you’re going to take shortcuts.
Since confidentiality is at risk today from quantum computers in the indeterminate future, getting any PQ KEM (hybrid or pure) adopted sooner rather than later has a real-world security benefit downstream of the stakeholders making these judgment calls.
So advocating for hybrid KEMs is just the path of least resistance: Stop the bleeding now, let the experts quibble about the nerd shit you don’t particularly care about later.
However, there is no realistic harm in allowing pure ML-KEM or ML-DSA today.
I am very confident in the security of each post-quantum KEM being discussed today, but my order of preference is:
Although ML-KEM-512’s security is still estimated to require a number of queries far beyond any attacker’s reach today, I prefer to treat it as a sort of cryptanalysis canary, so I don’t include it.
When it comes to signatures, however, I’m stuck between ML-DSA-44 and Ed25519, and not considering hybrids at all.
The main reason to consider Ed25519 is if you’re a quantum computing skeptic who doesn’t believe the engineering problems will be surmountable in the foreseeable future.
A friend pointed out after I pressed Publish on this post:
Psychological safety only matters to a point.
Understanding the foundations of cryptography well enough to trust or distrust it is a technical matter best left to experts, not anyone’s particular feelings.
Buddy, my whole fucking career is managing implementation risks.
This is why I talk a lot about side-channels and other classes of cryptographic attacks that, strangely enough, the makers of so-called private messaging apps sneer at.
Implementation bugs plague every cryptosystem, not just new ones. wolfSSL just published a critical CVE that broke ECDSA when a newer signature algorithm (either EdDSA or ML-DSA) is also enabled. The root cause of this bug has more to do with how ECDSA is handled than anything else; the newer algorithms merely have implementation details that set up the attack path.
You can wax poetic about the potential impact of ML-DSA implementation bugs until the cows come home. We’ve been living in the critical RSA/ECC vulnerability landscape for decades.
Implementation bugs will happen, because they’ve already been happening.
On that note, ML-DSA is actually safer than ECDSA (which is what is actually deployed in WebPKI today, not EdDSA): it’s hedged against both nonce reuse and fault attacks by design. ML-DSA in practice gives you the benefits of RFC 6979-style deterministic nonces, plus fault resistance that RFC 6979 alone doesn’t offer.
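To make the hedging concrete, here’s a rough Python sketch of the idea behind hedged signing. The function name and use of SHAKE-256 are illustrative; this is not the exact FIPS 204 byte layout, just the shape of the technique.

```python
import hashlib
import secrets

def derive_signing_randomness(key_seed: bytes, message: bytes,
                              hedged: bool = True) -> bytes:
    """Sketch of hedged per-signature randomness derivation (ML-DSA-style).

    The randomness is derived from the secret key, the message, AND
    (in hedged mode) fresh random bytes. Deterministic mode (rnd = 32
    zero bytes) gives repeatable, RFC 6979-style nonces; hedged mode
    additionally resists fault attacks, because a glitched re-signing
    run never reuses the same randomness.
    """
    rnd = secrets.token_bytes(32) if hedged else bytes(32)
    return hashlib.shake_256(key_seed + rnd + message).digest(64)

key = bytes(32)           # placeholder secret key seed
msg = b"example message"

# Deterministic derivation is repeatable (like RFC 6979)...
det1 = derive_signing_randomness(key, msg, hedged=False)
det2 = derive_signing_randomness(key, msg, hedged=False)

# ...while hedged derivation never repeats, so an induced fault can't
# coerce the signer into reusing a nonce.
hed1 = derive_signing_randomness(key, msg)
hed2 = derive_signing_randomness(key, msg)
```

The appeal is that you get the “no RNG, no problem” property of deterministic signing as a floor, with randomness layered on top as a ceiling, rather than having to pick one failure mode to accept.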
Further, we know how to implement ML-DSA safely: constant-time implementations exist, and community-sourced test vectors catch implementations with other classes of failure modes.
Sure, no assurance practice can stop some slop vendor from “vibe-coding” their own cryptography protocols in Brainfuck and YOLOing their test suite, but those aren’t the kinds of cryptography libraries anyone with intact sanity should adopt to begin with.
Let’s humor the lattice skeptic arguments about pure ML-DSA and consider the case that a Certificate Authority decides to implement the same bug as early Dilithium (precursor to ML-DSA) reference implementations which allows anyone to recover the CA’s private key.
If this happens and escapes detection long enough for a certificate to be mis-issued, then we can rely on the existing Certificate Transparency and CA/Browser Forum mechanisms that already kill Certificate Authorities today.
If not by deliberate effort, then by natural selection, we will end up with secure implementations, dammit.
Meanwhile, Ed25519 was affected by a double public key oracle attack that allowed secret key recovery, comparable in severity to the 2018 Dilithium attack mentioned above.
Keeping systems vulnerable to quantum attacks out of fear of implementation faults isn’t a good trade-off, in my humble opinion.
With the industry moving towards post-quantum cryptography, any action we take today should avoid adding friction to that migration.
So, while I’m not personally worried about quantum computers breaking anything soon, I will be preferring ML-DSA-44 over Ed25519 in my current projects.
Header art by MrJimmyDaFloof.