The post-quantum cryptography conversation has gotten loud, and most of it is wrong. Vendors are selling "migration programs" to teams that have nothing to migrate yet. Standards bodies shipped enough to confuse procurement and not enough to deploy at scale. The small set of teams who genuinely need to move on PQC migration in 2026 are mostly not the ones being marketed to.
I run TLS infrastructure for a living, and I've been the person staring at a freshly published NIST FIPS at 11pm, trying to work out whether it changes anything we ship next quarter. Here's what's actually happening in production handshakes right now, what to do this year, and what's safe to ignore.
Where Post-Quantum TLS Actually Stands in 2026
Production TLS today negotiates exactly one post-quantum algorithm in any meaningful volume: X25519MLKEM768, a hybrid key exchange combining classical X25519 with ML-KEM-768. According to Cloudflare's transparency reporting, this hybrid runs on roughly 30% of TLS 1.3 connections. Chrome enabled it by default in version 124 (April 2024), and OpenSSL 3.5 ships server-side support out of the box. Everything else is drafts and demos.
NIST finalized three algorithms in FIPS 203, 204, and 205 in August 2024:
| Algorithm | Former Name | Purpose | FIPS |
|---|---|---|---|
| ML-KEM | Kyber | Key encapsulation | 203 |
| ML-DSA | Dilithium | Digital signatures | 204 |
| SLH-DSA | SPHINCS+ | Hash-based stateless signatures | 205 |
Of those three, only ML-KEM has shipped in real handshakes at scale, and only inside a hybrid construction. The hybrid matters: if ML-KEM turns out to have a flaw (and lattice cryptanalysis has surprised researchers before), the X25519 half still gives you classical 128-bit security. This is not paranoia. It's the same reason we ran SHA-1 + SHA-256 transition periods.
Server-side options for 2026:
- OpenSSL 3.5+: ships X25519MLKEM768 by default
- BoringSSL: has shipped X25519MLKEM768 for over a year
- rustls 0.23+: via the rustls-post-quantum crate
- nginx 1.27 with OpenSSL 3.5: negotiates it without config changes
If you're behind a CDN, you almost certainly already terminate PQ-hybrid handshakes for browser traffic without having configured anything. What you cannot deploy in production today: post-quantum signatures in publicly-trusted certificates. The CAs aren't issuing them. We'll get to why.
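One way to check whether a server you run actually negotiates the hybrid group is to offer it, and nothing else, from an OpenSSL 3.5+ `s_client`: if the handshake completes, the server supports it. The sketch below only builds the command line (the flag names assume a 3.5+ CLI where `X25519MLKEM768` is a recognized group name); run it yourself with `subprocess`.

```python
# Sketch: build an `openssl s_client` probe that offers ONLY the hybrid
# group, so a completed handshake proves the server negotiates it.
# Assumes an OpenSSL 3.5+ CLI that accepts `X25519MLKEM768` as a group name.

def hybrid_probe_command(host: str, port: int = 443) -> list[str]:
    """Command line restricting the ClientHello to the PQ-hybrid KEX group."""
    return [
        "openssl", "s_client",
        "-connect", f"{host}:{port}",
        "-tls1_3",
        "-groups", "X25519MLKEM768",  # offer only the hybrid; no classical fallback
        "-brief",
    ]

if __name__ == "__main__":
    # e.g. subprocess.run(hybrid_probe_command("example.com"), input=b"")
    print(" ".join(hybrid_probe_command("example.com")))
```

Because the probe disables classical fallback, a failure here is informative too: the server (or a middlebox in front of it) doesn't speak the hybrid yet.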
The 'Harvest Now, Decrypt Later' Threat: Real or Overhyped?
Harvest Now Decrypt Later (HNDL) is real for a narrow band of traffic and overhyped for almost everything else. If your data has a useful lifetime measured in days or weeks, no nation-state is paying to store your TLS captures on the off chance a cryptographically relevant quantum computer arrives in 2032. If your data is identity records, IP, source code, or long-lived auth tokens, it's a different conversation.
The HNDL threat model assumes three things:
- An adversary who can capture your traffic
- Storage that survives a decade
- A quantum computer capable of running Shor's algorithm against a 256-bit elliptic-curve key
The first two are cheap. The third is not on any vendor's published roadmap with credible numbers. Most public estimates put a cryptographically relevant quantum computer somewhere between 2030 and 2040, and "estimates" is doing a lot of work in that sentence.
Honest filter for whether HNDL applies to you:
- Yes: government data, healthcare records, intelligence, defense contractors, biometric data, source code for long-lived products, financial data with multi-decade implications
- Probably: enterprise SSO tokens with long refresh windows, internal admin sessions, password resets if your users don't rotate
- No: marketing site traffic, e-commerce checkout flows, ephemeral session data, anything where the underlying value ages out within five years
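The filter above reduces to one question: is the data still worth something on the date a cryptographically relevant quantum computer could plausibly exist? A minimal sketch, where the 2035 horizon is an assumption standing in for "earliest plausible CRQC" and should be tuned to your own threat model:

```python
# Sketch of the HNDL filter as a lifetime check. CRQC_HORIZON is an
# assumed date, not a prediction; move it to match your threat model.
from datetime import date

CRQC_HORIZON = date(2035, 1, 1)  # assumed earliest cryptographically relevant QC

def hndl_applies(data_valuable_until: date) -> bool:
    """True if captured ciphertext would still be worth decrypting
    once a CRQC could exist."""
    return data_valuable_until >= CRQC_HORIZON

# Marketing analytics stale within a year: HNDL does not apply.
assert not hndl_applies(date(2027, 1, 1))
# Biometric records valuable for decades: HNDL applies.
assert hndl_applies(date(2050, 1, 1))
```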
If you're a SaaS company serving B2B customers, your customers might care even if you don't. That's a contractual problem, not a cryptographic one. The answer is the same either way: enable hybrid KEX where your stack supports it, because it costs almost nothing.
Why Signatures Are the Hard Problem, Not Key Exchange
Signature migration is where the actual pain lives, because it touches CA infrastructure, root programs, certificate sizes, and TLS record fragmentation in ways that key exchange doesn't. ML-DSA-65 signatures are about 3,300 bytes versus roughly 70 for ECDSA P-256. A typical full handshake with PQ signatures balloons from ~5KB to 15-20KB.
That sounds small until you remember TLS records max out at 16KB and a typical chain has the leaf, one or two intermediates, and a root reference. Replace those signatures and OCSP staples with ML-DSA, and you're fragmenting the server's Certificate and CertificateVerify messages across multiple records. More packets, an extra round trip whenever the flight overflows the initial congestion window, worse handshake latency. Constrained clients (IoT, point-of-sale, anything on cellular) feel this immediately.
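The 15-20KB figure is easy to reproduce with back-of-envelope arithmetic. The sketch below uses rough public sizes (ML-DSA-65 public keys are ~1,952 bytes) and an assumed base overhead for the rest of the handshake; treat it as an order-of-magnitude check, not a wire-accurate model.

```python
# Back-of-envelope handshake estimate: per-certificate signature + public
# key, plus the CertificateVerify signature, on top of an assumed ~3KB of
# other handshake bytes. Sizes are rough public figures.

SIG_BYTES = {"ecdsa-p256": 70, "ml-dsa-65": 3300}
KEY_BYTES = {"ecdsa-p256": 65, "ml-dsa-65": 1952}  # public key payloads, approx.

def handshake_estimate(alg: str, chain_certs: int = 2, base_overhead: int = 3000) -> int:
    """Estimated handshake bytes: base overhead, then per cert one signature
    and one public key, then the CertificateVerify signature."""
    per_cert = SIG_BYTES[alg] + KEY_BYTES[alg]
    return base_overhead + chain_certs * per_cert + SIG_BYTES[alg]

classical = handshake_estimate("ecdsa-p256")  # ~3.3KB
pq = handshake_estimate("ml-dsa-65")          # ~16.8KB: past the 16KB record limit
```

Note that the ML-DSA estimate crosses the 16KB record boundary with just a two-cert chain, before OCSP staples or SCTs are counted.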
There's also CT log impact: every cert gets logged twice (precert + final), and CT logs already process millions of entries per day. Grow each entry by 10-50x, depending on algorithm, and you've created a real storage and bandwidth problem for the log operators.
The proposed signature alternatives all have tradeoffs:
| Algorithm | Signature Size | Tradeoff |
|---|---|---|
| SLH-DSA (SPHINCS+) | up to 50KB | Hash-based, conservative; unusable for online protocols at full size |
| ML-DSA-44 | ~2.4KB | Smaller, but only NIST security level 2 |
| Falcon (FN-DSA) | ~700 bytes | Requires floating-point math; side-channel risk during signing; no final FIPS yet (slated for FIPS 206) |
Until one of these tradeoffs is resolved, public CAs won't issue post-quantum signature certificates. Which brings us to the root program problem.
The CA/Browser Forum and Root Program Timeline
No publicly-trusted CA issues post-quantum signature certificates today, and none will until at least 2027 or 2028 based on current root program signals. The chicken-and-egg: browsers won't trust roots using algorithms they can't verify cleanly across all clients, and CAs won't bake roots into hardware tokens for algorithms whose final wire format might still shift.
Mozilla's policy thread on PQ algorithms (open in their GitHub policy repo since 2023) has converged on a position that Firefox won't add PQ-only roots until ML-DSA gets RFC-track standardization for X.509 use and the WebPKI ecosystem demonstrates handshake-size handling. Apple has been quieter publicly but is on similar timelines. Chrome's root program (the CA/Browser Forum's most influential voice in practice) has signaled they want hybrid certificate chains tested in pilot programs first.
Realistic timeline based on current CABF working group discussions:
| Window | Status |
|---|---|
| 2026 | Experimental PQ roots in test programs only; no production trust |
| 2027-2028 | First hybrid roots (classical + PQ signatures) added to root stores |
| 2029-2030 | PQ-only roots accepted; classical algorithms still trusted |
| 2032+ | Classical signature deprecation discussions begin in earnest |
Compare this to the 47-day certificate validity rule, which only became official in 2025 after years of CABF debate. Root program changes move slower than that. They have to: a bad root entry breaks the internet, and the people running these programs know it.
The practical implication: you don't need to plan for PQ certificates in 2026. You need to plan for the systems that will let you adopt them when CAs start issuing.
Crypto Agility Is the Actual Deliverable
Crypto agility means you can swap a cipher suite, key exchange, or signature algorithm in production without a code change, ideally without a deploy. It's the only durable answer to a multi-decade migration where the standards keep moving. Every engineering hour spent on "PQ readiness" that doesn't increase your agility is wasted, because the specific algorithm you're preparing for might not be the one you ship.
What this looks like in practice, based on building TLS inventory tooling across thousands of certificates:
- Cipher suite config as data, not code: nginx, Envoy, HAProxy configs in version control. No hardcoded suite lists in application binaries.
- Library version inventory: know which OpenSSL, BoringSSL, rustls, Go crypto, and Java JSSE versions are running where. A surprising number of teams can't answer this for their fleet.
- Algorithm telemetry: log negotiated KEX and signature algorithms per connection. If you can't tell me what percentage of your handshakes used X25519MLKEM768 last week, you're not ready for the next algorithm change either.
- Certificate inventory with algorithm tracking: know which certs use ECDSA vs RSA, P-256 vs P-384, key sizes, signature algorithms. The same systems you'd build for certificate monitoring at scale extend naturally to algorithm tracking.
- Pinning audit: any TLS pinning, HPKP-equivalent in mobile apps, or certificate pinning in CI/CD becomes a migration blocker. Find them now, not in 2029.
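The telemetry item is the one teams most often skip, and it's a few lines once your edge exports per-connection records. A minimal sketch, assuming each record carries a `kex_group` field (the field name and record shape are illustrative, not a standard):

```python
# Sketch of algorithm telemetry: share of handshakes per negotiated KEX
# group. Record shape is an assumption; adapt to whatever your edge logs.
from collections import Counter

def kex_share(records: list[dict]) -> dict[str, float]:
    """Fraction of connections per negotiated key-exchange group."""
    counts = Counter(r["kex_group"] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

sample = [
    {"kex_group": "X25519MLKEM768"},
    {"kex_group": "X25519MLKEM768"},
    {"kex_group": "x25519"},
    {"kex_group": "secp256r1"},
]
shares = kex_share(sample)
# shares["X25519MLKEM768"] == 0.5
```

Run this weekly over your handshake logs and the "what percentage used X25519MLKEM768 last week" question becomes a lookup instead of a research project.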
A team with all five of these capabilities can adopt ML-KEM in 2026, ML-DSA in 2028, and whatever replaces ML-DSA when a flaw is found in 2031, with the same playbook. A team without them is rebuilding from scratch each time.
What to Actually Do in 2026 (And What to Skip)
The 2026 list is short: upgrade your TLS libraries to versions supporting hybrid KEX, build an algorithm inventory across your certificate fleet, and ask your CA for their public PQ roadmap in writing. Skip the rest. There is no "PQ migration program" worth running this year for 95% of teams.
The do list:
- Upgrade to OpenSSL 3.5+, BoringSSL current, or rustls 0.23+ wherever you terminate TLS yourself. CDN-fronted traffic is already covered.
- Inventory your certificates by algorithm, key size, and CA. If you can't query this across your fleet in one place, fix that first. This is the same operational problem as broader SSL certificate management at scale, and the solution is the same too.
- Test handshake size on constrained clients. If you ship to IoT, embedded, or anything on metered cellular, instrument for handshake byte counts now. PQ signatures will hurt you here first.
- Talk to your CA in writing. DigiCert, Sectigo, GlobalSign, and Let's Encrypt all have PQ roadmaps in various states. Get yours in your inbox.
- Audit hardcoded cipher suite lists in application code, mobile clients, and CI/CD configs. Move them to config files.
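The inventory item above is, at its core, a group-by. A minimal sketch, assuming you've already extracted per-certificate metadata into records (the field names are illustrative; your scanner or cloud APIs will dictate the real shape):

```python
# Minimal sketch of the inventory query: bucket a certificate fleet by
# signature algorithm, key algorithm, and key size. Record shape is an
# assumption, not a standard.
from collections import defaultdict

def inventory_by_algorithm(certs: list[dict]) -> dict[tuple, int]:
    """Count certificates per (signature alg, key alg, key bits) bucket."""
    buckets: dict[tuple, int] = defaultdict(int)
    for cert in certs:
        buckets[(cert["sig_alg"], cert["key_alg"], cert["key_bits"])] += 1
    return dict(buckets)

fleet = [
    {"sig_alg": "sha256-ecdsa", "key_alg": "ec-p256", "key_bits": 256},
    {"sig_alg": "sha256-ecdsa", "key_alg": "ec-p256", "key_bits": 256},
    {"sig_alg": "sha256-rsa", "key_alg": "rsa", "key_bits": 2048},
]
# inventory_by_algorithm(fleet)
# -> {("sha256-ecdsa", "ec-p256", 256): 2, ("sha256-rsa", "rsa", 2048): 1}
```

The hard part isn't the aggregation; it's getting every cert in the fleet into `certs` in the first place. That's the "query this in one place" prerequisite.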
The skip list:
- Don't migrate internal services to experimental PQ signatures. They're not in any public root store. You'll create internal CA work for nothing.
- Don't buy a "PQ migration platform". The work is inventory and library upgrades. You don't need a SaaS for that.
- Don't write a PQ migration policy document longer than two pages. The standards aren't done. Anything you write in 2026 is partially wrong by 2028.
- Don't panic about HNDL for traffic that doesn't matter on a 10-year horizon.
The 2030-2035 Window: What Realistic Looks Like
Full post-quantum migration takes 5-10 years even with aggressive timelines, because TLS lives in places that don't get touched on quarterly release cycles. Industry data on hardware lifecycles tells the story: payment terminals last 7-10 years, industrial control systems run 20+, and medical devices have FDA recertification cycles measured in years per change. Embedded TLS stacks from 2018 are still receiving traffic today and will be in 2030.
History is the honest comparison. SHA-1 deprecation took roughly a decade, from the first practical attack discussions (2005-2010) to full removal from major browsers (2017). TLS 1.0 and 1.1 were formally deprecated in RFC 8996 in March 2021, after being known-weak for years, and they're still being negotiated in the wild today. PQ migration is a bigger lift than either, because it touches certificate sizes, CA infrastructure, and root programs simultaneously.
A realistic phasing:
| Window | Milestone |
|---|---|
| 2026-2028 | Hybrid KEX becomes default for browser and CDN traffic. Server stacks catch up. PQ signature pilots begin in non-public PKI. |
| 2028-2030 | First publicly-trusted PQ-hybrid certificates issued. Major CAs run dual-algorithm roots in test programs. |
| 2030-2033 | Classical-only certificates deprecated for new issuance. Existing certs honored to expiry. With the 47-day validity window in effect, deprecation cycles get noticeably faster. |
| 2033-2035 | Classical algorithms removed from major root stores. Long-tail clients (IoT, embedded, payment) still negotiating classical for years after. |
Anyone telling you we'll be "post-quantum by 2030" is selling something. Anyone telling you "you have years, don't worry" is also wrong, because the inventory and agility work has to happen now to be useful then. Post-quantum cryptography is the next decade of TLS work, and the teams who win it are the ones who treat it as an operational discipline, not a marketing event.
FAQ
When will I need a post-quantum TLS certificate? Not in 2026, and probably not in 2027. First publicly-trusted PQ certificates likely appear in pilot programs in 2027-2028, with general availability around 2029-2030. Internal PKI can adopt earlier if your organization's threat model justifies it.
Is X25519MLKEM768 the same as Kyber? Close. X25519MLKEM768 is a hybrid construction combining the classical X25519 elliptic curve with ML-KEM-768, the NIST-standardized version of CRYSTALS-Kyber. ML-KEM is the FIPS 203 final form; Kyber was the algorithm name during the NIST PQC competition.
Should I disable classical algorithms once PQ is available? No. Hybrid constructions exist precisely so you don't have to bet on one algorithm. Run hybrid for as long as both halves are secure. Classical-only deprecation is a 2030s problem, not a 2026 one.
Does TLS 1.3 post-quantum require protocol changes? No. The TLS 1.3 group exchange mechanism (named groups in the supported_groups extension) was designed for algorithm agility. New KEX algorithms get new code points and negotiate without changes to the handshake structure itself.
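The named-group mechanism is visible at the byte level: a group is just a 16-bit code point in the supported_groups extension, so adding an algorithm means adding a number. The hybrid's code point below comes from the IETF draft at the time of writing; verify against the IANA "TLS Supported Groups" registry before relying on it.

```python
# supported_groups is a length-prefixed list of 2-byte code points.
# X25519MLKEM768's code point is the draft-kwiatkowski-tls-ecdhe-mlkem
# assignment; check the IANA registry for the current value.

NAMED_GROUPS = {
    "x25519": 0x001D,
    "secp256r1": 0x0017,
    "X25519MLKEM768": 0x11EC,  # assumed from the IETF draft
}

def supported_groups_extension_body(groups: list[str]) -> bytes:
    """Encode the extension body: 2-byte list length, then one 2-byte
    code point per group, in preference order."""
    points = b"".join(NAMED_GROUPS[g].to_bytes(2, "big") for g in groups)
    return len(points).to_bytes(2, "big") + points

body = supported_groups_extension_body(["X25519MLKEM768", "x25519"])
# body == b"\x00\x04\x11\xec\x00\x1d"
```

That's the whole agility story for key exchange: a new algorithm is a new entry in that list, with no change to the handshake state machine.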
What's the actual handshake size impact? Hybrid KEX adds roughly a kilobyte in each direction (ML-KEM-768 public keys are 1,184 bytes, ciphertexts 1,088 bytes). PQ signatures will add 5-15KB depending on chain depth and algorithm choice. The KEX impact is invisible in practice; the signature impact is not.
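The KEX figure reconstructs directly from the FIPS 203 sizes: a hybrid key_share entry concatenates the X25519 component and the ML-KEM component, so the overhead versus a classical handshake is just the ML-KEM bytes in each direction.

```python
# Reconstructing the hybrid KEX overhead from published sizes:
# X25519 public keys are 32 bytes; ML-KEM-768 encapsulation keys and
# ciphertexts are 1184 and 1088 bytes (FIPS 203 parameter set).

X25519_KEY = 32
MLKEM768_ENCAP_KEY = 1184
MLKEM768_CIPHERTEXT = 1088

client_share = X25519_KEY + MLKEM768_ENCAP_KEY    # 1216 bytes vs 32 classical
server_share = X25519_KEY + MLKEM768_CIPHERTEXT   # 1120 bytes vs 32 classical
extra_per_handshake = (client_share - X25519_KEY) + (server_share - X25519_KEY)
# 2272 extra bytes total: ~1.2KB client->server, ~1.1KB server->client
```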
If you're managing TLS at scale and want algorithm tracking baked into your certificate inventory rather than bolted on later, CertPulse monitors TLS certificates with algorithm-level visibility built in. The PQ transition will reward teams who already know what they have running where.
This is why we built CertPulse
CertPulse connects to your AWS, Azure, and GCP accounts, enumerates every certificate, monitors your external endpoints, and watches Certificate Transparency logs. One dashboard for every cert. Alerts when auto-renewal fails. Alerts when certs approach expiry. Alerts when someone issues a cert for your domain that you didn't request.
If you're looking for complete certificate visibility without maintaining scripts, we can get you there in about 5 minutes.