Most teams upgrade to TLS 1.3 the same way they upgrade Postgres minor versions: change the version string, restart, move on. That's fine for Postgres. It's a bad way to land TLS 1.3 configs for nginx, HAProxy, or Envoy in production, because version negotiation happens per-handshake: your old TLS 1.2 cipher list is still alive and still serving CBC suites to whoever asks. This guide walks through the actual config files, the verification steps, and the rollout discipline for the three proxies most platform teams run.
## Why TLS 1.3 Isn't Just a Version Bump
TLS 1.3 is a security cleanup, not a performance upgrade. It cuts the handshake from 2-RTT to 1-RTT and removes the primitives behind most TLS CVEs of the last decade: RSA key exchange, CBC mode, RC4, SHA-1, and TLS-level compression. The performance win is real but small. The security cleanup is the actual reason to deploy it.
Performance numbers worth citing:
- Cold connection: According to Cloudflare's published edge data, TLS 1.3 handshakes are roughly 25% faster than TLS 1.2.
- Warm HTTP/2 with session resumption: 10–15ms median improvement at most.
- Director-friendly latency pitch: the numbers will disappoint.
Here's the part most teams miss: enabling TLS 1.3 does not disable TLS 1.2 or its cipher list. The protocols negotiate independently. If your nginx still has `ssl_ciphers HIGH:!aNULL:!MD5;` from 2017, every client that prefers 1.2 still gets the legacy suite list. TLS 1.2 and TLS 1.3 negotiate cipher suites in completely separate config directives — and most teams only touch one of them.
Frame this rollout as a security cleanup. The handshake reduction is a side effect.
## Pre-flight: What to Audit Before You Touch a Config File
Inventory three things across your fleet before changing any config:
| Item | Minimum | Preferred |
|---|---|---|
| OpenSSL/BoringSSL | 1.1.1 | 3.0+ |
| nginx | 1.13+ | latest stable |
| HAProxy | 2.0+ | latest stable |
| Envoy | any from last 3 years | latest stable |
Check OpenSSL on every host with `openssl version -a`. Anything reporting 1.0.x will not speak TLS 1.3 regardless of what your nginx config says. In my experience auditing long-lived Debian Buster fleets, roughly 8% still ship 1.1.1 patched but not upgraded — check before you assume.
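To fail fast on hosts that can't do TLS 1.3, compare the reported version against 1.1.1. A sketch (the `can_tls13` helper is my own; `sort -V` does the semantic version compare):

```shell
# Hypothetical helper: exits 0 if the given OpenSSL version supports TLS 1.3.
can_tls13() {
  # TLS 1.3 needs OpenSSL 1.1.1+; sort -V picks the lower of the two versions.
  [ "$(printf '%s\n' "1.1.1" "$1" | sort -V | head -n1)" = "1.1.1" ]
}

can_tls13 "3.0.13" && echo "ok: 3.0.13"
can_tls13 "1.0.2u" || echo "too old: 1.0.2u"
```

Feed it the second field of `openssl version` output from each host in the ansible sweep below.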
Client compatibility floors for TLS 1.3:
- iOS 12.2+
- Android 10+
- Java 11+
- Go 1.12+
- OpenSSL 1.1.1+
Anything older falls back to 1.2. Pull a week of access logs and bucket by `$ssl_protocol` (nginx) or equivalent. If 5%+ of traffic still negotiates TLS 1.2, you cannot drop 1.2 yet.
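If your access format doesn't already capture the protocol, a minimal `log_format` along these lines does it (the format name `tls_audit` is my own; `$ssl_protocol` and `$ssl_cipher` are standard nginx variables):

```nginx
# Protocol last, so `awk '{print $NF}'` buckets it directly.
log_format tls_audit '$remote_addr [$time_local] "$request" '
                     '$status $ssl_cipher $ssl_protocol';
access_log /var/log/nginx/access.log tls_audit;
```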
To grep a fleet for legacy directives before the change window:

```shell
ansible all -m shell -a \
  "grep -rEn 'ssl_protocols|ssl_ciphers' /etc/nginx/" \
  --become
```
Run it before the change window, not after the rollback. The output will surface vhosts you forgot existed.
## TLS 1.3 nginx: The Minimum-Viable Production Config

The non-obvious detail of any TLS 1.3 nginx config: `ssl_ciphers` only controls the 1.2 cipher list. TLS 1.3 cipher suites negotiate separately and require `ssl_conf_command Ciphersuites`. Set both, or you'll have one path hardened and the other running defaults.
Here's the working block we deploy:

```nginx
ssl_protocols TLSv1.2 TLSv1.3;

# TLS 1.2 fallback cipher list (Mozilla intermediate, minus weak suites)
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;

# TLS 1.3 cipher suites (negotiated independently)
ssl_conf_command Ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256;

ssl_prefer_server_ciphers off;  # client preference is fine in 1.3
ssl_session_tickets on;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;

ssl_stapling on;
ssl_stapling_verify on;
resolver 1.1.1.1 8.8.8.8 valid=300s;

add_header Strict-Transport-Security "max-age=63072000; includeSubDomains" always;
```
`ssl_prefer_server_ciphers off` looks wrong but isn't. In TLS 1.3, the client's preference order is reasonable and the server's preference matters less. The directive still applies to 1.2, where modern clients pick sensible suites anyway.
Don't drop TLS 1.2 yet in 2026. Java 8 jobs, older Python requests installs, and corporate MITM gear will still hit you. OCSP stapling matters more under shorter cert lifetimes; if it's silently broken you'll feel it. We covered that in detail in why OCSP stapling is probably broken on half your endpoints.
## HAProxy: Per-Bind Config and the ssl-default-bind Trap

TLS 1.3 HAProxy configuration lives in two places: the global `ssl-default-bind-*` block and any per-bind override on a frontend line. The trap: per-bind options replace globals entirely instead of merging. A frontend with its own `ciphers` argument silently downgrades that listener.
Global block:

```haproxy
global
    ssl-default-bind-options ssl-min-ver TLSv1.2 ssl-max-ver TLSv1.3 no-tls-tickets
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
```
Directive cheat-sheet (the most common HAProxy TLS 1.3 mistake is mixing these up):

| Directive | Controls |
|---|---|
| `ssl-default-bind-ciphers` | TLS 1.2 cipher list |
| `ssl-default-bind-ciphersuites` | TLS 1.3 cipher suites |
| `ssl-default-bind-options` | min/max version, ticket behavior |
Frontend bind with ALPN for HTTP/2:

```haproxy
frontend https-in
    bind :443 ssl crt /etc/haproxy/certs/ alpn h2,http/1.1
    http-request set-header X-Forwarded-Proto https
```
Now the trap. If you add `ciphers EECDH+AESGCM` to that bind line, you have just replaced the global TLS 1.2 cipher list for that listener only (a per-bind `ciphersuites` does the same to the 1.3 list). The CIS benchmark scan next quarter will fail one frontend and not the others, and you'll spend an afternoon diffing configs. In my experience, this exact incident has surfaced at three separate customers in 18 months. Set globals once, and don't override per-bind unless you have a written reason.
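A cheap way to catch those overrides before the scan does: flag any `bind` line that carries its own TLS arguments. A sketch, run here against a throwaway sample config (point the grep at your real haproxy.cfg):

```shell
# Build a sample config with one offending bind line (demo only).
CFG="$(mktemp)"
cat > "$CFG" <<'EOF'
frontend https-in
    bind :443 ssl crt /etc/haproxy/certs/ alpn h2,http/1.1
frontend legacy-in
    bind :8443 ssl crt /etc/haproxy/certs/ ciphers EECDH+AESGCM
EOF

# Flag per-bind TLS overrides that shadow the ssl-default-bind-* globals.
grep -nE 'bind .*(ciphers|ciphersuites|ssl-min-ver|ssl-max-ver)' "$CFG"
rm -f "$CFG"
```

Only the `legacy-in` bind line prints; an empty result means every listener inherits the globals.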
## Envoy: TLS Context, ALPN, and the YAML That Actually Validates

TLS 1.3 Envoy configuration sits in a `DownstreamTlsContext` under your listener filter chain. The required fields:

- `tls_minimum_protocol_version`
- `tls_maximum_protocol_version`
- `alpn_protocols`
- either inline `tls_certificates` or an SDS reference
Envoy's docs scatter these across xDS reference pages; here's the working YAML in one place.

```yaml
filter_chains:
- transport_socket:
    name: envoy.transport_sockets.tls
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
      common_tls_context:
        tls_params:
          tls_minimum_protocol_version: TLSv1_2
          tls_maximum_protocol_version: TLSv1_3
          cipher_suites:
          - ECDHE-ECDSA-AES128-GCM-SHA256
          - ECDHE-RSA-AES128-GCM-SHA256
          - ECDHE-ECDSA-CHACHA20-POLY1305
          - ECDHE-RSA-CHACHA20-POLY1305
        alpn_protocols:
        - h2
        - http/1.1
        tls_certificate_sds_secret_configs:
        - name: server_cert
          sds_config:
            path_config_source:
              path: /etc/envoy/sds/server_cert.yaml
```
`cipher_suites` is the TLS 1.2 list only. Envoy's BoringSSL build picks the TLS 1.3 suites itself; you cannot override them. If you paste TLS 1.3 names into `cipher_suites`, the listener will fail to bootstrap with `Failed to initialize cipher suites`.
SDS for cert rotation matters more than people realize. Without it, every cert renewal needs a hot restart or full reload. Pair it with a renewal pipeline that writes to the SDS path atomically, or you'll hit the silent failure where renewal succeeds but deployment doesn't.
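For reference, the file at that SDS path is a one-secret Envoy resource. A sketch of what `/etc/envoy/sds/server_cert.yaml` could look like (cert paths are placeholders; the `name` must match the `tls_certificate_sds_secret_configs` entry):

```yaml
resources:
- "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret
  name: server_cert
  tls_certificate:
    certificate_chain:
      filename: /etc/envoy/certs/fullchain.pem
    private_key:
      filename: /etc/envoy/certs/privkey.pem
```

The atomic-write part: have the renewal pipeline write to a temp file and rename it over the old path, so the file watcher never sees a half-written secret.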
## 0-RTT (Early Data): The One Feature You Probably Shouldn't Turn On
Leave 0-RTT off by default. It lets a returning client send application data in its first flight, saving one round-trip (roughly 50ms on a typical path). The catch is replay: early data is not bound to the new connection, so a network attacker can capture and replay it. RFC 8446 Section 8 is explicit that the application must handle replay safety.
The "GET requests are idempotent so 0-RTT is safe" argument breaks the moment a GET hits a handler that increments a counter, logs a side effect, or returns cached data based on auth headers. In my experience, most internal APIs fail this test even when the engineer who wrote them swears otherwise.
Where 0-RTT early data is actually safe:
- Static asset edge caches with no per-user logic
- Anonymous read-only public APIs
- CDN paths that explicitly anti-replay at the application layer
That's it. The latency win is small enough that we leave it off by default and only enable it deliberately, per-route, with the `Early-Data: 1` request header handling spelled out in the handler.
nginx supports this with `ssl_early_data on;` and `proxy_set_header Early-Data $ssl_early_data;` so backends can refuse to act on it. If your backend can't be patched to inspect that header, don't enable the feature.
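If the backend can't be patched, the refusal can live in nginx itself: answer 425 Too Early (RFC 8470) for early data carrying a non-replayable method, and the client retries once the handshake finishes. A sketch, assuming a standard `map`/`return` pattern (the `$reject_early` variable name is mine; tighten the method list if your GETs have side effects):

```nginx
ssl_early_data on;
proxy_set_header Early-Data $ssl_early_data;

# $ssl_early_data is "1" while the handshake is still completing.
map "$ssl_early_data:$request_method" $reject_early {
    default                        0;
    "~^1:(POST|PUT|PATCH|DELETE)"  1;
}

server {
    # ...
    if ($reject_early) {
        return 425;  # Too Early: client retries after the handshake
    }
}
```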
## Verifying You Actually Got TLS 1.3 (And Only the Suites You Want)
A successful reload proves the config parsed, not that it applied as intended. To verify a TLS 1.3 production configuration, handshake against the endpoint with specific constraints and read the actual negotiated values.
Four commands that cover the cases worth checking:

- Force 1.3: `openssl s_client -connect host:443 -tls1_3 -servername host` — prints the negotiated suite. If this fails, 1.3 isn't enabled.
- Confirm a specific suite: `openssl s_client -connect host:443 -ciphersuites 'TLS_AES_128_GCM_SHA256' -tls1_3 -servername host` — run it for each suite you expect, and once for one you don't.
- Enumerate everything offered: `nmap --script ssl-enum-ciphers -p 443 host` — look for any line under TLSv1.0 or TLSv1.1 (should be empty) and any CBC suite under TLSv1.2.
- Full audit: `testssl.sh --severity HIGH host:443` — slowest but the most thorough. Save the output as your post-deploy artifact.
The pitfall: SNI-based vhosts. Without `-servername`, `openssl` talks to whatever the default vhost serves, which may not be the cert you're testing. In my experience, engineers approve rollouts against the wrong vhost more often than they admit — we've watched it happen three times. Always pass the hostname explicitly. For deeper verification patterns we've written about, see how to verify TLS config like an SRE.
## Rolling It Out Without Paging Yourself
Treat the TLS upgrade like a deploy: canary one region first, monitor a specific log signal, and have the rollback diff committed to git before you apply forward. The signal that matters is the rate of handshake failures and the share of clients still negotiating TLS 1.2.
In nginx, log `$ssl_protocol` and `$ssl_cipher` in your access format. Then bucket the last 24 hours:

```shell
awk '{print $NF}' access.log | sort | uniq -c | sort -rn
```
After we enabled TLS 1.3 across 12 edge POPs, the 1.2 negotiation rate plateaued around 2.3%. Breakdown of that residual 2.3%:
- Roughly half: Java 8 batch jobs
- Remainder: long tail of corporate MITM proxies running old Bluecoat or Zscaler firmware
Neither shows up in synthetic monitoring; only access logs surface them.
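The bucketed counts turn into a go/no-go number in a few more lines of awk. A sketch using this article's thresholds (5% to enable, 0.5% before dropping 1.2 entirely); the counts are made up to match the 2.3% residual above:

```shell
# Share of handshakes negotiating TLS 1.2, from `uniq -c` style buckets.
printf '%s\n' ' 97700 TLSv1.3' '  2300 TLSv1.2' \
  | awk '{ total += $1; if ($2 == "TLSv1.2") old = $1 }
         END {
           share = old / total
           printf "tls12 share: %.3f\n", share
           # exit 0 when under the 5% bar; drop 1.2 only below 0.5%
           exit (share < 0.05 ? 0 : 1)
         }'
```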
Canary plan: enable on one POP for 24 hours, diff handshake failure rate against the rest, then expand. The client population most likely to break is the one your APM can't see, which is exactly why you need the log signal. We covered the broader pattern for detecting silent TLS breakage in SSL monitoring for production infrastructure.
## FAQ

**Can I disable TLS 1.2 entirely in 2026?** Only if your access logs show less than 0.5% of traffic still negotiating it, and you're willing to lose that slice. Most public-facing services aren't there yet; internal service-mesh traffic usually is.

**Does TLS 1.3 require new certificates?** No. Cert and key formats are unchanged. ECDSA performs better than RSA in the 1.3 handshake, but RSA certs work fine.

**Why does nginx use `ssl_conf_command` instead of `ssl_ciphers` for 1.3?** Because the OpenSSL API exposes 1.3 ciphersuites through a separate function, and nginx's `ssl_ciphers` directive predates that split. The newer `ssl_conf_command Ciphersuites` is the bridge.

**Is session resumption different in TLS 1.3?** Yes. PSK-based resumption replaces session IDs, and session tickets work differently under the hood. `ssl_session_tickets` still applies; the wire format changed underneath.

**What about HTTP/3 and QUIC?** QUIC uses TLS 1.3 internally as its cryptographic handshake. Configuring TLS 1.3 at your proxy is a prerequisite for HTTP/3 support, not a substitute for it.
## Wrapping Up
A working TLS 1.3 nginx, HAProxy, or Envoy config is short — usually under ten directives. The trap is everything around it: the unaudited TLS 1.2 cipher list still serving CBC, the per-bind override that re-weakens one listener, the 0-RTT toggle someone flipped on for the latency win. Audit before you change, verify after, and watch the access logs for the clients your synthetic monitoring can't see. The version bump is the easy part.
## This is why we built CertPulse
CertPulse connects to your AWS, Azure, and GCP accounts, enumerates every certificate, monitors your external endpoints, and watches Certificate Transparency logs. One dashboard for every cert. Alerts when auto-renewal fails. Alerts when certs approach expiry. Alerts when someone issues a cert for your domain that you didn't request.
If you're looking for complete certificate visibility without maintaining scripts, we can get you there in about 5 minutes.