Recently I was implementing some of the TLS 1.3 handshake as part of the Information Security Lab at ETH Zurich.

While working on the lab I was googling around and by chance came across this OpenSSL man page. Specifically, its "NOTES" section at the bottom. You can read it yourself, but the TL;DR is: under certain operating systems and TCP settings, and with a not-too-large amount of application data, 0-RTT may inadvertently turn into 1-RTT. In other words, you go to great lengths to build a low-latency protocol and end up back at square one. (You can find the original issue here.)
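The kind of interaction described there is a lower-layer batching effect: a small 0-RTT flight can sit in a TCP buffer waiting on the peer's ACK. A common mitigation for this class of problem (my assumption here, not a quote from the man page) is disabling Nagle's algorithm on the socket. A minimal sketch in Python:

```python
import socket

# Nagle's algorithm holds back small segments until earlier data is
# ACKed; combined with delayed ACKs on the peer, a small early-data
# flight can end up waiting a full round trip. Setting TCP_NODELAY
# makes the kernel send small segments immediately.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Verify the option took effect (non-zero means Nagle is disabled).
assert sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
sock.close()
```

Whether this is the right fix depends on the OS and the exact write pattern, which is exactly the point of the man page's warning: you have to measure to know.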

For me this is a reminder of two things:

  1. Measuring performance matters (just like writing tests). Without measurements, you would never notice that not all of your connections are actually achieving 0-RTT.

  2. You can design a nice higher-level protocol, but lower layers of the stack can cause unexpected effects. Abstraction only goes so far. So even if you generally work higher up the stack, you need a solid understanding of what happens underneath.