With all the talk recently of how the NIST curve parameters were selected, a reasonable observer could wonder why we all use the same curves instead of generating them along with keys, like we do for Diffie-Hellman parameters. (You might have memories of waiting around for openssl dhparam to run and then configuring the result in a web server for TLS.)

Thing is, user-generated parameters (such as custom elliptic curves) are not safe and have no significant benefits. This is one of the lessons learned in modern cryptography engineering, and it contradicts the conventional wisdom of the ’90s.

Generating parameters is supposed to help with two things: first, it solves the question of how to pick parameters we can all agree on; second, there’s the idea that if we’re all using different parameters we are not putting all our eggs in the same basket and there isn’t a juicy precomputation target for attackers.

Picking trustworthy standard parameters is not prohibitively hard[1], and most importantly it is a job for the relatively few people whose job is specifying cryptography, instead of one that falls on the many, many more who use it. Given the opportunity to make a few people do a lot of extra work to save a lot of people some work, we should always take it.

Not putting all our eggs in one basket is a consideration that might have made sense in a thankfully bygone era of cryptography, when primitives were somewhat regularly weakened and broken.[2] Back then it might have been reassuring that yeah, an attacker might be able to break one key, but maybe they won’t get to break them all, and hopefully the damage will be limited. Today, we consider it completely unacceptable for even a single key to fall to cryptanalysis (as opposed to implementation error or side channel analysis), and we design systems accordingly. For example, device manufacturers embed the same public key in all their devices, every mailbox user is protected by the same certificate (and really by the same root certificate authority keys), and so on.[3]

Even more generally, it’s really no consolation to hear that not everyone’s key is broken if your key is broken. Especially when whose key gets broken depends not on random chance, but on where the attacker concentrates their resources.

The last time I can remember custom parameters helping in practice was in 2015, with the Logjam attack. The researchers pointed out that a nation-state attacker could perform a large precomputation to target some very popular 1024-bit Diffie-Hellman parameters. However, the better takeaway was that 1024-bit Diffie-Hellman was simply too weak to be used at all.[4] Also, as we will see later, custom parameter negotiation introduced the complexity that led to the worst parts of the attack.

In modern times, if a scheme is so close to the brink of failure that you need to hedge by saying that not all keys will fall at once, we just call it broken.[5][6] It could be a corollary of Kerckhoffs’s principle, which says that a cryptosystem should be secure even if everything about the system, except the key, is public knowledge:

A cryptosystem should be secure even if all the parameters, except the key, are shared across every user.

Ok, so generating parameters doesn’t help much, but isn’t it better than nothing? No, custom parameters are much worse than nothing.

First, it’s usually a very slow process: openssl dhparam 2048 takes more than 17 seconds on my M2 machine, and the docs of dsa.GenerateParameters say

This function can take many seconds, even on fast machines.

This means it can’t be done on the fly, but needs to be a separate operation handled and configured by the system administrator.
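For a sense of scale, here is a minimal sketch that times parameter generation using Go’s (long-deprecated) crypto/dsa package. The duration varies wildly between runs, since it depends on how lucky the random prime search gets, which is exactly why it can’t sit in an interactive path.

```go
package main

import (
	"crypto/dsa"
	"crypto/rand"
	"fmt"
	"time"
)

func main() {
	// Generate a 2048-bit prime modulus with a 256-bit subgroup order.
	// crypto/dsa is deprecated, but it's a convenient way to observe
	// how slow and variable parameter generation is.
	var params dsa.Parameters
	start := time.Now()
	if err := dsa.GenerateParameters(&params, rand.Reader, dsa.L2048N256); err != nil {
		panic(err)
	}
	fmt.Printf("generated DSA parameters in %v\n", time.Since(start))
}
```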

Second, and most importantly, verifying the validity of parameters is even harder than generating them. For example, picking a random prime is way easier than checking whether a number an adversary handed you is really prime. This adds a tremendous amount of complexity to the security-critical, attacker-reachable hot path. Any degree of freedom given to the attacker is an opportunity to build a better attack, and any required runtime check is an opportunity for an implementation bug.
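To make that burden concrete, here is a hedged sketch of the checks a peer would owe attacker-supplied finite field DH parameters, assuming the classic “safe prime” shape where q = (p−1)/2. The function name is made up for illustration, and note that math/big’s ProbablyPrime is documented as unsuitable for adversarially crafted inputs, which is precisely the generate/verify asymmetry described above.

```go
package dhcheck

import (
	"errors"
	"math/big"
)

var one = big.NewInt(1)

// validateSafePrimeParams is an illustrative sketch, not a vetted
// validator: every check below is a chance to get something wrong.
func validateSafePrimeParams(p, g *big.Int) error {
	if p.BitLen() < 2048 {
		return errors.New("p too small")
	}
	// ProbablyPrime's guarantees are stated for random inputs; the Go
	// docs warn it is not suitable for adversarially crafted numbers.
	if !p.ProbablyPrime(64) {
		return errors.New("p is not (probably) prime")
	}
	// q = (p-1)/2 must also be prime, or small subgroups exist.
	q := new(big.Int).Rsh(new(big.Int).Sub(p, one), 1)
	if !q.ProbablyPrime(64) {
		return errors.New("(p-1)/2 is not (probably) prime")
	}
	// g must be in range and generate the order-q subgroup:
	// 1 < g < p-1 and g^q ≡ 1 (mod p).
	pMinusOne := new(big.Int).Sub(p, one)
	if g.Cmp(one) <= 0 || g.Cmp(pMinusOne) >= 0 {
		return errors.New("g out of range")
	}
	if new(big.Int).Exp(g, q, p).Cmp(one) != 0 {
		return errors.New("g does not generate the order-q subgroup")
	}
	return nil
}
```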

There are whole classes of attacks that are simply impossible given fixed parameters, such as the 2020 Windows vulnerability that allowed complete TLS MitM and X.509 spoofing by exploiting custom curves.[7] The beauty of that attack is that the parameters weren’t even invalid: simply controlling them allowed the attacker to forge signatures. On the lower end of the severity spectrum, there’s been a string of DoS vulnerabilities where uncaught parameter edge cases broke the expectations of surrounding code, causing crashes or extremely slow operations.
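The core trick is small enough to demonstrate. In this sketch (fittingly, using Go’s deprecated support for custom curves) we take a victim P-256 public key Q = dG, declare a “curve” that is P-256 in every respect except that its generator is Q itself, and claim the private key 1: the public key derived from the rogue parameters is 1·Q = Q, identical to the victim’s, so signatures made with the known private key verify against it. This is an illustration of the idea, not a working exploit.

```go
package main

import (
	"crypto/elliptic"
	"crypto/rand"
	"fmt"
)

func main() {
	// Victim key pair on standard P-256: Q = d·G.
	p256 := elliptic.P256()
	d, qx, qy, _ := elliptic.GenerateKey(p256, rand.Reader)
	_ = d // the attacker never learns this

	// Rogue parameters: same field, same equation, but the claimed
	// "generator" is the victim's public key Q.
	rogue := *p256.Params()
	rogue.Gx, rogue.Gy = qx, qy
	rogue.Name = "P-256 (rogue generator)"

	// With claimed private key d' = 1, the rogue public key is
	// 1·Q = Q, matching the victim's key exactly.
	rx, ry := rogue.ScalarBaseMult([]byte{1})
	fmt.Println(rx.Cmp(qx) == 0 && ry.Cmp(qy) == 0) // true
}
```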

This is ultimately a big part of what made DSA much less popular, and less safe, than RSA and ECDSA. ECDSA is far from the best signature algorithm, but at least it (usually!) doesn’t require generating and validating parameters.

Moreover, when doing negotiation in a protocol, it’s much simpler (and hence safer) to pick between curves A, B, or C, or groups 1, 2, or 3, than it is to pick arbitrary parameters. For the former, there’s the tried and tested method of having the client advertise support and the server pick. It’s not foolproof and can lead to downgrades without a transcript, but (unfortunately, and sometimes unavoidably) most protocols already do many dimensions of parameter negotiation like that. For arbitrary parameters, the client expresses some complex or incomplete preferences (if you are lucky), the server produces the parameters, and the client has to check that they are valid and compliant with the preferences.
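Here is a minimal sketch of why the fixed-identifier version is so hard to get wrong: negotiation reduces to intersecting two lists of code points, with nothing to validate (the group numbers are made up for illustration).

```go
package negotiate

// pickGroup returns the first client-preferred group the server also
// supports. With fixed identifiers there are no parameters to check,
// only numbers to compare.
func pickGroup(clientPrefs, serverSupported []uint16) (uint16, bool) {
	supported := make(map[uint16]bool, len(serverSupported))
	for _, g := range serverSupported {
		supported[g] = true
	}
	for _, g := range clientPrefs {
		if supported[g] {
			return g, true
		}
	}
	return 0, false // no overlap: fail closed instead of improvising
}
```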

For example, the worst part of the Logjam attack was a downgrade in which a MitM convinced the server to pick and sign weak Diffie-Hellman parameters (by requesting “export” cipher suites, even if the client didn’t support them), then broke them and retroactively fixed up the transcript. Had the DH groups been fixed and standardized, the client would have simply rejected the unsupported groups injected by the MitM; instead, the client could only say “huh, I guess the server really likes these weak parameters, at this point I either go along with it or break the connection.”

This hints at an even deeper issue in how DH parameters are negotiated in TLS 1.0–1.2, which is part of why finite field DH is being deprecated in favor of elliptic curve DH[8]: there is no way for the client to express any opinion on the group selection. It can only accept the server’s choice or disconnect, too late in the handshake to select an alternative key exchange. This is also a direct consequence of the lack of standardized groups: with standardized groups, the client could have listed the ones it supports, and the server could have refrained from picking DH if there was no acceptable overlap, the way ECDH curves have always worked.

None of these are intrinsic flaws of the finite field Diffie-Hellman primitive: DH is somewhat less efficient than ECDH, but otherwise perfectly serviceable. The issue is that DH was traditionally specified with custom parameters (groups) while ECDH was almost always specified with standardized curves, so the former ended up much less safe than the latter.

Finally, always operating over the same parameters allows implementers to target and optimize their code, using tools like fiat-crypto to generate arithmetic code specifically for operations modulo a fixed prime, instead of having to resort to generic big integer libraries, which are necessarily slower, often more complex, and usually not constant time.[9] Fixed fields let us optimize memory allocations, multiplication chains for inversions, low-level carry arithmetic, and so on. An optimized P-256 implementation will always be faster than a generic Weierstrass curve implementation, and often safer, too.
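The gap is easy to observe in Go, which ships both an optimized P-256 and a generic big.Int path. In this rough sketch, copying the CurveParams struct routes the second loop through the generic code; that fallback is an implementation detail of current Go, so treat the numbers as illustrative, not as a rigorous benchmark.

```go
package main

import (
	"crypto/elliptic"
	"fmt"
	"time"
)

func main() {
	var k [32]byte
	k[31] = 42 // arbitrary fixed scalar, just for timing

	fast := elliptic.P256()   // specialized, constant-time implementation
	generic := *fast.Params() // a copy falls back to generic big.Int code

	start := time.Now()
	for i := 0; i < 100; i++ {
		fast.ScalarBaseMult(k[:])
	}
	fmt.Println("specialized P-256:  ", time.Since(start))

	start = time.Now()
	for i := 0; i < 100; i++ {
		generic.ScalarBaseMult(k[:])
	}
	fmt.Println("generic CurveParams:", time.Since(start))
}
```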

In conclusion, user-generated parameters are a legacy design that proved to be far more trouble than it’s worth, and modern cryptography is better off with fixed parameter sets.

If you got this far, you might want to follow me on Bluesky at @filippo.abyssdomain.expert or on Mastodon at @filippo@abyssdomain.expert.

The picture

Il Ponte Rotto, the Broken Bridge of Rome, seen from Tiber Island. This easily overlooked structure in the middle of the river, hidden by vegetation, is all that’s left of what was, two thousand years ago, the longest and most important bridge over the Tiber. It was destroyed many times over, to the point that there are legends about it being cursed (article in Italian, but well worth a read; Google Translate does a good job). It has hosted, at various times, an aqueduct, a chapel, and even a hanging garden. One of my favorite spots.

A single arch of a stone bridge is lit in the foreground, pictured from the side and below. The stone is greyed by rain and sediment, and there’s vegetation hiding the sides. The water passes under and around it. The night sky behind it is dark blue.

My awesome clients—Sigsum, Protocol Labs, Latacora, Interchain, Smallstep, Ava Labs, and Tailscale—are funding all my work for the community and through our retainer contracts they get face time and unlimited access to advice on Go and cryptography.

Here are a few words from some of them!

Latacora — Latacora bootstraps security practices for startups. Instead of wasting your time trying to hire a security person who is good at everything from Android security to AWS IAM strategies to SOC2 and apparently has the time to answer all your security questionnaires plus never gets sick or takes a day off, you hire us. We provide a crack team of professionals prepped with processes and power tools, coupling individual security capabilities with strategic program management and tactical project management.

Ava Labs — We at Ava Labs, maintainer of AvalancheGo (the most widely used client for interacting with the Avalanche Network), believe the sustainable maintenance and development of open source cryptographic protocols is critical to the broad adoption of blockchain technology. We are proud to support this necessary and impactful work through our ongoing sponsorship of Filippo and his team.


  1. Some cryptographers have strong opinions on how to do this: hash a string, use nothing-up-my-sleeve numbers, or “rigidly” define requirements and then pick the simplest/lowest value that satisfies them. I think they are all good enough to take away enough freedom from a would-be attacker that, if the attacker can still select a weak set, they know so much more about the scheme that it might as well be wholly broken. I also find the arguments for why one method is particularly better than the others weak, and any argument that a method (be it using the digits of Pi or design rigidity) is ironclad seems to ignore the fact that reasonable people will disagree on what is simple, or obvious, or better, or elegant (be it the requirements, or the choice of natural constant and how to encode/use it). It’s worth reflecting on how this is mostly a problem because the NSA poisoned the trust they need to engage in the processes we do need them to engage in. ↩︎

  2. The same era and the same considerations gave us cryptographic agility, another piece of ’90s conventional wisdom that does more harm than good today. ↩︎

  3. It’s kinda fun to think about what the most precious individual cryptographic key might be. A WebPKI root authority? The Apple firmware signing key? Surely not the DNSSEC root keys. ↩︎

  4. Another way to think about it is that there are on the order of single-digit billions of TLS certificates in the WebPKI, about 2³². This means that if there were an attack that could break one set of parameters, and every certificate used different parameters, the attacker could attack the whole WebPKI by running that attack 2³² times. In that sense, using different parameters is “buying” us only 32 bits of security against full-system compromise (although, again, full-system compromise is not the only metric). We generally don’t consider 32 bits a comfortable security margin. Even the Bitcoin network can only perform approximately 2⁹⁴ operations a year, 34 bits short of the conventional 128-bit security level, and that’s a distributed attack that draws a globally noticeable amount of electricity with an extremely simple unit of operation (a single hash). ↩︎

  5. Theoretical cryptographers have an even stricter definition: they call a scheme broken if an attack exists that takes less than brute force, regardless of whether it is practical. I call something broken if it can be attacked with less than ~2¹⁰⁰ parallelizable work and practical amounts of memory. ↩︎

  6. A related shift in wisdom, due to the same dynamic in which primitives are no longer regularly weakened, is the waning relevance of security levels. A scheme is either secure (> 128 bits of security) or not. It’s physically impossible to perform any operation 2¹²⁸ times on Earth, so that’s all you need to rule out brute force attacks. No one targets “higher” security levels except for compliance or marketing reasons. (There’s an asterisk about multi-user settings and birthday attacks, but really those are special scenarios where you need keys bigger than 128 bits to target a 128-bit security level.) ↩︎

  7. In Go we don’t even parse custom curves in X.509 because oh god why would we, and yet we had to mitigate this because it was possible to target the system verifier through a Go application. ↩︎

  8. TLS 1.3 specifies finite-field Diffie-Hellman with standardized groups, but it’s basically unused: if you’re implementing TLS 1.3 you’re certainly implementing ECDH too, which is more efficient, so there’s no incentive to add FFDH, even now that it’s properly specified. ↩︎

  9. We still need operations modulo arbitrary primes for RSA, sadly. Thankfully, we can use the Chinese Remainder Theorem to operate modulo odd primes only, which helps a bit. Speaking of RSA, there isn’t really a concept of parameters, but I’ve long believed we should have all agreed to hardcode the e value, which would have avoided a few DoS attacks, as well as all BB’06 attacks. Hardcoding e saved the signature verification code I wrote a decade ago for youtube-dl, actually. ↩︎