Secure Randomness: From Zero to Verifiable Delay Functions, Part 2
We’d like to thank the following two people for helping with this blog post by answering our questions:

Benjamin Wesolowski, who laid a lot of the theoretical groundwork for verifiable delay functions.

Bram Cohen, Co-Founder and CEO of Chia Network (and also known for authoring the BitTorrent protocol). The Chia blockchain is the only major project to currently employ verifiable delay functions in practice.
Intro
This is the second post in a two-part series on different approaches to on-chain randomness.
For many different purposes, the blockchain space needs a trustless, general-purpose source of randomness. In the first post, we saw that almost all sources of general on-chain randomness that are currently in use have fundamental flaws (of varying severity). So how can we fix this?
In this post, we focus on employing Verifiable Delay Functions (VDFs) for on-chain randomness. We describe what VDFs are and why it’s not clear they even exist. We also discuss one of the most promising candidates for a VDF and examine what difficulties an on-chain implementation of a scheme using that VDF candidate would face.
Finally, we also spend some time examining a VDF implementation that is already being used in practice, namely in the consensus algorithm of the Chia blockchain.
Be warned: This post is a technical deep-dive into the theory and implementation of VDFs. It dissects concepts down to the nasty details that are inherent in cutting-edge research. However, we are convinced it’s worth going all the way.
WTF is a VDF?
Let’s begin by defining what a Verifiable Delay Function (VDF) is.
Informally, a VDF is a conjectured cryptographic primitive that allows anyone to quickly verify that a so-called prover has spent a certain amount of time computing something from an input they were given.
Slightly more formally, a VDF is a function whose evaluation requires a large number of sequential steps, but whose result can be verified in far fewer steps.
Very formally, using a (slightly modified) definition from this paper, a VDF scheme for a given security parameter $λ$ and a delay parameter $t$ consists of an input domain and an output range as well as two algorithms $Eval$ and $Verify$. The algorithm $Eval$ is hard to compute; it takes an input $x$ from the input domain and produces an output $y$ from the output range along with a proof $π$. The algorithm $Verify$, on the other hand, is easy to compute; it takes $x,y,π$ and uses the proof $π$ to verify that $y$ is the output corresponding to $x$. It outputs ‘true’ if the verification is successful, and ‘false’ if not.
A VDF scheme must satisfy the following properties. It should be:

sequential (basically: $Eval$ is hard to compute): Anyone can compute $y=Eval(x)$ in $t$ sequential steps, but not even an adversary with a polynomial number of processors available can distinguish $y$ from an element picked randomly from the output range in significantly fewer steps. (This is a paranoid cryptographer’s way of saying “Nobody can compute the output of $Eval$ much faster than with the obvious approach”.)

efficiently verifiable (basically: $Verify$ is very easy to compute): $Verify$ runs significantly faster than $Eval$.

unique (basically: Fooling the verifier is computationally hard): For any input $x$, it is computationally hard to find values $y$ and $π$ such that $Verify(x,y,π)=’true’$ but $y≠Eval(x)$.
A VDF scheme should maintain these properties even if an attacker is granted a polynomial amount of time for precomputation.
Now that’s quite the definition. What’s the main takeaway from this section? For a VDF, when we give someone an input, they will need some time to compute the corresponding function output — but we will be able to verify that it’s the correct output very quickly.
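To see why efficient verifiability is the interesting part, consider a function that is sequential but is not a VDF: an iterated hash. The following minimal Python sketch (purely illustrative, not from any real implementation) shows that the only way to check an iterated-hash result is to redo all the work:

```python
import hashlib

def hash_chain(x: bytes, t: int) -> bytes:
    """Iterate SHA-256 t times. This is inherently sequential: each
    step needs the previous step's output."""
    y = x
    for _ in range(t):
        y = hashlib.sha256(y).digest()
    return y

def verify_naive(x: bytes, y: bytes, t: int) -> bool:
    """The only known way to check the result is to redo all t steps,
    so verification is as slow as evaluation -- which is exactly what
    disqualifies an iterated hash from being a VDF."""
    return hash_chain(x, t) == y
```

A VDF keeps the sequential evaluation but replaces this naive recomputation with a short proof that can be checked quickly.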
There are candidates for VDF schemes which have not been broken yet — we’ll see one of them in detail later — but much like it’s unknown whether strong public key cryptography exists in general, it’s not known if a VDF scheme satisfying all properties defined above exists. The difference is that there has been significant research into breaking the most common public key cryptography schemes, while VDFs are a relatively new field.
Supplementing Our Commit-and-Reveal Scheme With a VDF
Now, assuming a VDF exists, let’s see how we could employ it.
In the last post, we saw a commit-and-reveal scheme. It worked as follows: In phase one, all $k$ participants publish a commitment to their fixed-length random binary input. In phase two, everyone reveals their input and the contract verifies that it matches their committed hash. The result is the XOR of all inputs.
We saw that this scheme is almost perfect and guarantees true randomness as long as at least one participant is honest and inputs truly random data.
Unfortunately, it had the fundamental flaw that participants can refuse to reveal their inputs and, by doing so, influence the result. We’re now in a position to fix this flaw by employing VDFs.
The Commit-and-Reveal Scheme Augmented with a VDF
The scheme we construct has three phases. For simplicity, we denote the $Eval$ function as $F$.
(Note that the following scheme is an example we cooked up ourselves. It has not been subjected to due scrutiny, so be very careful when employing this in practice. We’ll see a different scheme that is currently being used in practice later.)

In the first phase, each of the $k$ participants chooses a random VDF starting value $r_{i}$ for themselves and computes $F(r_{i})$. They then take their random input $a_{i}$ and encrypt it using a key $K(F(r_{i}))$ derived from $F(r_{i})$, obtaining $Enc_{K(F(r_{i}))}(a_{i})$.

In the second phase, each participant $i$ commits $r_{i}$ and $Enc_{K(F(r_{i}))}(a_{i})$ to the on-chain contract. These values are now public.

In the third phase, anyone can input $(i,F(r_{i}),π_{i})$ to the on-chain contract. This essentially says “I know the VDF result for participant $i$: it’s $F(r_{i})$, and the proof for that is $π_{i}$“. The contract can then verify that $Verify(r_{i},F(r_{i}),π_{i})=’true’$ and, if so, calculate $Dec_{K(F(r_{i}))}(Enc_{K(F(r_{i}))}(a_{i}))=a_{i}$.
As with the old commit-and-reveal scheme, the result is $r=a_{1}⊕…⊕a_{k}$.
Some notes on each of the phases:
Phase 1: The length of this phase can be extended at will, and indeed the time period must be long enough such that each participant can compute $F(r_{i})$ along with the corresponding proof $π_{i}$. Note, however, that the participants can also precompute these values.
Phase 2: As soon as participant $i$ commits their values $r_{i}$ and $Enc_{K(F(r_{i}))}(a_{i})$ on-chain, anyone can start computing $F(r_{i})$ to then compute $Dec_{K(F(r_{i}))}(Enc_{K(F(r_{i}))}(a_{i}))=a_{i}$. Hence, the length of the second phase must be chosen such that this computation is guaranteed not to complete within this phase for any realistic adversary.
Phase 3: Ideally, participant $i$ submits their triple of values themselves. However, given the public value $r_{i}$, anyone can compute $F(r_{i})$ and $π_{i}$ and thus “force” the reveal of participant $i$‘s input after some time has passed. This way, participants refusing to reveal their input are no longer a problem.
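To make the three phases concrete, here is a toy end-to-end sketch in Python. It is strictly illustrative and of our own making: the “VDF” is plain repeated squaring modulo a small composite (no proofs, no class group), the key derivation is an ad-hoc SHA-256 over the VDF output, the encryption is a one-time-pad XOR, and every parameter is far too small for real use.

```python
import hashlib
from functools import reduce

# Toy parameters -- all names and values here are made up for illustration.
N = 1000000007 * 998244353  # modulus whose factorization we pretend to forget
T = 1 << 12                 # delay parameter: number of sequential squarings

def F(r: int) -> int:
    """Eval: T sequential squarings mod N (the deliberately slow part)."""
    y = r % N
    for _ in range(T):
        y = y * y % N
    return y

def key(y: int) -> bytes:
    """K(F(r)): derive a 32-byte one-time-pad key from the VDF output."""
    return hashlib.sha256(str(y).encode()).digest()

def xor(u: bytes, v: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(u, v))

# Phases 1+2: each participant i picks r_i and a_i, then commits
# (r_i, Enc_{K(F(r_i))}(a_i)). Inputs are padded to a fixed 32 bytes.
inputs = [b"input-from-participant-%d" % i for i in range(3)]
starts = [1234567 + i for i in range(3)]
commits = [(r, xor(key(F(r)), a.ljust(32, b"\x00")))
           for r, a in zip(starts, inputs)]

# Phase 3: anyone can recompute F(r_i) and decrypt, so nobody can
# withhold their input once it has been committed.
revealed = [xor(key(F(r)), ct) for r, ct in commits]
result = reduce(xor, revealed)  # r = a_1 xor ... xor a_k (padded)
```

Note that in this sketch anyone who sees a commitment can decrypt it after doing the $T$ squarings themselves, which is exactly the forced-reveal property described above.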
Analyzing the Scheme
Under certain security assumptions (that we will discuss later in this post), no matter what triple $(i,Y,π_{i})$ anyone inputs for participant $i$ in phase three, the verifier ensures (up to negligible probability) that $Y=F(r_{i})$. Hence the result of the decryption is guaranteed to be $a_{i}$.
Note that the participant does not need to have committed the correct $r_{i}$ and the correct encryption of $a_{i}$. But if they do not, the decryption done by the contract will either fail — in which case the contract can simply ignore that participant — or it will decrypt to nonsense instead of $a_{i}$, which is fine.
Indeed, note that at the end of phase two, when all participants have committed their inputs, the output should be entirely deterministic. As long as at least one person bothers to reveal any unrevealed inputs, nobody has any control over the set of inputs used to compute the result. Hence, as we saw in the discussion of the commit-and-reveal scheme in the last post, if there exists some participant $i′$ that inputs a truly random input independent of the other inputs, the output is truly random. Assuming that participant $i′$ neither collaborates with anyone nor prematurely leaks their input or $r_{i′}$ in any way, this independence is guaranteed by the fact that no one can compute $F(r_{i′})$, and hence decrypt $Enc_{K(F(r_{i′}))}(a_{i′})$, until some time after phase two.
Other Schemes
It should be noted that there are other protocols based on VDFs. The one we described above is simply the natural extension of the commit-and-reveal scheme we saw in the last post. Later in this post, we’ll discuss a completely different architecture, currently used by the Chia blockchain, that requires significantly fewer VDF evaluations.
A Promising VDF Candidate: The Wesolowski Construction with Class Groups
Now that we’ve seen how to employ Verifiable Delay Functions to construct what seems like the perfect scheme for on-chain randomness, let’s look at a promising candidate for such a VDF. Keep in mind that most research into VDFs is very recent, so this construction has likely not been subjected to the amount of scrutiny one might wish.
The General Construction for Groups of Unknown Order
The following is a construction proposed in a paper by Benjamin Wesolowski (see also this paper for a more concise overview of the construction).
The scheme uses a group $G$ of unknown finite order as both its input and output domain. We will see later why the fact that the group has an unknown order is important. The scheme is parameterized by the security parameter $λ$ and the delay parameter $t$.
The $Eval$ algorithm, which gets a group element $g$ as input, consists of simply computing $y=g^{2^{t}}$ via repeated squaring and selecting a seeded random prime $ℓ$ from the first $2^{2λ}$ prime numbers. The randomness here should only depend on $g$ and $y$ (this makes the protocol non-interactive, which is necessary for our VDF-based randomness scheme; in fact, this is an example of applying the Fiat-Shamir heuristic to the interactive version of the protocol). $Eval$ outputs $y$ and the proof $π$ defined as $π:=g^{⌊2^{t}/ℓ⌋}$.
The $Verify$ algorithm gets $g,y,π$ as input and must quickly verify that $y=g^{2^{t}}$. To do this, we compute $r=2^{t} \bmod ℓ$ and accept if and only if $π^{ℓ}g^{r}=y$.
Note that the verifier obviously accepts outputs that the evaluator algorithm produces, since we can rewrite $⌊2^{t}/ℓ⌋=(2^{t}−r)/ℓ$ and hence $π^{ℓ}g^{r}=g^{2^{t}−r}g^{r}=g^{2^{t}}=y$. Further, the verify algorithm is easy to compute as long as group operations run in a reasonable time: We can compute $r$ via fast modular exponentiation with $O(\log t)$ multiplications in the ring $ℤ/ℓℤ$, after which the computation of $π^{ℓ}$ and $g^{r}$ are just two fast exponentiations that can be done in $O(\log ℓ)$ group operations. Usually, $\log ℓ ≪ t$.
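The whole $Eval$/$Verify$ round-trip fits in a few lines. The Python sketch below is a toy instantiation of our own: it stands in a tiny RSA-style group $(ℤ/N)^{*}$ rather than a class group, derives $ℓ$ from a hash of $(g,y)$ as a stand-in for the Fiat-Shamir step, and uses laughably small parameters. It illustrates the algebra only, not a secure instantiation.

```python
import hashlib

# Toy parameters: a tiny RSA-style modulus stands in for a class group.
N = 100003 * 100019
T = 1 << 10  # delay parameter t

def is_prime(n: int) -> bool:
    """Trial division; fine for the small challenge primes used here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def hash_prime(g: int, y: int) -> int:
    """Fiat-Shamir stand-in: derive the challenge prime l from (g, y)."""
    l = int.from_bytes(hashlib.sha256(b"%d|%d" % (g, y)).digest()[:4], "big")
    while not is_prime(l):
        l += 1
    return l

def eval_vdf(g: int):
    y = g % N
    for _ in range(T):             # t sequential squarings: y = g^(2^t)
        y = y * y % N
    l = hash_prime(g, y)
    pi = pow(g, (1 << T) // l, N)  # proof: pi = g^floor(2^t / l)
    return y, pi

def verify(g: int, y: int, pi: int) -> bool:
    l = hash_prime(g, y)
    r = pow(2, T, l)               # r = 2^t mod l -- cheap
    # pi^l * g^r = g^(l*floor(2^t/l) + r) = g^(2^t) = y
    return pow(pi, l, N) * pow(g, r, N) % N == y
```

The verifier performs only three small exponentiations, while the evaluator must grind through all $T$ squarings; that asymmetry is the entire point of the construction.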
The above shows that this scheme satisfies the efficient verifiability property. For it to be a full VDF scheme, we would also have to show that it has the sequentiality and uniqueness property. However, both of these are only conjectured to hold.
For sequentiality (”$Eval$ is hard to compute”), it certainly seems like the fastest way to compute $y=g^{2^{t}}$ is via repeated squaring. This is actually also where the assumption that the group has an unknown order comes into play: A well-known and easy corollary to Lagrange’s Theorem says that for any element $g∈G$, we have $g^{∣G∣}=1$, where $1$ is the multiplicative identity in $G$. Hence, $y=g^{2^{t}}=g^{2^{t} \bmod ∣G∣}$, so knowing the order yields a faster way to compute $y$. In other words, if one were to use a group with a known order $∣G∣$ for the Wesolowski construction, anyone could compute $Eval$ relatively quickly, making the scheme useless.
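This shortcut is easy to demonstrate in a group whose order is known, e.g. $(ℤ/p)^{*}$ for a prime $p$, where the order is $p−1$ (the values below are arbitrary illustrations):

```python
p = 1000000007     # prime modulus, so the group order p - 1 is known
g, t = 7, 100000

y_slow = g
for _ in range(t):  # the intended way: t sequential squarings
    y_slow = y_slow * y_slow % p

# The shortcut: reduce the huge exponent 2^t modulo the known group
# order, then do a single cheap exponentiation.
y_fast = pow(g, pow(2, t, p - 1), p)
assert y_slow == y_fast
```

With the order known, the attacker replaces $t$ sequential squarings with two fast exponentiations, destroying the delay.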
More formally, one can define a time-lock game whose hardness is sufficient for sequentiality to hold for an implementation in a given group. For a more formal discussion of this, see section 5, Assumption 2 of Wesolowski’s original paper.
For uniqueness (“Fooling the verifier is computationally hard”), let’s assume that the attacker already has the value $y$ with which they want to fool the verifier, and only needs the corresponding $π$ to go with it. Let’s define the root-finding game as a two-player game where an attacker is given a group of unknown order, chooses an element $u∈G$ with $u≠1$, and is then given a random prime $ℓ$ from the list of the first $2^{2λ}$ primes. They must then output a group element $v$ and win the game if and only if $v^{ℓ}=u$. In other words, the game tests whether the attacker can compute a random prime root of a group element they chose themselves.
Wesolowski gives a nice informal intuition in his paper for why fooling the verifier comes down to winning the root-finding game. To do this, he analyzes the interactive version of the protocol, where $ℓ$ is chosen at random by the verifier instead of being derived from $g$ and $y$. We slightly adapt the argument here:
We want the verifier to accept, meaning we want $π^{ℓ}g^{r}=y$. We have $r=2^{t} \bmod ℓ=2^{t}−ℓ⌊2^{t}/ℓ⌋$, hence we can rewrite this as $π^{ℓ}g^{2^{t}−ℓ⌊2^{t}/ℓ⌋}=y$, or alternatively $yg^{−2^{t}}=(πg^{−⌊2^{t}/ℓ⌋})^{ℓ}$. The verifier accepts if and only if that last equation holds. We denote its left-hand side by $α=yg^{−2^{t}}$.
For a malicious prover to fool the verifier with a fake proof $π$, they must hence be able to compute a $π$ that fulfills the above equation, i.e., they must be able to compute $α^{1/ℓ}g^{⌊2^{t}/ℓ⌋}$. The hard part here is finding $ℓ$th roots of $α$. Since $ℓ$ is randomly chosen, this is an instance of the root-finding problem.
Of course, the above is not a formal proof that fooling the verifier requires winning the root-finding game. The real proof first formally argues this fact for the interactive version of the protocol in the random oracle model, after which one can apply a general argument that holds for any interactive protocol made non-interactive by the Fiat-Shamir heuristic. If you’re interested in the details of the formal proof, see section 6 of Wesolowski’s original paper.
The takeaway from this section is that employing this Wesolowski construction only makes sense if the time-lock game and the root-finding game are conjectured to be hard in the group that we use as a basis for our scheme.
The Class Group of an Imaginary Quadratic Field
We’ve now seen a protocol that relies on a group of unknown order in which computing repeated squares must be done in a sequential fashion, and in which finding arbitrary roots is hard. Ideally, we’d also like this group to have fairly efficient group operations. So are there any good candidates for such a group?
Wesolowski, in his original paper, mentions two such types of groups: RSA groups (i.e., invertible integers modulo a product of two large primes under multiplication) and class groups of an imaginary quadratic field.
The first type is suited for our purposes only in a very limited way, because we require that nobody can reconstruct the order of the group, even if many parties collaborate. This is hard to achieve, because the two primes defining the RSA group have to come from somewhere. Wesolowski mentions that there are approaches to generating large integers that are divisible by two unknown primes with large probability, but those have significant drawbacks as well. There are also ways to generate two unknown primes via secure multi-party computation, but that is very complicated to implement, let alone implement on-chain in a verifiable manner. Hence, we will focus on the second type of group, the class group.
This is where some abstract algebra comes into play. Feel free to skip some of the more technical parts of the following paragraphs; they are not essential for the randomness protocol itself.
The class group we will use is constructed as follows. Take a negative prime $d$ and define the quadratic extension $K=ℚ(\sqrt{d})$. Then, take its ring of integers $O_{K}$ and consider its fractional ideals $J_{K}$ along with its fractional principal ideals $P_{K}$. The class group we use is then the quotient group $J_{K}/P_{K}$.
Now of course, that’s a very convoluted definition, and it’s certainly not something we’d want to represent computationally as it stands. Fortunately, this group is isomorphic to a group consisting of equivalence classes of binary quadratic forms, along with a well-known and efficiently computable group operation. All quadratic forms across all equivalence classes of that group have the same discriminant, namely the negative prime $d$ we chose above.
In fact, the group is identified by its negative prime discriminant $d$. Let us construct the group corresponding to some given $d$. We define two binary quadratic forms $f(x,y)=ax^{2}+bxy+cy^{2}$ and $g(x,y)=αx^{2}+βxy+γy^{2}$ to be equivalent if $f(ℤ,ℤ)=g(ℤ,ℤ)$, i.e., if their ranges over integer inputs are the same. To construct the elements of the group corresponding to discriminant $d$, we take the set of all binary quadratic forms with integer coefficients whose discriminant is $d$ and partition it into equivalence classes according to the equivalence we just defined.
The group operation on two representatives is defined as follows. Let two representatives $f_{1}(x,y)=ax^{2}+bxy+cy^{2}$ and $f_{2}(x,y)=αx^{2}+βxy+γy^{2}$ of two equivalence classes be given. We write them as $f_{1}=(a,b,c)$ and $f_{2}=(α,β,γ)$. Then, the group operation (commonly known as form composition) is defined as follows:
1. Set $g=\frac{1}{2}(b+β)$, $h=−\frac{1}{2}(b−β)$, $w=\gcd(a,α,g)$.
2. Set $j=w$, $s=a/w$, $t=α/w$, $u=g/w$.
3. Solve $(tu)k≡hu+sc \pmod{st}$ for $k$, obtaining the solution set $k=μ+νℤ$. This is a standard problem and can be done efficiently (see e.g. section 7.4 of the Chia paper on class groups).
4. Solve $(tν)n≡h−tμ \pmod{s}$ for $n$, obtaining the solution set $n=λ+σℤ$.
5. Set $k=μ+νλ$, $l=(kt−h)/s$, $m=(tuk−hu−cs)/(st)$.
6. Return $f_{3}=(st,ju−(kt+ls),kl−jm)$.
It is not hard to prove that if both inputs have the same discriminant, this operation preserves it. The proofs that this forms a group, or indeed that it is isomorphic to the fractional ideal definition mentioned above, are rather more involved. See e.g. this blog post for a concise introduction and argument.
In practice, there are several crucial methods to make group operations even faster. One method involves always operating on a specific group representative $f=(a,b,c)$ that is in reduced form, in this case meaning that $−a<b≤a$, and furthermore that either $a<c$ or that $a=c$ and $b≥0$. It turns out that each equivalence class only has one such reduced form, and that it is very easy to operate on it because its coefficients are reasonably small. Another method comes from the fact that the composition of forms with a prime discriminant is much simpler if both inputs are the same, i.e., if you’re squaring a form. Since this is the main operation in our VDF construction, it greatly reduces its running time.
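The reduction procedure itself is short. The following Python sketch implements the textbook normalize-and-reduce loop for positive-definite forms (we omit the composition algorithm above); it follows the standard formulation rather than any particular production implementation:

```python
def discriminant(f):
    a, b, c = f
    return b * b - 4 * a * c

def reduce_form(f):
    """Return the unique reduced representative of the class of
    f = (a, b, c), a positive-definite form of negative discriminant."""
    a, b, c = f
    # Normalize: bring b into the range -a < b <= a.
    r = (a - b) // (2 * a)
    b, c = b + 2 * r * a, a * r * r + b * r + c
    # Reduce: repeat until a < c, or a == c with b >= 0. Each step
    # keeps the form normalized and strictly shrinks a.
    while a > c or (a == c and b < 0):
        s = (c + b) // (2 * c)
        a, b, c = c, -b + 2 * s * c, c * s * s - b * s + a
    return a, b, c
```

For example, with discriminant $−23$, both $(6,1,1)$ and the principal form $(1,1,6)$ lie in the same class, and `reduce_form` maps the former to the latter while preserving the discriminant.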
This concludes the description of the class group. Crucially, for a randomly chosen negative prime discriminant $d$, the order of its corresponding class group is unknown. In fact, the fastest known algorithm for computing the order of that group, the Hafner-McCurley algorithm, runs in expected time $\exp((\frac{3}{\sqrt{8}}+o(1))(\log ∣d∣)^{1/2}(\log\log ∣d∣)^{1/2})$ (see paper here), where $\frac{3}{\sqrt{8}}≈1.06$. That’s pretty slow! And this running time even assumes the generalized Riemann hypothesis. For comparison, the currently theoretically fastest known general-purpose algorithm for factoring a number $N$ (one of the most researched problems in existence, since a fast algorithm would break RSA), the general number field sieve (GNFS), only takes time $\exp((\sqrt[3]{64/9}+o(1))(\log N)^{1/3}(\log\log N)^{2/3})$ (see Wikipedia), where $\sqrt[3]{64/9}≈1.92$. Note the exponents.
All of this makes this type of class group the best currently known contender for a group to use in our VDF scheme.
Other VDF constructions
We’ve now seen the Wesolowski construction for VDFs, as well as a candidate for a group of unknown order with which it can be used. It should be mentioned that this is not the only viable construction for VDFs available: Pietrzak describes a similar VDF based on repeated squaring in a group of unknown order, differing from the Wesolowski construction by how verification is performed. Two more constructions are described by De Feo, Masson, Petit and Sanso, using supersingular isogenies and pairings. However, these latter constructions require a trusted setup, much like the instantiation of the Wesolowski protocol using an RSA group. Again, this is something we would ideally like to avoid.
It is also an open problem to find a simple VDF that is secure against attackers with access to a quantum computer. Both the Wesolowski and Pietrzak scheme can be broken by computing the order of the underlying group via Shor’s algorithm.
Considerations for Practical Implementation: Towards an On-Chain Contract
We now have a promising candidate for a VDF. Great! Irrespective of any theoretical concerns over hardness assumptions, what’s stopping us from just running with this and implementing our scheme as an on-chain contract?
Not much. In fact, the Chia blockchain already uses VDFs (notably as part of their consensus algorithm and not as an on-chain contract). They use a variant of the Wesolowski protocol that splits the generation of $y=g^{2^{t}}$ into some number $1≤n≤64$ of parts, allowing the corresponding proofs $π$ to be computed in parallel; this variant is described in section 4.3 of Wesolowski’s paper.
Chia’s protocol is very different from the protocol above with which we motivated VDFs. It does not directly take participant input, and consequently does not need one VDF execution per participant. Instead, they use three interacting VDF chains. Simplified, their method boils down to the following: They generate a continuous stream of challenges, each of which is meant to be fully random. To generate a new challenge $c$, they use $c′$, the last VDF challenge that was generated, and the blockhash $h$ of the Chia block that was generated directly after the last VDF challenge was published. The new challenge $c$ is then generated by hashing $F(c′)$ together with $F(h)$. The idea is that by the time an attacker has computed $F(h)$ and $F(c′)$, they can no longer influence the blockhash $h$, so it’s hard for them to influence the challenge in any directed way.
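Stripped of the three-chain machinery, the challenge derivation looks roughly like the sketch below. It is our own simplification: the slow VDF evaluation $F$ is faked with a plain hash (the real chain runs class-group squarings), and all function names are illustrative.

```python
import hashlib

def F(x: bytes) -> bytes:
    """Stand-in for the slow VDF evaluation. The real chain performs
    many sequential class-group squarings; we just hash for brevity."""
    return hashlib.sha256(b"vdf|" + x).digest()

def next_challenge(c_prev: bytes, blockhash: bytes) -> bytes:
    """New challenge c = H(F(c') || F(h)). By the time F(h) is known,
    the block behind h is final and can no longer be ground on."""
    return hashlib.sha256(F(c_prev) + F(blockhash)).digest()
```

The delay of $F$ is what separates the moment the blockhash is fixed from the moment the challenge becomes predictable.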
Ideally, we want to implement either Chia’s protocol or ours on-chain. What obstacles are there to making that happen?
Generating Random Numbers
One of the main concerns regarding implementing the Wesolowski scheme is that it relies on large (pseudo)random^{1} primes: We need a random prime discriminant that defines our group $G$, and in every execution of the protocol, we need a seeded random prime $ℓ$ during verification. In the original paper, $ℓ$ comes from $Primes(2λ)$, which is the set of the first $2^{2λ}$ primes, where $λ$ is the security parameter. Chia itself uses 264-bit primes as $ℓ$ (see code here).
Generating a prime takes a nonnegligible amount of computation time. The best method currently known — and indeed what almost all cryptographic applications essentially do to obtain random primes — is to generate and test the primality of many candidates.
Let’s take the `num_bigint_dig` crate, which implements pure-Rust bigints along with cryptographic functions, including random prime generation. For each candidate, it runs a number of cheap pre-tests, like testing divisibility by small primes, followed by 100 rounds of Miller-Rabin, and finally the Lucas primality test. If a candidate fails at any point in these tests, the search immediately continues with the next candidate. If it passes all tests, the search is terminated and the candidate is output as the result.
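The generate-and-test loop looks roughly like the following Python sketch. It does not reproduce `num_bigint_dig` exactly (we use a small fixed divisibility sieve and Miller-Rabin only, omitting the final Lucas test); it merely illustrates the structure of the search.

```python
import random

SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]

def miller_rabin(n: int, rounds: int = 40) -> bool:
    """Probabilistic primality test with random bases."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # witness found: n is definitely composite
    return True

def random_prime(bits: int) -> int:
    """Generate-and-test: draw odd candidates with the top bit set,
    run cheap divisibility pre-tests, then Miller-Rabin."""
    while True:
        cand = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if any(cand % p == 0 for p in SMALL_PRIMES):
            continue  # cheap pre-test caught it; try the next candidate
        if miller_rabin(cand):
            return cand
```

Every `%` in the pre-tests and every `pow` in Miller-Rabin is a bigint operation, which is precisely what makes this loop so expensive on-chain.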
In practice, using the release flag, this needs something like 100–200 ms single-core for generating a 1024-bit prime on an average laptop. That’s a lot!
The OpenSSL prime generation (using `openssl prime -generate -bits 1024`) does a bit better, completing in about 30–60 ms, but that’s still not great for our purposes. It also does heavy low-level optimization, which isn’t accounted for in most gas cost / computation limit models, and that matters if we want to implement this on-chain.
We did some very quick-and-dirty tests of how well prime generation using the `num_bigint_dig` crate works on Solana, which currently has a transaction-wide limit of 1.4 million compute units (which are configured to approximate the computation time needed to execute the on-chain contract). The problem was that remainder operations, which are essential for any primality test, were extremely inefficient: up to 11,000 compute units simply for computing the remainder of a 1024-bit number when dividing by 5. This makes it necessary to split a full run of an on-chain primality test over several hundred or even thousands of transactions. And keep in mind that you typically need to check hundreds of candidates, though most of those tests will usually abort much sooner.
Even though this can almost certainly be optimized significantly, it would be surprising to see the entire prime generation process complete in only a couple of transactions, or even a few hundred, in the near future.
Choice of Discriminant
Apart from prime generation, verification also involves calculating $π^{ℓ}$ and $g^{r}$ via fast exponentiation, hence requiring squaring and multiplying group elements. Both of these operations involve many costly BigInt operations, and the size of these BigInts depends on the size of the discriminant.
So in practice, discriminants should be chosen small enough to make verification relatively quick and cheap, but large enough to make breaking the discriminant infeasible in practice. So what’s the smallest discriminant we can get away with? Research on this is very sparse, but there is one data point of interest.
One way an adversary could beat the time-lock game and root-finding game is by finding a significantly faster way to compute powers of a group element $g∈G$. The obvious way to do this, as we’ve already mentioned above, is to know the order $∣G∣$ of $G$ and then simply compute $g^{n \bmod ∣G∣}$. In fact, just knowing the order of any group element $g≠1$, i.e., finding a group element $g$ other than the identity along with its order $p$ (the smallest number such that $g^{p}=1$), might already make the group unsafe, since $p$ necessarily divides $∣G∣$.
So how long does it take to compute the order of a group element, i.e., to “break” its prime discriminant? Fortunately, we can again look to Chia: they ran a discriminant-breaking contest as part of the first iteration of their VDF challenge. The winning entrant was able to break a 240-bit discriminant in slightly less than 10 days on unspecified consumer hardware. However, note that all they did was run `parigp`, which includes a fast implementation of the Buchmann-McCurley algorithm that computes an algebraic representation of the class group from which its order is easy to read. It might be possible to improve on this significantly.
That being said, Bram Cohen pointed us to a paper by Belabas, Kleinjung, Sanso and Wesolowski that showed how to compute the order of a specific element in any group with a Mersenne prime as its discriminant, with some pointers for how this technique might be extended to other types of primes. This implies that such discriminants should never be used for a run of Wesolowski’s VDF.
He also confirmed that Chia currently uses 1024bit discriminants. This strikes a balance between ease of prime generation and safety of the protocol. Note that in their protocol, the discriminant is rotated every 10 minutes.
Faster Squaring (Code vs. Hardware)
Research into accelerating iterative squaring, i.e., accelerating the computation of $F(r_{i})$, is still sparse. Of course, any practical implementation will need to know an upper bound for this speed to choose the length of phase two. If there exists an implementation that is quick enough to compute $F(r_{i})$ for all $i$ before phase two ends, the last participant could completely control the output using the exploit method we discussed for the distributed XOR scheme from the first post.
Let’s start with what’s known for general-purpose hardware. Fortunately, Chia ran a VDF competition in 2018 and 2019, asking for fast implementations of squaring in class groups with 2048-bit discriminants. This resulted in a significant improvement in the computation time needed for squaring, from ~170 μs per squaring for the reference implementation down to ~40 μs per squaring, measured on an i5-8400 @ 2.8GHz. A second round of the competition led to the development of further optimizations (note that the results are not directly comparable, since the hardware and parameters were different).
Several optimizations were employed in the winning entries, such as an implementation of the NUDUPL algorithm (described in this paper), which reduces the size of operands during squaring operations, and a very neat trick discovered by Akashnil Dutta to speed up the reduce operation that uses u64 approximations of the transformation matrix entries, avoiding costly BigInt computations. Doubtless further optimizations are still possible.
A paper from 2020 even evaluates the speedup of using ASICs for squaring forms with 2048-bit discriminants. The authors achieve ~6.3 μs per squaring on TSMC 28nm CMOS technology at a frequency of 500MHz. Their software implementation, tested on an i7-6850K @ 3.6GHz, requires roughly double the time per squaring, still clocking in at an impressive ~12.8 μs.
Chia itself only uses 1024-bit discriminants for its VDF implementation. The fastest implementation they have observed on their network runs at 360k iterations per second, i.e., roughly 2.7 μs per iteration.
In any case, the length of phase two should always be chosen such that there is an ample buffer for implementations faster than any implementation known publicly — especially since research into ASIC implementations is still so sparse. The faster the implementations, the larger the parameter $t$ will have to be to keep the length of phase two constant. As a consequence, it might become necessary for all participants to own and run specialpurpose hardware to keep up with the current speed requirements, potentially making participation costly.
Incentivization for Participants
One matter that should be mentioned is what the incentives should be for participants to submit inputs. If it becomes necessary to employ specialpurpose hardware — which is definitely plausible seeing as we’ve already seen a 2x speedup of ASIC vs software in the paper mentioned above, a value that will presumably increase as more research in this direction is initiated — incentives would need to offset the costs of purchasing and running that hardware.
In the end, VDF-based solutions might be restricted in use to applications with a large amount of money at stake for each generated random string. Applications with smaller amounts at stake might not warrant the use of VDFs, relying instead on the normal commit-and-reveal scheme, VRF-based schemes or staking-based schemes, and then simply trusting that all participants reveal their input.
Conclusion
In this post, we’ve seen what we believe is the only known solution for a general-purpose, trustless scheme for generating randomness on-chain. However, we’ve also seen that it relies on cryptographic assumptions that have received comparatively little attention — specifically, the hardness of the root-finding game in the class groups we explored.
We also briefly explored what it would take to implement a VDF in an on-chain contract, and found that generating random primes on-chain is a notable obstacle. Nonetheless, as we saw, the Chia blockchain already uses this construction in its consensus algorithm.
Generally available on-chain randomness remains a major problem in the blockchain space. More research into the theory of VDFs, fast implementations of form squaring and verification algorithms, as well as possible alternative protocols, is desperately needed.
This concludes our two-part deep-dive into on-chain randomness. Make sure to check our blog regularly, as we will be releasing more deep-dive posts about Solana bugs, topics related to smart contract audits, and interesting ideas in the blockchain space.
Footnotes

For brevity and readability, I’ll use random and pseudorandom interchangeably in this post. ↩