Monday, November 28, 2016

Verifiable Random Functions

Pseudorandom functions (PRFs) are a central concept in modern cryptography. A PRF is a deterministic keyed primitive with the guarantee that a computationally bounded adversary, given access to the PRF's outputs at chosen points, cannot distinguish the PRF from a truly random function with the same domain and range. In the well-known candidates, the pseudorandomness property follows from various computational hardness assumptions. The first number-theoretic pseudorandom function was proposed in the seminal work of Goldreich, Goldwasser and Micali [1]. Since then, PRFs have found applications in the construction of both symmetric and public-key primitives. Following their introduction, various number-theoretic constructions targeted better efficiency or stronger security guarantees. Recent developments include key-homomorphic PRFs as well as functional PRFs and their variants.
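
As a concrete illustration, the GGM construction [1] evaluates a PRF by walking a binary tree: at each level, one of the two halves of a length-doubling PRG output is selected according to the next input bit. The sketch below is a minimal toy version, instantiating the PRG heuristically with SHA-256 (an assumption made purely for illustration, not a provably secure instantiation):

```python
import hashlib

def ggm_prf(key: bytes, x: str) -> bytes:
    """Evaluate the GGM tree PRF on a bit string x.

    The length-doubling PRG G(s) = G_0(s) || G_1(s) is modeled here by
    G_b(s) = SHA-256(s || b) -- a heuristic stand-in for illustration.
    """
    s = key
    for bit in x:
        # Descend left or right in the GGM tree depending on the next bit.
        s = hashlib.sha256(s + (b"\x00" if bit == "0" else b"\x01")).digest()
    return s

k = b"secret-seed"
y = ggm_prf(k, "0110")
assert ggm_prf(k, "0110") == y        # deterministic: same key and input
assert ggm_prf(k, "0111") != y        # distinct inputs diverge in the tree
```

Note that evaluation is deterministic, as required of a PRF: all randomness resides in the key, which seeds the root of the tree.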


A related and more powerful concept is the notion of verifiable random functions (VRFs), proposed in 1999 by Micali, Rabin and Vadhan [2]. VRFs are in some sense comparable to their simpler counterparts (PRFs), but in addition to the output value, a VRF also produces a publicly verifiable proof $\pi$ (hence there is also a public verification key). The purpose of the proof $\pi$ is to allow efficient validation of the correctness of the computed output. The pseudorandomness property must hold exactly as for a PRF, with the notable difference that no proof is released for the challenge input during the security experiment. Since the introduction of VRFs, constructions achieving adaptive security, exponentially large input spaces, or security under standard assumptions have appeared. However, constructing a VRF that meets all of these constraints simultaneously proved to be a challenging open problem. Progress was finally made in the work of Hofheinz and Jager [3], who gave a construction meeting all the requirements. A major obstacle to achieving adaptive security under a static assumption was the lack of techniques for removing the "q-type assumptions" (assumptions whose "size" is parameterized by a value q rather than being fixed) from the security proofs of previous constructions.


An adaptive-secure VRF from standard assumptions

The scheme of Hofheinz and Jager has its roots in the VRF proposed by Lysyanskaya [4]. In Lysyanskaya's construction, an input point $x$ in the domain of the VRF, represented in binary as $x = (x_1,\dots, x_n)$, is mapped to the output encoding $y = g^{\prod_{i=1}^{n} a_{i, x_i}}$, which for brevity we will denote $[\prod_{i=1}^{n} a_{i, x_i}]$. The pseudorandomness proof requires a q-type assumption. To remove it, the technique proposed in the Hofheinz and Jager paper replaces the set of scalar exponents $\{a_{1,0}, \dots, a_{n,1}\}$ with corresponding matrix exponents; a pairing is also needed for verifiability. Therefore, a point $x = (x_1,\dots, x_n)$ in the domain of the VRF is mapped to a vector of group elements. Informally, the construction samples $\vec{u} \leftarrow \mathbb{Z}_p^{k}$ ($p$ prime) and a set of $2n$ square matrices in $\mathbb{Z}_p^{k \times k}$: $$ \left\{ \begin{array}{cccc} M_{1,0} & M_{2,0} & \dots & M_{n,0}\\ M_{1,1} & M_{2,1} & \dots & M_{n,1}\\ \end{array} \right\} $$ The secret key is set to the plain values $\{ \vec{u}, M_{1,0}, \dots, M_{n,1} \}$, while the verification key consists of the (element-wise) encodings of the entries forming the secret key. To evaluate at a point $x$, one computes $$VRF(sk, x = (x_1,\dots, x_n)) = \Bigg[ \vec{u}^t \cdot \Big(\prod_{i=1}^{n}M_{i,x_i} \Big) \Bigg].$$ The complete construction requires an extra step that post-processes the output of the chained matrix multiplications with a randomness extractor; we omit this detail. A vital observation is that the multi-dimensional form of the secret key makes it possible to discard the q-type assumption and replace it with a static one.
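
Ignoring the group encoding, the pairing, and the final randomness extractor, the core evaluation is just a chain of vector-matrix multiplications over $\mathbb{Z}_p$. A minimal sketch, with toy parameters chosen purely for illustration (a real instantiation uses a cryptographic-size prime and matrices hidden in the exponent):

```python
import random

p = 101       # toy prime; real constructions use a cryptographic-size prime
k, n = 3, 4   # matrix dimension k, input length n

def vec_mat(vt, M):
    """Row vector times k x k matrix, modulo p."""
    return [sum(vt[i] * M[i][j] for i in range(k)) % p for j in range(k)]

rng = random.Random(0)  # fixed seed so the toy example is reproducible
u = [rng.randrange(p) for _ in range(k)]
# The 2n matrices M_{i,b}, i in 1..n, b in {0,1}.
M = {(i, b): [[rng.randrange(p) for _ in range(k)] for _ in range(k)]
     for i in range(1, n + 1) for b in (0, 1)}

def vrf_core(x_bits):
    """u^t * prod_i M_{i, x_i} mod p -- the exponent of the encoded output."""
    v = u[:]
    for i, b in enumerate(x_bits, start=1):
        v = vec_mat(v, M[(i, b)])
    return v

assert vrf_core([0, 1, 1, 0]) == vrf_core([0, 1, 1, 0])  # deterministic
```

In the actual scheme only the encodings of $\vec{u}$ and the matrices are public, and the proof $\pi$ lets a verifier check the chain of multiplications step by step using the pairing.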




Proof intuition

The intuition for the proof can be summarized as follows:

  • during the adaptive pseudorandomness game, a property called "well-distributed outputs" ensures that every evaluation query except the one for the challenge outputs an encoded vector $[\vec{v}] = [\vec{u}^t \cdot (\prod_{i=1}^{n}M_{i,x_i})]$ lying in a special designated rowspace. This is depicted in the figure, where the right side presents the evaluation of the challenge input $x^*$, while the left side presents the evaluation at $x \ne x^*$.
  • to enforce well-distributed outputs, the matrices $M_{i,x_i}$ must have special forms; for simplicity, consider $x^* = (0, 1, \dots, 0)$ of Hamming weight 1 and the corresponding secret key: $$ \vec{u}^t , \left\{ \begin{array}{cccc} U_{1,0} & L_{2,0} & \dots & U_{n,0} \\ L_{1,1} & U_{2,1} & \dots & L_{n,1} \\ \end{array} \right\} $$ where each $L_i$ stands for a rank-$(k-1)$ matrix (lower rank), while each $U_i$ denotes a full-rank matrix mapping between RowSpace($L_{i-1}$) and RowSpace($L_i$). RowSpace($L_0$) is a randomly chosen subspace of dimension $k-1$, and $\vec{u} \not\in$ RowSpace($L_0$) with overwhelming probability. Also, notice that the full-rank matrices occur in the positions corresponding to $x^*$, in order to ensure well-distributed outputs.
  • finally, and perhaps most importantly, one must take into account that the distribution of matrices used to ensure well-distributed outputs must be indistinguishable from the distribution of uniformly sampled square matrices. A hybrid argument is required for this proof, with the transitions between games based on the $n$-Rank assumption (from the Matrix-DDH family of assumptions).
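
The rank argument behind well-distributed outputs can be illustrated with a toy computation over $GF(p)$: as soon as the chain contains one rank-deficient factor, the product (and hence the output vector) is confined to a proper subspace, whereas the all-full-rank product selected by $x^*$ is not. A minimal sketch with $k = 3$, $n = 2$, $x^* = (0,1)$ (it demonstrates only the rank drop, not the alignment of the rowspaces, which the actual construction also enforces):

```python
import random

p, k = 97, 3  # toy prime and matrix dimension, for illustration only

def rank_mod_p(M, p):
    """Rank of a matrix over GF(p), via Gaussian elimination."""
    M, r = [row[:] for row in M], 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] % p), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], -1, p)
        M[r] = [x * inv % p for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(M[i][j] - f * M[r][j]) % p for j in range(len(M[0]))]
        r += 1
    return r

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(k)) % p
             for j in range(k)] for i in range(k)]

rng = random.Random(1)

def full_rank_mat():
    while True:  # random matrices over GF(p) are full rank w.h.p.
        M = [[rng.randrange(p) for _ in range(k)] for _ in range(k)]
        if rank_mod_p(M, p) == k:
            return M

def low_rank_mat():
    # Rank k-1: last row is a random combination of the first k-1 rows.
    rows = [[rng.randrange(p) for _ in range(k)] for _ in range(k - 1)]
    cs = [rng.randrange(p) for _ in range(k - 1)]
    last = [sum(c * rows[i][j] for i, c in enumerate(cs)) % p for j in range(k)]
    M = rows + [last]
    return M if rank_mod_p(M, p) == k - 1 else low_rank_mat()

# x* = (0, 1): full-rank matrices sit at positions (1,0) and (2,1).
U10, U21 = full_rank_mat(), full_rank_mat()
L11, L20 = low_rank_mat(), low_rank_mat()

prod_challenge = matmul(U10, U21)  # chain for x = x* = (0, 1)
prod_other = matmul(L11, U21)      # chain for x = (1, 1) != x*

assert rank_mod_p(prod_challenge, p) == k      # output escapes any fixed subspace
assert rank_mod_p(prod_other, p) <= k - 1      # output trapped in a proper rowspace
```

Since rank(AB) ≤ min(rank A, rank B), any non-challenge input, which selects at least one low-rank factor, produces a vector inside a proper subspace; the construction additionally aligns the rowspaces so that all these vectors land in the *same* designated subspace.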

References

1. Goldreich, O., Goldwasser, S., & Micali, S. (1986). How to construct random functions. Journal of the ACM (JACM), 33(4), 792-807.

2. Micali, S., Rabin, M., & Vadhan, S. (1999). Verifiable random functions. In 40th Annual Symposium on Foundations of Computer Science (FOCS 1999) (pp. 120-130). IEEE.

3. Hofheinz, D., & Jager, T. (2016, January). Verifiable random functions from standard assumptions. In Theory of Cryptography Conference (pp. 336-362). Springer Berlin Heidelberg.

4. Lysyanskaya, A. (2002, August). Unique signatures and verifiable random functions from the DH-DDH separation. In Annual International Cryptology Conference (pp. 597-612). Springer Berlin Heidelberg.

