# Alessandro Chiesa

#### alexch [at] berkeley [dot] edu

I am an assistant professor in the EECS Department at UC Berkeley.
I am affiliated with the Theory, Cryptography, and Security research groups.
I am an author of libsnark, a C++ library for zkSNARKs, which are zero knowledge proofs that are very short and easy to verify.
I am a co-inventor of Zerocash and a co-founder of Zcash, both of which rely on libsnark.
I am a co-founder of StarkWare Industries.

## Research Interests

I am broadly interested in Theoretical Computer Science and Computer Security.
Specific interests include theoretical and applied cryptography, complexity theory, and mechanism design, among others.

## Program Committees

• CRYPTO 2019 (38th International Cryptology Conference)
• S&P 2019 (40th IEEE Symposium on Security and Privacy)
• EUROCRYPT 2018 (37th IACR International Conference on the Theory and Applications of Cryptographic Techniques)
• TCC 2017 (15th IACR Theory of Cryptography Conference)
• ITCS 2017 (8th Innovations in Theoretical Computer Science Conference)
• TCC 2016-B (14th IACR Theory of Cryptography Conference)
• CCS 2016 (23rd ACM Conference on Computer and Communications Security)

## Publications (DBLP, Scholar)

### Preprints

• [P3] On Local Testability in the Non-Signaling Setting
Alessandro Chiesa, Peter Manohar, Igor Shinkar

#### Abstract

Non-signaling strategies are a generalization of quantum strategies that have been studied in physics for decades, and have recently found applications in theoretical computer science. These applications motivate the study of local-to-global phenomena for non-signaling functions.

We present general results about the local testability of linear codes in the non-signaling setting. Our contributions include formulating natural definitions that capture the condition that a non-signaling function "belongs" to a given code, and characterizing the sets of local constraints that imply membership in the code. We prove these results by relating the Fourier spectrum of non-signaling functions to Cayley hypergraphs induced by local constraints.

We apply the above results to show a separation between locally testable codes in the classical and non-signaling setting by proving that bivariate low-degree testing fails spectacularly in the non-signaling setting. Specifically, we show that there exist non-signaling functions that pass bivariate low-degree tests with probability 1, and yet are maximally far from low-degree.

Available on ECCC.

• [P2] ZEXE: Enabling Decentralized Private Computation
Sean Bowe, Alessandro Chiesa, Matthew D. Green, Ian Miers, Pratyush Mishra, Howard Wu
Cryptology ePrint Archive, Report 2018/962

#### Abstract

Ledger-based systems that support rich applications often suffer from two limitations. First, validating a transaction requires re-executing the state transition that it attests to. Second, transactions not only reveal which application had a state transition but also reveal the application's internal state.

We design, implement, and evaluate ZEXE, a ledger-based system where users can execute offline computations and subsequently produce transactions, attesting to the correctness of these computations, that satisfy two main properties. First, transactions hide all information about the offline computations. Second, transactions can be validated in constant time by anyone, regardless of the offline computation.

The core of ZEXE is a construction for a new cryptographic primitive that we introduce, decentralized private computation (DPC) schemes. In order to achieve an efficient implementation of our construction, we leverage tools in the area of cryptographic proofs, including succinct zero knowledge proofs and recursive proof composition. Overall, transactions in ZEXE are 968 bytes regardless of the offline computation, and generating them takes less than a minute plus a time that grows with the offline computation.

We demonstrate how to use ZEXE to realize privacy-preserving analogues of popular applications: private decentralized exchanges for user-defined fungible assets and regulation-friendly private stablecoins.

Available on ePrint. The source code is available here.

• [P1] A Zero Knowledge Sumcheck and its Applications
Alessandro Chiesa, Michael A. Forbes, Nicholas Spooner
Cryptology ePrint Archive, Report 2017/305

#### Abstract

Many seminal results in Interactive Proofs (IPs) use algebraic techniques based on low-degree polynomials, the study of which is pervasive in theoretical computer science. Unfortunately, known methods for endowing such proofs with zero knowledge guarantees do not retain this rich algebraic structure.

In this work, we develop algebraic techniques for obtaining zero knowledge variants of proof protocols in a way that leverages and preserves their algebraic structure. Our constructions achieve unconditional (perfect) zero knowledge in the Interactive Probabilistically Checkable Proof (IPCP) model of Kalai and Raz [KR08] (the prover first sends a PCP oracle, then the prover and verifier engage in an Interactive Proof in which the verifier may query the PCP).

Our main result is a zero knowledge variant of the sumcheck protocol [LFKN92] in the IPCP model. The sumcheck protocol is a key building block in many IPs, including the protocol for polynomial-space computation due to Shamir [Sha92], and the protocol for parallel computation due to Goldwasser, Kalai, and Rothblum [GKR15]. A core component of our result is an algebraic commitment scheme, whose hiding property is guaranteed by algebraic query complexity lower bounds [AW09,JKRS09]. This commitment scheme can then be used to considerably strengthen our previous work [BCFGRS16] that gives a sumcheck protocol with much weaker zero knowledge guarantees, itself using algebraic techniques based on algorithms for polynomial identity testing [RS05,BW04].

We demonstrate the applicability of our techniques by deriving zero knowledge variants of well-known protocols based on algebraic techniques. First, we construct zero knowledge IPCPs for NEXP starting with the Multi-prover Interactive Proofs of Babai, Fortnow, and Lund [BFL91]. This result is a direct application of our zero knowledge sumcheck and our algebraic commitment scheme, augmented with the use of `randomized' low-degree extensions.

We also construct protocols in a more restricted model where the prover and verifier engage in a standard Interactive Proof with oracle access to a uniformly random low-degree polynomial (soundness holds with respect to any oracle). In this setting we achieve zero knowledge variants of the protocols of Shamir and of Goldwasser, Kalai, and Rothblum.

Available on ePrint.
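The sumcheck protocol that this work makes zero knowledge has a simple classical core. The following is a minimal sketch of the standard (non-zero-knowledge) sumcheck of [LFKN92] over a small prime field, with the polynomial given as a black box and the honest prover brute-forcing its partial sums; it illustrates only the round structure and the verifier's consistency checks, not any of the algebraic commitment machinery introduced in the paper.

```python
# Sketch of the classical sumcheck protocol [LFKN92] over F_97.
# Illustrative only: the prover brute-forces partial sums, which is
# fine for a toy 3-variate polynomial.
import itertools
import random

P = 97    # prime field F_97
N = 3     # number of variables
DEG = 2   # per-variable degree bound (generous for the example below)

def f(x):
    # Example 3-variate polynomial: x0*x1 + 2*x2
    return (x[0] * x[1] + 2 * x[2]) % P

def partial_sum_poly(f, fixed, i):
    """Evaluations of g_i(X) = sum over boolean suffixes of f(fixed, X, suffix)."""
    evals = []
    for x in range(DEG + 1):
        s = 0
        for suffix in itertools.product([0, 1], repeat=N - i - 1):
            s = (s + f(tuple(fixed) + (x,) + suffix)) % P
        evals.append(s)
    return evals

def interp_eval(evals, x):
    """Evaluate the degree-<=DEG poly through (0,e0),...,(DEG,eDEG) at x (Lagrange)."""
    total = 0
    for j, ej in enumerate(evals):
        num, den = 1, 1
        for k in range(len(evals)):
            if k != j:
                num = num * (x - k) % P
                den = den * (j - k) % P
        total = (total + ej * num * pow(den, -1, P)) % P
    return total

def sumcheck(f, claimed):
    fixed, current = [], claimed
    for i in range(N):
        g = partial_sum_poly(f, fixed, i)        # prover's round-i message
        if (g[0] + interp_eval(g, 1)) % P != current:
            return False                          # verifier rejects
        r = random.randrange(P)                   # verifier's random challenge
        current = interp_eval(g, r)
        fixed.append(r)
    return current == f(tuple(fixed))             # final oracle query to f

true_sum = sum(f(x) for x in itertools.product([0, 1], repeat=N)) % P
assert sumcheck(f, true_sum)
assert not sumcheck(f, (true_sum + 1) % P)
```

Each round reduces a claim about a sum over the boolean hypercube to a claim about a single random point, which is what makes the protocol a natural target for the zero knowledge techniques above.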

### Conference Publications

• [C32] Succinct Arguments in the Quantum Random Oracle Model
Alessandro Chiesa, Peter Manohar, Nicholas Spooner
TCC 2019 (17th Theory of Cryptography Conference)

#### Abstract

Succinct non-interactive arguments (SNARGs) are highly efficient certificates of membership in non-deterministic languages. Constructions of SNARGs in the random oracle model are widely believed to be post-quantum secure, provided the oracle is instantiated with a suitable post-quantum hash function. No formal evidence, however, supports this belief.

In this work we provide the first such evidence by proving that the SNARG construction of Micali is unconditionally secure in the quantum random oracle model. We also prove that, analogously to the classical case, the SNARG inherits the zero knowledge and proof of knowledge properties of the PCP underlying the Micali construction. We thus obtain the first zero knowledge SNARG of knowledge (zkSNARK) that is secure in the quantum random oracle model.

Our main tool is a new lifting lemma that shows how, for a rich class of oracle games, we can generically deduce security against quantum attackers by bounding a natural classical property of these games. This means that in order to prove our theorem we only need to establish classical properties about the Micali construction. This approach not only lets us prove post-quantum security but also enables us to prove explicit bounds that are tight up to small factors.

We additionally use our techniques to prove that SNARGs based on interactive oracle proofs (IOPs) with round-by-round soundness are unconditionally secure in the quantum random oracle model. This result establishes the post-quantum security of many SNARGs of practical interest.

Available on ePrint.

• [C31] Linear-Size Constant-Query IOPs for Delegating Computation
Eli Ben-Sasson, Alessandro Chiesa, Lior Goldberg, Tom Gur, Michael Riabzev, Nicholas Spooner
TCC 2019 (17th Theory of Cryptography Conference)

#### Abstract

TBA

Available online soon.

• [C30] Aurora: Transparent Succinct Arguments for R1CS
Eli Ben-Sasson, Alessandro Chiesa, Michael Riabzev, Nicholas Spooner, Madars Virza, Nicholas P. Ward
EUROCRYPT 2019 (38th International Conference on the Theory and Applications of Cryptographic Techniques)

#### Abstract

We design, implement, and evaluate a zkSNARK for Rank-1 Constraint Satisfaction (R1CS), a widely-deployed NP-complete language that is undergoing standardization. Our construction uses a transparent setup, is plausibly post-quantum secure, and uses lightweight cryptography. A proof attesting to the satisfiability of n constraints has size $O(\log^2 n)$; it can be produced with $O(n \log n)$ field operations and verified with $O(n)$ field operations. At 128 bits of security, proofs are less than 130 kB even for several million constraints, more than 20x shorter than prior zkSNARKs with similar features.

A key ingredient of our construction is a new Interactive Oracle Proof (IOP) for solving a univariate analogue of the classical sumcheck problem [LFKN92], originally studied for multivariate polynomials. Our protocol verifies the sum of entries of a Reed--Solomon codeword over any subgroup of a field.

We also provide libiop, an open-source library for writing IOP-based arguments, in which a toolchain of transformations enables programmers to write new arguments by writing simple IOP sub-components. We have used this library to specify our construction and prior ones.

Available on ePrint.
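For readers unfamiliar with R1CS, satisfiability has a compact definition: an assignment $z$ satisfies matrices $(A, B, C)$ if $(Az) \circ (Bz) = Cz$ entrywise over the field. The sketch below is purely illustrative (it is not Aurora's code, and the tiny example instance is invented for exposition):

```python
# Illustrative R1CS satisfiability check over F_97: z satisfies (A, B, C)
# iff (A.z) * (B.z) = C.z entrywise. Not Aurora's code; a definition in code.
P = 97

def matvec(M, z):
    return [sum(m * x for m, x in zip(row, z)) % P for row in M]

def r1cs_satisfied(A, B, C, z):
    Az, Bz, Cz = matvec(A, z), matvec(B, z), matvec(C, z)
    return all((a * b) % P == c for a, b, c in zip(Az, Bz, Cz))

# Toy instance: prove knowledge of x with x*x = 9, via z = (1, x, x^2).
# Constraint 1: z1 * z1 = z2.   Constraint 2: z2 * 1 = 9.
A = [[0, 1, 0], [0, 0, 1]]
B = [[0, 1, 0], [1, 0, 0]]
C = [[0, 0, 1], [9, 0, 0]]
assert r1cs_satisfied(A, B, C, [1, 3, 9])
assert not r1cs_satisfied(A, B, C, [1, 4, 16])
```

The first column of $z$ is pinned to the constant 1, which is how constants like the 9 above enter the system.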

• [C29] Probabilistic Checking against Non-Signaling Strategies from Linearity Testing
Alessandro Chiesa, Peter Manohar, Igor Shinkar
ITCS 2019 (10th Conference on Innovations in Theoretical Computer Science)

#### Abstract

Non-signaling strategies are a generalization of quantum strategies that have been studied in physics over the past three decades. Recently, they have found applications in theoretical computer science, including to proving inapproximability results for linear programming and to constructing protocols for delegating computation. A central tool for these applications is probabilistically checkable proofs (PCPs) that are sound against non-signaling strategies.

In this paper we prove that the exponential-length constant-query PCP construction due to Arora et al. (JACM 1998) is sound against non-signaling strategies.

Our result offers a new length-vs-query tradeoff when compared to the non-signaling PCP of Kalai et al. (STOC 2013 and 2014) and, moreover, provides additional evidence to the conjecture that a non-signaling analogue of the PCP Theorem holds.

Available on ECCC.

• [C28] Spatial Isolation Implies Zero Knowledge Even in a Quantum World
FOCS 2018 (59th IEEE Symposium on Foundations of Computer Science)
QIP 2019 (22nd Conference on Quantum Information Processing)

#### Abstract

Zero knowledge plays a central role in cryptography and complexity. The seminal work of Ben-Or et al. (STOC 1988) shows that zero knowledge can be achieved unconditionally for any language in NEXP, as long as one is willing to make a suitable physical assumption: if the provers are spatially isolated, then they can be assumed to be playing independent strategies.

Quantum mechanics, however, tells us that this assumption is unrealistic, because spatially-isolated provers could share a quantum entangled state and realize a non-local correlated strategy. The MIP* model captures this setting.

In this work we study the following question: does spatial isolation still suffice to unconditionally achieve zero knowledge even in the presence of quantum entanglement?

We answer this question in the affirmative: we prove that every language in NEXP has a 2-prover zero knowledge interactive proof that is sound against entangled provers; that is, NEXP is in ZK-MIP*.

Our proof consists of constructing a zero knowledge interactive PCP with a strong algebraic structure, and then lifting it to the MIP* model. This lifting relies on a new framework that builds on recent advances in low-degree testing against entangled strategies, and clearly separates classical and quantum tools.

Our main technical contribution consists of developing new algebraic techniques for obtaining unconditional zero knowledge; this includes a zero knowledge variant of the celebrated sumcheck protocol, a key building block in many probabilistic proof systems. A core component of our sumcheck protocol is a new algebraic commitment scheme, whose analysis relies on algebraic complexity theory.

Available on ECCC.

• [C27] DIZK: A Distributed Zero Knowledge Proof System
Howard Wu, Wenting Zheng, Alessandro Chiesa, Raluca A. Popa, Ion Stoica
USENIX Security 2018 (27th USENIX Security Symposium)

#### Abstract

Recently there has been much academic and industrial interest in practical implementations of zero knowledge proofs. These techniques allow a party to prove to another party that a given statement is true without revealing any additional information. In a Bitcoin-like system, this allows a payer to prove validity of a payment without disclosing the payment's details.

Unfortunately, the existing systems for generating such proofs are very expensive, especially in terms of memory overhead. Worse yet, these systems are "monolithic", so they are limited by the memory resources of a single machine. This severely limits their practical applicability.

We describe DIZK, a system that distributes the generation of a zero knowledge proof across machines in a compute cluster. Using a set of new techniques, we show that DIZK scales to computations of up to billions of logical gates (100x larger than prior art) at a cost of 10μs per gate (100x faster than prior art). We then use DIZK to study various security applications.

Available on ePrint. The corresponding source code is on GitHub.

• [C26] Testing Linearity against Non-Signaling Strategies
Alessandro Chiesa, Peter Manohar, Igor Shinkar
CCC 2018 (33rd Computational Complexity Conference)

#### Abstract

Non-signaling strategies are collections of distributions with certain non-local correlations. They have been studied in Physics as a strict generalization of quantum strategies to understand the power and limitations of Nature's apparent non-locality. Recently, they have received attention in Theoretical Computer Science due to connections to Complexity and Cryptography.

We initiate the study of Property Testing against non-signaling strategies, focusing first on the classical problem of linearity testing (Blum, Luby, and Rubinfeld; JCSS 1993). We prove that any non-signaling strategy that passes the linearity test with high probability must be close to a quasi-distribution over linear functions.

Quasi-distributions generalize the notion of probability distributions over global objects (such as functions) by allowing negative probabilities, while at the same time requiring that "local views" follow standard distributions (with non-negative probabilities). Quasi-distributions arise naturally in the study of Quantum Mechanics as a tool to describe various non-local phenomena.

Our analysis of the linearity test relies on Fourier analytic techniques applied to quasi-distributions. Along the way, we also establish general equivalences between non-signaling strategies and quasi-distributions, which we believe will provide a useful perspective on the study of Property Testing against non-signaling strategies beyond linearity testing.

Available on ECCC.
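The classical linearity test that the paper revisits is easy to state in code: sample random $x, y$ and check $f(x) + f(y) = f(x + y)$. The sketch below runs the BLR test against ordinary (classical) functions over $\mathbb{F}_2^n$; the paper's contribution is analyzing the same test against non-signaling strategies, which this toy does not model.

```python
# The classical BLR linearity test over F_2^n (addition is XOR).
# Classical functions only; non-signaling strategies are not modeled here.
import random

N_BITS = 8

def blr_test(f, n_bits, trials, rng):
    """Accept iff f(x) + f(y) = f(x + y) holds on every sampled pair."""
    for _ in range(trials):
        x, y = rng.getrandbits(n_bits), rng.getrandbits(n_bits)
        if f(x) ^ f(y) != f(x ^ y):
            return False
    return True

linear = lambda x: bin(x & 0b10110001).count("1") % 2  # f(x) = <a, x>: linear
affine = lambda x: 1                                   # constant 1: affine, not linear

rng = random.Random(0)
assert blr_test(linear, N_BITS, 100, rng)
assert not blr_test(affine, N_BITS, 100, rng)
```

A truly linear function passes every trial exactly, while the constant-1 function fails the very first check, since $1 \oplus 1 \neq 1$.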

• [C25] Oblix: An Efficient Oblivious Search Index
Pratyush Mishra, Rishabh Poddar, Jerry Chen, Alessandro Chiesa, Raluca A. Popa
S&P 2018 (39th IEEE Symposium on Security and Privacy)

#### Abstract

Search indices are fundamental building blocks of many systems, and there is great interest in running them on encrypted data. Unfortunately, many known schemes that enable search queries on encrypted data achieve efficiency at the expense of security, as they reveal access patterns to the encrypted data.

In this paper we present Oblix, a search index for encrypted data that is oblivious (provably hides access patterns), is dynamic (supports inserts and deletes), and has good efficiency.

Oblix relies on a combination of novel oblivious-access techniques and recent hardware enclave platforms (e.g., Intel SGX). In particular, a key technical contribution is the design and implementation of doubly-oblivious data structures, in which the client's accesses to its internal memory are oblivious, in addition to accesses to its external memory at the server. These algorithms are motivated by hardware enclaves like SGX, which leak access patterns to both internal and external memory.

We demonstrate the usefulness of Oblix in several applications: private contact discovery for Signal, private retrieval of public keys for Key Transparency, and searchable encryption that hides access patterns and result sizes.

The full version of the paper is forthcoming.

• [C24] Proofs of Proximity for Distribution Testing
Alessandro Chiesa, Tom Gur
ITCS 2018 (9th Conference on Innovations in Theoretical Computer Science)

#### Abstract

Distribution testing is an area of property testing that studies algorithms that receive few samples from a probability distribution D and decide whether D has a certain property or is far (in total variation distance) from all distributions with that property. Most natural properties of distributions, however, require a large number of samples to test, which motivates the question of whether there are natural settings wherein fewer samples suffice.

We initiate a study of proofs of proximity for properties of distributions. In their basic form, these proof systems consist of a tester that not only has sample access to a distribution but also explicit access to a proof string that depends on the distribution. We refer to these as NP distribution testers, or MA distribution testers if the tester is a probabilistic algorithm. We also study the more general notion of IP distribution testers, in which the tester interacts with an all-powerful untrusted prover.

We investigate the power and limitations of proofs of proximity for distributions and chart a landscape that, surprisingly, is significantly different from that of proofs of proximity for functions. Our main results include showing that MA distribution testers can be quadratically stronger than standard distribution testers, but no stronger than that; in contrast, IP distribution testers can be exponentially stronger than standard distribution testers, but when restricted to public coins they can be at best quadratically stronger.

Available on ECCC.

• [C23] Zero Knowledge Protocols from Succinct Constraint Detection
Eli Ben-Sasson, Alessandro Chiesa, Michael A. Forbes, Ariel Gabizon, Michael Riabzev, Nicholas Spooner
TCC 2017 (15th Theory of Cryptography Conference)

#### Abstract

We study the problem of constructing proof systems that achieve both soundness and zero knowledge unconditionally (without relying on intractability assumptions). Known techniques for this goal are primarily combinatorial, despite the fact that constructions of interactive proofs (IPs) and probabilistically checkable proofs (PCPs) heavily rely on algebraic techniques to achieve their properties.

We present simple and natural modifications of well-known "algebraic" IP and PCP protocols that achieve unconditional (perfect) zero knowledge in recently introduced models, overcoming limitations of known techniques.

1. We modify the PCP of Ben-Sasson and Sudan [BS08] to obtain zero knowledge for NEXP in the model of Interactive Oracle Proofs [BCS16,RRR16], where the verifier, in each round, receives a PCP from the prover.

2. We modify the IP of Lund, Fortnow, Karloff, and Nisan [LFKN92] to obtain zero knowledge for #P in the model of Interactive PCPs [KR08], where the verifier first receives a PCP from the prover and then interacts with him.

The simulators in our zero knowledge protocols rely on solving a problem that lies at the intersection of coding theory, linear algebra, and computational complexity, which we call the succinct constraint detection problem, and consists of detecting dual constraints with polynomial support size for codes of exponential block length. Our two results rely on solutions to this problem for fundamental classes of linear codes:

1. An algorithm to detect constraints for Reed--Muller codes of exponential length. This algorithm exploits the Raz--Shpilka [RS05] deterministic polynomial identity testing algorithm, and shows, to our knowledge, a first connection of algebraic complexity theory with zero knowledge.

2. An algorithm to detect constraints for PCPs of Proximity of Reed--Solomon codes [BS08] of exponential degree. This algorithm exploits the recursive structure of the PCPs of Proximity to show that small-support constraints are "locally" spanned by a small number of small-support constraints.

Available on ePrint.

• [C22] On Axis-Parallel Tests for Tensor Product Codes
Alessandro Chiesa, Peter Manohar, Igor Shinkar
RANDOM 2017 (21st International Workshop on Randomization and Computation)

#### Abstract

Many low-degree tests examine the input function via its restrictions to random hyperplanes of a certain dimension. Examples include the line-vs-line (Arora, Sudan 2003), plane-vs-plane (Raz, Safra 1997), and cube-vs-cube (Bhangale, Dinur, Livni 2017) tests.

In this paper we study a test introduced by Ben-Sasson and Sudan in 2006 that only considers restrictions along axis-parallel hyperplanes. While such a test is necessarily "weaker", it works for a more general class of codes, namely tensor product codes. Moreover, axis-parallel tests play a key role in constructing LTCs with inverse polylogarithmic rate and short PCPs (Ben-Sasson, Sudan 2008; Meir 2010). We present two results on axis-parallel tests.

(1) Bivariate low-degree testing with low-agreement.

We prove an analogue of the Bivariate Low-Degree Testing Theorem of Polishchuk and Spielman (1994) in the low-agreement regime, albeit with much larger field size. Namely, for the 2-wise tensor product of the Reed--Solomon code, we prove that for sufficiently large fields, the 2-query variant of the axis-parallel line test (row-vs-column test) works for arbitrarily small agreement. Prior analyses of axis-parallel tests assumed high agreement, and no results for such tests in the low-agreement regime were known.

Our proof technique deviates significantly from that of Polishchuk and Spielman, which relies on algebraic methods such as Bézout's Theorem, and instead leverages a fundamental result in extremal graph theory by Kövári, Sós, and Turán. To our knowledge, this is the first time this result is used in the context of low-degree testing.

(2) Improved robustness for tensor product codes.

Robustness is a strengthening of local testability that underlies many applications. We prove that the axis-parallel hyperplane test for the m-wise tensor product of a linear code with block length n and distance d is $\Omega(d^m/n^m)$-robust. This improves on a theorem of Viderman (2012) by a factor of $1/\mathrm{poly}(m)$. While the improvement is not large, we believe that our proof is a notable simplification compared to prior work.

Available on ECCC.
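As a toy illustration of the axis-parallel flavor of testing (not the agreement or robustness analyses of the paper): for a table that should be a codeword of the 2-wise tensor of a Reed--Solomon code, one samples a random row and a random column and checks that each is a low-degree polynomial. The degree check below uses finite differences over a small prime field; all parameters are made up for exposition.

```python
# Toy axis-parallel (row-vs-column) test for a 2-wise tensor of RS codes
# over F_97. Illustrative only; parameters invented for exposition.
import random

P = 97

def degree_of(evals):
    """Degree of the poly through (0,e0),...,(n-1,e_{n-1}), via finite differences mod P."""
    deg, diffs, k = -1, list(evals), 0
    while diffs and any(diffs):
        deg = k
        diffs = [(b - a) % P for a, b in zip(diffs, diffs[1:])]
        k += 1
    return deg

def axis_parallel_test(M, d, rng):
    """One check: a random row and a random column must each have degree <= d."""
    n = len(M)
    i, j = rng.randrange(n), rng.randrange(n)
    col = [M[k][j] for k in range(n)]
    return degree_of(M[i]) <= d and degree_of(col) <= d

# A genuine tensor codeword: M[i][j] = f(i) * g(j) with deg(f), deg(g) <= 2.
n, d = 8, 2
M = [[(i * i + 1) * (j * j + 3 * j) % P for j in range(n)] for i in range(n)]
rng = random.Random(0)
assert all(axis_parallel_test(M, d, rng) for _ in range(20))

# Corrupting one entry makes an entire row and column high-degree,
# so repeated tests reject with noticeable probability.
M[3][3] = (M[3][3] + 1) % P
assert not all(axis_parallel_test(M, d, rng) for _ in range(200))
```

A single corrupted entry forces the interpolant of its row (and column) up to degree $n-1$, which is why the test catches it whenever that row or column is sampled.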

• [C21] Interactive Oracle Proofs with Constant Rate and Query Complexity
Eli Ben-Sasson, Alessandro Chiesa, Ariel Gabizon, Michael Riabzev, Nicholas Spooner
ICALP 2017 (44th International Colloquium on Automata, Languages, and Programming)

#### Abstract

We study interactive oracle proofs (IOPs) [BCS16,RRR16], which combine aspects of probabilistically checkable proofs (PCPs) and interactive proofs (IPs). We present IOP constructions and general techniques that enable us to obtain tradeoffs in proof length versus query complexity that are not known to be achievable via PCPs or IPs alone. Our main results are:

1. Circuit satisfiability has 3-round IOPs with linear proof length (counted in bits) and constant query complexity.
2. Reed--Solomon codes have 2-round IOPs of proximity with linear proof length and constant query complexity.
3. Tensor product codes have 1-round IOPs of proximity with sublinear proof length and constant query complexity.

For all of the above, known PCP constructions give quasilinear proof length and constant query complexity [BS08,Dinur07]. In addition, for circuit satisfiability, [BKKMS13] obtain PCPs with linear proof length but sublinear query complexity. As in [BKKMS13], we rely on algebraic-geometry codes to obtain our first result; but, unlike [BKKMS13], our use of such codes is much "lighter" because we do not rely on any automorphisms of the code.

We obtain our results in a modular way by proving and combining "IOP-analogues" of two fundamental tools:

Interactive proof composition: Proof composition is used to reduce the query complexity of PCP verifiers, but the proof length of the composed proof system depends exponentially on the randomness complexity of the outer proof system. We prove a composition theorem for IOPs that does not suffer from this inefficiency.

Sublinear sumcheck: The sumcheck protocol is an IP that enables an IP verifier to check the sum of values of a low-degree polynomial on an exponentially-large hypercube, but the verifier running time depends linearly on the individual degree. We prove a sumcheck protocol for IOPs that does not suffer from this inefficiency.

Overall, our work demonstrates that even constant-round IOPs are more efficient than known PCPs and IPs.

Available on ePrint.

• [C20] Decentralized Anonymous Micropayments
EUROCRYPT 2017 (36th International Conference on the Theory and Applications of Cryptographic Techniques)

#### Abstract

Micropayments (payments worth a few pennies) have numerous potential applications. A challenge in achieving them is that payment networks charge fees that are high compared to "micro" sums of money.

Wheeler (1996) and Rivest (1997) proposed probabilistic payments as a technique to achieve micropayments: a merchant receives a macro-value payment with a given probability so that, in expectation, he receives a micro-value payment. Despite much research and trial deployment, micropayment schemes have not seen adoption, partly because a trusted party is required to process payments and resolve disputes.

The widespread adoption of decentralized currencies such as Bitcoin (2009) suggests that decentralized micropayment schemes are easier to deploy. Pass and Shelat (2015) proposed several micropayment schemes for Bitcoin, but their schemes provide no more privacy guarantees than Bitcoin itself, whose transactions are recorded in plaintext in a public ledger.

We formulate and construct decentralized anonymous micropayment (DAM) schemes, which enable parties with access to a ledger to conduct offline probabilistic payments with one another, directly and privately. Our techniques extend those of Zerocash (2014) with a new privacy-preserving probabilistic payment protocol. One of the key ingredients of our construction is fractional message transfer (FMT), a primitive that enables probabilistic message transmission between two parties, and for which we give an efficient instantiation.

Double spending in our setting cannot be prevented. Our second contribution is an economic analysis that bounds the additional utility gain of any cheating strategy, and applies to virtually any probabilistic payment scheme with offline validation. In our construction, this bound allows us to deter double spending by way of advance deposits that are revoked when cheating is detected.

Available on ePrint.
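The Wheeler/Rivest idea that DAM schemes build on is simple to see numerically: a merchant receives a macro-payment of value $V$ with probability $p$, so the expected transfer is the micro-amount $pV$. A toy simulation (not part of the paper's construction, which realizes this privately via fractional message transfer):

```python
# Toy simulation of a probabilistic micropayment: a macro-payment of
# value V lands with probability p, so the expected transfer is p*V.
import random

def probabilistic_payment(macro_value, p, rng):
    return macro_value if rng.random() < p else 0

rng = random.Random(0)
V, p = 100, 0.01   # $1.00 macro-payment with 1% probability -> 1 cent expected
n = 200_000
avg = sum(probabilistic_payment(V, p, rng) for _ in range(n)) / n
# avg concentrates around V * p = 1 (one cent)
```

The cryptographic difficulty, addressed in the paper, is running this lottery between mutually distrusting parties, offline and privately, without letting a cheater bias the coin.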

• [C19] Computational Integrity with a Public Random String from Quasi-linear PCPs
EUROCRYPT 2017 (36th International Conference on the Theory and Applications of Cryptographic Techniques)

#### Abstract

A party running a computation remotely may benefit from misreporting its output, say, to lower its tax. Cryptographic protocols that detect and prevent such falsities hold the promise to enhance the security of decentralized systems with stringent computational integrity requirements, like Bitcoin [Nak09]. To gain public trust it is imperative to use publicly verifiable protocols that have no “backdoors” and which can be set up using only a short public random string. Probabilistically Checkable Proof (PCP) systems [BFL90,BFLS91,AS98,ALMSS98] can be used to construct astonishingly efficient protocols [Kil92, Mic00] of this nature but some of the main components of such systems — proof composition [AS98] and low-degree testing via PCPs of Proximity (PCPPs) [BGHSV05,DR06] — have been considered efficient only asymptotically, for unrealistically large computations; recent cryptographic alternatives [PGHR13, BCGCTV13a] suffer from a non-public setup phase.

This work introduces SCI, the first implementation of a scalable PCP system (that uses both PCPPs and proof composition). We used SCI to prove correctness of executions of up to $2^{20}$ cycles of a simple processor and calculated its break-even point [SVPBBW12, SMBW12]. The significance of our findings is two-fold: (i) it marks the transition of core PCP techniques (like proof composition and PCPs of Proximity) from mathematical theory to practical system engineering, and (ii) the thresholds obtained are nearly achievable and hence show that PCP-supported computational integrity is closer to reality than previously assumed.

Available on ePrint.

• [C18] Interactive Oracle Proofs
Eli Ben-Sasson, Alessandro Chiesa, Nicholas Spooner
TCC 2016-B (14th Theory of Cryptography Conference)

#### Abstract

We initiate the study of a proof system model that naturally combines two well-known models: interactive proofs (IPs) and probabilistically-checkable proofs (PCPs). An interactive oracle proof (IOP) is an interactive proof in which the verifier is not required to read the prover's messages in their entirety; rather, the verifier has oracle access to the prover's messages, and may probabilistically query them.

IOPs simultaneously generalize IPs and PCPs. Thus, IOPs retain the expressiveness of PCPs, capturing NEXP rather than only PSPACE, and also the flexibility of IPs, allowing multiple rounds of communication with the prover. These degrees of freedom allow for more efficient "PCP-like" interactive protocols, because the prover does not have to compute the parts of a PCP that are not requested by the verifier.

As a first investigation into IOPs, we offer two main technical contributions. First, we give a compiler that maps any public-coin IOP into a non-interactive proof in the random oracle model. We prove that the soundness of the resulting proof is tightly characterized by the soundness of the IOP against state restoration attacks, a class of rewinding attacks on the IOP verifier. Our compiler preserves zero knowledge, proof of knowledge, and time complexity of the underlying IOP. As an application, we obtain blackbox unconditional ZK proofs in the random oracle model with quasilinear prover and polylogarithmic verifier, improving on the result of Ishai et al. (2015).

Second, we study the notion of state-restoration soundness of an IOP: we prove tight upper and lower bounds in terms of the IOP's (standard) soundness and round complexity; and describe a simple adversarial strategy that is optimal across all state restoration attacks.

Our compiler can be viewed as a generalization of the Fiat–Shamir paradigm for public-coin IPs (CRYPTO '86), and of the "CS proof" constructions of Micali (FOCS '94) and Valiant (TCC '08) for PCPs. Our analysis of the compiler gives, in particular, a unified understanding of all of these constructions, and also motivates the study of state restoration attacks, not only for IOPs, but also for IPs and PCPs.
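The classical special case of this compiler is worth a concrete look. Below is a toy non-interactive Schnorr proof (standard material, not the paper's IOP construction, and with illustrative parameter sizes): the verifier's public-coin challenge is replaced by a hash of the transcript, exactly the Fiat–Shamir pattern that the compiler generalizes.

```python
import hashlib
import secrets

# Toy subgroup: g = 2 has order q = 11 in Z_23^* (illustrative sizes only).
p, q, g = 23, 11, 2

def H(*parts):
    """Random-oracle stand-in: hash the transcript down to a challenge in Z_q."""
    data = ''.join(str(x) for x in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), 'big') % q

def prove(x):
    """Non-interactive proof of knowledge of x with y = g^x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)
    t = pow(g, r, p)          # prover's first message
    c = H(g, y, t)            # Fiat-Shamir: challenge = hash of transcript
    s = (r + c * x) % q
    return y, t, s

def verify(y, t, s):
    c = H(g, y, t)            # verifier recomputes the same challenge
    return pow(g, s, p) == (t * pow(y, c, p)) % p
```

Soundness rests on modeling the hash as a random oracle, which is exactly the setting the paper's compiler (and its state-restoration analysis) works in.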

Available on ePrint.

• [C17] Quasilinear-Size Zero Knowledge from Linear-Algebraic PCPs
TCC 2016-A (13th Theory of Cryptography Conference)

#### Abstract

The seminal result that every language having an interactive proof also has a zero-knowledge interactive proof assumes the existence of one-way functions. Ostrovsky and Wigderson (ISTCS 1993) proved that this assumption is necessary: if one-way functions do not exist, then only languages in BPP have zero-knowledge interactive proofs.

Ben-Or et al. (STOC 1988) proved that, nevertheless, every language having a multi-prover interactive proof also has a zero-knowledge multi-prover interactive proof, unconditionally. Their work led to, among many other things, a line of work studying zero knowledge without intractability assumptions. In this line of work, Kilian, Petrank, and Tardos (STOC 1997) defined and constructed zero-knowledge probabilistically checkable proofs (PCPs).

While PCPs with quasilinear proof length, but without zero knowledge, are known, no such result is known for zero-knowledge PCPs. In this work, we show how to construct "2-round" PCPs that are zero knowledge and of length Õ(K), where K is the number of queries made by a malicious polynomial-time verifier. Previous solutions required PCPs of length at least K^6 to maintain zero knowledge. In this model, which we call duplex PCP (DPCP), the verifier first receives an oracle string from the prover, then replies with a message, and then receives another oracle string from the prover; a malicious verifier can make up to K queries in total to both oracles.

Deviating from previous works, our constructions do not invoke the PCP Theorem as a black box but instead rely on certain algebraic properties of a specific family of PCPs. We show that if the PCP has a certain linear-algebraic structure (which many central constructions can be shown to possess, including [BFLS91, ALMSS98, BS08]), we can add the zero-knowledge property at virtually no cost (up to additive lower-order terms) while introducing only minor modifications in the algorithms of the prover and verifier. We believe that our linear-algebraic characterization of PCPs may be of independent interest, as it gives a simplified way to view previous well-studied PCP constructions.

The full version of the paper is on ePrint.

• [C16] Secure Sampling of Public Parameters for Succinct Zero Knowledge Proofs
S&P 2015 (36th IEEE Symposium on Security and Privacy)

#### Abstract

Non-interactive zero-knowledge proofs (NIZKs) are a powerful cryptographic tool, with numerous potential applications. However, succinct NIZKs (e.g., zk-SNARK schemes) necessitate a trusted party to generate and publish some public parameters, to be used by all provers and verifiers. This party is trusted to correctly run a probabilistic algorithm (specified by the proof system) that outputs the public parameters, and publish them, without leaking any other information (such as the internal randomness used by the algorithm); violating either requirement may allow malicious parties to produce convincing "proofs" of false statements. This trust requirement poses a serious impediment to deploying NIZKs in many applications, because a party that is trusted by all users of the envisioned system may simply not exist.

In this work, we show how public parameters for a class of NIZKs can be generated by a multi-party protocol, such that if at least one of the parties is honest, then the result is secure (in both aforementioned senses) and can be subsequently used for generating and verifying numerous proofs without any further trust. We design and implement such a protocol, tailored to efficiently support the state-of-the-art NIZK constructions with short and easy-to-verify proofs (Parno et al., IEEE S&P '13; Ben-Sasson et al., USENIX Sec '14; Danezis et al., ASIACRYPT '14). Applications of our system include generating public parameters for systems such as Zerocash (Ben-Sasson et al., IEEE S&P '14) and the scalable zero-knowledge proof system of Ben-Sasson et al. (CRYPTO '14).
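The "one honest party suffices" principle can be sketched with a toy sequential ceremony (illustrative only; the constants and structure below are invented, and a real protocol such as the paper's additionally has each party prove its contribution correct):

```python
# Toy group: arithmetic mod the Mersenne prime 2^31 - 1; g is a fixed group
# element. Illustrative sizes only -- far too small for real use.
p, g = 2_147_483_647, 7

def contribute(param, secret):
    """One party's turn: raise the running public parameter to its private exponent."""
    return pow(param, secret, p)

party_secrets = [123457, 987643, 555557]   # each kept private, then destroyed
param = g
for s in party_secrets:
    param = contribute(param, s)
# param == g^(s1*s2*s3): recovering the combined exponent requires ALL the
# secrets, so a single honest party who destroys its secret protects everyone.
```

If even one party discards its exponent, no coalition of the others can reconstruct the trapdoor behind the published parameter.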

The full version of the paper is forthcoming.

• [C15] Cluster Computing in Zero Knowledge
Alessandro Chiesa, Eran Tromer, Madars Virza
EUROCRYPT 2015 (34th International Conference on the Theory and Applications of Cryptographic Techniques)

#### Abstract

Large computations, when amenable to distributed parallel execution, are often executed on computer clusters, for scalability and cost reasons. Such computations are used in many applications, including, to name but a few, machine learning, webgraph mining, and statistical machine translation. Oftentimes, though, the input data is private and only the result of the computation can be published. Zero-knowledge proofs would allow one, in such settings, to verify the correctness of the output without leaking (additional) information about the input.

In this work, we investigate theoretical and practical aspects of zero-knowledge proofs for cluster computations. We design, build, and evaluate zero-knowledge proof systems for which: (i) a proof attests to the correct execution of a cluster computation; and (ii) generating the proof is itself a cluster computation that is similar in structure and complexity to the original one. Concretely, we focus on MapReduce, an elegant and popular form of cluster computing.

Previous zero-knowledge proof systems can in principle prove a MapReduce computation's correctness, via a monolithic NP statement that reasons about all mappers, all reducers, and shuffling. However, it is not clear how to generate the proof for such monolithic statements via parallel execution by a distributed system. Our work demonstrates, by theory and implementation, that proof generation can be similar in structure and complexity to the original cluster computation.

Our main technique is a bootstrapping theorem for succinct non-interactive arguments of knowledge (SNARKs) that shows how, via recursive proof composition and Proof-Carrying Data, it is possible to transform any SNARK into a distributed SNARK for MapReduce which proves, piecewise and in a distributed way, the correctness of every step in the original MapReduce computation as well as their global consistency.

The full version of the paper is on ePrint.

• [C14] Scalable Zero Knowledge via Cycles of Elliptic Curves
Eli Ben-Sasson, Alessandro Chiesa, Eran Tromer, Madars Virza
CRYPTO 2014 (34th International Cryptology Conference)

#### Abstract

Non-interactive zero-knowledge proofs of knowledge for general NP statements are a powerful cryptographic primitive, both in theory and in practical applications. Recently, much research has focused on achieving an additional property, succinctness, requiring the proof to be very short and easy to verify. Such proof systems are known as zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs), and are desired when communication is expensive, or the verifier is computationally weak.

Existing zk-SNARK implementations have severe scalability limitations, in terms of space complexity as a function of the size of the computation being proved (e.g., running time of the NP statement's decision program). First, the size of the proving key is quasilinear in the upper bound on the computation size. Second, producing a proof requires "writing down" all intermediate values of the entire computation, and then conducting global operations such as FFTs.

The bootstrapping technique of Bitansky et al. (STOC '13), following Valiant (TCC '08), offers an approach to scalability, by recursively composing proofs: proving statements about acceptance of the proof system's own verifier (and correctness of the program's latest step). Alas, recursive composition of known zk-SNARKs has never been realized in practice, due to enormous computational cost.

Using new elliptic-curve cryptographic techniques, and methods for exploiting the proof systems' field structure and nondeterminism, we achieve the first zk-SNARK implementation that practically achieves recursive proof composition. Our zk-SNARK implementation runs random-access machine programs and produces proofs of their correct execution, on today's hardware, for any program running time. It takes constant time to generate the keys that support all computation sizes. Subsequently, the proving process only incurs a constant multiplicative overhead compared to the original computation's time, and an essentially-constant additive overhead in memory. Thus, our zk-SNARK implementation is the first to have a well-defined, albeit low, clock rate of "verified instructions per second".

The full version of the paper is on ePrint.

• [C13] Succinct Non-Interactive Zero Knowledge for a von Neumann Architecture
Eli Ben-Sasson, Alessandro Chiesa, Eran Tromer, Madars Virza
USENIX Security 2014 (23rd USENIX Security Symposium)

#### Abstract

We build a system that provides succinct non-interactive zero-knowledge proofs (zk-SNARKs) for program executions on a von Neumann RISC architecture. The system has two components: a cryptographic proof system for verifying satisfiability of arithmetic circuits, and a circuit generator to translate program executions to such circuits. Our design of both components improves in functionality and efficiency over prior work, as follows.

Our circuit generator is the first to be universal: it does not need to know the program, but only a bound on its running time. Moreover, the size of the output circuit depends additively (rather than multiplicatively) on program size, allowing verification of larger programs.

The cryptographic proof system improves proving and verification times, by leveraging new algorithms and a pairing library tailored to the protocol.

We evaluated our system for programs with up to 10,000 instructions, running for up to 32,000 machine steps, each of which can arbitrarily access random-access memory; and also demonstrated it executing programs that use just-in-time compilation. Our proofs are 230 bytes long at 80 bits of security, or 288 bytes long at 128 bits of security. Typical verification time is 5 milliseconds, regardless of the original program's running time.

The full version of the paper is on ePrint.

• [C12] Knightian Self Uncertainty in the VCG Mechanism for Unrestricted Combinatorial Auctions
Alessandro Chiesa, Silvio Micali, Zeyuan Allen Zhu
EC 2014 (15th ACM Conference on Economics and Computation)

#### Abstract

We study the social welfare performance of the VCG mechanism in the well-known and challenging model of self uncertainty initially put forward by Frank H. Knight and later formalized by Truman F. Bewley. Namely, the only information that each player i has about his own true valuation consists of a set of distributions, from one of which i's valuation has been drawn.

We assume that each player knows his true valuation up to an additive inaccuracy d, and study the social welfare performance of the VCG mechanism relative to d > 0. In this paper, we focus on the social welfare performance of the VCG mechanism in unrestricted combinatorial auctions, both in undominated strategies and regret-minimizing strategies. Denote by MSW the maximum social welfare.

Our first theorem proves that, in an n-player m-good combinatorial auction, the VCG mechanism may produce outcomes whose social welfare is <= MSW - Ω(2^m d), even when n=2 and each player chooses an undominated strategy. We also geometrically characterize the set of undominated strategies in this setting.

Our second theorem shows that the VCG mechanism performs well in regret-minimizing strategies: the guaranteed social welfare is >= MSW - 2min{m,n}d if each player chooses a pure regret-minimizing strategy, and >= MSW - O(n^2 d) if mixed strategies are allowed.

Finally, we prove a lemma bridging two standard models of rationality: utility maximization and regret minimization. A special case of our lemma implies that, in any game (Knightian or not), every implementation for regret-minimizing players also applies to utility-maximizing players who use regret only to break ties among their undominated strategies. This bridging lemma thus implies that the VCG mechanism continues to perform very well also for the latter players.
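For concreteness, here is the classical VCG mechanism itself, computed by brute force on a toy combinatorial auction (an illustrative sketch of the mechanism being analyzed, not of the paper's Knightian results):

```python
from itertools import product

def vcg(goods, vals):
    """Brute-force VCG for a combinatorial auction.
    vals[i] maps a frozenset of goods to player i's value for that bundle."""
    n = len(vals)
    def bundles(alloc):
        return [frozenset(g for g, owner in zip(goods, alloc) if owner == i)
                for i in range(n)]
    def welfare(alloc, players):
        bs = bundles(alloc)
        return sum(vals[i][bs[i]] for i in players)
    allocs = list(product(range(n), repeat=len(goods)))
    # Allocation maximizing social welfare over all assignments of goods to players.
    best = max(allocs, key=lambda a: welfare(a, range(n)))
    payments = []
    for i in range(n):                      # Clarke pivot payments
        others = [j for j in range(n) if j != i]
        opt_without_i = max(welfare(a, others) for a in allocs)
        payments.append(opt_without_i - welfare(best, others))
    return best, payments
```

On the test instance below, both goods go to player 1, who pays 10: exactly the welfare the other player forgoes because of her presence.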

The full version of this paper is here.

• [C11] Zerocash: Decentralized Anonymous Payments from Bitcoin
S&P 2014 (35th IEEE Symposium on Security and Privacy)

#### Abstract

Bitcoin is the first digital currency to see widespread adoption. While payments are conducted between pseudonyms, Bitcoin cannot offer strong privacy guarantees: payment transactions are recorded in a public decentralized ledger, from which much information can be deduced. Zerocoin (Miers et al., IEEE S&P 2013) tackles some of these privacy issues by unlinking transactions from the payment's origin. Yet, it still reveals payments' destinations and amounts, and is limited in functionality.

In this paper, we construct a full-fledged ledger-based digital currency with strong privacy guarantees. Our results leverage recent advances in zero-knowledge Succinct Non-interactive ARguments of Knowledge (zk-SNARKs).

First, we formulate and construct decentralized anonymous payment schemes (DAP schemes). A DAP scheme enables users to directly pay each other privately: the corresponding transaction hides the payment's origin, destination, and transferred amount. We provide formal definitions and proofs of the construction's security.

Second, we build Zerocash, a practical instantiation of our DAP scheme construction. In Zerocash, transactions are less than 1 kB and take under 6 ms to verify --- orders of magnitude more efficient than the less-anonymous Zerocoin and competitive with plain Bitcoin.
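The commitment/nullifier mechanics behind a DAP scheme can be sketched with a deliberately non-private toy ledger (an illustration, not Zerocash itself: the real construction hides which commitment is being spent behind a zk-SNARK over a Merkle tree, whereas this sketch checks membership in the clear):

```python
import hashlib
import secrets

def H(*parts):
    """Hash helper standing in for the scheme's commitment and PRF functions."""
    h = hashlib.sha256()
    for part in parts:
        h.update(str(part).encode())
    return h.hexdigest()

class ToyLedger:
    """Structural toy of mint/spend with commitments and nullifiers."""
    def __init__(self):
        self.commitments = set()
        self.nullifiers = set()

    def mint(self, value):
        rho, r = secrets.token_hex(16), secrets.token_hex(16)
        cm = H(value, rho, r)          # commitment hides value and serial rho
        self.commitments.add(cm)
        return (value, rho, r, cm)     # the coin's secrets, kept by its owner

    def spend(self, coin):
        value, rho, r, cm = coin
        nf = H('nf', rho)              # nullifier: derivable only with rho
        if cm not in self.commitments or nf in self.nullifiers:
            return False               # unknown coin, or double spend
        self.nullifiers.add(nf)
        return True
```

Each coin can be spent exactly once: the ledger records the nullifier, and a second spend of the same coin reproduces the same nullifier and is rejected.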

The full version of the paper is on ePrint. See the Zerocash project website.

• [C10] SNARKs for C: Verifying Program Executions Succinctly and in Zero Knowledge
CRYPTO 2013 (33rd International Cryptology Conference)

#### Abstract

An argument system for NP is a proof system that allows efficient verification of NP statements, given proofs produced by an untrusted yet computationally-bounded prover. Such a system is non-interactive and publicly-verifiable if, after a trusted party publishes a proving key and a verification key, anyone can use the proving key to generate non-interactive proofs for adaptively-chosen NP statements, and proofs can be verified by anyone by using the verification key.

We present an implementation of a publicly-verifiable non-interactive argument system for NP. The system, moreover, is a zero-knowledge proof-of-knowledge. It directly proves correct executions of programs on TinyRAM, a random-access machine tailored for efficient verification of nondeterministic computations. Given a program P and time bound T, the system allows for proving correct execution of P, on any input x, for up to T steps, after a one-time setup requiring Õ(|P|·T) cryptographic operations. An honest prover requires Õ(|P|·T) cryptographic operations to generate such a proof, while proof verification can be performed with only O(|x|) cryptographic operations. This system can be used to prove the correct execution of C programs, using our TinyRAM port of the GCC compiler.

This yields a zero-knowledge Succinct Non-interactive ARgument of Knowledge (zk-SNARK) for program executions in the preprocessing model: a powerful solution for delegating NP computations, with several features not achieved by previously-implemented primitives.

Our approach builds on recent theoretical progress in the area. We present efficiency improvements and implementations of two main ingredients:

1. Given a C program, we produce a circuit whose satisfiability encodes the correctness of execution of the program. Leveraging nondeterminism, the generated circuit's size is merely quasilinear in the size of the computation. In particular, we efficiently handle arbitrary and data-dependent loops, control flow, and memory accesses. This is in contrast with existing "circuit generators", which in the general case produce circuits of quadratic size.

2. Given a linear PCP for verifying satisfiability of circuits, we produce a corresponding SNARK. We construct such a linear PCP (which, moreover, is zero-knowledge and very efficient) by building on and improving on recent work on quadratic arithmetic programs.

The full version of the paper is on ePrint. An MIT News article about this paper can be found here.

• [C09] On the Concrete Efficiency of Probabilistically-Checkable Proofs
STOC 2013 (45th ACM Symposium on the Theory of Computing)

#### Abstract

Probabilistically-Checkable Proofs (PCPs) form the algorithmic core that enables fast verification of long computations in many cryptographic constructions. Yet, despite the wonderful asymptotic savings they bring, PCPs are also the infamous computational bottleneck preventing these powerful cryptographic constructions from being used in practice. To address this problem, we present several results about the computational efficiency of PCPs.

We construct the first PCP where the prover and verifier time complexities are quasi-optimal (i.e., optimal up to poly-logarithmic factors). The prover and verifier are also highly parallelizable, and these computational guarantees hold even when proving and verifying the correctness of random-access machine computations. Our construction is explicit and has the requisite properties for being used in the cryptographic applications mentioned above.

Next, to better understand the efficiency of our PCP, we propose a new efficiency measure for PCPs (and their major components, locally-testable codes and PCPs of proximity). We define a concrete-efficiency threshold that indicates the smallest problem size beyond which the PCP becomes “useful”, in the sense that using it is cheaper than performing naive verification (i.e., rerunning the computation); our definition accounts for both the prover and verifier complexity.

We then show that our PCP has a finite concrete-efficiency threshold. That such a PCP exists does not follow from existing works on PCPs with polylogarithmic-time verifiers.

As in [Ben-Sasson and Sudan, STOC '05], PCPs of proximity for Reed–Solomon (RS) codes are the main component of our PCP. We construct a PCP of proximity that reduces the concrete-efficiency threshold for testing proximity to RS codes from 2^683 in their work to 2^43, which is tantalizingly close to practicality.
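The threshold notion can be made concrete under an assumed cost model (the constants below are invented for illustration): if naive verification costs T and the PCP verifier costs A·(log2 T)^e, the concrete-efficiency threshold is the smallest T at which the PCP wins.

```python
from math import log2

def concrete_efficiency_threshold(A, e):
    """Smallest power-of-two computation size T at which an (assumed) PCP
    verifier cost of A * (log2 T)**e drops below the naive cost T of
    simply re-running the computation."""
    T = 2
    while A * log2(T) ** e >= T:
        T *= 2
    return T
```

With A = 1000 and e = 3 this gives 2^24; shrinking the verifier's constants is precisely what moves such thresholds, as in the paper's improvement from 2^683 to 2^43 for RS proximity testing.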

The full version of the paper is on ECCC. The Mathematica file with concrete-efficiency threshold computations is here.

• [C08] Recursive Composition and Bootstrapping for SNARKs and Proof-Carrying Data
Nir Bitansky, Ran Canetti, Alessandro Chiesa, Eran Tromer
STOC 2013 (45th ACM Symposium on the Theory of Computing)

#### Abstract

Succinct non-interactive arguments (SNARGs) enable verifying NP statements with much lower complexity than required for classical NP verification (in fact, with complexity that is independent of the NP language at hand). In particular, SNARGs provide strong solutions to the problem of verifiably delegating computation.

Despite recent progress in the understanding and construction of SNARGs, there remain unattained goals. First, publicly-verifiable SNARGs are only known either in the random oracle model, or in a model that allows expensive offline preprocessing. Second, known SNARGs require from the prover significantly more time or space than required for classical NP verification.

We show that, assuming collision-resistant hashing, any SNARG having a natural proof of knowledge property (i.e., a SNARK) can be "bootstrapped" to obtain a complexity-preserving SNARK, i.e., one without expensive preprocessing and where the prover's time and space complexity is essentially the same as that required for classical NP verification. By applying our transformation to known publicly-verifiable SNARKs with expensive preprocessing, we obtain the first publicly-verifiable complexity-preserving SNARK in the plain model (and in particular, eliminate the expensive preprocessing), thereby attaining the aforementioned goals. We also show an analogous transformation for privately-verifiable SNARKs, assuming fully-homomorphic encryption. Curiously, our transformations do not rely on PCPs.

At the heart of our transformations is recursive composition of SNARKs and, more generally, new techniques for constructing and using proof-carrying data (PCD) systems, which extend the notion of a SNARK to the distributed setting. Concretely, to bootstrap a given SNARK, we recursively compose the SNARK to obtain a "weak" PCD system for shallow distributed computations, and then use the PCD framework to attain stronger, complexity-preserving SNARKs and PCD systems.

The full version of the paper is on ePrint.

• [C07] Succinct Non-Interactive Arguments via Linear Interactive Proofs
TCC 2013 (10th Theory of Cryptography Conference)

#### Abstract

Succinct non-interactive arguments (SNARGs) enable verifying NP statements with lower complexity than required for classical NP verification. Traditionally, the focus has been on minimizing the length of such arguments; nowadays researchers have focused also on minimizing verification time, by drawing motivation from the problem of delegating computation.

A common relaxation is a preprocessing SNARG, which allows the verifier to conduct an expensive offline phase that is independent of the statement to be proven later. Recent constructions of preprocessing SNARGs have achieved attractive features: they are publicly-verifiable, proofs consist of only O(1) encrypted (or encoded) field elements, and verification is via arithmetic circuits of size linear in the NP statement. Additionally, these constructions seem to have "escaped the hegemony" of probabilistically-checkable proofs (PCPs) as a basic building block of succinct arguments.

We present a general methodology for the construction of preprocessing SNARGs, as well as resulting concrete efficiency improvements. Our contribution is three-fold:

1. We introduce and study a natural extension of the interactive proof model that considers algebraically-bounded provers; this new setting is analogous to the common study of algebraically-bounded "adversaries" in other fields, such as pseudorandomness and randomness extraction. More concretely, in this work we focus on linear (or affine) provers, and provide several constructions of (succinct two-message) linear-interactive proofs (LIPs) for NP. Our constructions are based on general transformations applied to both linear PCPs (LPCPs) and traditional "unstructured" PCPs.

2. We give conceptually simple cryptographic transformations from LIPs to preprocessing SNARGs, whose security can be based on different forms of linear targeted malleability (implied by previous knowledge assumptions). Our transformations convert arbitrary (two-message) LIPs into designated-verifier SNARGs, and LIPs with degree-bounded verifiers into publicly-verifiable SNARGs. We also extend our methodology to obtain zero-knowledge LIPs and SNARGs. Our techniques yield SNARGs of knowledge and thus can benefit from known recursive composition and bootstrapping techniques.

3. Following this methodology, we exhibit several constructions achieving new efficiency features, such as "single-ciphertext preprocessing SNARGs" and improved succinctness-soundness tradeoffs. We also offer a new perspective on existing constructions of preprocessing SNARGs, revealing a direct connection of these to LPCPs and LIPs.
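The linear-prover model of contribution 1 can be illustrated with a tiny example (not one of the paper's constructions): the proof is a vector pi, every query q is answered by the inner product <q, pi>, and a random linear combination lets the verifier check a linear claim A·w = b with one query per trial.

```python
import random

def linear_query(proof, q, p):
    """A linear prover answers a query q with the inner product <q, proof> mod p."""
    return sum(a * b for a, b in zip(q, proof)) % p

def check_Aw_eq_b(A, b, proof, p, trials=20):
    """Verifier: pick random r, derive one linear query q = r^T A,
    and accept iff the answer equals <r, b> mod p, repeated over trials."""
    n = len(A)
    for _ in range(trials):
        r = [random.randrange(p) for _ in range(n)]
        q = [sum(r[i] * A[i][j] for i in range(n)) % p for j in range(len(A[0]))]
        if linear_query(proof, q, p) != sum(r[i] * b[i] for i in range(n)) % p:
            return False
    return True
```

A linear prover committed to the wrong vector is caught in each trial except with probability 1/p, since a nonzero error vector survives a random linear combination mod p only with that probability.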

The full version of the paper is on ePrint.

• [C06] Fast Reductions from RAMs to Delegatable Succinct Constraint Satisfaction Problems
ITCS 2013 (4th Conference on Innovations in Theoretical Computer Science)

#### Abstract

Succinct arguments for NP are proof systems that allow a weak verifier to retroactively check computation done by a powerful prover. Constructions of such protocols prove membership in languages consisting of very large yet succinctly-represented constraint satisfaction problems that, alas, are unnatural in the sense that the problems arising in practice do not take this form.

For general computation tasks, the most natural representation is typically as random-access machine (RAM) algorithms, because such a representation can be obtained very efficiently by applying a compiler to code written in a high-level programming language. Thus, understanding the efficiency of reductions from RAM computations to other NP-complete problem representations for which succinct arguments (or proofs) are known is a prerequisite to a more complete understanding of the applicability of these arguments.

Existing succinct argument constructions rely either on circuit satisfiability or (in PCP-based constructions) on algebraic constraint satisfaction problems. In this paper, we present new and more efficient reductions from RAM (and parallel RAM) computations to both problems that (a) preserve succinctness (i.e., do not "unroll" the computation of a machine), (b) preserve zero-knowledge and proof-of-knowledge properties, and (c) enjoy fast and highly-parallelizable algorithms for transforming a witness to the RAM computation into a witness for the corresponding problem. These additional properties are typically not considered in "classical" complexity theory but are often required or very desirable in the application of succinct arguments.

Fulfilling all these efficiency requirements poses significant technical challenges, and we develop a set of tools (both unconditional and leveraging computational assumptions) for generically and efficiently structuring and arithmetizing RAM computations for use in succinct arguments. More generally, our results can be applied to proof systems for NP relying on the aforementioned problem representations; these include various zero-knowledge proof constructions.

The full version of the paper is on ePrint.

• [C05] Succinct Arguments from Multi-Prover Interactive Proofs and their Efficiency Benefits
Nir Bitansky, Alessandro Chiesa
CRYPTO 2012 (32nd International Cryptology Conference)

#### Abstract

Succinct arguments of knowledge are computationally-sound proofs of knowledge for NP where the verifier's running time is independent of the time complexity t of the nondeterministic NP machine M that decides the given language. Existing succinct argument constructions are, typically, based on techniques that combine cryptographic hashing and probabilistically-checkable proofs (PCPs). Yet, even when instantiating these constructions with state-of-the-art PCPs, the prover needs Ω(t) space in order to run in quasilinear time (i.e., time t*poly(k)), regardless of the space complexity s of the machine M.

We say that a succinct argument is complexity preserving if the prover runs in time t*poly(k) and space s*poly(k) and the verifier runs in time |x|*poly(k) when proving and verifying that a t-time s-space random-access machine nondeterministically accepts an input x. Do complexity-preserving succinct arguments exist? To study this question, we investigate the alternative approach of constructing succinct arguments based on multi-prover interactive proofs (MIPs) and stronger cryptographic techniques:

(1) We construct a one-round succinct MIP of knowledge, where each prover runs in time t*polylog(t) and space s*polylog(t) and the verifier runs in time |x|*polylog(t).

(2) We show how to transform any one-round MIP protocol to a succinct four-message argument (with a single prover), while preserving the time and space efficiency of the original MIP protocol; using our MIP protocol, this transformation yields a complexity-preserving four-message succinct argument.

As a main tool for our transformation, we define and construct a succinct multi-function commitment that (a) allows the sender to commit to a vector of functions in time and space complexity that are essentially the same as those needed for a single evaluation of the functions, and (b) ensures that the receiver's running time is essentially independent of the function. The scheme is based on fully-homomorphic encryption (and no additional assumptions are needed for our succinct argument).

(3) In addition, we revisit the problem of non-interactive succinct arguments of knowledge (SNARKs), where known impossibilities prevent solutions based on black-box reductions to standard assumptions. We formulate a natural (but non-standard) variant of homomorphic encryption having a homomorphism-extraction property. We show that this primitive essentially allows us to squash our interactive protocol, while again preserving time and space efficiency, thereby obtaining a complexity-preserving SNARK. We further show that this variant is, in fact, implied by the existence of (complexity-preserving) SNARKs.

The full version of the paper is on ePrint.

• [C04] From Extractable Collision Resistance to Succinct Non-Interactive Arguments of Knowledge, and Back Again
Nir Bitansky, Ran Canetti, Alessandro Chiesa, Eran Tromer
ITCS 2012 (3rd Conference on Innovations in Theoretical Computer Science)

#### Abstract

The existence of succinct non-interactive arguments for NP (i.e., non-interactive computationally-sound proofs where the verifier's work is essentially independent of the complexity of the NP nondeterministic verifier) has been an intriguing question for the past two decades. Other than CS proofs in the random oracle model [Micali, FOCS '94], the only existing candidate construction is based on an elaborate assumption that is tailored to a specific protocol [Di Crescenzo and Lipmaa, CiE '08].

We formulate a general and relatively natural notion of an extractable collision-resistant hash function (ECRH) and show that, if ECRHs exist, then a modified version of Di Crescenzo and Lipmaa's protocol is a succinct non-interactive argument for NP. Furthermore, the modified protocol is actually a succinct non-interactive adaptive argument of knowledge (SNARK). We then propose several candidate constructions for ECRHs and relaxations thereof.

We demonstrate the applicability of SNARKs to various forms of delegation of computation, to succinct non-interactive zero knowledge arguments, and to succinct two-party secure computation. Finally, we show that SNARKs essentially imply the existence of ECRHs, thus demonstrating the necessity of the assumption.

The full version of the paper is on ePrint.

• [C03] Mechanism Design with Approximate Valuations
Alessandro Chiesa, Silvio Micali, Zeyuan Allen Zhu
ITCS 2012 (3rd Conference on Innovations in Theoretical Computer Science)

#### Abstract

We study single-good auctions in a setting where each player knows his own valuation only within a constant multiplicative factor δ in (0,1), and the mechanism designer knows δ. The classical notions of implementation in dominant strategies and implementation in undominated strategies are naturally extended to this setting, but their power is vastly different.

On the negative side, we prove that no dominant-strategy mechanism can guarantee social welfare that is significantly better than that achievable by assigning the good to a random player.

On the positive side, we provide tight upper and lower bounds for the fraction of the maximum social welfare achievable in undominated strategies, whether deterministically or probabilistically.

The ITCS 2012 version is only an introductory non-archival draft. The full version of the paper, with the title Knightian Auctions, is on the arXiv.

• [C02] Proof-Carrying Data and Hearsay Arguments from Signature Cards
Alessandro Chiesa, Eran Tromer
ITCS 2010 (1st Conference on Innovations in Theoretical Computer Science)

#### Abstract

The security of systems can often be expressed as ensuring that some property is maintained at every step of a distributed computation conducted by untrusted parties. Special cases include integrity of programs running on untrusted platforms, various forms of confidentiality and side-channel resilience, and domain-specific invariants.

We propose a new approach, proof-carrying data (PCD), which sidesteps the threat of faults and leakage by reasoning about properties of a computation's output data, regardless of the process that produced it. In PCD, the system designer prescribes the desired properties of a computation's outputs. Corresponding proofs are attached to every message flowing through the system, and are mutually verified by the system's components. Each such proof attests that the message's data and all of its history comply with the prescribed properties.

We construct a general protocol compiler that generates, propagates, and verifies such proofs of compliance, while preserving the dynamics and efficiency of the original computation. Our main technical tool is the cryptographic construction of short non-interactive arguments (computationally-sound proofs) for statements whose truth depends on "hearsay evidence": previous arguments about other statements. To this end, we attain a particularly strong proof-of-knowledge property.

We realize the above, under standard cryptographic assumptions, in a model where the prover has black-box access to some simple functionality --- essentially, a signature card.

The conference paper can be found here.
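The message-and-proof flow described in the abstract can be illustrated with a non-cryptographic mock; the names (`compliant`, `prove`, `verify`) and the example compliance predicate are purely illustrative, and a real PCD system replaces the keyed hash below with a succinct argument whose proof-of-knowledge property enforces the predicate:

```python
import hashlib
import hmac

# NON-cryptographic mock of the PCD message flow: every message carries
# a proof that its data, and all of its history, comply with a
# designer-chosen predicate. The keyed hash stands in for a succinct
# argument; the secret key stands in for the idealized proving device.
SECRET = b"idealized-prover"

def compliant(data, inputs):
    # Example predicate: a source emits a non-negative value; an
    # internal node must output the sum of its inputs.
    if not inputs:
        return data >= 0
    return data == sum(m["data"] for m in inputs)

def prove(data, inputs):
    # Attach a proof only after verifying the inputs' proofs (the
    # "hearsay evidence") and checking local compliance.
    assert all(verify(m) for m in inputs), "input proof rejected"
    assert compliant(data, inputs), "compliance violated"
    tag = hmac.new(SECRET, str(data).encode(), hashlib.sha256).hexdigest()
    return {"data": data, "proof": tag}

def verify(msg):
    tag = hmac.new(SECRET, str(msg["data"]).encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, msg["proof"])

# Two source nodes and one aggregator; proofs flow with the messages.
a, b = prove(3, []), prove(4, [])
c = prove(7, [a, b])   # verifies a and b, then proves 7 = 3 + 4
assert verify(c)
```

Because `prove` recursively checks the incoming proofs, a verifier at the system perimeter only ever inspects the last message, regardless of how deep the computation's history is.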

• [C01] A Security Analysis of the Boston T
Zackary Anderson, Alessandro Chiesa, Samuel McVeety, Russell Ryan
DEF CON 16 (hacker convention in Las Vegas in 2008)

#### Abstract

In this talk we go over weaknesses in common subway fare collection systems. We focus on the Boston T subway, and show how we reverse engineered the data on the magstripe card; we present several attacks to completely break the CharlieCard, a MIFARE Classic smartcard used in many subways around the world; and we discuss physical security problems. We discuss practical brute-force attacks using FPGAs and how to use software radio to read RFID cards. We survey 'human factors' that lead to weaknesses in the system, and we present a novel method of hacking WiFi: WARCARTING. We release several open source tools we wrote in the process of researching these attacks. With live demos, we demonstrate how we broke these systems.

The talk had to be canceled!

### Journal Publications

• [J6] On Cycles of Pairing-Friendly Elliptic Curves
Alessandro Chiesa, Lynn Chua, Matthew Weidner
SIAM Journal on Applied Algebra and Geometry, Volume 3, Issue 2, pages 175–192, April 2019

#### Abstract

A cycle of elliptic curves is a list of elliptic curves defined over finite fields, such that the number of points on one curve is equal to the size of the field of definition of the next, in a cyclic way. We study cycles of elliptic curves in which every curve is pairing-friendly. These have recently found notable applications in pairing-based cryptography, for instance in improving the scalability of distributed ledger technologies. We construct a new type of cycle of length 4 consisting of MNT curves, and characterize all the possibilities for cycles consisting of MNT curves. We show that long cycles cannot be constructed from families of curves with the same complex multiplication discriminant, and that cycles consisting of composite order elliptic curves cannot exist. We also show that we cannot construct cycles consisting of curves from only the Freeman or Barreto--Naehrig families.

Available on arXiv as arXiv:1803.02067 [math.NT] and on SIAGA's website.
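The cycle condition can be checked directly by brute-force point counting over small fields. The toy 2-cycle below (y^2 = x^3 + 2 over F_13 and F_19) is a known small "amicable pair" used here purely for illustration; the pairing-friendly (e.g., MNT) curves studied in the paper live over large fields, where point counts come from the CM method rather than enumeration:

```python
# Brute-force sketch of the cycle condition: the number of points on
# curve i over F_{p_i} must equal p_{i+1}, cyclically.

def count_points(p, a, b):
    """Number of points on y^2 = x^3 + a*x + b over F_p, incl. infinity."""
    n = 1  # point at infinity
    roots = {}  # value v -> number of y with y^2 = v (mod p)
    for y in range(p):
        v = y * y % p
        roots[v] = roots.get(v, 0) + 1
    for x in range(p):
        n += roots.get((x * x * x + a * x + b) % p, 0)
    return n

def is_cycle(curves):
    """curves: list of (p, a, b); check the cyclic point-count condition."""
    ps = [p for p, _, _ in curves]
    return all(count_points(*curves[i]) == ps[(i + 1) % len(ps)]
               for i in range(len(curves)))

# A toy 2-cycle: #E(F_13) = 19 and #E(F_19) = 13 for y^2 = x^3 + 2.
assert count_points(13, 0, 2) == 19
assert count_points(19, 0, 2) == 13
assert is_cycle([(13, 0, 2), (19, 0, 2)])
```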

• [J5] Scalable Zero Knowledge via Cycles of Elliptic Curves
Eli Ben-Sasson, Alessandro Chiesa, Eran Tromer, Madars Virza
Algorithmica, Volume 79, Issue 4, pages 1102-1160, December 2017

#### Abstract

Non-interactive zero-knowledge proofs of knowledge for general NP statements are a powerful cryptographic primitive, both in theory and in practical applications. Recently, much research has focused on achieving an additional property, succinctness, requiring the proof to be very short and easy to verify. Such proof systems are known as zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs), and are desired when communication is expensive, or the verifier is computationally weak.

Existing zk-SNARK implementations have severe scalability limitations, in terms of space complexity as a function of the size of the computation being proved (e.g., running time of the NP statement's decision program). First, the size of the proving key is quasilinear in the upper bound on the computation size. Second, producing a proof requires "writing down" all intermediate values of the entire computation, and then conducting global operations such as FFTs.

The bootstrapping technique of Bitansky et al. (STOC '13), following Valiant (TCC '08), offers an approach to scalability, by recursively composing proofs: proving statements about acceptance of the proof system's own verifier (and correctness of the program's latest step). Alas, recursive composition of known zk-SNARKs has never been realized in practice, due to enormous computational cost.

Using new elliptic-curve cryptographic techniques, and methods for exploiting the proof systems' field structure and nondeterminism, we achieve the first zk-SNARK implementation that practically achieves recursive proof composition. Our zk-SNARK implementation runs random-access machine programs and produces proofs of their correct execution, on today's hardware, for any program running time. It takes constant time to generate the keys that support all computation sizes. Subsequently, the proving process only incurs a constant multiplicative overhead compared to the original computation's time, and an essentially-constant additive overhead in memory. Thus, our zk-SNARK implementation is the first to have a well-defined, albeit low, clock rate of "verified instructions per second".

The full version of the paper is on ePrint.
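The recursion pattern behind the bootstrapping technique can be sketched with a non-cryptographic mock: each "proof" attests that the latest step is correct and that the previous proof verifies. All names here are illustrative, and the publicly computable digest below is only a stand-in for a succinct argument (it has no soundness on its own):

```python
import hashlib

# Mock of recursive proof composition (IVC-style): at step i the prover
# shows "the proof for step i-1 verifies AND state_i = step(state_{i-1})".
# The verifier's final check is constant-size, independent of the number
# of steps.

def step(state):
    # Example transition function: repeated squaring modulo a small prime.
    return state * state % 1009

def digest(i, state):
    # Stand-in for a succinct proof of the statement "(i, state) is the
    # correct result of i steps from the initial state".
    return hashlib.sha256(f"{i}:{state}".encode()).hexdigest()

def prove_run(initial, steps):
    state, proof = initial, digest(0, initial)
    for i in range(1, steps + 1):
        new_state = step(state)
        # A real zk-SNARK would prove, succinctly: "proof verifies for
        # (i-1, state) and new_state = step(state)".
        assert proof == digest(i - 1, state)
        state, proof = new_state, digest(i, new_state)
    return state, proof

def verify_run(initial, steps, state, proof):
    # Constant-time check against the final proof only.
    return proof == digest(steps, state)

final_state, final_proof = prove_run(3, 4)
assert verify_run(3, 4, final_state, final_proof)
```

The point of the pattern is that memory stays essentially constant: the prover keeps only the current state and the latest proof, never the whole execution trace.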

• [J4] The Hunting of the SNARK
Journal of Cryptology, Volume 30, Issue 4, pages 989-1066, October 2017

#### Abstract

The existence of succinct non-interactive arguments for NP (i.e., non-interactive computationally-sound proofs where the verifier's work is essentially independent of the complexity of the NP nondeterministic verifier) has been an intriguing question for the past two decades. Other than CS proofs in the random oracle model [Micali, FOCS '94], the only existing candidate construction is based on an elaborate assumption that is tailored to a specific protocol [Di Crescenzo and Lipmaa, CiE '08].

We formulate a general and relatively natural notion of an extractable collision-resistant hash function (ECRH) and show that, if ECRHs exist, then a modified version of Di Crescenzo and Lipmaa's protocol is a succinct non-interactive argument for NP. Furthermore, the modified protocol is actually a succinct non-interactive adaptive argument of knowledge (SNARK). We then propose several candidate constructions for ECRHs and relaxations thereof.

We demonstrate the applicability of SNARKs to various forms of delegation of computation, to succinct non-interactive zero knowledge arguments, and to succinct two-party secure computation. Finally, we show that SNARKs essentially imply the existence of ECRHs, thus demonstrating the necessity of the assumption.

Going beyond ECRHs, we formulate the notion of extractable one-way functions (EOWFs). Assuming the existence of a natural variant of EOWFs, we construct a 2-message selective-opening-attack secure commitment scheme and a 3-round zero-knowledge argument of knowledge. Furthermore, if the EOWFs are concurrently extractable, the 3-round zero-knowledge protocol is also concurrent zero-knowledge.

Our constructions circumvent previous black-box impossibility results regarding these protocols by relying on EOWFs as the non-black-box component in the security reductions.

Available on ePrint; this paper is a merge of [BCCT11] and [GLR11].

• [J3] Knightian Analysis of the Vickrey Mechanism
Alessandro Chiesa, Silvio Micali, Zeyuan Allen Zhu
Econometrica, Volume 83, Issue 5, pages 1727-1754, September 2015

#### Abstract

We analyze the Vickrey mechanism for auctions of multiple identical goods when the players have both Knightian uncertainty over their own valuations and incomplete preferences. In this model, the Vickrey mechanism is no longer dominant-strategy, and we prove that all dominant-strategy mechanisms are inadequate. However, we also prove that, in undominated strategies, the social welfare produced by the Vickrey mechanism in the worst case is not only very good, but also essentially optimal.

Find the paper here.
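For reference, the mechanism analyzed above is the classical multi-good Vickrey auction: with k identical goods and unit-demand bidders, the k highest bids win and every winner pays the (k+1)-st highest bid. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
def vickrey(bids, k):
    """Multi-good Vickrey auction for k identical goods.

    bids: dict mapping player -> bid; each player demands one good.
    Returns (winners, price): the k highest bidders, each paying the
    (k+1)-st highest bid (0 if fewer than k+1 bidders).
    """
    order = sorted(bids, key=bids.get, reverse=True)
    winners = order[:k]
    price = bids[order[k]] if len(order) > k else 0
    return winners, price

winners, price = vickrey({"a": 10, "b": 7, "c": 5, "d": 2}, k=2)
# "a" and "b" win, and each pays the third-highest bid, 5.
assert winners == ["a", "b"] and price == 5
```

With exact valuations, bidding one's true value is dominant; the paper's point is that under Knightian uncertainty this dominance breaks down, yet the mechanism's worst-case welfare in undominated strategies remains essentially optimal.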

• [J2] Improved Soundness for QMA with Multiple Provers
Alessandro Chiesa, Michael A. Forbes

#### Abstract

We present three contributions to the understanding of QMA with multiple provers:

1. We give a tight soundness analysis of the protocol of [Blier and Tapp, ICQNM '09], yielding a soundness gap Ω(N^{-2}). Our improvement is achieved without the use of an instance with a constant soundness gap (i.e., without using a "PCP").

2. We give a tight soundness analysis of the protocol of [Chen and Drucker, arXiv '10], thereby improving their result from a "monolithic" protocol where Θ(√N) provers are needed in order to have any soundness gap, to a protocol with a smooth trade-off between the number of provers k and a soundness gap Ω(k^{2}N^{-1}), as long as k >= Ω(log N). (And, when k <= Θ(√N), we recover the original parameters of Chen and Drucker.)

3. We make progress towards an open question of [Aaronson et al., ToC '09] about what kinds of NP-complete problems are amenable to "sublinear" multiple-prover QMA protocols, by observing that a large class of such examples can easily be derived from results already in the PCP literature --- namely, at least the languages recognized by nondeterministic RAMs in quasilinear time.

Find the paper on the arXiv and on CJTCS.

• [J1] Proof-carrying data: Secure computation on untrusted platforms
Alessandro Chiesa, Eran Tromer

#### Abstract

When running software applications and services, we rely on the underlying execution platform: the hardware and the lower levels of the software stack. The execution platform is susceptible to a wide range of threats, ranging from accidental bugs, faults, and leaks to maliciously induced Trojan horses. The problem is aggravated by growing system complexity and by increasingly pertinent outsourcing and supply chain considerations. Traditional mechanisms, which painstakingly validate all system components, are expensive and limited in applicability.

What if the platform assurance problem is just too hard? Do we have any hope of securely running software when we cannot trust the underlying hardware, hypervisor, kernel, libraries, and compilers? This article discusses a potential approach for doing just that: conducting trustworthy computation on untrusted execution platforms.

The approach, proof-carrying data (PCD), circumnavigates the threat of faults and leakage by reasoning solely about properties of a computation's output data, regardless of the process that produced it. In PCD, the system designer prescribes the desired properties of the computation's outputs. These properties are then enforced using cryptographic proofs attached to all data flowing through the system and verified at the system perimeter as well as internal nodes.

Find the paper on NSA's website.

### Theses

• [T2] Succinct Non-Interactive Arguments
Ph.D. thesis (September 2014)
MIT EECS Department

#### Abstract

Succinct non-interactive arguments (SNARGs), also known as "CS proofs" [Micali, FOCS 1994], enable verifying NP statements with much lower complexity than required for classical NP verification (in fact, with complexity that is independent of the NP language at hand). In particular, SNARGs provide strong solutions to the problem of verifiably delegating computation. A common relaxation is a preprocessing SNARG, which allows the verifier to conduct an expensive offline phase, independent of the statement to be proven later.

In this thesis we present two main results:

1. A general methodology for the construction of preprocessing SNARGs.

2. A transformation, based on collision-resistant hashing, that takes any SNARG having a natural proof of knowledge property (i.e., a SNARK) as input and "bootstraps" it to obtain a complexity-preserving SNARK, i.e., one without expensive preprocessing and where the prover's time and space complexity is essentially the same as that required for classical NP verification.

These results provide the first publicly-verifiable complexity-preserving SNARK in the plain model. At the heart of our transformations is recursive composition of SNARKs and, more generally, new techniques for constructing and using proof-carrying data (PCD) systems, which extend the notion of a SNARK to the distributed setting. Concretely, to bootstrap a given SNARK, we recursively compose the SNARK to obtain a "weak" PCD system for shallow distributed computations, and then use the PCD framework to attain stronger, complexity-preserving SNARKs and PCD systems.

The thesis is available on DSpace@MIT.

• [T1] Proof-Carrying Data
M.Eng. thesis (June 2010)
MIT EECS Department
Advised by Prof. Ronald L. Rivest and Prof. Eran Tromer

#### Abstract

The security of systems can often be expressed as ensuring that some property is maintained at every step of a distributed computation conducted by untrusted parties. Special cases include integrity of programs running on untrusted platforms, various forms of confidentiality and side-channel resilience, and domain-specific invariants.

We propose a new approach, proof-carrying data (PCD), which sidesteps the threat of faults and leakage by reasoning about properties of a computation’s output data, regardless of the process that produced it. In PCD, the system designer prescribes the desired properties of a computation’s outputs. Corresponding proofs are attached to every message flowing through the system, and are mutually verified by the system’s components. Each such proof attests that the message’s data and all of its history comply with the prescribed properties.

We construct a general protocol compiler that generates, propagates, and verifies such proofs of compliance, while preserving the dynamics and efficiency of the original computation. Our main technical tool is the cryptographic construction of short non-interactive arguments (computationally-sound proofs) for statements whose truth depends on "hearsay evidence": previous arguments about other statements. To this end, we attain a particularly strong proof-of-knowledge property.

We realize the above, under standard cryptographic assumptions, in a model where the prover has black-box access to some simple functionality --- essentially, a signature card.

The thesis is available on DSpace@MIT.

## Short Bio

I joined UC Berkeley's faculty in the summer of 2015.
Prior to that, I spent one year as a postdoctoral researcher at ETH Zürich; my host was Prof. Thomas Holenstein.
I earned my Ph.D. in the Theory of Computation group in CSAIL at MIT; my doctoral thesis advisor was Prof. Silvio Micali.
I earned my M.Eng. in the same group; my master's thesis advisors were Prof. Eran Tromer and Prof. Ron Rivest.
I earned my S.B. in Mathematics and my S.B. in Computer Science at MIT; outside of classes and labs, I rowed for the heavyweight varsity crew team at MIT.
A list of my old coursework while at MIT can be found here.
Before coming to MIT, I lived in Varese, Italy, where I was born in 1987. While in Italy, I attended the European School of Varese from kindergarten through high school; this school is part of the system of European Schools, which awards students the European Baccalaureate.
I enjoy many outdoor sports, including biking, climbing, mountaineering, and running.

## Contact

The best way to contact me is via the email at the top of this page.
My office is located in 683 Soda Hall.
