Arya: Nearly linear-time zero-knowledge proofs for correct program execution
There have been tremendous advances in reducing interaction, communication and verification time in zero-knowledge proofs but it remains an important challenge to make the prover efficient. We construct the first zero-knowledge proof of knowledge for the correct execution of a program on public and private inputs where the prover computation is nearly linear time. This saves a polylogarithmic factor in asymptotic performance compared to current state of the art proof systems.
We use the TinyRAM model to capture general-purpose processor computation. An instance consists of a TinyRAM program and public inputs. The witness consists of additional private inputs to the program. The prover can use our proof system to convince the verifier that the program terminates with the intended answer within given time and memory bounds. Our proof system has perfect completeness, statistical special honest-verifier zero-knowledge, and computational knowledge soundness assuming linear-time computable collision-resistant hash functions exist. The main advantage of our new proof system is asymptotically efficient prover computation. The prover's running time is only a superconstant factor larger than the program's running time in an apples-to-apples comparison where the prover uses the same TinyRAM model. Our proof system is also efficient on the other performance parameters: the verifier's running time and the communication are sublinear in the execution time of the program, and we use only a log-logarithmic number of rounds.
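The core object being proved above is that an execution trace of a register machine is consistent with the program's transition rules. The following toy sketch (not Arya itself, and neither succinct nor zero-knowledge) illustrates that statement for a hypothetical two-register machine: the "verifier" simply replays each step and checks that every consecutive pair of states in the claimed trace follows from the instruction executed.

```python
# Toy sketch of trace checking for a hypothetical 2-register machine.
# Instructions are (op, dst, src) tuples; a trace is the list of register
# states before and after each instruction.

def step(regs, instr):
    """Apply one instruction to a register state and return the new state."""
    op, dst, src = instr
    regs = list(regs)
    if op == "add":
        regs[dst] += regs[src]
    elif op == "mov":
        regs[dst] = regs[src]
    return tuple(regs)

def verify_trace(program, trace):
    """Check that every consecutive pair of states follows the program."""
    for i, instr in enumerate(program):
        if step(trace[i], instr) != trace[i + 1]:
            return False
    return True

program = [("add", 0, 1), ("mov", 1, 0)]
trace = [(1, 2), (3, 2), (3, 3)]                 # honest execution
assert verify_trace(program, trace)
assert not verify_trace(program, [(1, 2), (4, 2), (4, 4)])  # bad trace rejected
```

A real proof system replaces this step-by-step replay with a proof whose verification cost is sublinear in the trace length, and hides the private inputs.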
State of the Art Report: Verified Computation
This report describes the state of the art in verifiable computation. The
problem being solved is the following:
The Verifiable Computation Problem (also known as the Verifiable Computing Problem). Suppose we
have two computing agents. The first agent is the verifier, and the second
agent is the prover. The verifier wants the prover to perform a computation.
The verifier sends a description of the computation to the prover. Once the
prover has completed the task, the prover returns the output to the verifier.
The output will contain a proof. The verifier can use this proof to check whether
the prover computed the output correctly. The check is not required to verify the
algorithm used in the computation. Instead, it is a check that the prover
computed the output using the computation specified by the verifier. The effort
required for the check should be much less than that required to perform the
computation.
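A classic concrete instance of this check-cheaper-than-computation requirement is Freivalds' algorithm for matrix multiplication: the prover computes C = A·B in O(n³) time, while the verifier checks the claimed product with random vectors in O(n²) per trial, catching a wrong product with probability at least 1/2 each trial. A minimal sketch:

```python
import random

# Freivalds' check: the verifier tests a claimed product C = A*B by
# comparing A*(B*r) with C*r for random 0/1 vectors r. Each trial costs
# O(n^2), versus O(n^3) for recomputing the product outright.

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def freivalds_check(A, B, C, trials=20):
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False          # caught an incorrect product
    return True                   # a wrong C slips past each trial w.p. <= 1/2

random.seed(0)                    # deterministic for this demonstration
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]          # correct product
assert freivalds_check(A, B, C)
assert not freivalds_check(A, B, [[19, 22], [43, 51]])  # off-by-one caught
```

After 20 trials the probability that an incorrect product goes undetected is at most 2⁻²⁰, while the verifier never performs the full multiplication.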
This state-of-the-art report surveys 128 papers from the literature
comprising more than 4,000 pages. Other papers and books were surveyed but were
omitted. The papers surveyed were overwhelmingly mathematical. We have
summarised the major concepts that form the foundations for verifiable
computation. The report contains two main sections. The first, larger section
covers the theoretical foundations for probabilistically checkable and
zero-knowledge proofs. The second section contains a description of the current
practice in verifiable computation. Two further reports will cover (i) military
applications of verifiable computation and (ii) a collection of technical
demonstrators. The first of these is intended to be read by those who want to
know what applications are enabled by the current state of the art in
verifiable computation. The second is for those who want to see practical tools
and conduct experiments themselves. Comment: 54 pages
Efficient proofs of software exploitability for real-world processors
https://eprint.iacr.org/2022/1223.pdf (Published version)
Petuum: A New Platform for Distributed Machine Learning on Big Data
What is a systematic way to efficiently apply a wide spectrum of advanced ML
programs to industrial scale problems, using Big Models (up to 100s of billions
of parameters) on Big Data (up to terabytes or petabytes)? Modern
parallelization strategies employ fine-grained operations and scheduling beyond
the classic bulk-synchronous processing paradigm popularized by MapReduce, or
even specialized graph-based execution that relies on graph representations of
ML programs. The variety of approaches tends to pull systems and algorithms
design in different directions, and it remains difficult to find a universal
platform applicable to a wide range of ML programs at scale. We propose a
general-purpose framework that systematically addresses data- and
model-parallel challenges in large-scale ML, by observing that many ML programs
are fundamentally optimization-centric and admit error-tolerant,
iterative-convergent algorithmic solutions. This presents unique opportunities
for an integrative system design, such as bounded-error network synchronization
and dynamic scheduling based on ML program structure. We demonstrate the
efficacy of these system designs versus well-known implementations of modern ML
algorithms, allowing ML programs to run in much less time and at considerably
larger model sizes, even on modestly-sized compute clusters. Comment: 15 pages, 10 figures, final version in KDD 2015 under the same title
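The "bounded-error network synchronization" mentioned above relaxes bulk-synchronous execution: workers may run ahead of stragglers, but only by a bounded number of clock ticks, so the staleness of the parameters they read stays controlled. The following minimal sketch (a hypothetical clock structure, not Petuum's actual API) captures that rule:

```python
# Bounded-staleness clock sketch: a worker may start its next iteration
# only if it is at most `staleness` ticks ahead of the slowest worker.
# This trades exact synchrony for throughput while keeping parameter
# staleness (and hence error) bounded.

class StalenessClock:
    def __init__(self, num_workers, staleness):
        self.clocks = [0] * num_workers   # per-worker iteration counters
        self.s = staleness

    def can_advance(self, worker):
        return self.clocks[worker] - min(self.clocks) <= self.s

    def tick(self, worker):
        self.clocks[worker] += 1

clock = StalenessClock(num_workers=3, staleness=2)
clock.tick(0); clock.tick(0)       # worker 0 races ahead to clock 2
assert clock.can_advance(0)        # 2 - 0 <= 2: still allowed
clock.tick(0)                      # worker 0 at clock 3, slowest at 0
assert not clock.can_advance(0)    # 3 - 0 > 2: must wait for stragglers
clock.tick(1); clock.tick(2)       # stragglers advance to clock 1
assert clock.can_advance(0)        # 3 - 1 <= 2: worker 0 may proceed
```

Setting the staleness bound to 0 recovers bulk-synchronous execution; larger bounds allow more overlap at the cost of staler reads.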
Guaranteeing correctness in privacy-friendly outsourcing by certificate validation
With computation power in the cloud becoming a commodity, it is more and more convenient to outsource computations to external computation parties. Assuring confidentiality, even of inputs by mutually distrusting inputters, is possible by distributing computations between different parties using multiparty computation. Unfortunately, this typically only guarantees correctness if a limited number of computation parties are malicious. If correctness is needed when all computation parties are malicious, then one currently needs either fully homomorphic encryption or ``universally verifiable'' multiparty computation; both are impractical for large computations. In this paper, we show for the first time how to achieve practical privacy-friendly outsourcing with correctness guarantees, by using normal multiparty techniques to compute the result of a computation, and then using slower verifiable techniques only to verify that this result was correct. We demonstrate the feasibility of our approach in a linear programming case study.
Keywords: secret sharing, threshold cryptography, zero knowledge
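The fast computation phase above relies on standard multiparty techniques such as additive secret sharing: each input is split into random shares that individually reveal nothing, yet the parties' local operations on shares recombine to the true result. A minimal sketch of that idea (illustrative only, not the paper's protocol):

```python
import random

# Additive secret sharing over a public prime modulus: splitting an input
# into n shares whose sum mod P equals the input. Any n-1 shares are
# uniformly random and reveal nothing about the secret.

P = 2**31 - 1                       # public prime modulus

def share(x, n=3):
    """Split x into n additive shares mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

a, b = 1234, 5678
a_sh, b_sh = share(a), share(b)
# Each party adds its own two shares locally; no party ever sees a or b.
sum_sh = [(x + y) % P for x, y in zip(a_sh, b_sh)]
assert reconstruct(sum_sh) == (a + b) % P   # recombines to 6912
```

Addition of shares is local and cheap; the paper's point is that the slower machinery needed for verifiability can be reserved for checking the final result rather than running the whole computation.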