    Differentially Oblivious Database Joins: Overcoming the Worst-Case Curse of Fully Oblivious Algorithms

    Numerous high-profile works have shown that access patterns to even encrypted databases can leak secret information and sometimes even lead to reconstruction of the entire database. To thwart access pattern leakage, the literature has focused on oblivious algorithms, where obliviousness requires that the access patterns leak nothing about the input data. In this paper, we consider the Join operator, an important database primitive that has been extensively studied and optimized. Unfortunately, any fully oblivious Join algorithm would require always padding the result to the worst-case length, which is quadratic in the data size N. In comparison, an insecure baseline incurs only O(R + N) cost where R is the true result length, and in the common case in practice, R is relatively short. As a typical example, when R = O(N), any fully oblivious algorithm must inherently incur a prohibitive, N-fold slowdown relative to the insecure baseline. Indeed, the (non-private) database and algorithms literature invariably focuses on studying the instance-specific rather than worst-case performance of database algorithms. Unfortunately, the stringent notion of full obliviousness precludes the design of efficient algorithms with non-trivial instance-specific performance. To overcome this worst-case performance barrier of full obliviousness and enable algorithms with good instance-specific performance, we consider a relaxed notion of access pattern privacy called (ε, δ)-differential obliviousness (DO), originally proposed in the seminal work of Chan et al. (SODA '19). Rather than insisting that the access patterns leak no information whatsoever, the relaxed DO notion requires that the access patterns satisfy (ε, δ)-differential privacy. We show that by adopting the relaxed DO notion, we can obtain efficient database Join mechanisms whose instance-specific performance approximately matches the insecure baseline, while still offering a meaningful notion of privacy to individual users. Complementing our upper bound results, we also prove new lower bounds regarding the performance of any DO Join algorithm. Differential obliviousness (DO) is a new notion and remains relatively unexplored territory. Following the pioneering investigations by Chan et al. and others, our work is among the very first to formally explore how DO can help overcome the worst-case performance curse of full obliviousness; moreover, we motivate our work with database applications. Our work shows new evidence why DO might be a promising notion, and opens up several exciting future directions.
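    To give a concrete sense of the padding idea behind DO, here is a minimal Python sketch (not the paper's algorithm, which also constrains the access pattern itself, not only the output size): the join output is padded with dummy rows to a length perturbed by one-sided, shifted Laplace noise, so the revealed output size is an (ε, δ)-differentially private estimate that stays close to the true result length R. All names and the noise calibration are illustrative assumptions.

        import math
        import random

        def dp_padded_length(true_len, epsilon, delta, sensitivity=1):
            # Laplace noise with scale sensitivity/epsilon, shifted so that the
            # noise is negative with probability at most delta; padding is thus
            # one-sided (dummies are added, real rows are never dropped).
            scale = sensitivity / epsilon
            shift = scale * math.log(1.0 / (2.0 * delta))
            u = min(max(random.random(), 1e-12), 1.0 - 1e-12) - 0.5
            noise = -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
            return true_len + max(0, math.ceil(shift + noise))

        def do_join(left, right, epsilon=1.0, delta=1e-6):
            # Toy equi-join over (key, value) pairs whose OUTPUT SIZE is hidden
            # by noisy padding -- only the padding idea is shown here.
            index = {}
            for k, v in left:
                index.setdefault(k, []).append(v)
            result = [(k, lv, rv) for k, rv in right for lv in index.get(k, [])]
            padded = dp_padded_length(len(result), epsilon, delta)
            result += [("DUMMY", None, None)] * (padded - len(result))
            return result

    With ε = 1 and δ = 10⁻⁶, the expected padding is on the order of log(1/δ)/ε ≈ 13 dummy rows, independent of N, which is why the instance-specific cost can stay near the insecure O(R + N) baseline.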

    Manta: Privacy Preserving Decentralized Exchange

    Cryptocurrencies and decentralized ledger technology have been widely adopted over the last decade. However, there is not yet a decentralized exchange that protects users’ privacy from end to end. In this paper, we construct the first ledger-based decentralized token exchange with strong privacy guarantees. We propose the first Decentralized Anonymous eXchange scheme (DAX scheme), based on automated market makers (AMM) and zkSNARKs, and present a formal definition of its security and privacy properties.
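    For background, the AMM pricing rule such a scheme builds on is the standard constant-product market maker; the sketch below shows that rule in plain Python and is generic rather than Manta-specific (the fee parameter is an illustrative assumption):

        def amm_swap_out(reserve_in, reserve_out, amount_in, fee=0.003):
            # Constant-product AMM: keep reserve_in * reserve_out invariant.
            # A fee is taken on the input before pricing, as in Uniswap-style pools.
            amount_in_after_fee = amount_in * (1.0 - fee)
            new_reserve_in = reserve_in + amount_in_after_fee
            # Solve (x + dx)(y - dy) = x * y for dy.
            return reserve_out - (reserve_in * reserve_out) / new_reserve_in

        # Swapping 10 units into a 1000:1000 pool yields about 9.87 units out.
        print(amm_swap_out(1000.0, 1000.0, 10.0))

    In a DAX, the point is to carry out this pricing computation while keeping trade amounts and user identities hidden (e.g., proved correct in zero knowledge) rather than posted in the clear.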

    Manta: a Plug and Play Private DeFi Stack

    We propose Manta, a plug and play private DeFi stack that consists of MantaDAP, a multi-asset decentralized anonymous payment scheme, and MantaDAX, an automated market maker (AMM) based decentralized anonymous exchange scheme. Compared with existing privacy preserving cryptocurrencies such as Zcash and Monero, Manta supports multiple base assets and allows the privatized assets to be exchanged anonymously via MantaDAX. We think this is a major step forward towards building a privacy preserving DeFi stack. Thanks to the efficiency of modern NIZKs (non-interactive zero-knowledge proof systems) and our carefully crafted design, Manta is efficient: our benchmarks report a 15-second offline zero-knowledge proof (ZKP) generation time and a 6-millisecond online proof verification time.

    Adore: Differentially Oblivious Relational Database Operators

    There has been a recent effort in applying differential privacy to memory access patterns to enhance data privacy. This is called differential obliviousness. Differential obliviousness is a promising direction because it provides a principled trade-off between performance and the desired level of privacy. To date, it is still an open question whether differential obliviousness can speed up database processing with respect to full obliviousness. In this paper, we present the design and implementation of three new major database operators: selection with projection, grouping with aggregation, and foreign key join. We prove that they satisfy the notion of differential obliviousness. Our differentially oblivious operators have reduced cache complexity, runtime complexity, and output size compared to their state-of-the-art fully oblivious counterparts. We also demonstrate that our implementation of these differentially oblivious operators can outperform their state-of-the-art fully oblivious counterparts by up to 7.4×.
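    As a concrete illustration of the direction (not Adore's actual algorithms, which also control cache complexity and the access pattern itself), a differentially oblivious selection-with-projection can pad its output with dummies to a noised size, so the output length reveals only an (ε, δ)-differentially private estimate of the true match count; all names here are illustrative assumptions:

        import math
        import random

        def shifted_laplace(scale, delta):
            # One-sided Laplace noise: negative with probability at most delta.
            u = min(max(random.random(), 1e-12), 1.0 - 1e-12) - 0.5
            noise = -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
            return noise + scale * math.log(1.0 / (2.0 * delta))

        def do_select_project(rows, predicate, project, epsilon=1.0, delta=1e-6):
            # Keep the projected matches, then pad with None dummies to a
            # noised length: output size ~ R + O(log(1/delta) / epsilon),
            # instead of the fully oblivious worst case of |rows|.
            out = [project(r) for r in rows if predicate(r)]
            out += [None] * max(0, math.ceil(shifted_laplace(1.0 / epsilon, delta)))
            return out

    The reduced output size claimed above comes from exactly this kind of instance-specific padding: the overhead depends on ε and δ, not on the table size.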

    ZEN: An Optimizing Compiler for Verifiable, Zero-Knowledge Neural Network Inferences

    We present ZEN, the first optimizing compiler that generates efficient verifiable, zero-knowledge neural network inference schemes. ZEN generates two schemes: ZEN_acc and ZEN_infer. ZEN_acc proves the accuracy of a committed neural network model; ZEN_infer proves a specific inference result. Used in combination, these verifiable computation schemes ensure both the privacy of the sensitive user data and the confidentiality of the neural network models. However, directly using these schemes with zkSNARKs incurs prohibitive computational cost. As an optimizing compiler, ZEN introduces two kinds of optimizations to address this issue: first, ZEN incorporates a new neural network quantization algorithm with two R1CS-friendly optimizations, which allows the model to be expressed in zkSNARKs with fewer constraints and minimal accuracy loss; second, ZEN introduces a SIMD-style optimization, namely stranded encoding, that can encode multiple 8-bit integers in large finite field elements without overwhelming extraction cost. Combining these optimizations, ZEN produces verifiable neural network inference schemes with 5.43× to 22.19× (15.35× on average) fewer R1CS constraints.
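    To illustrate the stranded-encoding idea, the sketch below packs several 8-bit integers into one large integer, leaving slack bits between strands so that per-strand products and sums do not overflow into their neighbors. This is a simplification under assumed strand sizes: ZEN's actual scheme works over a prime field inside R1CS and sizes the slack from the number of accumulated terms.

        STRAND_BITS = 8     # width of each packed 8-bit value
        SLACK_BITS = 16     # assumed headroom per strand for carries
        STRIDE = STRAND_BITS + SLACK_BITS

        def pack(values):
            # Place each 8-bit value in its own strand of one big integer.
            assert all(0 <= v < 2**STRAND_BITS for v in values)
            acc = 0
            for i, v in enumerate(values):
                acc |= v << (i * STRIDE)
            return acc

        def unpack(acc, n):
            # Read back n strands (each now holds a value plus its carries).
            mask = (1 << STRIDE) - 1
            return [(acc >> (i * STRIDE)) & mask for i in range(n)]

        # One big multiplication scales every strand at once, SIMD-style:
        packed = pack([3, 7, 250])
        print(unpack(packed * 5, 3))  # [15, 35, 1250]

    The saving in R1CS constraints comes from this batching: one field multiplication plus one extraction replaces many per-element operations, provided the slack is sized so strands never collide.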