
    On the Efficiency of Classical and Quantum Secure Function Evaluation

    Full text link
    We provide bounds on the efficiency of secure one-sided output two-party computation of arbitrary finite functions from trusted distributed randomness in the statistical case. From these results we derive bounds on the efficiency of protocols that use different variants of oblivious transfer (OT) as a black box. When applied to implementations of OT, these bounds generalize most known results to the statistical case. Our results hold in particular for transformations between a finite number of primitives and for any error. In the second part we study the efficiency of quantum protocols implementing OT. While most classical lower bounds for perfectly secure reductions of OT to distributed randomness still hold in the quantum setting, we present a statistically secure protocol that violates these bounds by an arbitrarily large factor. We then prove a weaker lower bound that does hold in the statistical quantum setting and implies that even quantum protocols cannot extend OT. Finally, we present two lower bounds for reductions of OT to commitments, and a protocol based on string commitments that is optimal with respect to both of these bounds.

    Conclave: secure multi-party computation on big data (extended TR)

    Full text link
    Secure Multi-Party Computation (MPC) allows mutually distrusting parties to run joint computations without revealing private data. Current MPC algorithms scale poorly with data size, which makes MPC on "big data" prohibitively slow and inhibits its practical use. Many relational analytics queries can maintain MPC's end-to-end security guarantee without using cryptographic MPC techniques for all operations. Conclave is a query compiler that accelerates such queries by transforming them into a combination of data-parallel, local cleartext processing and small MPC steps. When parties trust others with specific subsets of the data, Conclave applies new hybrid MPC-cleartext protocols to run additional steps outside of MPC and improve scalability further. Our Conclave prototype generates code for cleartext processing in Python and Spark, and for secure MPC using the Sharemind and Obliv-C frameworks. Conclave scales to data sets between three and six orders of magnitude larger than state-of-the-art MPC frameworks support on their own. Thanks to its hybrid protocols, Conclave also substantially outperforms SMCQL, the most similar existing system. Comment: Extended technical report for the EuroSys 2019 paper.
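    To make the hybrid idea concrete, here is a minimal Python sketch (not Conclave's actual API; the relation contents, the helper names local_cleartext_step and simulated_mpc_sum, and the simulated MPC step are all illustrative) of splitting a filtered aggregation into party-local cleartext work plus a small cross-party combining step that a real deployment would run under MPC, e.g. in Sharemind or Obliv-C.

```python
# Illustrative sketch (not Conclave's actual API): each party reduces its own
# relation locally in cleartext, and only the small cross-party aggregation
# step is (conceptually) run under MPC. Here the "MPC step" is simulated in
# plain Python; a real deployment would hand it to a framework such as
# Sharemind or Obliv-C.

from collections import defaultdict

def local_cleartext_step(rows, min_amount):
    """Party-local filter + pre-aggregation: runs on each party's own data."""
    per_key = defaultdict(int)
    for key, amount in rows:
        if amount >= min_amount:          # cheap selection done in the clear
            per_key[key] += amount        # partial SUM, also in the clear
    return per_key

def simulated_mpc_sum(partial_results):
    """Placeholder for the small MPC step that combines per-party partials.

    Only this cross-party SUM would need cryptographic protection; its input
    size is the number of distinct keys, not the raw data size.
    """
    total = defaultdict(int)
    for partial in partial_results:
        for key, value in partial.items():
            total[key] += value
    return dict(total)

if __name__ == "__main__":
    party_a = [("acme", 120), ("initech", 40), ("acme", 75)]
    party_b = [("acme", 30), ("initech", 220)]
    partials = [local_cleartext_step(p, min_amount=50) for p in (party_a, party_b)]
    print(simulated_mpc_sum(partials))    # {'acme': 195, 'initech': 220}
```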

    Structures and lower bounds for binary covering arrays

    Full text link
    A q-ary t-covering array is an m × n matrix with entries from {0, 1, ..., q-1} with the property that for any t column positions, all q^t possible vectors of length t occur at least once. One wishes to minimize m for given t and n, or maximize n for given t and m. For t = 2 and q = 2, the problem was completely solved by Rényi, Katona, and Kleitman and Spencer. They also showed that maximal binary 2-covering arrays are uniquely determined. Roux found a lower bound on m for general t, n, and q. In this article, we show that m × n binary 2-covering arrays under some constraints on m and n come from the maximal covering arrays. We also improve the lower bound of Roux for t = 3 and q = 2, and show that some binary 3- or 4-covering arrays are uniquely determined. Comment: 16 pages.
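    As a quick illustration of the definition, the following Python helper (assumed for illustration, not from the paper) checks the covering property directly and verifies a classical binary 2-covering array with m = 4 rows and n = 3 columns.

```python
# Minimal checker for the covering property defined above (assumed helper,
# not from the paper): an m x n array over {0, ..., q-1} is a t-covering
# array if every choice of t columns contains all q^t possible t-tuples.

from itertools import combinations, product

def is_covering_array(array, q, t):
    n = len(array[0])
    needed = set(product(range(q), repeat=t))
    for cols in combinations(range(n), t):
        seen = {tuple(row[c] for c in cols) for row in array}
        if not needed <= seen:
            return False
    return True

# A classical binary 2-covering array with m = 4 rows and n = 3 columns:
ca = [
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]
print(is_covering_array(ca, q=2, t=2))  # True
```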

    Efficient data restructuring and aggregation for I/O acceleration in PIDX

    Get PDF
    Pre-print. Hierarchical, multiresolution data representations enable interactive analysis and visualization of large-scale simulations. One promising application of these techniques is to store high performance computing simulation output in a hierarchical Z (HZ) ordering that translates data from a Cartesian coordinate scheme to a one-dimensional array ordered by locality at different resolution levels. However, when the dimensions of the simulation data are not an even power of 2, parallel HZ ordering produces sparse memory and network access patterns that inhibit I/O performance. This work presents a new technique for parallel HZ ordering of simulation datasets that restructures simulation data into large (power of 2) blocks to facilitate efficient I/O aggregation. We perform both weak and strong scaling experiments using the S3D combustion application on both Cray-XE6 (65,536 cores) and IBM Blue Gene/P (131,072 cores) platforms. We demonstrate that data can be written in hierarchical, multiresolution format with performance competitive to that of native data-ordering methods.
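    For intuition, the sketch below shows plain Z-order (Morton) bit interleaving, the locality-preserving indexing that HZ orderings build on; it is not PIDX's actual HZ level computation or its aggregation strategy, but it illustrates why power-of-2 extents yield a dense, contiguous index range.

```python
# Sketch of plain Z-order (Morton) indexing in 3D, the locality-preserving
# curve that hierarchical-Z (HZ) orderings build on. This is not PIDX's
# actual HZ level computation or its aggregation strategy; it only shows why
# power-of-2 extents make the index space dense.

def morton3d(x, y, z, bits=10):
    """Interleave the bits of (x, y, z) into a single Z-order index."""
    index = 0
    for i in range(bits):
        index |= ((x >> i) & 1) << (3 * i)
        index |= ((y >> i) & 1) << (3 * i + 1)
        index |= ((z >> i) & 1) << (3 * i + 2)
    return index

# For a 4x4x4 (power-of-2) block every index in [0, 63] is hit exactly once,
# so writes are contiguous; a 3x5x4 region would leave holes in the range.
indices = sorted(morton3d(x, y, z, bits=2)
                 for x in range(4) for y in range(4) for z in range(4))
assert indices == list(range(64))
```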

    Efficient k-out-of-n oblivious transfer schemes

    Get PDF
    Abstract: Oblivious transfer is an important cryptographic protocol in various security applications. For example, in on-line transactions, a k-out-of-n oblivious transfer scheme allows a buyer to privately choose k out of n digital goods from a merchant without learning information about the other n−k goods. In this paper, we propose several efficient two-round k-out-of-n oblivious transfer schemes, in which the receiver R sends O(k) messages to the sender S, and S sends O(n) messages back to R. The schemes provide unconditional security for either the sender or the receiver. The computational security for the other side is based on the Decisional Diffie-Hellman (DDH) or Chosen-Target Computational Diffie-Hellman (CT-CDH) problems. Our schemes have the nice property of universal parameters, that is, each pair of R and S need not hold any secret before performing the protocol. The system parameters can be used by all senders and receivers without any trapdoor specification. In some cases, our k-out-of-n OT schemes are the most efficient ones in terms of communication cost, either in the number of rounds or the number of messages. Moreover, one of our schemes is extended to an adaptive oblivious transfer scheme, in which S sends O(n) messages to R in one round in the commitment phase.
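    For readers unfamiliar with DDH-flavoured oblivious transfer, here is a toy 1-out-of-2 protocol in the Bellare-Micali style; it is not the k-out-of-n scheme of this paper, and its parameters are far too small to be secure, but it shows the two-round shape (one receiver message, one sender reply) that such schemes share.

```python
# Toy 1-out-of-2 oblivious transfer in the style of Bellare-Micali, to show
# the shape of a two-round, DDH-flavoured OT. This is NOT the paper's
# k-out-of-n scheme, and the parameters below are far too small to be secure;
# it only illustrates the message flow (one receiver message, one sender reply).

import secrets

P = 2**61 - 1          # toy prime modulus (insecure size)
G = 3                  # toy base element

def receiver_round1(choice_bit, c):
    """Receiver publishes PK0; the sender will derive PK1 = c / PK0."""
    x = secrets.randbelow(P - 1) + 1
    pk_choice = pow(G, x, P)
    pk_other = (c * pow(pk_choice, -1, P)) % P
    pk0 = pk_choice if choice_bit == 0 else pk_other
    return x, pk0

def sender_round2(m0, m1, c, pk0):
    """Sender ElGamal-encrypts m0 under PK0 and m1 under PK1 = c / PK0."""
    pk1 = (c * pow(pk0, -1, P)) % P
    cts = []
    for m, pk in ((m0, pk0), (m1, pk1)):
        r = secrets.randbelow(P - 1) + 1
        cts.append((pow(G, r, P), (m * pow(pk, r, P)) % P))
    return cts

def receiver_decrypt(choice_bit, x, cts):
    c1, c2 = cts[choice_bit]
    return (c2 * pow(pow(c1, x, P), -1, P)) % P

# 'c' must be a group element whose discrete log the receiver does not know;
# in this toy it is simply sampled publicly.
c = pow(G, secrets.randbelow(P - 1) + 1, P)
x, pk0 = receiver_round1(choice_bit=1, c=c)
print(receiver_decrypt(1, x, sender_round2(m0=11, m1=42, c=c, pk0=pk0)))  # 42
```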

    On Unconditionally Secure Distributed Oblivious Transfer.

    Get PDF
    This paper is about Oblivious Transfer in the distributed model proposed by M. Naor and B. Pinkas. In this setting a Sender has n secrets and a Receiver is interested in one of them. During a set-up phase, the Sender gives information about the secrets to m Servers. Afterwards, in a recovering phase, the Receiver can compute the secret she wishes by interacting with any k of them. More precisely, from the answers received she computes the secret in which she is interested but gets no information on the others and, at the same time, any coalition of k − 1 Servers can neither compute any secret nor figure out which one the Receiver has recovered. We present an analysis and new results holding for this model: lower bounds on the resources required to implement such a scheme (i.e., randomness, memory storage, communication complexity); some impossibility results for one-round distributed oblivious transfer protocols; two polynomial-based constructions implementing 1-out-of-n distributed oblivious transfer, which generalize and strengthen the two constructions for 1-out-of-2 given by Naor and Pinkas; as well as new one-round and two-round distributed oblivious transfer protocols, both for threshold and general access structures on the set of Servers, which are optimal with respect to some of the given bounds. Most of these constructions are basically combinatorial in nature.
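    The polynomial-based constructions rest on threshold reconstruction across the m Servers; the sketch below shows only that building block (Shamir secret sharing over a prime field, assumed here for illustration and not the paper's full distributed oblivious transfer protocol), in which any k shares reconstruct a value and fewer reveal nothing.

```python
# Sketch of the threshold building block behind polynomial-based distributed
# OT: Shamir secret sharing over a prime field. Any k of the m server shares
# reconstruct the secret, while k-1 shares give no information. This is only
# the sharing layer, not the full distributed oblivious transfer protocol.

import secrets

P = 2**61 - 1  # prime field modulus

def share(secret, k, m):
    """Split 'secret' into m shares, any k of which reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, m + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from exactly k shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for i, (xi, _) in enumerate(shares):
            if i != j:
                num = (num * (-xi)) % P
                den = (den * (xj - xi)) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=123456789, k=3, m=5)     # 5 servers, threshold 3
print(reconstruct(shares[:3]))                  # any 3 shares suffice: 123456789
```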