
    Delegated Private Matching for Compute

    Private matching for compute (PMC) establishes a match between two datasets owned by mutually distrusting parties (C and P) and allows the parties to input more data for the matched records for arbitrary downstream secure computation, without rerunning the private matching component. The state-of-the-art PMC protocols only support two parties and assume that both parties can participate in computationally intensive secure computation. We observe that such operational overhead limits the adoption of these protocols to powerful entities alone, since small data owners or devices with minimal computing power cannot participate. We introduce two protocols that delegate PMC from party P to untrusted cloud servers, called delegates, allowing multiple smaller P parties to provide inputs containing identifiers and associated values. Our Delegated Private Matching for Compute protocols, called DPMC and DsPMC, establish a join between the datasets of party C and multiple delegators P based on multiple identifiers, and compute secret shares of associated values for the identifiers that the parties have in common. We introduce a rerandomizable encrypted oblivious pseudorandom function (OPRF) primitive, called EO, which allows two parties to encrypt, mask, and shuffle their data; EO may be of independent interest. Our DsPMC protocol limits the leakage of DPMC by combining our EO scheme with secure three-party shuffling. Finally, our implementation demonstrates the efficiency of our constructions, outperforming related works by approximately 10× for the total protocol execution and by at least 20× for the computation on the delegators.
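    As background for the kind of primitive EO extends, the following is a minimal sketch of a standard DH-style oblivious PRF evaluation, in which a delegator blinds an identifier, the server applies its secret key, and the delegator unblinds the result. The group parameters, names, and flow here are illustrative assumptions, not the paper's actual EO construction (which additionally supports rerandomizable encryption and shuffling).

```python
# Sketch of a blinded-exponentiation (DH-style) OPRF. Toy parameters only:
# p = 2q + 1 is a tiny safe prime, so this is illustrative, not secure.
import hashlib
import secrets

p, q = 1019, 509  # toy safe-prime group; real deployments use elliptic curves

def hash_to_group(x: bytes) -> int:
    # Hash into the order-q subgroup of quadratic residues mod p.
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
    return pow(h, 2, p)

def blind(identifier: bytes):
    # Delegator: blind the identifier so the server never sees it.
    r = secrets.randbelow(q - 1) + 1
    return r, pow(hash_to_group(identifier), r, p)

def evaluate(blinded: int, k: int) -> int:
    # Server: apply the secret PRF key k to the blinded element.
    return pow(blinded, k, p)

def unblind(evaluated: int, r: int) -> int:
    # Delegator: strip the blinding to obtain H(id)^k without revealing id.
    return pow(evaluated, pow(r, -1, q), p)

k = secrets.randbelow(q - 1) + 1          # server's PRF key
r, blinded = blind(b"alice@example.com")
prf_output = unblind(evaluate(blinded, k), r)
assert prf_output == pow(hash_to_group(b"alice@example.com"), k, p)
```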

    Solving Satisfiability Modulo Counting for Symbolic and Statistical AI Integration With Provable Guarantees

    Satisfiability Modulo Counting (SMC) encompasses problems that require both symbolic decision-making and statistical reasoning. Its general formulation captures many real-world problems at the intersection of symbolic and statistical Artificial Intelligence. SMC searches for policy interventions to control probabilistic outcomes. Solving SMC is challenging because of its highly intractable nature (NP^PP-complete), which incorporates statistical inference as well as symbolic reasoning. Previous research on SMC solving lacks provable guarantees and/or suffers from suboptimal empirical performance, especially when combinatorial constraints are present. We propose XOR-SMC, a polynomial-time algorithm with access to NP oracles, that solves highly intractable SMC problems with constant approximation guarantees. XOR-SMC transforms the highly intractable SMC into satisfiability problems by replacing the model counting in SMC with SAT formulae subject to randomized XOR constraints. Experiments on solving important SMC problems in AI for social good demonstrate that XOR-SMC finds solutions close to the true optimum, outperforming several baselines that struggle to find good approximations for the intractable model counting in SMC.
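    To illustrate the core trick the abstract describes, here is a minimal sketch of the randomized-XOR hashing idea: each random parity constraint cuts the expected model count in half, so a satisfiability check on the augmented formula estimates whether the count exceeds 2^m. The formula, sizes, and brute-force enumeration (standing in for a SAT solver) are illustrative assumptions.

```python
# Randomized XOR (parity) constraints as a hash on the solution space.
import itertools
import random

n = 10  # number of boolean variables

def phi(x):
    # Example constraint: at least half the variables are set.
    return sum(x) >= n // 2

def random_xor(n):
    # A random parity constraint: XOR of a random subset, random target bit.
    subset = [random.random() < 0.5 for _ in range(n)]
    target = random.random() < 0.5
    return lambda x: (sum(a and b for a, b in zip(x, subset)) % 2) == target

m = 6  # each XOR halves the expected model count
xors = [random_xor(n) for _ in range(m)]

models = [x for x in itertools.product([0, 1], repeat=n) if phi(x)]
survivors = [x for x in models if all(c(x) for c in xors)]
# E[len(survivors)] = len(models) / 2**m, so checking satisfiability of the
# XOR-augmented formula estimates whether the model count exceeds 2**m.
print(len(models), len(survivors), len(models) / 2**m)
```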

    Oblivious Transfer with constant computational overhead

    The computational overhead of a cryptographic task is the asymptotic ratio between the computational cost of securely realizing the task and that of realizing the task with no security at all. Ishai, Kushilevitz, Ostrovsky, and Sahai (STOC 2008) showed that secure two-party computation of Boolean circuits can be realized with constant computational overhead, independent of the desired level of security, assuming the existence of an oblivious transfer (OT) protocol and a local pseudorandom generator (PRG). However, this only applies to the case of semi-honest parties. A central open question in the area is the possibility of a similar result for malicious parties. This question is open even for the simpler task of securely realizing many instances of a constant-size function, such as OT of bits. We settle the question in the affirmative for the case of OT, assuming: (1) a standard OT protocol, (2) a slightly stronger “correlation-robust” variant of a local PRG, and (3) a standard sparse variant of the Learning Parity with Noise (LPN) assumption. An optimized version of our construction requires fewer than 100 bit operations per party per bit-OT. For 128-bit security, this improves over the best previous protocols by 1–2 orders of magnitude. We achieve this by constructing a constant-overhead pseudorandom correlation generator (PCG) for the bit-OT correlation. Such a PCG generates N pseudorandom instances of bit-OT by locally expanding short, correlated seeds. As a result, we get an end-to-end protocol for generating N pseudorandom instances of bit-OT with o(N) communication, O(N) computation, and security that scales sub-exponentially with N. Finally, we present applications of our main result to realizing other secure computation tasks with constant computational overhead. These include protocols for general circuits with a relaxed notion of security against malicious parties, protocols for realizing N instances of natural constant-size functions, and reducing the main open question to a potentially simpler question about fault-tolerant computation.
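    For readers unfamiliar with the bit-OT correlation that the PCG expands, the sketch below samples it directly with a trusted dealer: the sender gets two random bits (m0, m1) and the receiver gets (b, m_b). A PCG's job is to replace this dealer with short correlated seeds that each party expands locally; the code is purely illustrative.

```python
# The bit-OT correlation, sampled naively by a trusted dealer.
import secrets

def deal_bit_ot(n):
    sender, receiver = [], []
    for _ in range(n):
        m0, m1 = secrets.randbits(1), secrets.randbits(1)
        b = secrets.randbits(1)
        sender.append((m0, m1))
        receiver.append((b, m1 if b else m0))
    return sender, receiver

sender, receiver = deal_bit_ot(8)
# Correlation check: the receiver holds exactly m_b for each instance.
assert all(r[1] == s[r[0]] for s, r in zip(sender, receiver))
```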

    Learning Markov Random Fields for Combinatorial Structures via Sampling through Lovász Local Lemma

    Learning to generate complex combinatorial structures satisfying constraints will have transformative impacts in many application domains. However, it is beyond the capabilities of existing approaches due to the highly intractable nature of the embedded probabilistic inference. Prior works spend most of the training time learning to separate valid from invalid structures but do not learn the inductive biases of valid structures. We develop the NEural Lovász Sampler (Nelson), which embeds the sampler through the Lovász Local Lemma (LLL) as a fully differentiable neural network layer. Our Nelson-CD embeds this sampler into the contrastive divergence learning process of Markov random fields. Nelson allows us to obtain valid samples from the current model distribution; contrastive divergence is then applied to separate these samples from those in the training set. Nelson is implemented as a fully differentiable neural net, taking advantage of the parallelism of GPUs. Experimental results on several real-world domains reveal that Nelson learns to generate 100% valid structures, while baselines either time out or cannot ensure validity. Nelson also outperforms other approaches in running time, log-likelihood, and MAP scores. (Comment: accepted by AAAI 2023; the first two authors contributed equally.)
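    The classical constructive sampler behind the Lovász Local Lemma is Moser–Tardos resampling: draw all variables at random, then repeatedly resample the variables of some violated constraint until everything holds. Nelson embeds a sampler of this flavor as a differentiable layer; the plain version below, on a toy CNF instance, is an illustrative sketch rather than Nelson's actual layer.

```python
# Moser–Tardos resampling on a small satisfiable CNF instance.
import random

# Clauses as (variable index, required value) pairs; a clause is satisfied
# if at least one of its literals matches the assignment.
clauses = [[(0, 1), (1, 0)], [(1, 1), (2, 1)], [(0, 0), (2, 0)]]
n = 3

def violated(assignment):
    return [c for c in clauses if all(assignment[v] != val for v, val in c)]

assignment = [random.randint(0, 1) for _ in range(n)]
while (bad := violated(assignment)):
    for v, _ in random.choice(bad):  # resample only the offending clause
        assignment[v] = random.randint(0, 1)
print(assignment)  # a valid structure: satisfies every clause
```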

    Transforming numerical feature models into propositional formulas and the universal variability language

    Real-world Software Product Lines (SPLs) need Numerical Feature Models (NFMs) whose features not only have boolean values satisfying boolean constraints but also have numeric attributes satisfying arithmetic constraints. An essential operation on NFMs finds near-optimal performing products, which requires counting the number of SPL products. Typical constraint satisfaction solvers perform poorly on counting and sampling. Nemo (Numbers, features, models) is a tool that supports NFMs by bit-blasting, the technique that encodes arithmetic expressions as boolean clauses. The newest version, Nemo2, translates NFMs to propositional formulas and the Universal Variability Language (UVL). By doing so, products can be counted efficiently by #SAT and binary decision diagram (BDD) solvers, enabling the discovery of near-optimal products. This article evaluates Nemo2 with a large set of synthetic and colossal real-world NFMs, including complex arithmetic constraints, in counting and sampling experiments. We empirically demonstrate the viability of Nemo2 when counting and sampling large and complex SPLs. (The work of Muñoz, Pinto, and Fuentes is supported by the European Union's H2020 research and innovation programme under grant agreement DAEMON 101017109, by the projects co-financed by FEDER (Spain) funds LEIA UMA18-FEDERJA-15 and IRIS PID2021-122812OB-I00 (MCI/AEI), and by the PRE2019-087496 grant from the Ministerio de Ciencia e Innovación. Funding for open access charge: Universidad de Málaga / CBUA.)
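    To make the bit-blasting step concrete, here is a minimal sketch (not Nemo's actual encoding) of compiling an arithmetic constraint into boolean clauses: a numeric feature v in [0, 7] becomes three boolean variables, and v <= 5 reduces to a single clause, after which counting products is a #SAT query. Brute-force enumeration stands in for the solver.

```python
# Bit-blasting a numeric feature: v = 4*b2 + 2*b1 + b0, constraint v <= 5.
# Only v = 6 (110) and v = 7 (111) violate it, so the CNF is (¬b2 ∨ ¬b1).
import itertools

# CNF over (b2, b1, b0); literals are (index, polarity).
clauses = [[(0, False), (1, False)]]  # ¬b2 ∨ ¬b1, i.e. v <= 5

def satisfies(bits):
    return all(any(bits[i] == pol for i, pol in c) for c in clauses)

models = [b for b in itertools.product([False, True], repeat=3) if satisfies(b)]
values = sorted(4 * b2 + 2 * b1 + b0 for b2, b1, b0 in models)
print(len(models), values)  # 6 models: v in {0, 1, 2, 3, 4, 5}
```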

    Half-Tree: Halving the Cost of Tree Expansion in COT and DPF

    The GGM tree is widely used in the design of correlated oblivious transfer (COT), subfield vector oblivious linear evaluation (sVOLE), distributed point functions (DPFs), and distributed comparison functions (DCFs). Often, the cost associated with the GGM tree dominates the computation and communication of these protocols. In this paper, we propose a suite of optimizations that can reduce this cost by half.
    • Halving the cost of COT and sVOLE. Our COT protocol introduces extra correlation to each level of a GGM tree used by the state-of-the-art COT protocol. As a result, it reduces both the number of AES calls and the communication by half. Extending this idea to sVOLE, we are able to achieve a similar improvement with either halved computation or halved communication.
    • Halving the cost of DPF and DCF. We propose improved two-party protocols for the distributed generation of DPF/DCF keys. The tree structures behind these protocols lead to more efficient full-domain evaluation and halve the communication and the round complexity of the state-of-the-art DPF/DCF protocols.
    All protocols are provably secure in the random-permutation model and can be accelerated with fixed-key AES-NI. We also improve the state-of-the-art schemes of puncturable pseudorandom functions (PPRFs), DPFs, and DCFs, which are of independent interest in dealer-available scenarios.
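    As background, the sketch below shows plain GGM tree expansion, the structure these optimizations halve: each node's seed is fed to a length-doubling PRG to derive two child seeds, so a short root defines exponentially many pseudorandom leaves. SHA-256 stands in here for the fixed-key AES-based PRG used in practice; this is an illustration, not the paper's half-tree variant.

```python
# Plain GGM tree expansion from a short root seed.
import hashlib

def prg(seed: bytes):
    # Length-doubling PRG: one 16-byte seed -> two 16-byte child seeds.
    left = hashlib.sha256(b"L" + seed).digest()[:16]
    right = hashlib.sha256(b"R" + seed).digest()[:16]
    return left, right

def ggm_leaves(root: bytes, depth: int):
    level = [root]
    for _ in range(depth):
        level = [child for seed in level for child in prg(seed)]
    return level  # 2**depth pseudorandom leaves

leaves = ggm_leaves(b"\x00" * 16, depth=4)
assert len(leaves) == 16
```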

    Theoretical Foundations of Adversarially Robust Learning

    Despite extraordinary progress, current machine learning systems have been shown to be brittle against adversarial examples: seemingly innocuous but carefully crafted perturbations of test examples that cause machine learning predictors to misclassify. Can we learn predictors robust to adversarial examples, and how? There has been much empirical interest in this contemporary challenge in machine learning, and in this thesis we address it from a theoretical perspective. We explore which robustness properties we can hope to guarantee against adversarial examples and develop an understanding of how to guarantee them algorithmically. We illustrate the need to go beyond traditional approaches and principles such as empirical risk minimization and uniform convergence, and make contributions that can be categorized as follows: (1) introducing problem formulations that capture aspects of emerging practical challenges in robust learning, (2) designing new learning algorithms with provable robustness guarantees, and (3) characterizing the complexity of robust learning and the fundamental limitations on the performance of any algorithm. (Comment: PhD thesis.)
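    For context, a standard formalization of the problem this line of work studies (a textbook definition, not a result of the thesis) replaces the usual 0-1 loss with its worst case over an ε-ball of perturbations around each test point:

```latex
% Adversarially robust risk of a predictor h over data distribution D:
R_{\mathrm{adv}}(h) \;=\;
  \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Big[\max_{\|\delta\|\le\epsilon} \mathbf{1}\{\,h(x+\delta)\neq y\,\}\Big]
```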

    Short Signatures from Regular Syndrome Decoding in the Head

    We introduce a new candidate post-quantum digital signature scheme based on the regular syndrome decoding (RSD) assumption, an established variant of the syndrome decoding assumption which asserts that it is hard to find w-regular solutions to systems of linear equations over F_2 (a vector is regular if it is a concatenation of w unit vectors). Our signature is obtained by introducing and compiling a new 5-round zero-knowledge proof system constructed using the MPC-in-the-head paradigm. At the heart of our result is an efficient MPC protocol in the preprocessing model that checks the correctness of a regular syndrome decoding instance by using a share ring-conversion mechanism. The analysis of our construction is non-trivial and forms a core technical contribution of our work. It requires careful combinatorial analysis and combines several new ideas, such as analyzing soundness in a relaxed setting where a cheating prover is allowed to use any witness sufficiently close to a regular vector. We complement our analysis with an in-depth overview of existing attacks against RSD. Our signatures are competitive with the best-known code-based signatures, ranging from 12.52 KB (fast setting, with a signing time on the order of a few milliseconds on a single core of a standard laptop) to about 9 KB (short setting, with an estimated signing time on the order of 15 ms).
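    The regularity notion is easy to state in code: a length-n vector over F_2 is w-regular if it splits into w equal blocks, each containing exactly one 1. The check below is an illustrative sketch with made-up example vectors.

```python
# Check w-regularity: w equal-length blocks, each a unit vector.
def is_regular(v, w):
    assert len(v) % w == 0
    block = len(v) // w
    return all(sum(v[i:i + block]) == 1 for i in range(0, len(v), block))

assert is_regular([0, 1, 0, 0,  0, 0, 1, 0,  1, 0, 0, 0], w=3)
assert not is_regular([1, 1, 0, 0,  0, 0, 0, 0,  1, 0, 0, 0], w=3)
```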

    Cryptalphabet Soup: DPFs meet MPC and ZKPs

    Secure multiparty computation (MPC) protocols enable multiple parties to collaborate on a computation using private inputs possessed by the different parties in the computation. At the same time, MPC protocols ensure that no participating party learns anything about the other parties' private inputs beyond what can be inferred from the computation's output and that party's own inputs. MPC has wide-ranging applications in privacy-protecting systems. However, these systems have been plagued by limited performance, lack of scalability, and poor accuracy. In this thesis, we demonstrate several novel techniques for using distributed point functions (DPFs) in combination with MPC to obtain significant performance improvements in several different applications. Namely, using novel observations about the structure of the most efficient DPF construction available in the literature, we show that DPF keys from untrusted sources can be checked for correctness using an MPC protocol between the two key holders, with direct applications in sender-anonymous messaging. We expand these observations to produce the most efficient available method to evaluate piecewise-polynomial functions, also known as splines. The scalability and efficiency of this method allow splines to be used for extremely high-accuracy approximation of non-linear functions in MPC. Furthermore, the protocols proposed in this thesis far outperform prior solutions, both in asymptotic measurements at large scale and in concrete benchmarks using high-performance software implementations at both small and large scale.
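    To fix ideas, the sketch below shows the correctness property a DPF provides, using a naive construction whose keys are as long as the whole domain (real DPFs, including those used in this thesis, compress keys to roughly logarithmic size): the two keys' evaluations XOR to the point function f_{α,β}, which is β at the secret point α and 0 everywhere else.

```python
# Naive (exponential-size) DPF illustrating the correctness property only.
import secrets

def naive_dpf_gen(alpha, beta, domain_size):
    k0 = [secrets.randbits(8) for _ in range(domain_size)]
    k1 = list(k0)
    k1[alpha] ^= beta  # the two shares differ only at the secret point
    return k0, k1

def eval_key(key, x):
    return key[x]

k0, k1 = naive_dpf_gen(alpha=5, beta=0xAB, domain_size=16)
assert all((eval_key(k0, x) ^ eval_key(k1, x)) == (0xAB if x == 5 else 0)
           for x in range(16))
```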

    Design of Efficient Symmetric-Key Cryptographic Algorithms

    Graduate School, University of Hyogo, 202