
    New Techniques for Obfuscating Conjunctions

    A conjunction is a function $f(x_1,\dots,x_n) = \bigwedge_{i \in S} l_i$ where $S \subseteq [n]$ and each $l_i$ is $x_i$ or $\neg x_i$. Bishop et al. (CRYPTO 2018) recently proposed obfuscating conjunctions by embedding them in the error positions of a noisy Reed-Solomon codeword and encoding the codeword in a group exponent. They prove distributional virtual black box (VBB) security in the generic group model for random conjunctions where $|S| \geq 0.226n$. While conjunction obfuscation was known from LWE due to Wichs and Zirdelis (FOCS 2017) and Goyal et al. (FOCS 2017), these constructions rely on substantial technical machinery. In this work, we conduct an extensive study of simple conjunction obfuscation techniques.
    - We abstract the Bishop et al. scheme to obtain an equivalent yet more efficient “dual” scheme that can handle conjunctions over exponential size alphabets. This scheme admits a straightforward proof of generic group security, which we combine with a novel combinatorial argument to obtain distributional VBB security for $|S|$ of any size.
    - If we replace the Reed-Solomon code with a random binary linear code, we can prove security from standard LPN and avoid encoding in a group. This addresses an open problem posed by Bishop et al. to prove security of this simple approach in the standard model.
    - We give a new construction that achieves information theoretic distributional VBB security and weak functionality preservation for $|S| \geq n - n^\delta$ and $\delta < 1$. Assuming discrete log and $\delta < 1/2$, we satisfy a stronger notion of functionality preservation for computationally bounded adversaries while still achieving information theoretic security.
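
    To make the function class concrete, here is a minimal Python sketch of the plaintext functionality an obfuscator must preserve (this is not any of the paper's constructions): a conjunction is stored as the index set $S$ together with the sign of each literal, and evaluated exactly as in the formula above.

        # Plaintext conjunction evaluation: f(x) = AND_{i in S} l_i,
        # where l_i is x_i if sign[i] is True and NOT x_i otherwise.
        def eval_conjunction(S, sign, x):
            """S: set of 0-indexed positions; sign: dict i -> bool; x: list of bits."""
            return all((x[i] == 1) if sign[i] else (x[i] == 0) for i in S)

        # Example: f(x) = x_0 AND NOT x_2 ignores all other input positions.
        assert eval_conjunction({0, 2}, {0: True, 2: False}, [1, 1, 0, 1])
        assert not eval_conjunction({0, 2}, {0: True, 2: False}, [1, 1, 1, 1])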

    Obfuscating Conjunctions under Entropic Ring LWE

    We show how to securely obfuscate conjunctions, which are functions $f(x_1, \dots, x_n) = \bigwedge_{i \in I} y_i$ where $I \subseteq [n]$ and each literal $y_i$ is either just $x_i$ or $\neg x_i$, e.g., $f(x_1, \dots, x_n) = x_1 \wedge \neg x_3 \wedge \neg x_7 \wedge \cdots \wedge x_{n-1}$. Whereas prior work of Brakerski and Rothblum (CRYPTO 2013) showed how to achieve this using a non-standard object called cryptographic multilinear maps, our scheme is based on an “entropic” variant of the Ring Learning with Errors (Ring LWE) assumption. As our core tool, we prove that hardness assumptions on the recent multilinear map construction of Gentry, Gorbunov and Halevi (TCC 2015) can be established based on entropic Ring LWE. We view this as a first step towards proving the security of additional multilinear map based constructions, and in particular program obfuscators, under standard assumptions. Our scheme satisfies virtual black box (VBB) security, meaning that the obfuscated program reveals nothing more than black-box access to $f$ as an oracle, at least as long as (essentially) the conjunction is chosen from a distribution having sufficient entropy.
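
    The guarantee is distributional, so it helps to see what "sufficient entropy" means quantitatively. As a hedged illustration (ours, not the paper's), the sketch below samples a uniformly random conjunction with exactly $k$ literals; the min-entropy of that distribution is $\log_2\binom{n}{k} + k$ bits, since each of the $\binom{n}{k}$ index sets and $2^k$ sign patterns is equally likely.

        import math, random

        def sample_conjunction(n, k, rng=random):
            """Uniform conjunction with |I| = k: pick positions, then signs."""
            I = rng.sample(range(n), k)
            sign = {i: rng.random() < 0.5 for i in I}
            return set(I), sign

        def min_entropy_bits(n, k):
            # Uniform over C(n, k) * 2^k equally likely conjunctions.
            return math.log2(math.comb(n, k)) + k

        print(min_entropy_bits(128, 64))   # ~ 188 bits for n = 128, k = 64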

    GOTCHA Password Hackers!

    We introduce GOTCHAs (Generating panOptic Turing Tests to Tell Computers and Humans Apart) as a way of preventing automated offline dictionary attacks against user selected passwords. A GOTCHA is a randomized puzzle generation protocol, which involves interaction between a computer and a human. Informally, a GOTCHA should satisfy two key properties: (1) The puzzles are easy for the human to solve. (2) The puzzles are hard for a computer to solve even if it has the random bits used by the computer to generate the final puzzle (unlike a CAPTCHA). Our main theorem demonstrates that GOTCHAs can be used to mitigate the threat of offline dictionary attacks against passwords by ensuring that a password cracker must receive constant feedback from a human being while mounting an attack. Finally, we provide a candidate construction of GOTCHAs based on Inkblot images. Our construction relies on the usability assumption that users can recognize the phrases that they originally used to describe each Inkblot image, a much weaker usability assumption than previous password systems based on Inkblots, which required users to recall their phrase exactly. We conduct a user study to evaluate the usability of our GOTCHA construction. We also generate a GOTCHA challenge where we encourage artificial intelligence and security researchers to try to crack several passwords protected with our scheme.
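
    The mechanism can be summarized in a short sketch. The names and the exact hashing below are ours and purely illustrative, and the real construction additionally tolerates approximately correct answers; the point is only that puzzle randomness is derived from the password guess, so an offline cracker needs a human to solve a fresh puzzle for every guess.

        import hashlib, hmac, os

        def puzzle_seed(password, salt):
            # Derive the puzzle (in the paper: Inkblot images) from the password
            # guess, so every new guess yields a different puzzle for the human.
            return hmac.new(salt, password.encode(), hashlib.sha256).digest()

        def enroll(password, human_solve):
            """human_solve: callable taking a puzzle seed and returning the
            human's answer string (in the paper: phrases describing Inkblots)."""
            salt = os.urandom(16)
            answer = human_solve(puzzle_seed(password, salt))
            tag = hashlib.sha256(salt + password.encode() + answer.encode()).hexdigest()
            return salt, tag   # the server stores only (salt, tag)

        def verify(password, human_solve, salt, tag):
            answer = human_solve(puzzle_seed(password, salt))
            cand = hashlib.sha256(salt + password.encode() + answer.encode()).hexdigest()
            return hmac.compare_digest(cand, tag)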

    Affine Determinant Programs: A Framework for Obfuscation and Witness Encryption

    An affine determinant program $\mathrm{ADP}: \{0,1\}^n \to \{0,1\}$ is specified by a tuple $(A, B_1, \dots, B_n)$ of square matrices over $\mathbb{F}_q$ and a function $\mathrm{Eval}: \mathbb{F}_q \to \{0,1\}$, and is evaluated on $x \in \{0,1\}^n$ by computing $\mathrm{Eval}(\det(A + \sum_{i \in [n]} x_i B_i))$. In this work, we suggest ADPs as a new framework for building general-purpose obfuscation and witness encryption. We provide evidence to suggest that constructions following our ADP-based framework may one day yield secure, practically feasible obfuscation. As a proof-of-concept, we give a candidate ADP-based construction of indistinguishability obfuscation (iO) for all circuits along with a simple witness encryption candidate. We provide cryptanalysis demonstrating that our schemes resist several potential attacks, and leave further cryptanalysis to future work. Lastly, we explore practically feasible applications of our witness encryption candidate, such as public-key encryption with near-optimal key generation.
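
    The evaluation rule itself is easy to state in code. Below is a minimal sketch (ours) of ADP evaluation over a prime field; the framework's security of course lies in how the matrices are sampled, which this does not address.

        def det_mod(M, q):
            """Determinant over F_q (q prime) via Gaussian elimination."""
            M = [row[:] for row in M]
            n, det = len(M), 1
            for c in range(n):
                piv = next((r for r in range(c, n) if M[r][c] % q), None)
                if piv is None:
                    return 0
                if piv != c:
                    M[c], M[piv] = M[piv], M[c]
                    det = -det
                det = det * M[c][c] % q
                inv = pow(M[c][c], -1, q)        # modular inverse (Python 3.8+)
                for r in range(c + 1, n):
                    f = M[r][c] * inv % q
                    for k in range(c, n):
                        M[r][k] = (M[r][k] - f * M[c][k]) % q
            return det % q

        def eval_adp(A, Bs, x, q, eval_fn):
            """Eval(det(A + sum_i x_i * B_i) mod q) for x in {0,1}^n."""
            n = len(A)
            M = [[(A[r][c] + sum(xi * B[r][c] for xi, B in zip(x, Bs))) % q
                  for c in range(n)] for r in range(n)]
            return eval_fn(det_mod(M, q))

        # Toy usage with a hypothetical Eval mapping nonzero determinants to 1:
        A  = [[1, 0], [0, 1]]
        Bs = [[[0, 1], [1, 0]]]
        print(eval_adp(A, Bs, [1], 7, lambda d: int(d != 0)))   # det([[1,1],[1,1]]) = 0 -> 0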

    Obfuscated Fuzzy Hamming Distance and Conjunctions from Subset Product Problems

    We consider the problem of obfuscating programs for fuzzy matching (in other words, testing whether the Hamming distance between an $n$-bit input and a fixed $n$-bit target vector is smaller than some predetermined threshold). This problem arises in biometric matching and other contexts. We present a virtual-black-box (VBB) secure and input-hiding obfuscator for fuzzy matching for Hamming distance, based on certain natural number-theoretic computational assumptions. In contrast to schemes based on coding theory, our obfuscator is based on computational hardness rather than information-theoretic hardness, and can be implemented for a much wider range of parameters. The Hamming distance obfuscator can also be applied to obfuscation of matching under the $\ell_1$ norm on $\mathbb{Z}^n$. We also consider obfuscating conjunctions. Conjunctions are equivalent to pattern matching with wildcards, which can be reduced in some cases to fuzzy matching. Our approach does not cover as general a range of parameters as other solutions, but it is much more compact. We study the relation between our obfuscation schemes and other obfuscators and give some advantages of our solution.
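
    For concreteness, the functionality being obfuscated is just a threshold test. The sketch below (ours, not the paper's number-theoretic construction) shows the plaintext predicate for Hamming distance and for the $\ell_1$ norm, which the obfuscator must reproduce while hiding the target vector.

        def hamming(x, y):
            return sum(a != b for a, b in zip(x, y))

        def fuzzy_match(x, target, r):
            """Accept iff the n-bit input is within Hamming distance < r of target."""
            return hamming(x, target) < r

        def l1_match(x, target, r):
            """The same idea under the l1 norm on integer vectors."""
            return sum(abs(a - b) for a, b in zip(x, target)) < r

        assert fuzzy_match([1, 0, 1, 1], [1, 1, 1, 1], r=2)     # distance 1 < 2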

    Obfuscating Finite Automata

    We construct a VBB and perfect circuit-hiding obfuscator for evasive deterministic finite automata, using a matrix encoding scheme with a limited zero-testing algorithm. We construct the matrix encoding scheme by extending an existing matrix FHE scheme. Using obfuscated DFAs we can, for example, evaluate secret regular expressions or disjunctive normal forms on public inputs. In particular, the possibility of evaluating regular expressions solves the open problem of obfuscated substring matching.
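
    The starting point, evaluating a DFA as a product of transition matrices, can be sketched directly; the obfuscator then hides these matrices behind the encoding scheme and its zero-testing algorithm. A minimal plaintext version (ours, not the paper's encoded form):

        import numpy as np

        def dfa_matrices(delta, n_states, alphabet):
            """One 0/1 transition matrix per symbol: M[s][i][j] = 1 iff delta(i, s) = j."""
            mats = {}
            for s in alphabet:
                M = np.zeros((n_states, n_states), dtype=int)
                for i in range(n_states):
                    M[i, delta[(i, s)]] = 1
                mats[s] = M
            return mats

        def run_dfa(mats, n_states, start, accepting, word):
            v = np.zeros(n_states, dtype=int)
            v[start] = 1                      # one-hot current-state vector
            for s in word:
                v = v @ mats[s]               # one transition = one matrix product
            return any(v[a] for a in accepting)

        # Example: 2-state DFA accepting words with an odd number of 'a's.
        delta = {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}
        mats = dfa_matrices(delta, 2, 'ab')
        print(run_dfa(mats, 2, start=0, accepting={1}, word='ab'))   # True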

    Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers

    Machine Learning (ML) algorithms are used to train computers to perform a variety of complex tasks and improve with experience. Computers learn how to recognize patterns, make unintended decisions, or react to a dynamic environment. Certain trained machines may be more effective than others because they are based on more suitable ML algorithms or because they were trained through superior training sets. Although ML algorithms are known and publicly released, training sets may not be reasonably ascertainable and, indeed, may be guarded as trade secrets. While much research has been performed on the privacy of the elements of training sets, in this paper we focus our attention on ML classifiers and on the statistical information that can be unconsciously or maliciously revealed from them. We show that it is possible to infer unexpected but useful information from ML classifiers. In particular, we build a novel meta-classifier and train it to hack other classifiers, obtaining meaningful information about their training sets. This kind of information leakage can be exploited, for example, by a vendor to build more effective classifiers or to simply acquire trade secrets from a competitor's apparatus, potentially violating its intellectual property rights.
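
    As a hedged, fully synthetic illustration of the attack pattern (not the paper's actual experiments, which target speech and traffic classifiers), the sketch below trains many shadow classifiers on data with or without a hidden property and then trains a meta-classifier that predicts the property from the shadow models' learned parameters alone.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def shadow_model_params(has_property, rng, n=300, d=5):
            """Train one shadow classifier; return its learned parameters."""
            X = rng.normal(size=(n, d))
            # The hidden property of the training set flips one feature's effect.
            w = np.array([1.0, -1.0, 0.5, 0.0, 2.0 if has_property else -2.0])
            y = (X @ w + rng.normal(scale=0.5, size=n) > 0).astype(int)
            clf = LogisticRegression(max_iter=1000).fit(X, y)
            return np.concatenate([clf.coef_.ravel(), clf.intercept_])

        rng = np.random.default_rng(0)
        feats = np.array([shadow_model_params(p, rng) for p in [True] * 50 + [False] * 50])
        labels = np.array([1] * 50 + [0] * 50)
        meta = LogisticRegression(max_iter=1000).fit(feats, labels)   # the meta-classifier
        # meta.predict([params_of_target_model]) now guesses the hidden property.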