
    Level Set KSVD

    We present a new algorithm for image segmentation: Level-set KSVD. Level-set KSVD combines sparse dictionary learning for feature extraction with the variational level-set method for image segmentation. Specifically, we use a generalization of the Chan-Vese functional with features learned by KSVD. The motivation for this model comes from agriculture: aerial images are taken in order to detect the spread of fungi in various crops. Our model is tested on such images of cotton fields, and the results are compared to those of other methods.
    Comment: 25 pages, 14 figures. Submitted to IJC
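
    For reference, one common form of the two-phase Chan-Vese functional that the paper generalizes (in the KSVD variant, learned sparse-feature responses stand in for the raw intensities $I$; see the paper for the exact generalized energy):

\[
E(c_1, c_2, \phi) = \mu \int_\Omega |\nabla H(\phi)|\,dx + \lambda_1 \int_\Omega |I(x) - c_1|^2\, H(\phi)\,dx + \lambda_2 \int_\Omega |I(x) - c_2|^2\, \big(1 - H(\phi)\big)\,dx,
\]

    where $\phi$ is the level-set function, $H$ the Heaviside function, and $c_1, c_2$ the average intensities inside and outside the zero level set.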

    On Non-Black-Box Simulation and the Impossibility of Approximate Obfuscation

    The introduction of a non-black-box simulation technique by Barak (FOCS 2001) has been a major landmark in cryptography, breaking the previous barriers of black-box impossibility. Barak's technique has given rise to various powerful applications, and it is a key component in all known protocols with non-black-box simulation. We present the first non-black-box simulation technique that does not rely on Barak's technique (or on nonstandard assumptions). Invoking this technique, we obtain new and improved protocols resilient to various resetting attacks. These improvements include weaker computational assumptions and better round complexity. A prominent feature of our technique is its compatibility with rewinding techniques from classic black-box zero-knowledge protocols. The combination of rewinding with non-black-box simulation has proven instrumental in coping with challenging goals such as simultaneously-resettable zero-knowledge, proofs of knowledge, and resettable security from one-way functions. While previous works required tailored modifications to Barak's technique, we give a general recipe for combining our technique with rewinding. This yields simplified resettable protocols in the above settings, as well as improvements in round complexity and required computational assumptions. The main ingredient in our technique is a new impossibility result for general program obfuscation. It extends the impossibility result of Barak et al. (CRYPTO 2001) to the case of obfuscation with approximate functionality, thus settling a question left open by Barak et al. In the converse direction, we show a generic transformation from any resettably-sound zero-knowledge protocol to a family of functions that cannot be obfuscated.

    On the Impossibility of Approximate Obfuscation and Applications to Resettable Cryptography

    The traditional notion of {\em program obfuscation} requires that an obfuscation $\tilde{f}$ of a program $f$ computes the exact same function as $f$, but beyond that, the code of $\tilde{f}$ should not leak any information about $f$. This strong notion of {\em virtual black-box} security was shown by Barak et al. (CRYPTO 2001) to be impossible to achieve for certain {\em unobfuscatable function families}. The same work raised the question of {\em approximate obfuscation}, where the obfuscated $\tilde{f}$ is only required to approximate $f$; that is, $\tilde{f}$ only agrees with $f$ on some input distribution. We show that, assuming {\em trapdoor permutations}, there exist families of {\em robust unobfuscatable functions} for which even approximate obfuscation is impossible. That is, obfuscation is impossible even if the obfuscated $\tilde{f}$ only agrees with $f$ with probability slightly more than $\frac{1}{2}$ on a uniformly sampled input (below $\frac{1}{2}$-agreement, the function obfuscated by $\tilde{f}$ is not uniquely defined). Additionally, we show that, assuming only one-way functions, we can rule out approximate obfuscation where $\tilde{f}$ is not allowed to err but may refuse to compute $f$ with probability close to $1$. We then demonstrate the power of robust unobfuscatable functions by exhibiting new implications for resettable protocols that have so far been out of our reach. Concretely, we obtain a new non-black-box simulation technique that reduces the assumptions required for resettably-sound zero-knowledge protocols to {\em one-way functions}, as well as reduces round complexity. We also present a new simplified construction of simultaneously resettable zero-knowledge protocols that does not rely on collision-resistant hashing. Finally, we construct a three-message simultaneously resettable witness-indistinguishable {\em argument of knowledge} (with a non-black-box knowledge extractor). Our constructions are based on a special kind of ``resettable slots'' that are useful for a non-black-box simulator, but not for a resetting prover.
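
    A worked restatement of the agreement threshold (the advantage parameter $\epsilon$ is our notation, not the paper's): approximate obfuscation is ruled out whenever

\[
\Pr_{x \leftarrow \{0,1\}^n}\big[\tilde{f}(x) = f(x)\big] \;\ge\; \frac{1}{2} + \epsilon
\]

    for some noticeable $\epsilon$. The $\frac{1}{2}$ threshold is inherent for boolean functions: a $\tilde{f}$ agreeing with $f$ with probability $p < \frac{1}{2}$ agrees with the complement $1 - f$ with probability $1 - p > \frac{1}{2}$, so the function being obfuscated is not uniquely defined below that point.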

    On the Cryptographic Hardness of Finding a Nash Equilibrium

    We prove that finding a Nash equilibrium of a game is hard, assuming the existence of indistinguishability obfuscation and injective one-way functions with sub-exponential hardness. We do so by showing how these cryptographic primitives give rise to a hard computational problem that lies in the complexity class PPAD, for which finding a Nash equilibrium is known to be complete. Previous proposals for basing PPAD-hardness on program obfuscation considered a strong “virtual black-box” notion that is subject to severe limitations and is unlikely to be realizable for the programs in question. In contrast, for indistinguishability obfuscation no such limitations are known, and recently, several candidate constructions of indistinguishability obfuscation were suggested based on different hardness assumptions on multilinear maps. Our result provides further evidence of the intractability of finding a Nash equilibrium, one that is extrinsic to the evidence presented so far.
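
    Schematically, the argument chains two facts (a restatement of the abstract in our notation):

\[
\text{iO} + \text{subexponential injective OWFs} \;\Longrightarrow\; \text{average-case hard problem in } \mathsf{PPAD} \;\Longrightarrow\; \text{finding a Nash equilibrium is hard},
\]

    where the second implication uses the known PPAD-completeness of finding a Nash equilibrium.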

    Weak Zero-Knowledge Beyond the Black-Box Barrier

    The round complexity of zero-knowledge protocols is a long-standing open question, yet to be settled under standard assumptions. So far, the question has appeared equally challenging for relaxations such as weak zero-knowledge and witness hiding. Protocols satisfying these relaxed notions under standard assumptions have at least four messages, just like full-fledged zero knowledge. The difficulty in improving round complexity stems from a fundamental barrier: none of these notions can be achieved in three messages via reductions (or simulators) that treat the verifier as a black box. We introduce a new non-black-box technique and use it to obtain the first protocols that cross this barrier under standard assumptions. Our main results are:
    - Weak zero-knowledge for $NP$ in two messages, assuming quasipolynomially-secure fully-homomorphic encryption and other standard primitives (known from quasipolynomial hardness of Learning with Errors), as well as subexponentially-secure one-way functions.
    - Weak zero-knowledge for $NP$ in three messages, under standard polynomial assumptions (following, for example, from fully-homomorphic encryption and factoring).
    We also give, under polynomial assumptions, a two-message witness-hiding protocol for any language $L \in NP$ that has a witness encryption scheme. This protocol is also publicly verifiable. Our technique is based on a new {\em homomorphic trapdoor paradigm}, which can be seen as a non-black-box analog of the classic Feige-Lapidot-Shamir trapdoor paradigm.
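
    For orientation, the standard formulation of weak zero-knowledge lets the simulator depend on the distinguisher (this formalization is ours; the paper may use a variant): for every malicious verifier $V^*$, every distinguisher $D$, and every noticeable $\epsilon$, there is a simulator $S$ such that

\[
\Big|\, \Pr\big[D\big(\mathrm{view}_{V^*}(x)\big) = 1\big] - \Pr\big[D\big(S(x)\big) = 1\big] \,\Big| \;\le\; \epsilon,
\]

    whereas full-fledged zero knowledge requires a single simulator that fools all distinguishers simultaneously.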

    On Removing Graded Encodings from Functional Encryption

    Functional encryption (FE) has emerged as an outstanding concept. By now, we know that beyond the immediate application to computation over encrypted data, variants with {\em succinct ciphertexts} are so powerful that they yield the full might of indistinguishability obfuscation (IO). Understanding how, and under which assumptions, such succinct schemes can be constructed has become a grand challenge of current research in cryptography. Whereas the first schemes were themselves based on IO, recent progress has produced constructions based on {\em constant-degree graded encodings}. Still, our comprehension of such graded encodings remains limited, as the instantiations given so far have exhibited different vulnerabilities. Our main result is that, assuming LWE, {\em black-box constructions} of {\em sufficiently succinct} FE schemes from constant-degree graded encodings can be transformed to rely on a much better-understood object: {\em bilinear groups}. In particular, under an {\em über assumption} on bilinear groups, such constructions imply IO in the plain model. The result demonstrates that the exact level of ciphertext succinctness of FE schemes is of major importance. In particular, we draw a fine line between known FE constructions from constant-degree graded encodings, which just fall short of the required succinctness, and the holy grail of basing IO on better-understood assumptions. At the heart of our result are new techniques for removing ideal graded-encoding oracles from FE constructions. Complementing the result, for weaker ideal models, namely the generic-group model and the random-oracle model, we show a transformation from {\em collusion-resistant} FE in either of the two models directly to FE (and IO) in the plain model, without assuming bilinear groups.
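
    For orientation, the standard FE correctness requirement (standard syntax; the succinctness property the paper needs constrains how ciphertext size may grow with the complexity of the supported function class, and the exact threshold is specified in the paper):

\[
\mathsf{Dec}\big(\mathsf{sk}_f,\ \mathsf{Enc}(\mathsf{mpk}, x)\big) = f(x),
\]

    where $\mathsf{sk}_f$ is a functional key for $f$: decryption reveals $f(x)$ and, ideally, nothing more about $x$.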

    Perfect Structure on the Edge of Chaos

    We construct trapdoor permutations based on (sub-exponential) indistinguishability obfuscation and one-way functions, thereby providing the first candidate that is not based on the hardness of factoring. Our construction shows that even highly structured primitives, such as trapdoor permutations, can potentially be based on hardness assumptions with noisy structure, such as those used in candidate constructions of indistinguishability obfuscation. It also suggests a possible way to construct trapdoor permutations that resist quantum attacks and whose hardness may be based on problems outside the complexity class SZK; indeed, while factoring-based candidates do not possess such security, future constructions of indistinguishability obfuscation might. As a corollary, we eliminate the need to assume trapdoor permutations and injective one-way functions in many recent constructions based on indistinguishability obfuscation.
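
    For orientation, the standard requirements for a trapdoor permutation family (our notation): for keys $(pk, td) \leftarrow \mathsf{Gen}(1^n)$, the map $f_{pk}$ is an efficiently computable permutation that is easy to invert given the trapdoor but hard to invert without it:

\[
\mathsf{Inv}\big(td,\ f_{pk}(x)\big) = x, \qquad \Pr_{x}\big[\mathcal{A}\big(pk,\ f_{pk}(x)\big) = x\big] \le \mathrm{negl}(n) \ \text{ for every PPT } \mathcal{A}.
\]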

    Non-interactive Universal Arguments

    In 2002, Barak and Goldreich introduced the notion of a universal argument and constructed an interactive universal argument for non-deterministic computations based on polynomially hard collision-resistant hash functions. Since then, and especially in recent years, there have been tremendous developments in the construction of non-interactive succinct arguments for deterministic computations under standard hardness assumptions. However, the constructed succinct arguments can be proven universal only under sub-exponential assumptions. Assuming polynomially hard fully homomorphic encryption and a widely believed worst-case complexity assumption, we prove a general lifting theorem showing that all existing non-interactive succinct arguments can be made universal. The required complexity assumption is that non-uniformity does not allow arbitrary polynomial speedup. In the setting of uniform adversaries, this extra assumption is not needed.
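
    For orientation, a universal argument (as standardly stated, following Barak and Goldreich) is a succinct argument for the universal relation

\[
\mathcal{R}_U = \Big\{ \big((M, x, t),\ w\big) \;:\; M \text{ accepts } (x, w) \text{ within } t \text{ steps} \Big\},
\]

    where communication and verification time are polynomial in the security parameter and in the instance length $|(M, x, t)|$ (with $t$ written in binary), in particular polylogarithmic in the running time $t$.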