
    Black-box use of One-way Functions is Useless for Optimal Fair Coin-Tossing

    A two-party fair coin-tossing protocol guarantees output delivery to the honest party even when the other party aborts during the protocol execution. Cleve (STOC--1986) demonstrated that a computationally bounded fail-stop adversary could alter the output distribution of the honest party by (roughly) 1/r (in the statistical distance) in an r-message coin-tossing protocol. An optimal fair coin-tossing protocol ensures that no adversary can alter the output distribution beyond 1/r. In a seminal result, Moran, Naor, and Segev (TCC--2009) constructed the first optimal fair coin-tossing protocol using (unfair) oblivious transfer protocols. Whether the existence of oblivious transfer protocols is a necessary hardness of computation assumption for optimal fair coin-tossing remains among the most fundamental open problems in theoretical cryptography. The results of Impagliazzo and Luby (FOCS--1989) and Cleve and Impagliazzo (1993) prove that optimal fair coin-tossing implies the existence of one-way functions, a significantly weaker hardness of computation assumption than the existence of secure oblivious transfer protocols. However, whether the existence of one-way functions is sufficient is not known. Towards this research endeavor, our work proves a black-box separation of optimal fair coin-tossing from the existence of one-way functions. That is, the black-box use of one-way functions cannot enable optimal fair coin-tossing. Following the standard Impagliazzo and Rudich (STOC--1989) approach to proving black-box separations, our work considers any r-message fair coin-tossing protocol in the random oracle model where the parties have unbounded computational power. We demonstrate a fail-stop attack strategy for one of the parties to alter the honest party's output distribution by 1/\sqrt{r} by making polynomially many additional queries to the random oracle.
As a consequence, our result proves that the r-message coin-tossing protocol of Blum (COMPCON--1982) and Cleve (STOC--1986), which uses one-way functions in a black-box manner, is the best possible protocol, because an adversary cannot change the honest party's output distribution by more than 1/\sqrt{r}. Several previous works, for example, Dachman-Soled, Lindell, Mahmoody, and Malkin (TCC--2011), Haitner, Omri, and Zarosim (TCC--2013), and Dachman-Soled, Mahmoody, and Malkin (TCC--2014), made partial progress on proving this black-box separation assuming some restrictions on the coin-tossing protocol. Our work diverges significantly from these previous approaches to prove this black-box separation in its full generality. The starting point is the recently introduced potential-based inductive proof technique for demonstrating large gaps in martingales in the information-theoretic plain model. Our technical contribution lies in identifying a global invariant of communication protocols in the random oracle model that enables the extension of this technique to the random oracle model.
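The Blum-style commit-then-reveal coin toss referenced above can be sketched in a few lines. In this illustrative stand-in (not the formal construction analyzed in the paper), a hash function plays the role of the one-way-function-based commitment:

```python
import hashlib
import secrets

# Commit-then-reveal coin toss in the style of Blum's protocol. The hash
# commitment is an illustrative stand-in for a one-way-function-based one.
def commit(bit: int) -> tuple[bytes, bytes]:
    """Hide a bit under a fresh random nonce."""
    nonce = secrets.token_bytes(32)
    return hashlib.sha256(nonce + bytes([bit])).digest(), nonce

def verify(digest: bytes, nonce: bytes, bit: int) -> bool:
    """Check that an opening matches the earlier commitment."""
    return hashlib.sha256(nonce + bytes([bit])).digest() == digest

a = secrets.randbelow(2)        # Alice's bit, hidden inside the commitment
c, nonce = commit(a)            # message 1: Alice -> Bob (commitment)
b = secrets.randbelow(2)        # message 2: Bob -> Alice, in the clear
# message 3: Alice opens; Bob aborts if the opening does not verify
assert verify(c, nonce, a)
coin = a ^ b                    # the jointly generated coin
```

If either party aborts before the opening, the other outputs a default coin; the paper's point is that no such black-box protocol can push the resulting bias below roughly 1/\sqrt{r}.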

    A Classification of Computational Assumptions in the Algebraic Group Model

    We give a taxonomy of computational assumptions in the algebraic group model (AGM). We first analyze Boyen's Uber assumption family for bilinear groups and then extend it in several ways to cover assumptions as diverse as Gap Diffie-Hellman and LRSW. We show that in the AGM every member of these families is implied by the q-discrete logarithm (DL) assumption, for some q that depends on the degrees of the polynomials defining the Uber assumption. Using the meta-reduction technique, we then separate (q + 1)-DL from q-DL, which yields a classification of all members of the extended Uber-assumption families. We finally show that there are strong assumptions, such as one-more DL, that provably fall outside our classification, by proving that they cannot be reduced from q-DL even in the AGM.
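For readers unfamiliar with the q-DL assumption central to this classification: an instance hands the adversary q consecutive powers of a secret exponent in the group, and the task is to recover the exponent. The sketch below sets this up in a deliberately tiny toy group (parameters are illustrative only; real instantiations use large elliptic-curve groups), where a brute-force solver is still feasible:

```python
import secrets

# Toy prime-order group: the subgroup of order n = 11 generated by g = 2
# inside Z_23^*. Illustrative parameters only.
p, n, g = 23, 11, 2

def qdl_instance(q: int) -> tuple[list[int], int]:
    """Sample a q-DL instance: (g, g^x, g^(x^2), ..., g^(x^q)) and the secret x."""
    x = secrets.randbelow(n - 1) + 1                  # x in [1, n-1]
    powers = [pow(g, pow(x, i, n), p) for i in range(q + 1)]
    return powers, x

def solve_dl(h: int) -> int:
    """Brute-force discrete log -- feasible only because the group is tiny."""
    return next(e for e in range(n) if pow(g, e, p) == h)

powers, x = qdl_instance(3)
assert powers[0] == g and solve_dl(powers[1]) == x
```

The higher powers g^(x^2), ..., g^(x^q) are exactly the extra leverage that lets q-DL imply the polynomial-based Uber assumptions in the AGM.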

    Applications of artificial neural networks (ANNs) in several different materials research fields

    In materials science, the traditional methodological framework is the identification of the composition-processing-structure-property causal pathways that link hierarchical structure to properties. However, all the properties of materials can ultimately be derived from structure and bonding, and so the properties of a material are interrelated to varying degrees. The work presented in this thesis employed artificial neural networks (ANNs) to explore the correlations between different material properties, with several examples from different fields. These include: 1) verifying and quantifying known correlations between physical parameters and the solid solubility of alloy systems, first discovered by Hume-Rothery in the 1930s; 2) exploring unknown cross-property correlations without investigating complicated structure-property relationships, exemplified by i) predicting the structural stability of perovskites from bond-valence-based tolerance factors tBV, and predicting the formability of perovskites using A-O and B-O bond distances, and ii) correlating polarizability with other properties, such as first ionization potential, melting point, heat of vaporization, and specific heat capacity; and 3) discovering unanticipated relationships between combinations of properties of materials, in the process of which ANNs were also found to be useful for highlighting unusual data points in handbooks, tables, and databases that deserve to have their veracity inspected. By applying this method, numerous errors in handbooks were found, and a systematic, intelligent, and potentially automatic method to detect errors in handbooks was thus developed. Through presenting these four distinct examples of three aspects of ANN capability, different ways in which ANNs can contribute to progress in materials science have been explored. These approaches are novel and deserve to be pursued as part of the newer methodologies that are beginning to underpin materials research.
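The workflow described above (fit an ANN to one property from others, then flag entries the fitted model disagrees with) can be sketched minimally. The thesis uses real handbook data; the data, target correlation, and network size below are purely illustrative:

```python
import numpy as np

# Minimal one-hidden-layer ANN fit to a synthetic cross-property correlation.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))                 # two "known" properties
y = (0.8 * X[:, 0] - 0.5 * X[:, 1] ** 2).reshape(-1, 1)   # property to predict

W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.2
for _ in range(4000):                       # full-batch gradient descent
    hlayer = np.tanh(X @ W1 + b1)
    pred = hlayer @ W2 + b2
    err = pred - y                          # mean-squared-error gradient
    gW2 = hlayer.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - hlayer ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

out = np.tanh(X @ W1 + b1) @ W2 + b2
mse = float(np.mean((out - y) ** 2))
# Entries whose residual is far above the typical level are flagged for
# inspection, mirroring the handbook error-detection use of ANNs above.
resid = np.abs(out - y)
n_flagged = int((resid > 3 * resid.std()).sum())
```

On clean synthetic data few or no points are flagged; on real tabulated data, large residuals are the candidates for transcription errors.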

    Non-Interactive Key Exchange and Key Assignment Schemes


    Distributed Protocols with Threshold and General Trust Assumptions

    Distributed systems today power almost all online applications. Consequently, a wide range of distributed protocols, such as consensus, and distributed cryptographic primitives are being researched and deployed in practice. This thesis addresses multiple aspects of distributed protocols and cryptographic schemes, enhancing their resilience, efficiency, and scalability. Fundamental to every secure distributed protocol are its trust assumptions. These assumptions not only measure a protocol's resilience but also determine its scope of application, as well as, in some sense, the expressiveness and freedom of the participating parties. Dominant in practice so far is the threshold setting, where at most some f out of the n parties may fail in any execution. However, in this setting, all parties are viewed as identical, making correlations indescribable. These constraints can be surpassed with general trust assumptions, which allow arbitrary sets of parties to fail in an execution. Despite significant theoretical efforts, relevant practical aspects of this setting are yet to be addressed. Our work fills this gap. We show how general trust assumptions can be efficiently specified, encoded, and used in distributed protocols and cryptographic schemes. Additionally, we investigate a consensus protocol and distributed cryptographic schemes with general trust assumptions. Moreover, we show how the general trust assumptions of different systems, with intersecting or disjoint sets of participants, can be composed into a unified system. When it comes to decentralized systems, such as blockchains, efficiency and scalability are often compromised due to the total ordering of all user transactions. Guerraoui et al. (Distributed Computing, 2022) contradicted the common design of major blockchains, proving that consensus is not required to prevent double-spending in a cryptocurrency.
Modern blockchains support a variety of distributed applications beyond cryptocurrencies, which let users execute arbitrary code in a distributed and decentralized fashion. In this work, we explore the synchronization requirements of a family of Ethereum smart contracts and formally establish the subsets of participants that need to synchronize their transactions. Moreover, a common requirement of all asynchronous consensus protocols is randomness. A simple and efficient approach is to employ threshold cryptography for this. However, this necessitates in practice a distributed setup protocol, often leading to performance bottlenecks. Blum et al. (TCC 2020) proposed a solution bypassing this requirement, which is, however, practically inefficient due to its use of fully homomorphic encryption. Recognizing that randomness for consensus does not need to be perfect (that is, always unpredictable and agreed upon), we propose a practical and concretely efficient protocol for randomness generation. Lastly, this thesis addresses the issue of deniability in distributed systems. The problem arises from the fact that a digital signature authenticates a message for an indefinite period. We introduce a scheme that allows the recipients to verify signatures, while allowing plausible deniability for signers. This scheme transforms a polynomial commitment scheme into a digital signature scheme.
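A general trust assumption of the kind described above can be encoded simply as the collection of "fail-prone" sets, any one of which may fail in a single execution. The sketch below (party names and sets are illustrative) checks the classic Q3 condition, which generalizes the threshold requirement n > 3f for Byzantine consensus:

```python
from itertools import combinations_with_replacement

# A general trust assumption: the sets of parties that may jointly fail.
parties = frozenset({"A", "B", "C", "D"})
fail_prone = [frozenset({"A"}), frozenset({"B"}),
              frozenset({"C"}), frozenset({"D"})]

def satisfies_q3(parties, fail_prone):
    """Q3 condition: no three fail-prone sets together cover all parties.
    With threshold sets of size f, this reduces to n > 3f."""
    return all(frozenset().union(f1, f2, f3) != parties
               for f1, f2, f3 in combinations_with_replacement(fail_prone, 3))

# Singleton failures among four parties: the analogue of n = 4, f = 1.
assert satisfies_q3(parties, fail_prone)
# Two halves that together cover everyone violate Q3.
assert not satisfies_q3(parties, [frozenset({"A", "B"}),
                                  frozenset({"C", "D"})])
```

The point of the general setting is that the structure need not be symmetric: correlated failures (e.g., parties in one data center) can be expressed as a single fail-prone set.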

    Impossibility on Tamper-Resilient Cryptography with Uniqueness Properties

    In this work, we show negative results on the tamper-resilience of a wide class of cryptographic primitives with uniqueness properties, such as unique signatures, verifiable random functions, signatures with unique keys, injective one-way functions, and encryption schemes with a property we call the unique-message property. Concretely, we prove that for these primitives, it is impossible to derive their (even extremely weak) tamper-resilience from any common assumption via black-box reductions. Our proofs exploit the simulatable attack paradigm proposed by Wichs (ITCS ’13), and the tampering model we consider is the plain model, where there is no trusted setup.

    Heavy flavor interactions and spectroscopy from lattice quantum chromodynamics

    In the present work, the spectroscopy and interactions of hadrons containing heavy quarks are investigated. In particular, a focus is placed on properties of exotic heavy hadronic states, including doubly and triply heavy baryons and doubly heavy tetraquark states. The framework in which these calculations are carried out is provided by lattice quantum chromodynamics, a discrete formulation of the modern theory of the strong interaction. The thesis comprises two main projects. In the first project, an extensive calculation of the mass spectrum of doubly and triply heavy baryons containing both charm and bottom quarks is carried out. The wide range of quark masses in these systems requires that the various flavors of quarks be treated with different lattice actions. We use domain wall fermions for 2+1 flavors (up, down, and strange) of sea and valence quarks, a relativistic heavy quark action for the charm quarks, and non-relativistic QCD for the heavier bottom quarks. The calculation of the ground state spectrum is presented and compared to recent models. In the second project, the interaction potential of two heavy-light mesons in lattice QCD is used to study the existence of tetraquark bound states. The interaction potential of the tetraquark system is calculated on the lattice with 2+1 flavors of dynamical fermions, with lattice interpolating fields constructed using colorwave propagators. These propagators provide a method for constructing all-to-all spatially smeared interpolating fields, a technique which allows for better overlap with the ground state wavefunction as well as reduced statistical noise. Potentials are extracted for 24 distinct channels and are fit with a phenomenological non-relativistic quark model potential, from which a determination of the existence of bound states is made via numerical solution of the two-body radial Schrödinger equation.
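The final step above, deciding bound-state existence by solving the two-body radial Schrödinger equation numerically, can be sketched with a standard finite-difference diagonalization. The Cornell-type potential and all parameter values below are made up for this sketch, not fit to any lattice data:

```python
import numpy as np

# Finite-difference bound-state solver for the reduced radial equation
#   -u''(r)/(2*mu) + V(r) u(r) = E u(r),   u(0) = u(R) = 0,
# with an illustrative Cornell-type potential V(r) = -alpha/r + sigma*r.
mu, alpha, sigma = 0.75, 0.4, 0.2      # reduced mass and couplings (toy values)
N, R = 600, 20.0                        # grid points and box size
r = np.linspace(R / N, R, N)            # grid that avoids the r = 0 singularity
h = r[1] - r[0]

V = -alpha / r + sigma * r
# Tridiagonal Hamiltonian from the 3-point second-derivative stencil.
H = (np.diag(1.0 / (mu * h * h) + V)
     + np.diag(np.full(N - 1, -0.5 / (mu * h * h)), 1)
     + np.diag(np.full(N - 1, -0.5 / (mu * h * h)), -1))

E = np.linalg.eigvalsh(H)               # discrete levels (V is confining)
ground, first_excited = float(E[0]), float(E[1])
```

For a lattice-extracted potential, one would replace V by the fitted quark-model form for each channel and check whether levels below the two-meson threshold appear.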

    IST Austria Thesis

    Many security definitions come in two flavors: a stronger “adaptive” flavor, where the adversary can arbitrarily make various choices during the course of the attack, and a weaker “selective” flavor, where the adversary must commit to some or all of their choices a priori. For example, in the context of identity-based encryption, selective security requires the adversary to decide on the identity of the attacked party at the very beginning of the game, whereas adaptive security allows the attacker to first see the master public key and some secret keys before making this choice. Often, it appears to be much easier to achieve selective security than it is to achieve adaptive security. A series of recent works shows how to cleverly achieve adaptive security in several such scenarios, including generalized selective decryption [Pan07][FJP15], constrained PRFs [FKPR14], and Yao’s garbled circuits [JW16]. Although the above works expressed vague intuition that they share a common technique, the connection was never made precise. In this work we present a new framework (published at Crypto ’17 [JKK+17a]) that connects all of these works and allows us to present them in a unified and simplified fashion. Having the framework in place, we show how to achieve adaptive security for proxy re-encryption schemes (published at PKC ’19 [FKKP19]) and provide the first adaptive security proofs for continuous group key agreement protocols (published at S&P ’21 [KPW+21]). Questioning the optimality of our framework, we then show that currently used proof techniques cannot lead to significantly better security guarantees for "graph-building" games (published at TCC ’21 [KKPW21a]). These games cover generalized selective decryption, as well as the security of prominent constructions for constrained PRFs, continuous group key agreement, and proxy re-encryption.
Finally, we revisit the adaptive security of Yao’s garbled circuits and extend the analysis of Jafargholi and Wichs in two directions: while they prove adaptive security only for a modified construction with increased online complexity, we provide the first positive results for Yao’s original construction (published at TCC ’21 [KKP21a]). On the negative side, we prove that the results of Jafargholi and Wichs are essentially optimal by showing that no black-box reduction can provide a significantly better security bound (published at Crypto ’21 [KKPW21c]).
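For context, the generic route from selective to adaptive security is the "guessing" (complexity-leveraging) argument: the reduction fixes a uniform guess of the adversary's adaptive choice up front and wins only when the guess matches, losing a multiplicative factor equal to the number N of possible choices. The toy simulation below merely illustrates that 1/N loss; the works above are precisely about beating this baseline in specific settings:

```python
import secrets

# An up-front uniform guess matches an independent adaptive choice with
# probability 1/N, so the reduction's advantage shrinks by a factor of N.
N, trials = 16, 20000
hits = sum(secrets.randbelow(N) == secrets.randbelow(N) for _ in range(trials))
match_rate = hits / trials          # concentrates around 1/N = 0.0625
```

When N is exponential (e.g., the identity space in identity-based encryption), this loss is what makes the naive argument unusable without the finer techniques developed in the framework.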