
    Bloom Filters in Adversarial Environments

    Many efficient data structures use randomness, allowing them to improve upon deterministic ones. Usually, their efficiency and correctness are analyzed using probabilistic tools under the assumption that the inputs and queries are independent of the internal randomness of the data structure. In this work, we consider data structures in a more robust model, which we call the adversarial model. Roughly speaking, this model allows an adversary to choose inputs and queries adaptively according to previous responses. Specifically, we consider a data structure known as "Bloom filter" and prove a tight connection between Bloom filters in this model and cryptography. A Bloom filter represents a set $S$ of elements approximately, by using fewer bits than a precise representation. The price for succinctness is allowing some errors: for any $x \in S$ it should always answer 'Yes', and for any $x \notin S$ it should answer 'Yes' only with small probability. In the adversarial model, we consider both efficient adversaries (that run in polynomial time) and computationally unbounded adversaries that are only bounded in the number of queries they can make. For computationally bounded adversaries, we show that non-trivial (memory-wise) Bloom filters exist if and only if one-way functions exist. For unbounded adversaries we show that there exists a Bloom filter for sets of size $n$ and error $\varepsilon$ that is secure against $t$ queries and uses only $O(n \log \frac{1}{\varepsilon} + t)$ bits of memory. In comparison, $n \log \frac{1}{\varepsilon}$ is the best possible under a non-adaptive adversary.
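    To make the abstract's guarantees concrete, the following is a minimal Bloom filter sketch in Python (an illustration only: the double-hashing scheme and parameter choices are our own standard assumptions, not the adversarially secure construction of the paper):

        import hashlib
        import math

        class BloomFilter:
            """Approximate membership: members always answer 'Yes';
            non-members answer 'Yes' with probability roughly eps."""

            def __init__(self, n, eps):
                # Standard sizing: m = n*log2(1/eps)/ln 2 bits, k = log2(1/eps) hashes.
                self.m = max(1, math.ceil(n * math.log2(1 / eps) / math.log(2)))
                self.k = max(1, round(math.log2(1 / eps)))
                self.bits = bytearray((self.m + 7) // 8)

            def _positions(self, item):
                # Derive k bit positions from one SHA-256 digest (double hashing).
                digest = hashlib.sha256(item.encode()).digest()
                h1 = int.from_bytes(digest[:8], "big")
                h2 = int.from_bytes(digest[8:16], "big") | 1
                return [(h1 + i * h2) % self.m for i in range(self.k)]

            def add(self, item):
                for pos in self._positions(item):
                    self.bits[pos // 8] |= 1 << (pos % 8)

            def __contains__(self, item):
                return all(self.bits[pos // 8] & (1 << (pos % 8))
                           for pos in self._positions(item))

        bf = BloomFilter(n=1000, eps=0.01)
        bf.add("alice")
        assert "alice" in bf   # no false negatives, ever
        print("bob" in bf)     # True only with probability about eps

    Note that the bit positions here are a fixed public function of the input, which is exactly what an adaptive adversary can exploit; the paper's point is that resisting such adversaries requires cryptographic (one-way) machinery.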

    More Practical and Secure History-Independent Hash Tables

    Direct-recording electronic (DRE) voting systems have been used in several countries, including the United States, India, and the Netherlands. In the majority of those cases, researchers discovered several security flaws in the implementation and architecture of the voting system. A common property of the machines inspected was that the votes were stored sequentially according to the time they were cast, which allows an attacker to break the anonymity of the voters using some side-channel information. Subsequent research (Molnar et al. Oakland'06, Bethencourt et al. NDSS'07, Moran et al. ICALP'07) pointed out the connection between vote storage and history-independence, a privacy property that guarantees that the system does not reveal the sequence of operations that led to the current state. There are two flavors of history-independence. In a weakly history-independent data structure, every possible sequence of operations consistent with the current set of items is equally likely to have occurred. In a strongly history-independent data structure, items must be stored in a canonical way, i.e., for any set of items, there is only one possible memory representation. Strong history-independence implies weak history-independence but considerably constrains the design choices of the data structure. In this work, we present and analyze an efficient hash table data structure that simultaneously achieves the following properties:
    • It is based on the classic linear probing collision-handling scheme.
    • It is weakly history-independent.
    • It is secure against collision-timing attacks. That is, we consider adversaries that can measure the time of an update operation, but cannot observe data values, and we show that those adversaries cannot learn information about the items in the table.
    • All operations are significantly faster in practice (in particular, almost 2x faster for high load factors) than those of the commonly used strongly history-independent linear probing method proposed by Blelloch and Golovin (FOCS'07), which is not secure against collision-timing attacks.
    The first property is desirable for ease of implementation. The second property is desirable for maximizing privacy in scenarios where the memory of the hash table is exposed, such as post-election audits of DRE voting machines or direct memory access (DMA) attacks. The third property is desirable for maximizing privacy against adversaries who do not have access to memory but are nevertheless capable of accurately measuring the execution times of data structure operations. To our knowledge, our hash table construction is the first data structure that combines history-independence and protection against a form of timing attacks.
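    As context for the first property, a minimal classic linear probing table is sketched below (our own toy version: it is deliberately NOT history-independent, since the final memory layout depends on insertion order, which is precisely what the paper's construction avoids):

        class LinearProbingTable:
            """Classic linear probing: on a collision, scan forward
            to the next empty slot (wrapping around)."""

            def __init__(self, capacity=16):
                self.slots = [None] * capacity  # no resizing in this sketch

            def insert(self, key, value):
                i = hash(key) % len(self.slots)
                while self.slots[i] is not None and self.slots[i][0] != key:
                    i = (i + 1) % len(self.slots)  # probe next slot
                self.slots[i] = (key, value)       # placement depends on order!

            def lookup(self, key):
                i = hash(key) % len(self.slots)
                while self.slots[i] is not None:
                    if self.slots[i][0] == key:
                        return self.slots[i][1]
                    i = (i + 1) % len(self.slots)
                return None

    The number of probe steps in insert also leaks timing information about collisions, which is the side channel closed by the paper's third property.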

    Symmetrically and Asymmetrically Hard Cryptography

    The main efficiency metrics for a cryptographic primitive are its speed, its code size and its memory complexity. For a variety of reasons, many algorithms have been proposed that, instead of minimizing these costs, deliberately increase one of them, each choice yielding a corresponding form of hardness. We present for the first time a unified framework for describing the hardness of a primitive along any of these three axes: code-hardness, time-hardness and memory-hardness. This unified view allows us to present modular block cipher and sponge constructions which can have any of the three forms of hardness and can be used to build any higher-level symmetric primitive: hash function, PRNG, etc. We also formalize a new concept: asymmetric hardness. It creates two classes of users: common users have to compute a function with a certain hardness, while users knowing a secret can compute the same function in a far cheaper way. Functions with such an asymmetric hardness can be directly used in both our modular structures, thus constructing any symmetric primitive with an asymmetric hardness. We also propose the first asymmetrically memory-hard function, Diodon. As illustrations of our framework, we introduce Whale and Skipper. Whale is a code-hard hash function which could be used as a key derivation function, and Skipper is the first asymmetrically time-hard block cipher.
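    As a rough illustration of one axis, time-hardness, a function can be made deliberately slow by forcing a long sequential chain of hash calls (our own toy example, not one of the paper's constructions; in particular it is symmetric, whereas Skipper additionally gives secret holders a cheap evaluation path):

        import hashlib

        def time_hard(data: bytes, iterations: int = 1_000_000) -> bytes:
            # Each call depends on the previous digest, so the chain is
            # inherently sequential: cost grows linearly with `iterations`.
            digest = hashlib.sha256(data).digest()
            for _ in range(iterations):
                digest = hashlib.sha256(digest).digest()
            return digest

        tag = time_hard(b"example input")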

    Virtualized Reconfigurable Resources and Their Secured Provision in an Untrusted Cloud Environment

    The cloud computing business grows year after year. To keep up with increasing demand and to offer more services, data center providers are always searching for novel architectures. One of these is the FPGA: reconfigurable hardware with high compute power and energy efficiency. But some clients cannot make use of such remote processing capabilities: not every involved party is trustworthy, and the complex management software has potential security flaws, so clients' sensitive data and algorithms cannot be sufficiently protected. In this thesis, state-of-the-art hardware, cloud and security concepts are analyzed and combined. On one side are reconfigurable virtual FPGAs. They are a flexible resource and fulfill the cloud characteristics at the price of security. On the other side stands a strong requirement for exactly that security. To provide it, an immutable controller is embedded, enabling a direct, confidential and secure transfer of clients' configurations. This establishes a trustworthy compute space inside an untrusted cloud environment: clients can securely transfer their sensitive data and algorithms without involving vulnerable software or the data center provider. The concept is implemented as a prototype, and based on it, necessary changes to current FPGAs are analyzed. To fully enable reconfigurable yet secure hardware in the cloud, a new hybrid architecture is required.

    Cryptanalysis of an oblivious PRF from supersingular isogenies

    We cryptanalyse the SIDH-based oblivious pseudorandom function from supersingular isogenies proposed at Asiacrypt'20 by Boneh, Kogan and Woo. To this end, we give an attack on an assumption, the auxiliary one-more assumption, that was introduced by Boneh et al. and we show that this leads to an attack on the oblivious PRF itself. The attack breaks the pseudorandomness as it allows adversaries to evaluate the OPRF without further interactions with the server after some initial OPRF evaluations and some offline computations. More specifically, we first propose a polynomial-time attack. Then, we argue it is easy to change the OPRF protocol to include some countermeasures, and present a second subexponential attack that succeeds in the presence of said countermeasures. Both attacks break the security parameters suggested by Boneh et al. Furthermore, we provide a proof of concept implementation as well as some timings of our attack. Finally, we examine the generation of one of the OPRF parameters and argue that a trusted third party is needed to guarantee provable security.

    A Comprehensive Protocol Suite for Secure Two-Party Computation

    Secure multi-party computation allows a number of distrusting parties to collaborate in extracting new knowledge from their joint private data, without any party learning the other participants' secrets in the process. The efficient and mature Sharemind secure computation platform has relied on a three-party suite of protocols based on secret sharing for supporting large real-world applications. However, in some scenarios a two-party model is a better fit, for economic or organizational reasons, when no natural third party is involved in the application. In this work, we design and implement a full protocol suite for two-party computations on Sharemind, providing an alternative and viable solution in such cases. We aim foremost for efficiency that is on par with the existing three-party protocols. To this end, we introduce more efficient techniques for the precomputation of Beaver triples using oblivious transfer extension, as the two-party protocols for arithmetic fundamentally rely on efficient triple generation. We reduce communication costs compared to existing methods by using 1-out-of-N oblivious transfer extension in a novel way, and provide insights into the engineering challenges of implementing these methods efficiently. Furthermore, we show security of our constructions under strictly weaker assumptions than previously required, by avoiding the random oracle model. We describe and implement a large set of integer operations and data conversion protocols that are competitive with the existing three-party protocols, providing an overall solid foundation for two-party computations on Sharemind.
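    To illustrate why Beaver triples are the workhorse here, the following is a toy two-party multiplication on additive secret shares mod 2^64 (a sketch under simplifying assumptions: the triple comes from a trusted dealer and both parties run in one process, whereas the thesis generates triples with oblivious transfer extension):

        import secrets

        MOD = 2**64

        def share(x):
            """Split x into two additive shares mod 2^64."""
            r = secrets.randbelow(MOD)
            return r, (x - r) % MOD

        def beaver_multiply(x_sh, y_sh, triple):
            """Multiply shared x and y using one triple (a, b, c), c = a*b."""
            a_sh, b_sh, c_sh = triple
            # Each party masks its shares; d = x - a and e = y - b are the
            # only values opened (communicated), and they reveal nothing
            # about x and y because a and b are uniformly random.
            d = sum((x - a) % MOD for x, a in zip(x_sh, a_sh)) % MOD
            e = sum((y - b) % MOD for y, b in zip(y_sh, b_sh)) % MOD
            # Local share of the product: z_i = c_i + d*b_i + e*a_i,
            # with the public term d*e added by one party only.
            z = [(c + d * b + e * a) % MOD for a, b, c in zip(a_sh, b_sh, c_sh)]
            z[0] = (z[0] + d * e) % MOD
            return z

        a, b = secrets.randbelow(MOD), secrets.randbelow(MOD)
        triple = (share(a), share(b), share(a * b % MOD))
        z_sh = beaver_multiply(share(6), share(7), triple)
        assert sum(z_sh) % MOD == 42

    Since each multiplication consumes one triple, the online phase is only as fast as triple precomputation, which is why the thesis concentrates its OT-extension optimizations there.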

    Randomized Data Structures for the Dynamic Closest-Pair Problem

    We describe a new randomized data structure, the sparse partition, for solving the dynamic closest-pair problem. Using this data structure the closest pair of a set of $n$ points in $k$-dimensional space, for any fixed $k$, can be found in constant time. If the points are chosen from a finite universe, and if the floor function is available at unit cost, then the data structure supports insertions into and deletions from the set in expected $O(\log n)$ time and requires expected $O(n)$ space. Here, it is assumed that the updates are chosen by an adversary who does not know the random choices made by the data structure. The data structure can be modified to run in $O(\log^2 n)$ expected time per update in the algebraic decision tree model of computation. Even this version is more efficient than the currently best known deterministic algorithms for solving the problem for $k > 1$.
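    The flavor of such randomized grid methods can be seen in this static, planar simplification (our own sketch, not the paper's dynamic sparse partition): sample a distance estimate d, bucket the points into a grid of cell side d, and compare each point only against nearby cells.

        import math
        import random
        from collections import defaultdict

        def closest_pair_distance(points):
            """Static 2D closest pair via a randomized grid. Any sampled
            estimate d is >= the true minimum distance, so the closest
            pair always lies in the same or an adjacent cell of side d."""
            pts = points[:]
            random.shuffle(pts)
            sample = pts[: max(2, int(math.sqrt(len(pts))))]
            d = min(math.dist(p, q)
                    for i, p in enumerate(sample) for q in sample[i + 1:])
            if d == 0:
                return 0.0
            grid, best = defaultdict(list), d
            for p in pts:
                cx, cy = int(p[0] // d), int(p[1] // d)
                # Only points in the 3x3 cell neighborhood can be closer than d.
                for nx in (cx - 1, cx, cx + 1):
                    for ny in (cy - 1, cy, cy + 1):
                        for q in grid[(nx, ny)]:
                            best = min(best, math.dist(p, q))
                grid[(cx, cy)].append(p)
            return best

        pts = [(random.random(), random.random()) for _ in range(1000)]
        print(closest_pair_distance(pts))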

    Using quantum key distribution for cryptographic purposes: a survey

    The appealing feature of quantum key distribution (QKD), from a cryptographic viewpoint, is the ability to prove the information-theoretic security (ITS) of the established keys. As a key establishment primitive, however, QKD does not provide a standalone security service on its own: the secret keys established by QKD are in general then used by subsequent cryptographic applications for which the requirements, the context of use and the security properties can vary. It is therefore important, from the perspective of integrating QKD in security infrastructures, to analyze how QKD can be combined with other cryptographic primitives. The purpose of this survey article, which is mostly centered on European research results, is to contribute to such an analysis. We first review and compare the properties of the existing key establishment techniques, QKD being one of them. We then study more specifically two generic scenarios related to the practical use of QKD in cryptographic infrastructures: 1) using QKD as a key renewal technique for a symmetric cipher over a point-to-point link; 2) using QKD in a network containing many users with the objective of offering an any-to-any key establishment service. We discuss the constraints as well as the potential interest of using QKD in these contexts, and finally give an overview of challenges related to the development of QKD technology that also constitute potential avenues for cryptographic research. (Revised version of the SECOQC White Paper; published in the special issue on QKD of Theoretical Computer Science (2014), pp. 62-8.)
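    Scenario 1 amounts to periodically rekeying a symmetric cipher from the QKD key store. A hedged sketch of that pattern follows, with os.urandom standing in for QKD-delivered key material, AES-GCM via the pyca/cryptography package as the cipher, and a message-count renewal policy; all of these are our own choices rather than the survey's:

        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        def fresh_qkd_key() -> bytes:
            # Placeholder: a deployment would pop a 256-bit key from the
            # store that the QKD link keeps filled.
            return os.urandom(32)

        class RenewedChannel:
            """Sender side of a point-to-point link that renews its
            AES-GCM key every `renewal_period` messages."""

            def __init__(self, renewal_period: int = 1000):
                self.period = renewal_period
                self.count = 0
                self.key = fresh_qkd_key()

            def encrypt(self, plaintext: bytes) -> bytes:
                if self.count and self.count % self.period == 0:
                    self.key = fresh_qkd_key()  # key renewal step
                self.count += 1
                nonce = os.urandom(12)
                return nonce + AESGCM(self.key).encrypt(nonce, plaintext, None)

    Both endpoints must perform the renewal in lockstep and agree on which stored key to use; as the survey stresses, once QKD keys feed a computationally secure cipher, the end-to-end guarantee is no longer information-theoretic.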

    Succinct Filters for Sets of Unknown Sizes

    The membership problem asks to maintain a set S ⊆ [u], supporting insertions and membership queries, i.e., testing if a given element is in the set. A data structure that computes exact answers is called a dictionary. When a (small) false positive rate ε is allowed, the data structure is called a filter. The space usage of standard dictionaries or filters usually depends on the upper bound on the size of S, while the actual set can be much smaller. Pagh, Segev and Wieder [Pagh et al., 2013] were the first to study filters with varying space usage based on the current |S|. They showed that in order to match the space with the current set size n = |S|, any filter data structure must use (1-o(1))n(log(1/ε) + (1-O(ε)) log log n) bits, in contrast to the well-known lower bound of N log(1/ε) bits, where N is an upper bound on |S|. They also presented a data structure with almost optimal space of (1+o(1))n(log(1/ε) + O(log log n)) bits provided that n > u^0.001, with expected amortized constant insertion time and worst-case constant lookup time. In this work, we present a filter data structure with improvements in two aspects:
    - it has constant worst-case time for all insertions and lookups with high probability;
    - it uses space (1+o(1))n(log(1/ε) + log log n) bits when n > u^0.001, achieving the optimal leading constant for all ε = o(1).
    We also present a dictionary that uses (1+o(1))n log(u/n) bits of space, matching the optimal space in terms of the current size, and performs all operations in constant time with high probability.