
    Fuzzy Password-Authenticated Key Exchange

    Consider key agreement by two parties who start out knowing a common secret (which we refer to as a “pass-string”, a generalization of “password”), but face two complications: (1) the pass-string may come from a low-entropy distribution, and (2) the two parties’ copies of the pass-string may have some noise, and thus not match exactly. We provide the first efficient and general solutions to this problem that enable, for example, key agreement based on commonly used biometrics such as iris scans. The problem of key agreement with each of these complications individually has been well studied in the literature. Key agreement from low-entropy shared pass-strings is achieved by password-authenticated key exchange (PAKE), and key agreement from noisy but high-entropy shared pass-strings is achieved by information-reconciliation protocols as long as the two secrets are “close enough.” However, the problem of key agreement from noisy low-entropy pass-strings has never been studied. We introduce (universally composable) fuzzy password-authenticated key exchange (fPAKE), which solves exactly this problem. fPAKE places no entropy requirements on the pass-strings and enables secure key agreement as long as the two pass-strings are “close” under some notion of closeness. We also give two constructions. The first construction achieves our fPAKE definition for any (efficiently computable) notion of closeness, including those that could not be handled before even in the high-entropy setting. It uses Yao’s garbled circuits in a way that is only twice as costly as their use against semi-honest adversaries, yet guarantees security against malicious adversaries. The second construction is more efficient, but achieves our fPAKE definition only for pass-strings with low Hamming distance. It builds on very simple primitives: robust secret sharing and PAKE.
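
    The low-Hamming-distance construction can be pictured with a small toy model. The sketch below is illustrative only: a salted hash stands in for the key each per-position PAKE would produce (which is not secure), and Shamir sharing plus a hash commitment stands in for robust secret sharing. It shows why the key is recoverable exactly when few positions of the two pass-strings disagree.

```python
"""Toy model of the fPAKE idea from robust secret sharing + PAKE.

Assumptions (not from the paper): a salted hash stands in for the per-position
PAKE output, and Shamir sharing plus a hash commitment stands in for robust
secret sharing.  This is NOT a secure implementation.
"""
import hashlib
import secrets
from itertools import combinations

P = 2**127 - 1   # Mersenne prime field for Shamir shares
N = 8            # pass-string length
T = 6            # shares needed to reconstruct => tolerates N - T mismatches


def toy_pake_key(ch: str, i: int) -> int:
    # Stand-in for the per-position PAKE output: both parties derive the
    # same value at position i iff their characters there are equal.
    return int.from_bytes(hashlib.sha256(f"{i}:{ch}".encode()).digest(), "big") % P


def share(secret: int):
    # Shamir sharing: random degree-(T-1) polynomial f with f(0) = secret,
    # evaluated at x = 1..N.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(T - 1)]
    return [sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
            for x in range(1, N + 1)]


def reconstruct(points):
    # Lagrange interpolation at x = 0 from T (x, y) points.
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total


# Alice: share a fresh key and mask each share with her per-position keys.
alice_pw, bob_pw = "10110010", "10110110"   # Hamming distance 1 <= N - T
key = secrets.randbelow(P)
commitment = hashlib.sha256(str(key).encode()).hexdigest()
masked = [(s + toy_pake_key(c, i)) % P
          for i, (s, c) in enumerate(zip(share(key), alice_pw))]

# Bob: unmask with his own per-position keys; mismatched positions yield
# garbage shares, and the commitment check stands in for robust reconstruction.
candidates = [(i + 1, (m - toy_pake_key(c, i)) % P)
              for i, (m, c) in enumerate(zip(masked, bob_pw))]
recovered = None
for subset in combinations(candidates, T):
    k = reconstruct(subset)
    if hashlib.sha256(str(k).encode()).hexdigest() == commitment:
        recovered = k
        break

print("Bob recovered Alice's key:", recovered == key)
```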

    Reusable Garbled Circuit Implementation of AES to Avoid Power Analysis Attacks

    Unintended side-channel leaks can be exploited by attackers quickly and with relatively inexpensive equipment, and cloud providers are not equipped to provide assurances of security against such attacks. One of the most well-known and effective side-channel attacks exploits information leaked through power consumption: Differential Power Analysis (DPA) can extract a secret key by measuring the power used while a device executes an algorithm. This research explores the susceptibility of current implementations of Circuit Garbling to power analysis attacks, along with a simple variant that obfuscates functionality and randomizes the power consumption by reusing the garbling keys and the garbled gates. AES is chosen as an example. The first task is to implement garbled variants of basic logic gates in hardware (RTL design) using Circuit Garbling. The second task is to use these gates to create an RTL implementation of AES in Verilog HDL. The final task is to perform a Differential Power Analysis (DPA) on this circuit and evaluate its resilience to the attack.
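
    For reference, the sketch below models a single garbled AND gate in software using the classic point-and-permute technique. It is an assumed textbook construction, not the thesis's RTL design, and is meant only to show the table lookup and hashing that a hardware garbled gate would have to realize.

```python
"""Minimal software model of a Yao garbled AND gate (point-and-permute).

Illustrative only; an RTL implementation would realize the same hashing
and table lookup in hardware.
"""
import hashlib
import secrets


def H(a: bytes, b: bytes) -> bytes:
    # Toy key derivation: the hash of the two input labels "encrypts" an output label.
    return hashlib.sha256(a + b).digest()


def xor(x: bytes, y: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(x, y))


def new_wire():
    # Two random 32-byte labels; the last byte carries the public select bit,
    # and the two labels on a wire get opposite select bits (point-and-permute).
    bit = secrets.randbelow(2)
    return {0: secrets.token_bytes(31) + bytes([bit]),
            1: secrets.token_bytes(31) + bytes([bit ^ 1])}


def garble_and(wa, wb, wc):
    # 4-entry garbled table indexed by the inputs' select bits, so the
    # evaluator never learns which plaintext bits its labels encode.
    table = [None] * 4
    for a in (0, 1):
        for b in (0, 1):
            row = 2 * wa[a][-1] + wb[b][-1]
            table[row] = xor(H(wa[a], wb[b]), wc[a & b])
    return table


def eval_and(table, la, lb):
    # Evaluator holds one label per input wire and the garbled table only.
    return xor(table[2 * la[-1] + lb[-1]], H(la, lb))


# Garbler builds the gate ...
A, B, C = new_wire(), new_wire(), new_wire()
table = garble_and(A, B, C)

# ... and the evaluator, given single labels, recovers the matching output label.
assert eval_and(table, A[1], B[1]) == C[1]
assert eval_and(table, A[0], B[1]) == C[0]
print("garbled AND gate evaluates correctly")
```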

    Oblivious Sensor Fusion via Secure Multi-Party Combinatorial Filter Evaluation

    This thesis examines the problem of fusing data from several sensors, potentially distributed throughout an environment, in order to consolidate readings into a single coherent view. We consider the setting in which sensor units do not wish others to know their specific sensor streams. Standard methods for handling this fusion make no guarantees about what a curious observer may learn. Motivated by applications where data sources may only choose to participate if given privacy guarantees, we introduce a fusion approach that limits what can be inferred. Our approach is to form an aggregate stream, oblivious to the underlying sensor data, and to evaluate a combinatorial filter on that stream. This is achieved via secure multi-party computation techniques built on cryptographic primitives, which we extend and apply to the problem of fusing discrete sensor signals. We prove that the extensions preserve security under the semi-honest adversary model. Though the approach enables several applications of potential interest, we specifically consider a target tracking case study as a running example. Finally, we report on a basic proof-of-concept implementation demonstrating that it can operate in practice; we report and analyze the empirical running times of the components in the architecture, suggesting directions for future improvement.
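
    To illustrate what is being evaluated, the sketch below runs a tiny combinatorial filter in the clear (no privacy): a set-valued state is updated by discrete beam-sensor events to track which rooms could contain a target. The room layout and event names are assumptions for illustration; the thesis evaluates such a filter under secure multi-party computation rather than in the clear.

```python
"""A tiny combinatorial filter, evaluated in the clear for illustration.

Assumed setting (not the thesis's case study): three rooms in a row, A-B-C,
with a beam sensor on each doorway.  Observation "ab" means the A-B beam
fired, "bc" means the B-C beam fired.  The filter state is the set of rooms
that could currently contain the target.
"""

# Which two rooms each beam separates.
BEAMS = {"ab": ("A", "B"), "bc": ("B", "C")}


def step(possible: frozenset, obs: str) -> frozenset:
    # The beam fired, so the target crossed it: it must have been in one of
    # the two adjacent rooms and is now in the other; all other rooms are
    # eliminated from the possible set.
    x, y = BEAMS[obs]
    nxt = set()
    if x in possible:
        nxt.add(y)
    if y in possible:
        nxt.add(x)
    return frozenset(nxt)


def run_filter(observations, start=frozenset({"A", "B", "C"})):
    # Fold the discrete event stream through the filter transitions.
    state = start
    for obs in observations:
        state = step(state, obs)
    return state


# Known start in room A, one crossing of the A-B beam: the target is in B.
print(run_filter(["ab"], start=frozenset({"A"})))   # frozenset({'B'})
# Unknown start, crossings of A-B then B-C: the target must now be in C.
print(run_filter(["ab", "bc"]))                     # frozenset({'C'})
```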

    Metal: A Metadata-Hiding File-Sharing System

    File-sharing systems like Dropbox offer insufficient privacy because a compromised server can see the file contents in the clear. Although encryption can hide such contents from the servers, metadata leakage remains significant. The goal of our work is to develop a file-sharing system that hides metadata, including user identities and file access patterns. Metal is the first file-sharing system that hides such metadata from malicious users and that has a latency of only a few seconds. The core of Metal consists of a new two-server multi-user oblivious RAM (ORAM) scheme that is secure against malicious users, a metadata-hiding access control protocol, and a capability sharing protocol. Compared with the state-of-the-art malicious-user file-sharing scheme PIR-MCORAM (Maffei et al. '17), which does not hide user identities, Metal hides user identities and is 500x faster (in terms of amortized latency) or 10^5x faster (in terms of worst-case latency).
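
    Metal's two-server ORAM is considerably more involved, but the benefit of splitting trust across two non-colluding servers can be illustrated with the classic information-theoretic two-server PIR trick, sketched below. This is a deliberately simpler, different primitive than Metal's scheme: each server sees only a uniformly random index set, so neither learns which block the client fetched, yet XORing the two answers recovers it.

```python
"""Classic two-server PIR, shown only to illustrate the two-server idea.
This is NOT Metal's ORAM scheme; it assumes the two servers do not collude.
"""
import secrets

NUM_BLOCKS, BLOCK_SIZE = 16, 32
# Both servers hold an identical copy of the database.
database = [secrets.token_bytes(BLOCK_SIZE) for _ in range(NUM_BLOCKS)]


def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


def server_answer(db, index_set):
    # Each server XORs together the blocks named in the query it received.
    out = bytes(BLOCK_SIZE)
    for i in index_set:
        out = xor(out, db[i])
    return out


def client_query(want: int):
    # Server 1 gets a uniformly random subset of indices; server 2 gets the
    # same set with the wanted index toggled.  Each set alone is uniform.
    s1 = {i for i in range(NUM_BLOCKS) if secrets.randbelow(2)}
    return s1, s1 ^ {want}


want = 7
q1, q2 = client_query(want)
block = xor(server_answer(database, q1), server_answer(database, q2))
assert block == database[want]
print("retrieved block", want, "without either server learning the index")
```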

    Perfect Implementation

    Privacy and trust affect our strategic thinking, yet they have not been precisely modeled in mechanism design. In settings of incomplete information, traditional implementations of a normal-form mechanism, by disregarding the players' privacy or assuming trust in a mediator, may fail to reach the mechanism's objectives. We thus investigate implementations of a new type. We put forward the notion of a perfect implementation of a normal-form mechanism M: in essence, a concrete extensive-form mechanism exactly preserving all strategic properties of M, without relying on a trusted mediator or violating the privacy of the players. We prove that any normal-form mechanism can be perfectly implemented by a verifiable mediator using envelopes and an envelope-randomizing device (i.e., the same tools used for running fair lotteries or tallying secret votes). Unlike a trusted mediator, a verifiable one performs only prescribed public actions, so that everyone can verify that he is acting properly and that he never learns any information that should remain private.