
    Secure Computation using Leaky Correlations (Asymptotically Optimal Constructions)

    Most secure computation protocols can be effortlessly adapted to offload a significant fraction of their computationally and cryptographically expensive components to an offline phase so that the parties can run a fast online phase and perform their intended computation securely. During this offline phase, parties generate private shares of a sample drawn from a particular joint distribution, referred to as the correlation. These shares, however, are susceptible to leakage attacks by adversarial parties, which can compromise the security of the entire secure computation protocol. The objective, therefore, is to preserve the security of the honest party despite the leakage performed by the adversary on her share. Prior solutions, starting with $n$-bit leaky shares, either used four messages or enabled the secure computation of only sub-linear-size circuits. Our work presents the first 2-message secure computation protocol for 2-party functionalities of $\Theta(n)$ circuit size despite $\Theta(n)$ bits of leakage, a qualitatively optimal result. We compose a suitable 2-message secure computation protocol in parallel with our new 2-message correlation extractor. Correlation extractors, introduced by Ishai, Kushilevitz, Ostrovsky, and Sahai (FOCS 2009) as a natural generalization of privacy amplification and randomness extraction, recover "fresh" correlations from the leaky ones, which are subsequently used by other cryptographic protocols. We construct the first 2-message correlation extractor that produces $\Theta(n)$-bit fresh correlations even after $\Theta(n)$-bit leakage. Our principal technical contribution, which is of potential independent interest, is the construction of a family of multiplication-friendly linear secret-sharing schemes that is simultaneously a family of small-bias distributions. We construct this family by randomly "twisting then permuting" appropriate Algebraic Geometry codes over constant-size fields.
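    To make the offline-correlation setting concrete, here is a minimal sketch (mine, not the paper's construction) that samples shares of the standard random oblivious transfer (ROT) correlation in an offline phase and then applies a bounded leakage function to one share; the leakage positions and budget are illustrative placeholders.

```python
# Minimal sketch of the leaky-correlation setting, using the standard random
# oblivious transfer (ROT) correlation as the offline sample. The leakage
# function below is a hypothetical placeholder, not the adversarial leakage
# analyzed in the paper.
import secrets

def offline_rot_sample(n: int):
    """Offline phase: sample n independent ROT correlations.
    Sender holds (x0, x1); receiver holds (b, x_b)."""
    x0 = [secrets.randbits(1) for _ in range(n)]
    x1 = [secrets.randbits(1) for _ in range(n)]
    b = [secrets.randbits(1) for _ in range(n)]
    xb = [x1[i] if b[i] else x0[i] for i in range(n)]
    return (x0, x1), (b, xb)

def leak_bits(share_bits, positions):
    """Bounded leakage: the adversary learns the bits at the chosen positions."""
    return [share_bits[i] for i in positions]

if __name__ == "__main__":
    n = 16
    sender_share, receiver_share = offline_rot_sample(n)
    # Adversary leaks a constant fraction of the sender's first bit-string.
    leaked = leak_bits(sender_share[0], positions=range(n // 4))
    print("leaked bits:", leaked)
```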

    TipTrap: A Co-located Direct Manipulation Technique for Acoustically Levitated Content

    Acoustic levitation has emerged as a promising approach for mid-air displays, using multiple levitated particles as 3D voxels, cloth and thread props, or high-speed tracer particles, with the promise of creating 3D displays that users can see, hear and feel with their bare eyes, ears and hands. However, interaction with this mid-air content has always occurred at a distance, since external objects in the display volume (e.g. the user's hands) can disturb the acoustic fields and make the particles fall. This paper proposes TipTrap, a co-located direct manipulation technique for acoustically levitated particles. TipTrap leverages the reflection of ultrasound off the users' skin and employs a closed-loop system to create functional acoustic traps 2.1 mm below the fingertips, addressing the three basic stages of direct manipulation: selection, manipulation and deselection. We use finite-difference time-domain (FDTD) simulations to explain the principles enabling TipTrap, explore how finger reflections and user strategies (e.g. approaching direction, orientation and tracking errors) influence the quality of the traps, and use these results to design our technique. We then implement the technique, characterize its performance with a robotic hand setup, and finish with an exploration of TipTrap's ability to manipulate different types of levitated content.
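    As a rough illustration of the simulation method mentioned above, the sketch below implements a minimal one-dimensional acoustic FDTD update (staggered pressure/velocity grids driven by a 40 kHz tone); it only demonstrates the numerical scheme, is not the authors' 3D simulator, and all parameter values are assumptions of mine.

```python
# Minimal 1D acoustic FDTD sketch (staggered leapfrog update of pressure p
# and particle velocity u). Illustrative parameters only; not the 3D
# simulator used in the paper.
import numpy as np

c = 343.0            # speed of sound in air [m/s]
rho = 1.2            # air density [kg/m^3]
f = 40e3             # 40 kHz ultrasound tone
dx = c / f / 20      # 20 grid points per wavelength
dt = 0.5 * dx / c    # CFL-stable time step
nx, nt = 400, 2000

p = np.zeros(nx)         # pressure at integer grid points
u = np.zeros(nx + 1)     # velocity at half grid points

for n in range(nt):
    # update velocity from the pressure gradient
    u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
    # update pressure from the velocity divergence
    p -= dt * rho * c**2 / dx * (u[1:] - u[:-1])
    # additive tone source near the left boundary (stands in for a transducer)
    p[5] += np.sin(2 * np.pi * f * n * dt)

print("peak |p| on the grid:", np.abs(p).max())
```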

    The Price of Low Communication in Secure Multi-Party Computation

    Traditional protocols for secure multi-party computation among n parties communicate at least a linear (in n) number of bits, even when computing very simple functions. In this work we investigate the feasibility of protocols with sublinear communication complexity. Concretely, we consider two clients, one of which may be corrupted, who wish to perform some “small” joint computation using n servers but without any trusted setup. We show that enforcing sublinear communication complexity drastically affects the feasibility bounds on the number of corrupted parties that can be tolerated in the setting of information-theoretic security. We provide a complete investigation of security in the presence of semi-honest adversaries (static and adaptive, with and without erasures) and initiate the study of security in the presence of malicious adversaries. For semi-honest static adversaries, our bounds essentially match the corresponding bounds when there is no communication restriction; that is, we can tolerate up to $t < (1/2 - \epsilon)n$ corrupted parties. For the adaptive case, however, the situation is different. We prove that without erasures even a small constant fraction of corruptions is intolerable, and, more surprisingly, that when erasures are allowed, $t < (1 - \sqrt{0.5} - \epsilon)n$ corruptions can be tolerated, which we also show to be essentially optimal. The latter optimality proof hinges on a new treatment of probabilistic adversary structures that may be of independent interest. In the case of active corruptions in the sublinear communication setting, we prove that static “security with abort” is feasible when $t < (1/2 - \epsilon)n$, namely, the bound that is tight for semi-honest security. All of our negative results in fact rule out protocols with sublinear message complexity.
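    The corruption thresholds quoted in the abstract can be collected as follows (the layout and the numerical value of $1-\sqrt{0.5}$ are mine; the bounds themselves are quoted from the abstract):

```latex
% Corruption thresholds in the sublinear-communication setting (from the abstract).
\begin{align*}
\text{semi-honest, static:}                      &\quad t < \left(\tfrac{1}{2} - \epsilon\right) n \\
\text{semi-honest, adaptive, no erasures:}       &\quad \text{no constant fraction of corruptions tolerable} \\
\text{semi-honest, adaptive, with erasures:}     &\quad t < \left(1 - \sqrt{0.5} - \epsilon\right) n \approx 0.293\,n \\
\text{malicious, static (security with abort):}  &\quad t < \left(\tfrac{1}{2} - \epsilon\right) n
\end{align*}
```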

    Leakage-resilience of the Shamir Secret-sharing Scheme against Physical-bit Leakages

    Efficient Reed-Solomon code reconstruction algorithms, for example, by Guruswami and Wootters (STOC 2016), translate into local leakage attacks on Shamir secret-sharing schemes over characteristic-2 fields. However, Benhamouda, Degwekar, Ishai, and Rabin (CRYPTO 2018) showed that the Shamir secret-sharing scheme over prime fields is leakage-resilient to one-bit local leakage if the reconstruction threshold is roughly 0.87 times the total number of parties. In several application scenarios, like secure multi-party multiplication, the reconstruction threshold must be at most half the number of parties. Furthermore, the number of leakage bits that the Shamir secret-sharing scheme is resilient to is also unclear. Motivated by these questions, we study the Shamir secret-sharing scheme's leakage resilience over a prime field $F$. The parties' secret shares, which are elements of the finite field $F$, are naturally represented as $\lambda$-bit binary strings representing the elements $\{0,1,\dotsc,p-1\}$. In our leakage model, the adversary can independently probe $m$ bit locations from each secret share. The inspiration for considering this leakage model stems from the impact that the study of oblivious transfer combiners had on general correlation extraction algorithms, and from the significant influence that protecting circuits from probing attacks has had on leakage-resilient secure computation. Consider an arbitrary reconstruction threshold $k \geq 2$, physical bit-leakage parameter $m \geq 1$, and number of parties $n \geq 1$. We prove that Shamir's secret-sharing scheme with random evaluation places is leakage-resilient with high probability when the order of the field $F$ is sufficiently large; ignoring polylogarithmic factors, one needs to ensure that $\log|F| \geq n/k$. Our result, excluding polylogarithmic factors, states that Shamir's scheme is secure as long as the total amount of leakage $m\cdot n$ is less than the entropy $k\cdot\lambda$ introduced by the Shamir secret-sharing scheme. Note that our result holds even for small constant values of the reconstruction threshold $k$, which is essential to several application scenarios. To complement this positive result, we present a physical-bit leakage attack for $m=1$ bit of leakage from $n=k$ secret shares and any prime field $F$ satisfying $|F| = 1 \bmod k$. In particular, there are (roughly) $|F|^{n-k+1}$ such vulnerable choices for the $n$-tuple of evaluation places. We lower-bound the advantage of this attack for small values of the reconstruction threshold, like $k=2$ and $k=3$, and any $|F| = 1 \bmod k$. In general, we present a formula calculating our attack's advantage for every $k$ as $|F|\rightarrow\infty$. Technically, our positive result relies on Fourier analysis, analytic properties of proper rank-$r$ generalized arithmetic progressions, and Bézout's theorem to bound the number of solutions to an equation over finite fields. The analysis of our attack relies on determining the "discrepancy" of the Irwin-Hall distribution. A probability distribution's discrepancy is a new property of distributions that our work introduces, which is of potential independent interest.
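    To make the physical-bit leakage model concrete, here is a minimal sketch (mine) of Shamir sharing over a prime field together with an adversary that probes m bit positions of each share's binary representation; the field, threshold, and probed positions are illustrative choices, not the paper's parameters.

```python
# Minimal sketch of Shamir secret sharing over a prime field F_p and of the
# physical-bit leakage model: the adversary probes m bit positions of each
# share's binary representation. Parameters are illustrative only.
import secrets

p = (1 << 61) - 1          # a Mersenne prime; lambda = 61 bits per share
LAMBDA = p.bit_length()

def share(secret: int, k: int, evaluation_places):
    """Degree-(k-1) Shamir sharing of `secret` at the given nonzero places."""
    coeffs = [secret] + [secrets.randbelow(p) for _ in range(k - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % p
        return acc
    return [poly(x) for x in evaluation_places]

def physical_bit_leakage(share_value: int, positions):
    """The adversary learns the bits of the share at the probed positions."""
    return [(share_value >> i) & 1 for i in positions]

if __name__ == "__main__":
    k, n = 2, 5
    places = [secrets.randbelow(p - 1) + 1 for _ in range(n)]  # random, nonzero
    shares = share(secret=42, k=k, evaluation_places=places)
    m = 1                                     # one probed bit per share
    leaked = [physical_bit_leakage(s, positions=[0]) for s in shares]
    print("leaked least-significant bits:", leaked)
```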

    Network Oblivious Transfer

    Motivated by the goal of improving the concrete efficiency of secure multiparty computation (MPC), we study the possibility of implementing an infrastructure for MPC. We propose an infrastructure based on oblivious transfer (OT), which would consist of OT channels between some pairs of parties in the network. We devise information-theoretically secure protocols that allow additional pairs of parties to establish secure OT correlations using the help of other parties in the network, in the presence of a dishonest majority. Our main technical contribution is an upper bound that matches a lower bound of Harnik, Ishai, and Kushilevitz (CRYPTO 2007), who studied the number of OT channels necessary and sufficient for MPC. In particular, we characterize which $n$-party OT graphs $G$ allow $t$-secure computation of OT correlations between all pairs of parties, showing that this is possible if and only if the complement of $G$ does not contain the complete bipartite graph $K_{n-t,n-t}$ as a subgraph.
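    As a toy sanity check of the stated characterization, the brute-force sketch below tests whether the complement of a small OT graph contains $K_{n-t,n-t}$ as a subgraph; the example graph is an arbitrary choice of mine, and the search is exponential, so it is meant only for tiny instances.

```python
# Brute-force check of the characterization from the abstract: t-secure
# computation of OT between all pairs is possible iff the complement of the
# OT graph G contains no complete bipartite K_{n-t, n-t} subgraph.
from itertools import combinations

def complement_contains_kss(edges, n, s):
    """Does the complement of G (on vertices 0..n-1) contain K_{s,s}?"""
    edge_set = {frozenset(e) for e in edges}
    vertices = range(n)
    for left in combinations(vertices, s):
        rest = [v for v in vertices if v not in left]
        for right in combinations(rest, s):
            # K_{s,s} in the complement means no G-edge crosses left/right.
            if all(frozenset((u, v)) not in edge_set for u in left for v in right):
                return True
    return False

if __name__ == "__main__":
    n, t = 4, 2
    # Hypothetical example: a 4-cycle of OT channels 0-1-2-3-0.
    g = [(0, 1), (1, 2), (2, 3), (3, 0)]
    feasible = not complement_contains_kss(g, n, n - t)
    print(f"t={t}-secure all-pairs OT feasible on this graph:", feasible)
```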

    One-Time Programs from Commodity Hardware

    One-time programs, originally formulated by Goldwasser et al. [CRYPTO'08], are a powerful cryptographic primitive with compelling applications. Known solutions for one-time programs, however, require specialized secure hardware that is not widely available (or, alternatively, access to blockchains and very strong cryptographic tools). In this work we investigate the possibility of realizing one-time programs from a recent and now more commonly available hardware functionality: the counter lockbox. A counter lockbox is a stateful functionality that protects an encryption key under a user-specified password and enforces a limited number of incorrect guesses. Counter lockboxes have become widely available in consumer devices and cloud platforms. We show that counter lockboxes can be used to realize one-time programs for general functionalities. We develop a number of techniques to reduce the number of counter lockboxes required for our constructions, which may be of independent interest.
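    The counter lockbox functionality described above can be captured, in idealized form, by a small stateful object such as the following sketch; the password hashing, guess limit, and key-destruction behaviour are my own illustrative assumptions rather than details taken from the paper.

```python
# Idealized counter lockbox: holds a key, releases it only on the correct
# password, and permanently locks after a bounded number of wrong guesses.
# The guess limit and hashing choices below are illustrative assumptions.
import hmac
import hashlib
import secrets

class CounterLockbox:
    def __init__(self, password: str, key: bytes, max_guesses: int = 10):
        self._salt = secrets.token_bytes(16)
        self._pw_digest = self._digest(password)
        self._key = key
        self._remaining = max_guesses

    def _digest(self, password: str) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", password.encode(), self._salt, 100_000)

    def open(self, password: str) -> bytes:
        if self._remaining <= 0:
            raise PermissionError("lockbox permanently locked")
        if hmac.compare_digest(self._digest(password), self._pw_digest):
            return self._key
        self._remaining -= 1
        if self._remaining == 0:
            self._key = None            # key is destroyed after too many failures
        raise ValueError(f"wrong password; {self._remaining} guesses left")

if __name__ == "__main__":
    box = CounterLockbox("hunter2", key=secrets.token_bytes(32), max_guesses=3)
    try:
        box.open("wrong")
    except ValueError as e:
        print(e)
    print("recovered key:", box.open("hunter2").hex())
```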

    Transmit and Receive Signal Processing for MIMO Terrestrial Broadcast Systems

    Multiple-Input Multiple-Output (MIMO) technology in Digital Terrestrial Television (DTT) networks has the potential to increase spectral efficiency and improve network coverage to cope with the competition for limited spectrum (e.g., assignment of the digital dividend and the spectrum demands of mobile broadband), the appearance of new high-data-rate services (e.g., ultra-high definition TV - UHDTV), and the ubiquity of content (e.g., fixed, portable, and mobile). It is widely recognised that MIMO can provide multiple benefits such as additional receive power due to array gain, higher resilience against signal outages due to spatial diversity, and higher data rates due to the spatial multiplexing gain of the MIMO channel. These benefits can be achieved without additional transmit power or bandwidth, but normally come at the expense of higher system complexity at the transmitter and receiver ends. The final system performance gains due to the use of MIMO depend directly on physical characteristics of the propagation environment such as spatial correlation, antenna orientation, and/or power imbalances experienced at the transmit aerials. Additionally, due to complexity constraints and finite-precision arithmetic at the receivers, careful design of specific signal processing algorithms is crucial for overall system performance. This dissertation focuses on transmit and receive signal processing for DTT systems using MIMO-BICM (Bit-Interleaved Coded Modulation) without a feedback channel from the receiver terminals to the transmitter. At the transmitter side, this thesis investigates MIMO precoding in DTT systems to overcome system degradations due to different channel conditions. At the receiver side, the focus is on the design and evaluation of practical MIMO-BICM receivers based on quantized information, and on its impact on both on-chip memory size and system performance. These investigations were carried out within the standardization processes of DVB-NGH (Digital Video Broadcasting - Next Generation Handheld), the handheld evolution of DVB-T2 (Terrestrial - Second Generation), and ATSC 3.0 (Advanced Television Systems Committee - Third Generation), which incorporate MIMO-BICM as a key technology to overcome the Shannon limit of single-antenna communications. Nonetheless, this dissertation employs a generic approach to the design, analysis, and evaluation, so the results and ideas can be applied to other wireless broadcast communication systems using MIMO-BICM.
    Vargas Paredero, DE. (2016). Transmit and Receive Signal Processing for MIMO Terrestrial Broadcast Systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/66081
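    Since the receiver-side work centres on MIMO-BICM demodulation with quantized information, the sketch below computes max-log LLRs for a toy 2x2 MIMO transmission with QPSK and applies a uniform quantizer to the LLRs; the channel model, bit labelling, and quantizer range are illustrative assumptions and do not reproduce the receivers evaluated in the thesis.

```python
# Toy MIMO-BICM receiver step: max-log LLR demodulation for a 2x2 MIMO
# channel with QPSK, followed by uniform LLR quantization. Channel model,
# bit labeling, and quantizer range are illustrative assumptions.
import itertools
import numpy as np

rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)  # Gray-labeled
bits_per_symbol = 2

# Enumerate all candidate 2-antenna symbol vectors and their 4-bit labels.
candidates = []
for i, j in itertools.product(range(4), repeat=2):
    label = [(i >> 1) & 1, i & 1, (j >> 1) & 1, j & 1]
    candidates.append((np.array([qpsk[i], qpsk[j]]), label))

def maxlog_llrs(y, H, noise_var):
    """Max-log LLR for each of the 4 coded bits carried by one MIMO symbol."""
    metrics = [np.sum(np.abs(y - H @ x) ** 2) / noise_var for x, _ in candidates]
    llrs = []
    for b in range(2 * bits_per_symbol):
        m0 = min(m for m, (_, lab) in zip(metrics, candidates) if lab[b] == 0)
        m1 = min(m for m, (_, lab) in zip(metrics, candidates) if lab[b] == 1)
        llrs.append(m1 - m0)        # positive LLR favors bit = 0
    return np.array(llrs)

def quantize(llrs, n_bits=4, clip=8.0):
    """Uniform LLR quantizer: round to a step of 2*clip/2**n_bits, clip to [-clip, clip]."""
    step = 2 * clip / (2 ** n_bits)
    return np.clip(np.round(llrs / step) * step, -clip, clip)

if __name__ == "__main__":
    H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
    x, label = candidates[5]
    noise_var = 0.1
    y = H @ x + np.sqrt(noise_var / 2) * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
    llr = maxlog_llrs(y, H, noise_var)
    print("transmitted bits:", label)
    print("LLRs            :", np.round(llr, 2))
    print("quantized LLRs  :", quantize(llr))
```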

    LOITA: Lunar Optical/Infrared Telescope Array

    LOITA (Lunar Optical/Infrared Telescope Array) is a lunar-based interferometer composed of 18 alt-azimuth telescopes arranged in a circular geometry. This geometry results in excellent uv coverage and allows baselines up to 5 km long. The angular resolution will be 25 micro-arcsec at 500 nm, and the main spectral range of the array will be 200 to 1100 nm. For infrared planet detection, the spectral range may be extended to nearly 10 µm. The telescopes have a Cassegrain configuration with a 1.75 m diameter primary mirror and a 0.24 m diameter secondary mirror. A three-stage (coarse, intermediate, and fine) optical delay system, controlled by laser metrology, is used to equalize path lengths from different telescopes to within a few wavelengths. All instruments and the fine delay system are located within the instrument room. Upon exiting the fine delay system, all beams enter the beam combiner and are then directed to the various scientific instruments and detectors. The array instrumentation will consist of CCD detectors optimized for both the visible and infrared as well as specially designed cameras and spectrographs. For direct planet detection, a beam combiner employing achromatic nulling interferometry will be used to reduce the starlight (by several orders of magnitude) while passing the planet light. A single telescope will be capable of autonomous operation. This telescope will be equipped with four instruments: a wide field and planetary camera, a faint object camera, a high resolution spectrograph, and a faint object spectrograph. These instruments will be housed beneath the telescope. The array pointing and control system is designed to meet the fine pointing requirement of one micro-arcsec stability and to allow precise tracking of celestial objects for up to 12 days. During the lunar night, the optics and the detectors will be passively cooled to temperatures of 70-80 K. To maintain continuous communication with the Earth, a relay satellite placed at the L4 libration point will be used in conjunction with the Advanced Tracking and Data Relay Satellite System (ATDRSS). Electrical power of about 10 kW will be supplied by a nuclear reactor based on SP-100 technology. LOITA will be constructed in three phases of six telescopes each. The total mass of the first operational phase is estimated at 58,820 kg. The cost of the fully operational first phase of the observatory is estimated at $8.9 billion. LOITA's primary objectives will be to detect and characterize planets around nearby stars (up to ten parsecs away), and to study the physics of collapsed stellar objects, solar/stellar surface features, and the processes in the nuclear regions of galaxies and quasars. An interferometric array such as LOITA will be capable of achieving resolutions three orders of magnitude greater than Hubble's design goal. LOITA will also be able to maintain higher signal-to-noise ratios than are currently attainable, due to the long observation times available on the Moon.
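    As a rough cross-check of the quoted angular resolution (my own calculation, using the simple $\lambda/B$ diffraction estimate rather than whatever resolution criterion the design team used):

```latex
% Diffraction-limited angular resolution for the maximum baseline.
\theta \approx \frac{\lambda}{B}
       = \frac{500\times10^{-9}\,\mathrm{m}}{5\times10^{3}\,\mathrm{m}}
       = 1\times10^{-10}\,\mathrm{rad}
       \approx 21\ \mu\mathrm{as}
```

    This agrees in order of magnitude with the 25 micro-arcsec figure stated above.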