
    Silver: Silent VOLE and Oblivious Transfer from Hardness of Decoding Structured LDPC Codes

    We put forth new protocols for oblivious transfer extension and vector OLE, called \emph{Silver}, for SILent Vole and oblivious transfER. Silver offers extremely high performance: generating 10 million random OTs on one core of a standard laptop requires only 300ms of computation and 122KB of communication. This represents 37% less computation and ~1300x less communication than the standard IKNP protocol, as well as ~4x less computation and ~4x less communication than the recent protocol of Yang et al. (CCS 2020). Silver is \emph{silent}: after a one-time cheap interaction, two parties can store small seeds, from which they can later \emph{locally} generate a large number of OTs \emph{while remaining offline}. Neither IKNP nor Yang et al. enjoys this feature; compared to the best known silent OT extension protocol of Boyle et al. (CCS 2019), upon which we build, Silver requires 19x less computation and the same communication. Due to its attractive efficiency features, Silver yields major efficiency improvements in numerous MPC protocols. Our approach is a radical departure from the standard paradigm for building MPC protocols, in that we do \emph{not} attempt to base our constructions on a well-studied assumption. Rather, we follow an approach closer in spirit to the standard paradigm in the design of symmetric primitives: we identify a set of fundamental structural properties that allow us to withstand all known attacks, and put forth a candidate design guided by our analysis. We also rely on extensive experiments to analyze our candidate and experimentally validate its properties. In essence, our approach boils down to constructing new families of linear codes with (plausibly) high minimum distance and extremely low encoding time. While further analysis is of course warranted to confidently assess the security of Silver, we hope and believe that initiating this approach to the design of MPC primitives will pave the way to new secure primitives with extremely attractive efficiency features.
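    A minimal sketch of the low-encoding-cost idea mentioned above (linear codes whose encoding time is far below the dense k·n cost), not the Silver construction itself: encoding with a sparse binary generator matrix stored as per-row column supports, so the cost scales with the number of non-zero entries. The toy sizes, random row supports and function names are illustrative assumptions only.

        import random

        def sparse_encode(message_bits, cols_per_row, n):
            """codeword[j] = XOR of all message bits i such that j is in cols_per_row[i]."""
            codeword = [0] * n
            for i, bit in enumerate(message_bits):
                if bit:
                    for j in cols_per_row[i]:
                        codeword[j] ^= 1
            return codeword

        k, n, row_weight = 8, 16, 3
        rng = random.Random(0)
        # each message bit touches only `row_weight` codeword positions,
        # so encoding costs O(k * row_weight) rather than O(k * n)
        G_rows = [rng.sample(range(n), row_weight) for _ in range(k)]
        msg = [rng.randint(0, 1) for _ in range(k)]
        print(sparse_encode(msg, G_rows, n))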

    Design of serially-concatenated LDGM codes

    Since Shannon demonstrated in 1948 the feasibility of achieving an arbitrarily low error probability in a communications system provided that the transmission rate is kept below a certain limit, one of the greatest challenges in the realm of digital communications and, more specifically, in the channel coding field, has been finding codes that approach this limit as closely as possible with reasonable encoding and decoding complexity. However, it was not until 1993, when Berrou et al. presented the turbo codes, that a coding scheme capable of performing at less than 1 dB from Shannon's limit with an extremely low error probability was found. The idea on which these codes are based is the iterative decoding of concatenated components that exchange information about the transmitted bits, which is known as the "turbo principle". The generalization of this idea led in 1995 to the rediscovery of LDPC (Low-Density Parity-Check) codes, first proposed by Gallager in the 1960s. LDPC codes are linear block codes with a sparse parity-check matrix that are able to surpass the performance of turbo codes with a lower decoding complexity. However, because the generator matrix of a general LDPC code is not sparse, their encoding complexity can be excessively high. LDGM (Low-Density Generator Matrix) codes, a particular case of LDPC codes, have a sparse generator matrix and therefore a lower encoding complexity. However, except for very high-rate codes, LDGM codes are "bad", i.e., they have a non-zero error probability that is independent of the code block length. More recently, IRA (Irregular Repeat-Accumulate) codes, consisting of the serial concatenation of an LDGM code and an accumulator, have been proposed. IRA codes are able to approach the performance of LDPC codes with an encoding complexity similar to that of LDGM codes. In this thesis we explore an alternative to IRA codes consisting of the serial concatenation of two LDGM codes, a scheme that we denote SCLDGM (Serially-Concatenated Low-Density Generator Matrix). The basic premise of SCLDGM codes is that an inner code of rate close to the desired transmission rate fixes most of the errors, and an outer code of rate close to one corrects the few errors that remain after decoding the inner code. For any of these schemes to perform as close as possible to the capacity limit, it is necessary to determine the code parameters that best fit the channel over which the transmission will take place. The two techniques most commonly used in the literature to optimize LDPC codes are Density Evolution (DE) and EXtrinsic Information Transfer (EXIT) charts, which have been employed to obtain optimized codes that perform within a few tenths of a decibel of the AWGN channel capacity. However, no optimization techniques have been presented for SCLDGM codes, which so far have been designed heuristically; their performance therefore falls short of that achieved by IRA and LDPC codes. Another of the most important advances of recent years is the use of multiple antennas at the transmitter and the receiver, which is known as MIMO (Multiple-Input Multiple-Output) systems.
Telatar showed that the channel capacity of such systems scales linearly with the minimum of the number of transmit and receive antennas, which enables spectral efficiencies far greater than those of systems with a single transmit and a single receive antenna (Single-Input Single-Output, or SISO, systems). This important advantage has attracted a great deal of attention from the research community and has led many new standards, such as WiMAX 802.16e and WiFi 802.11n, as well as future 4G systems, to be based on MIMO. The main problem of MIMO systems is the high complexity of optimum detection, which grows exponentially with the number of transmit antennas and the number of modulation levels. Several suboptimum algorithms have been proposed to reduce this complexity, most notably the SIC-MMSE (Soft-Interference-Cancellation Minimum Mean Square Error) and sphere detectors. Another major issue is the high complexity of channel estimation, due to the large number of coefficients that determine the channel. Techniques such as Maximum-Likelihood Expectation-Maximization (ML-EM) have been successfully applied to estimate MIMO channels but, as in the case of detection, they suffer from very high complexity when the number of transmit antennas or the constellation size increases. The main objective of this work is the study and optimization of SCLDGM codes in SISO and MIMO channels. To this end, we propose an optimization method for SCLDGM codes based on EXIT charts that allows these codes to exceed the performance of the IRA codes in the literature and approach the performance of LDPC codes, with the advantage over the latter of a lower encoding complexity. We also propose SCLDGM codes optimized for both sphere and SIC-MMSE suboptimal MIMO detectors, constituting a system capable of approaching the capacity limits of MIMO channels with low-complexity encoding, detection and decoding. We analyze the BICM (Bit-Interleaved Coded Modulation) scheme and the concatenation of SCLDGM codes with Space-Time Codes (STC) in ergodic and quasi-static MIMO channels. Furthermore, we explore the combination of these codes with different channel estimation algorithms that exploit the low complexity of the suboptimum detectors to reduce the complexity of the estimation process while remaining close to the capacity limit. Finally, we propose low-rate coding schemes based on the serial concatenation of several LDGM codes, reducing the complexity of recently proposed schemes based on Hadamard codes.
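    A minimal sketch, under illustrative assumptions, of the SCLDGM structure described above: systematic LDGM encoding (codeword = message followed by sparse parity bits) applied twice in series, with an outer code of rate close to one and an inner code setting the overall rate. The sparse connections, toy block sizes and function names are not taken from the thesis.

        import random

        def ldgm_encode(u, parity_cols, n_parity):
            """Systematic LDGM: codeword = u || p, where p[j] = XOR of u[i] for j in parity_cols[i]."""
            p = [0] * n_parity
            for i, bit in enumerate(u):
                if bit:
                    for j in parity_cols[i]:
                        p[j] ^= 1
            return u + p

        def random_sparse(k, n_parity, row_weight, rng):
            return [rng.sample(range(n_parity), row_weight) for _ in range(k)]

        rng = random.Random(1)
        k = 64
        outer_parity, inner_parity = 4, 60            # outer rate ~0.94, overall rate 0.5
        P_out = random_sparse(k, outer_parity, 2, rng)
        P_in = random_sparse(k + outer_parity, inner_parity, 3, rng)
        u = [rng.randint(0, 1) for _ in range(k)]
        outer_word = ldgm_encode(u, P_out, outer_parity)        # 68 bits
        codeword = ldgm_encode(outer_word, P_in, inner_parity)  # 128 bits
        print(len(codeword), codeword[:16])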

    Towards practical linear optical quantum computing

    Quantum computing promises a new paradigm of computation where information is processed in a way that has no classical analogue. There are a number of physical platforms conducive to quantum computation, each with its own advantages and challenges. Single photons, manipulated using integrated linear optics, constitute a promising platform for universal quantum computation. Their low decoherence rates make them particularly favourable; however, the inability to perform deterministic two-qubit gates and the issue of photon loss are challenges that need to be overcome. In this thesis we explore the construction of a linear optical quantum computer based on the cluster state model. We identify the necessary stages: state preparation, cluster state construction and implementation of quantum error correcting codes, and address the challenges that arise in each of these stages. For the state preparation, we propose a series of linear optical circuits for the generation of small entangled states, assessing their performance under different scenarios. For the cluster state construction, we introduce a ballistic scheme which not only consumes an order of magnitude fewer resources than previously proposed schemes, but also benefits from a natural loss tolerance. Based on this scheme, we propose a full architectural blueprint with fixed physical depth. We investigate the resource efficiency of this architecture and propose a new multiplexing scheme which optimises the use of resources. Finally, we study the integration of quantum error-correcting codes in the proposed linear optical scheme and suggest three ways in which it can be made fault-tolerant.
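    As a small aid to the cluster-state model mentioned above: a cluster (graph) state on a graph G is stabilized by one generator per qubit, K_a = X_a applied to qubit a together with Z on each neighbour of a. The helper below simply prints those generators for a toy 3-qubit linear cluster; the function name and the adjacency encoding are illustrative assumptions, not anything from the thesis.

        def cluster_stabilizers(neighbors):
            """One stabilizer generator per qubit a: X on a, Z on every neighbour of a."""
            n = len(neighbors)
            stabs = []
            for a in range(n):
                pauli = ["I"] * n
                pauli[a] = "X"
                for b in neighbors[a]:
                    pauli[b] = "Z"
                stabs.append("".join(pauli))
            return stabs

        # 3-qubit linear cluster 0 - 1 - 2
        print(cluster_stabilizers({0: [1], 1: [0, 2], 2: [1]}))  # ['XZI', 'ZXZ', 'IZX']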

    Security and Prioritization in Multiple Access Relay Networks

    In this work, we consider a multiple access relay network and investigate the following three problems: (1) the tradeoff between reliability and security under falsified data injection attacks; (2) prioritized analog relaying; and (3) mitigation of forwarding misbehaviors in a multiple access relay network. In the first problem, we consider a multiple access relay network where multiple sources send independent data to a single destination through multiple relays, which may inject falsified data into the network. To detect the malicious relays and discard (erase) data from them, tracing bits are embedded in the information data at each source node. Parity bits may also be added to correct the errors caused by fading and noise. When the total amount of redundancy, tracing bits plus parity bits, is fixed, an increase in parity bits to improve reliability requires a decrease in tracing bits, which leads to less accurate detection of malicious relay behavior, and vice versa. We investigate the tradeoff between the tracing bits and the parity bits in minimizing the probability of decoding error and maximizing the throughput in multi-source, multi-relay networks under falsified data injection attacks. The energy and throughput gains provided by the optimal allocation of redundancy and the tradeoff between reliability and security are analyzed. In the second problem, we consider a multiple access relay network where multiple sources send independent data simultaneously to a common destination through multiple relay nodes. We present three prioritized analog cooperative relaying schemes that provide different classes of service (CoS) to different sources while relaying them at the same time in the same frequency band. The three schemes take the channel variations into account in determining the relay encoding (combining) rule, but differ in whether or how the relays cooperate. Simulation results on the symbol error probability and outage probability are provided to show the effectiveness of the proposed schemes. In the third problem, we propose a physical-layer approach to detect a relay node that injects false data or adds channel errors into the network encoder in multiple access relay networks. The misbehaving relay is detected by using the maximum a posteriori (MAP) detection rule, which is optimal in the sense of minimizing the probability of incorrect decision (false alarm and miss detection). The proposed scheme does not require sending extra bits at the source, such as hash or message authentication check bits, and hence incurs no transmission overhead. The side information regarding the presence of forwarding misbehavior is exploited at the decoder to enhance the reliability of decoding. We derive the probability of false alarm and miss detection and the probability of bit error, taking into account the lossy nature of wireless links.
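    The redundancy-allocation tradeoff in the first problem (a fixed budget split between tracing bits and parity bits) can be illustrated with a toy sweep. The two probability models below are purely hypothetical stand-ins, not the paper's analysis; only the structure of the optimization, choosing the split that minimizes the overall error probability, reflects the abstract.

        import math

        R = 64  # total redundancy budget per block (assumed toy value)

        def p_miss_detection(t):
            # stand-in model: miss probability of a malicious relay decays with tracing bits
            return math.exp(-0.2 * t)

        def p_decoding_error(p):
            # stand-in model: decoding error probability decays with parity bits
            return math.exp(-0.1 * p)

        def p_overall_error(t):
            # fail if the falsified data goes undetected OR error correction fails
            p = R - t
            return 1 - (1 - p_miss_detection(t)) * (1 - p_decoding_error(p))

        best_t = min(range(R + 1), key=p_overall_error)
        print(best_t, R - best_t, p_overall_error(best_t))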

    Air Interface for Next Generation Mobile Communication Networks: Physical Layer Design: A LTE-A Uplink Case Study


    Multi-ASIP Architectures for a Flexible Turbo Receiver

    Rapidly evolving wireless standards use modern techniques such as turbo codes, Bit-Interleaved Coded Modulation (BICM), high-order QAM constellations, Signal Space Diversity (SSD), Multi-Input Multi-Output (MIMO) Spatial Multiplexing (SM) and Space-Time Codes (STC), with different parameters, for reliable high-rate data transmission. Adoption of such techniques in the transmitter can impact the receiver architecture in three ways: (1) the complex processing related to advanced techniques such as turbo codes encourages iterative processing in the receiver to improve error-rate performance; (2) to satisfy the high throughput requirement of an iterative receiver, parallel processing is mandatory; and (3) to support the different techniques and parameters imposed, programmable yet high-throughput hardware processing elements are required. In this thesis, to address the high throughput requirement with turbo processing, a study of parallelism in turbo decoding is first extended to turbo demodulation and turbo equalization. Based on the results of this parallelism study, a flexible, high-throughput, heterogeneous multi-ASIP NoC-based unified turbo receiver is proposed. The proposed architecture fulfils the target requirements in that: (a) the Application-Specific Instruction-set Processor (ASIP) exploits metric-generation-level parallelism and implements the required flexibility, (b) throughputs beyond the capacity of a single ASIP in a turbo process are achieved through multiple ASIP elements implementing sub-block parallelism and shuffled processing, and (c) a Network on Chip (NoC) is used to handle communication conflicts during parallel processing of multiple ASIPs. To obtain a hardware model of the proposed architecture, two ASIPs are designed: the first, EquASIP, is dedicated to MMSE-IC equalization and provides a flexible solution for the multiple MIMO techniques adopted in several wireless standards, with the ability to work in a turbo equalization context. The second ASIP, DemASIP, is a flexible demapper that can be used in MIMO or single-antenna environments for any modulation up to 256-QAM, with or without iterative demodulation. Using the TurbASIP and NoC components available at Télécom Bretagne, the thesis concludes with an FPGA prototype of the heterogeneous multi-ASIP NoC-based unified turbo receiver, which integrates 9 instances of 3 different ASIPs with 2 NoCs.
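    As a concrete illustration of what a flexible demapper such as DemASIP computes, here is a minimal max-log soft-demapping sketch (not the ASIP micro-architecture): for each bit, the LLR is the difference between the smallest squared distances over constellation symbols whose bit equals 1 and 0 respectively, scaled by the noise variance. The Gray-mapped QPSK table and the function name are illustrative assumptions; the same rule carries over to higher-order constellations such as 256-QAM.

        # Gray-mapped QPSK: bit pair (b0, b1) -> complex symbol
        QPSK = {(0, 0): 1 + 1j, (0, 1): 1 - 1j, (1, 1): -1 - 1j, (1, 0): -1 + 1j}

        def maxlog_llrs(y, h, noise_var, mapping=QPSK):
            """Per-bit max-log LLRs, LLR_i ~ ln P(b_i=0|y)/P(b_i=1|y), for y = h*s + n."""
            n_bits = len(next(iter(mapping)))
            llrs = []
            for i in range(n_bits):
                d0 = min(abs(y - h * s) ** 2 for b, s in mapping.items() if b[i] == 0)
                d1 = min(abs(y - h * s) ** 2 for b, s in mapping.items() if b[i] == 1)
                llrs.append((d1 - d0) / noise_var)
            return llrs

        print(maxlog_llrs(y=0.9 + 1.1j, h=1.0 + 0.0j, noise_var=0.5))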

    Hardware Architectures for Post-Quantum Cryptography

    The rapid development of quantum computers poses severe threats to many commonly used cryptographic algorithms that are embedded in different hardware devices to ensure the security and privacy of data and communication. Seeking new solutions that are potentially resistant against attacks from quantum computers, a new research field called Post-Quantum Cryptography (PQC) has emerged: cryptosystems deployed on classical computers that are conjectured to be secure against attacks utilizing large-scale quantum computers. In order to secure data during storage or communication, and many other applications in the future, this dissertation focuses on the design, implementation, and evaluation of efficient PQC schemes in hardware. Four PQC algorithms, each from a different family, are studied. The first hardware architecture presented in this dissertation is focused on the code-based scheme Classic McEliece. The research presented here is the first to build a hardware architecture for the Classic McEliece cryptosystem, and it successfully demonstrated that a complex code-based PQC algorithm can run efficiently in hardware. Furthermore, this dissertation shows that the hardware implementation of this scheme can be easily tuned to different configurations by supporting flexible choices of security parameters as well as configurable hardware performance parameters. The successful hardware prototype of the Classic McEliece scheme increased confidence in the scheme and helped Classic McEliece become one of the seven finalists in the third round of the NIST PQC standardization process. While Classic McEliece serves as a ready-to-use candidate for many high-end applications, PQC solutions are also needed for low-end embedded devices. Embedded devices play an important role in our daily life. Despite their typically constrained resources, these devices require strong security measures to protect them against cyber attacks. Towards securing this type of device, the second research direction presented in this dissertation focuses on the hash-based digital signature scheme XMSS. This research is the first to explore and present a practical hardware-based XMSS solution for low-end embedded devices. In the design of the XMSS hardware, a heterogeneous software-hardware co-design approach was adopted, combining the flexibility of the soft core with the acceleration from the hard core. The practicality and efficiency of the XMSS software-hardware co-design are further demonstrated by a hardware prototype on an open-source RISC-V based System-on-a-Chip (SoC) platform. The third research direction covered in this dissertation focuses on lattice-based cryptography, which represents one of the most promising and popular alternatives to today's widely adopted public-key solutions. Prior research has presented hardware designs targeting the computing blocks necessary for the implementation of lattice-based systems. However, a recurrent issue is that most existing hardware designs are not fully scalable or parameterized, and are hence limited to specific cryptographic primitives and security parameter sets. The research presented in this dissertation is the first to develop hardware accelerators designed to be fully parameterized to support different lattice-based schemes and parameters.
Further, these accelerators are utilized to realize the first software-hardware co-design of provably-secure instances of qTESLA, a lattice-based digital signature scheme. This dissertation demonstrates that even demanding, provably-secure schemes can be realized efficiently with proper use of software-hardware co-design. The final research direction presented in this dissertation is focused on the isogeny-based scheme SIKE, which recently made it to the final round of the PQC standardization process. This research shows that hardware accelerators can be designed to offload compute-intensive elliptic curve and isogeny computations to hardware in a versatile fashion. These hardware accelerators are designed to be fully parameterized to support different security parameter sets of SIKE as well as flexible hardware configurations targeting different user applications. This research is the first to present versatile hardware accelerators for SIKE that can be mapped efficiently to both FPGA and ASIC platforms. Based on these accelerators, an efficient software-hardware co-design is constructed for speeding up SIKE. In the end, this dissertation demonstrates that, despite involving expensive arithmetic, the isogeny-based SIKE scheme can be run efficiently by exploiting specialized hardware. These four research directions combined demonstrate the practicality of building efficient hardware architectures for complex PQC algorithms. The exploration of efficient PQC solutions for different hardware platforms will eventually help migrate high-end servers and low-end embedded devices towards the post-quantum era.
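    Since the first research direction centres on the code-based Classic McEliece scheme, a toy sketch of the core operation such hardware accelerates may be useful: in the Niederreiter framework used by Classic McEliece, the ciphertext is the syndrome H·e of a secret fixed-weight error vector e. The random toy parity-check matrix and sizes below are illustrative assumptions; a real instance uses the structured parity-check matrix of a binary Goppa code with far larger parameters. One reason such schemes map well to hardware is that this step is a large, regular GF(2) matrix-vector product.

        import random

        def syndrome(H, e):
            """s = H * e over GF(2), with H given as a list of bit rows."""
            return [sum(h_ij & e_j for h_ij, e_j in zip(row, e)) % 2 for row in H]

        rng = random.Random(42)
        n, r, t = 32, 12, 3                      # toy code length, redundancy, error weight
        H = [[rng.randint(0, 1) for _ in range(n)] for _ in range(r)]
        e = [0] * n
        for pos in rng.sample(range(n), t):      # secret error vector of weight t
            e[pos] = 1
        print(syndrome(H, e))                    # toy "ciphertext"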