
    Sequential Circuit Design for Embedded Cryptographic Applications Resilient to Adversarial Faults

    In the relatively young field of fault-tolerant cryptography, the main research effort has focused exclusively on the protection of the data path of cryptographic circuits. To date, however, we have not found any work that aims at protecting the control logic of these circuits against fault attacks, which thus remains the proverbial Achilles’ heel. Motivated by a hypothetical yet realistic fault analysis attack that, in principle, could be mounted against any modular exponentiation engine, even one with appropriate data path protection, we set out to close this remaining gap. In this paper, we present guidelines for the design of multifault-resilient sequential control logic based on standard Error-Detecting Codes (EDCs) with large minimum distance. We introduce a metric that measures the effectiveness of the error detection technique in terms of the effort the attacker has to make in relation to the area overhead spent in implementing the EDC. Our comparison shows that the proposed EDC-based technique provides superior performance when compared against regular N-modular redundancy techniques. Furthermore, our technique scales well and does not affect the critical path delay.
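
    As a rough, hedged illustration of the EDC idea above (not the paper's actual construction), the sketch below encodes a toy control FSM's states as codewords of a set with minimum Hamming distance 4, so that up to three bit flips injected into the state register always produce an invalid word and are detected. The state names, codeword assignment and fault handling are illustrative assumptions.

"""
Minimal sketch (assumptions, not the paper's design): protect a small
FSM state register with a code of large minimum distance and flag any
register value that is not a valid codeword.
"""
from itertools import combinations

# Each FSM state is stored as an 8-bit codeword; this set has minimum
# Hamming distance 4, so any 1-3 injected bit flips yield an invalid word.
STATE_CODEWORDS = {
    "IDLE":     0b00000000,
    "LOAD":     0b00001111,
    "SQUARE":   0b11110000,
    "MULTIPLY": 0b11111111,
}

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def min_distance(codewords) -> int:
    return min(hamming(a, b) for a, b in combinations(codewords, 2))

def check_state(register_value: int) -> str:
    """Return the decoded state name, or raise if the register holds an
    invalid codeword (i.e. a fault hit the control logic)."""
    for name, codeword in STATE_CODEWORDS.items():
        if codeword == register_value:
            return name
    raise RuntimeError("fault detected in control-logic state register")

if __name__ == "__main__":
    print("minimum distance:", min_distance(STATE_CODEWORDS.values()))  # 4
    print(check_state(0b11110000))           # SQUARE
    faulty = 0b11110000 ^ 0b00000110         # two injected bit flips
    try:
        check_state(faulty)
    except RuntimeError as err:
        print(err)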

    Design Of Fountain Codes With Error Control

    This thesis focuses on providing unequal error protection (UEP) to two disjoint sources that communicate with a common destination via a common relay using distributed LT codes over a binary erasure channel (BEC), and on designing fountain codes with an error control property by integrating LT codes with turbo codes over a binary-input additive white Gaussian noise (BI-AWGN) channel. A simple yet efficient technique for decomposing the Robust Soliton Distribution (RSD) into two entirely different degree distributions is developed and presented in this thesis. These two distributions are used to encode data symbols at the sources, and the encoded symbols from the sources are selectively XORed at the relay, based on a suitable relay operation, before the combined codeword is transmitted to the destination. By doing so, it is shown that UEP can be provided to these sources. The performance of LT codes over the AWGN channel is also studied and presented in this thesis; the results indicate that these codes have weak error correction ability over this channel. Errors introduced into individual symbols during transmission over noisy channels must be corrected by an error correcting code, and since LT codes alone are found to be weak at correcting such errors, they are integrated with turbo codes, which are strong error correcting codes. The source data (symbols) are therefore first turbo encoded, then LT encoded, and transmitted over the AWGN channel; when the corrupted encoded symbols are received at the receiver, LT decoding is conducted followed by turbo decoding. The overall performance of the integrated system is studied and presented in this thesis, and the results suggest that the errors left after LT decoding can be corrected to some extent by the turbo decoder.
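
    The sketch below (illustrative only) shows a plain LT encoder driven by the Robust Soliton Distribution. The thesis' decomposition of the RSD into two source distributions and the relay XOR rule are not reproduced here, and the parameter values c and delta are assumptions for the example.

"""
Illustrative sketch: LT encoding with the Robust Soliton Distribution (RSD).
The two-source decomposition and relay operation from the thesis are omitted.
"""
import math
import random

def robust_soliton(k: int, c: float = 0.1, delta: float = 0.5):
    """Return the RSD mu(1..k) as a list of k probabilities."""
    R = c * math.log(k / delta) * math.sqrt(k)
    rho = [0.0] + [1.0 / k] + [1.0 / (i * (i - 1)) for i in range(2, k + 1)]
    tau = [0.0] * (k + 1)
    pivot = int(round(k / R))
    for i in range(1, min(pivot, k + 1)):
        tau[i] = R / (i * k)
    if 1 <= pivot <= k:
        tau[pivot] = R * math.log(R / delta) / k
    beta = sum(rho[1:]) + sum(tau[1:])
    return [(rho[i] + tau[i]) / beta for i in range(1, k + 1)]

def lt_encode_symbol(source: list, dist: list, rng: random.Random):
    """Produce one LT-encoded symbol: draw a degree d from the RSD,
    pick d distinct source symbols, and XOR them together."""
    n = len(source)
    d = rng.choices(range(1, n + 1), weights=dist, k=1)[0]
    neighbours = rng.sample(range(n), d)
    value = 0
    for idx in neighbours:
        value ^= source[idx]
    return neighbours, value

if __name__ == "__main__":
    rng = random.Random(0)
    data = [rng.randrange(256) for _ in range(20)]   # 20 byte-sized symbols
    dist = robust_soliton(len(data))
    for _ in range(3):
        print(lt_encode_symbol(data, dist, rng))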

    Bit flipping decoding for binary product codes

    Error control coding has been used to mitigate the impact of noise on the wireless channel. Today, wireless communication systems include Forward Error Correction (FEC) techniques in their design to help reduce the amount of retransmitted data. When designing a coding scheme, three challenges need to be addressed: the error correcting capability of the code, the decoding complexity of the code, and the delay introduced by the coding scheme. While it is easy to design coding schemes with a large error correcting capability, finding practical decoding algorithms for these schemes is a challenge. Generally, increasing the length of a block code increases both its error correcting capability and its decoding complexity. Product codes have been identified as a means to increase the block length of simpler codes yet keep their decoding complexity low. Bit flipping decoding has been identified as a simple-to-implement decoding algorithm, and research has generally focused on improving bit flipping decoding for Low-Density Parity-Check codes. In this study we develop a new decoding algorithm based on syndrome checking and bit flipping for binary product codes, addressing the major challenge of coding systems, i.e., developing codes with a large error correcting capability yet a low decoding complexity. Simulation results show that the proposed decoding algorithm outperforms the conventional decoding algorithm proposed by P. Elias in BER and, more significantly, in WER performance, while offering comparable complexity to the conventional algorithm in the Rayleigh fading channel.
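
    As a toy instance of syndrome checking plus bit flipping on a product code (not the exact algorithm developed in the thesis), the sketch below uses single-parity-check row and column component codes and flips the bits sitting at the intersections of failing row and column checks.

"""
Toy illustration: syndrome-driven bit flipping on a product code whose row
and column component codes are single-parity-check codes. The thesis
targets stronger component codes; this is only the simplest instance of
the idea.
"""
import numpy as np

def encode_product_spc(info: np.ndarray) -> np.ndarray:
    """Append an even-parity bit to every row, then to every column."""
    rows = np.hstack([info, info.sum(axis=1, keepdims=True) % 2])
    return np.vstack([rows, rows.sum(axis=0, keepdims=True) % 2])

def bit_flip_decode(received: np.ndarray, max_iters: int = 5) -> np.ndarray:
    word = received.copy()
    for _ in range(max_iters):
        row_syndrome = word.sum(axis=1) % 2      # 1 = row parity fails
        col_syndrome = word.sum(axis=0) % 2      # 1 = column parity fails
        if not row_syndrome.any() and not col_syndrome.any():
            break                                # all checks satisfied
        bad_rows = np.where(row_syndrome == 1)[0]
        bad_cols = np.where(col_syndrome == 1)[0]
        if bad_rows.size and bad_cols.size:
            # flip every bit at the intersection of a failing row and column
            word[np.ix_(bad_rows, bad_cols)] ^= 1
        else:
            break                                # detected but not corrected here
    return word

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    info = rng.integers(0, 2, size=(4, 4))
    codeword = encode_product_spc(info)
    noisy = codeword.copy()
    noisy[2, 3] ^= 1                             # single bit error
    print("corrected:", np.array_equal(bit_flip_decode(noisy), codeword))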

    Block Turbo Code and its Application to OFDM for Wireless Local Area Network

    To overcome multipath fading and Inter-Symbol Interference (ISI), conventional single-carrier systems use equalizers, but these increase system complexity. Another approach is to use a multicarrier modulation technique such as OFDM, where the data stream to be transmitted is divided into several lower-rate data streams, each modulated onto a subcarrier. To avoid ISI, a small interval, known as the guard time interval, is inserted into each OFDM symbol, with its length chosen to exceed the channel delay spread; OFDM can therefore combat multipath fading and eliminate ISI almost completely. Another problem is reducing the error rate when transmitting digital data, for which error correcting codes are used in the design of digital transmission systems. Turbo codes have been widely considered the most powerful error control codes of practical importance. They are obtained by serial or parallel concatenation of two (or more) constituent codes, which can be either block codes or convolutional codes. Currently, most of the work on turbo codes has focused on Convolutional Turbo Codes (CTCs), while Block Turbo Codes (BTCs) have been partially neglected, even though the BTC solution is more attractive for a wide range of applications. In this work, Block Turbo Codes (also known as Turbo Product Codes) are applied to an OFDM system similar to the IEEE 802.11a WLAN standard, and a simple explanation of BTC-OFDM theory is given. The BER performance of the Block Turbo coded BPSK and QPSK OFDM system is evaluated under both the AWGN channel and the Rayleigh fading channel, and is compared with that of uncoded OFDM. It is verified that a BTC-OFDM system with 4 decoding iterations is sufficient to provide good BER performance; additional iterations do not show a noticeable difference. The simulation results show that the BTC-OFDM system achieves a large coding gain with lower BER and fewer decoding iterations, therefore offering a higher data rate in wireless mobile communications.
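
    The sketch below illustrates the OFDM transmit and receive steps described above: map bits to subcarrier symbols, take an IFFT, and prepend a cyclic-prefix guard interval longer than the channel delay spread. The parameters (64 subcarriers, 16-sample prefix, QPSK mapping) are assumptions for illustration, and the block-turbo encoding stage is omitted.

"""
Sketch of OFDM modulation with a cyclic-prefix guard interval.
Parameter choices are illustrative; the BTC encoder/decoder is not modeled.
"""
import numpy as np

N_SUBCARRIERS = 64
CP_LEN = 16          # guard interval, chosen to exceed the delay spread

def qpsk_map(bits: np.ndarray) -> np.ndarray:
    """Map pairs of bits to unit-energy QPSK symbols."""
    b = bits.reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def ofdm_modulate(bits: np.ndarray) -> np.ndarray:
    """One OFDM symbol: QPSK symbols -> IFFT -> prepend cyclic prefix."""
    symbols = qpsk_map(bits)                         # N_SUBCARRIERS symbols
    time_domain = np.fft.ifft(symbols) * np.sqrt(N_SUBCARRIERS)
    return np.concatenate([time_domain[-CP_LEN:], time_domain])

def ofdm_demodulate(samples: np.ndarray) -> np.ndarray:
    """Strip the cyclic prefix and return the subcarrier symbols."""
    return np.fft.fft(samples[CP_LEN:]) / np.sqrt(N_SUBCARRIERS)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    bits = rng.integers(0, 2, size=2 * N_SUBCARRIERS)
    tx = ofdm_modulate(bits)
    rx_symbols = ofdm_demodulate(tx)                 # ideal channel
    print("round-trip error:", np.max(np.abs(rx_symbols - qpsk_map(bits))))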

    Advanced channel coding for space mission telecommand links

    We investigate and compare different options for updating the error correcting code currently used in space mission telecommand links. Taking as a reference the solutions that have recently emerged as the most promising, based on Low-Density Parity-Check codes, we explore the behavior of alternative schemes based on parallel concatenated turbo codes and soft-decision decoded BCH codes. Our analysis shows that these further options can offer similar or even better performance. (Comment: 5 pages, 7 figures; presented at the IEEE Vehicular Technology Conference (VTC 2013 Fall), Las Vegas, USA, Sep. 2013.)
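
    Reproducing LDPC, turbo or soft-decision BCH decoders is beyond an abstract-level note, so the hedged harness below only illustrates the kind of comparison being made: it contrasts hard- and soft-decision decoding of a trivial (3,1) repetition code over AWGN. The code and all parameters are stand-in assumptions, not the schemes evaluated in the paper.

"""
Stand-in Monte Carlo harness: hard- vs soft-decision decoding of a (3,1)
repetition code over AWGN, just to show a BER comparison skeleton.
"""
import numpy as np

def simulate(ebno_db: float, n_bits: int = 200_000, rng=None):
    rng = rng or np.random.default_rng(0)
    rate = 1 / 3
    bits = rng.integers(0, 2, size=n_bits)
    coded = np.repeat(bits, 3)                       # (3,1) repetition code
    tx = 1 - 2 * coded                               # BPSK: 0 -> +1, 1 -> -1
    ebno = 10 ** (ebno_db / 10)
    sigma = np.sqrt(1 / (2 * rate * ebno))           # noise std per dimension
    rx = tx + sigma * rng.standard_normal(tx.size)

    # hard decision: slice each coded bit, then majority vote
    hard_bits = (rx < 0).astype(int).reshape(-1, 3)
    hard_dec = (hard_bits.sum(axis=1) >= 2).astype(int)

    # soft decision: sum the channel observations before slicing
    soft_dec = (rx.reshape(-1, 3).sum(axis=1) < 0).astype(int)

    return np.mean(hard_dec != bits), np.mean(soft_dec != bits)

if __name__ == "__main__":
    for ebno_db in (2, 4, 6):
        hard, soft = simulate(ebno_db)
        print(f"Eb/N0={ebno_db} dB  hard BER={hard:.2e}  soft BER={soft:.2e}")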

    Cryptographic techniques used to provide integrity of digital content in long-term storage

    The main objective of the project was to obtain advanced mathematical methods to guarantee the verification that a required level of data integrity is maintained in long-term storage. The secondary objective was to provide methods for the evaluation of data loss and recovery. Additionally, we have set the following initial constraints for the problem: a limitation on additional storage space, a minimal threshold for the desired level of data integrity, and a defined probability of a single-bit corruption. With regard to the main objective, the study group focused on exploring methods based on hash values. It has been shown that, under the tight constraints suggested by PWPW, it is not possible to provide any method based only on hash values. This observation stems from the fact that the high probability of bit corruption leads to an unacceptably large number of broken hashes, which in turn contradicts the limitation on additional storage space. However, having loosened the initial constraints to some extent, the study group has proposed two methods that use only hash values. The first method, based on a simple scheme of subdividing the data into disjoint subsets, is provided as a benchmark for the other methods discussed in this report. The second method (the "hypercube" method), introduced as an instance of the wider class of clever-subdivision methods, is built on the concept of rewriting the data stream into an n-dimensional hypercube and calculating hash values for particular (overlapping) sections of the cube. We have obtained interesting results by combining hash value methods with error-correction techniques. The proposed framework, based on BCH codes, appears to have promising properties, hence further research in this field is strongly recommended. As part of the report we have also presented features of secret sharing methods for the benefit of novel distributed data-storage scenarios, providing an overview of some interesting aspects of secret sharing techniques and several examples of possible applications.
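
    A rough sketch of the "hypercube" idea as described above: reshape the data stream into an n-dimensional cube and store one hash per axis-aligned slice, so every cell is covered by several overlapping sections and a corrupted cell can be localised by intersecting the failing slices. The cube dimensions, hash function and absence of a padding rule are illustrative assumptions, not the report's exact parameters.

"""
Sketch of hypercube-style slice hashing for integrity checking and
corruption localisation. Assumed parameters, not the report's design.
"""
import hashlib
import numpy as np

def slice_hashes(cube: np.ndarray) -> dict:
    """Return {(axis, index): sha256 digest} for every axis-aligned slice."""
    hashes = {}
    for axis in range(cube.ndim):
        for index in range(cube.shape[axis]):
            view = np.take(cube, index, axis=axis)
            hashes[(axis, index)] = hashlib.sha256(view.tobytes()).hexdigest()
    return hashes

def locate_corruption(cube: np.ndarray, stored: dict) -> dict:
    """Recompute slice hashes; failing slices intersect at corrupted cells."""
    failing = {axis: [] for axis in range(cube.ndim)}
    for (axis, index), digest in stored.items():
        view = np.take(cube, index, axis=axis)
        if hashlib.sha256(view.tobytes()).hexdigest() != digest:
            failing[axis].append(index)
    return failing

if __name__ == "__main__":
    data = np.arange(4 * 4 * 4, dtype=np.uint8).reshape(4, 4, 4)  # 3-D cube
    stored = slice_hashes(data)
    corrupted = data.copy()
    corrupted[1, 2, 3] ^= 0xFF                                    # flip one cell
    print(locate_corruption(corrupted, stored))
    # -> {0: [1], 1: [2], 2: [3]}: the failing slices intersect at (1, 2, 3)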

    A Multi-Kernel Multi-Code Polar Decoder Architecture

    Polar codes have received increasing attention in the past decade and have been selected for the next-generation wireless communication standard. Most research on polar codes has focused on codes constructed from a 2×2 polarization matrix, called a binary kernel: codes constructed from binary kernels have code lengths that are bound to powers of 2. A few recent works have proposed construction methods based on multiple kernels of different dimensions, not only binary ones, allowing code lengths different from powers of 2. In this work, we design and implement the first multi-kernel successive cancellation polar code decoder in the literature. It can decode any code constructed with binary and ternary kernels: the architecture, sized for a maximum code length N_max, is fully flexible in terms of code length, code rate and kernel sequence. The decoder can achieve a frequency of more than 1 GHz in 65 nm CMOS technology, and a throughput of 615 Mb/s. The area occupation ranges between 0.11 mm² for N_max = 256 and 2.01 mm² for N_max = 4096. Implementation results show an unprecedented degree of flexibility: with N_max = 4096, up to 55 code lengths can be decoded with the same hardware, along with any kernel sequence and code rate.
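
    The sketch below illustrates multi-kernel polar encoding as a Kronecker product of binary (2×2) and ternary (3×3) kernels, which is what allows code lengths other than powers of 2 (e.g. N = 12 from the kernel sequence T2, T3, T2). The particular ternary kernel, the all-information setup (no frozen bits) and the kernel ordering are assumptions for illustration; the paper's decoder architecture and frozen-set design are not modeled.

"""
Sketch of multi-kernel polar encoding over GF(2). The ternary kernel used
here is an assumption (any invertible 3x3 binary matrix works for the
illustration); the paper's kernel and frozen-bit selection may differ.
"""
from functools import reduce
import numpy as np

T2 = np.array([[1, 0],
               [1, 1]], dtype=np.uint8)          # standard binary kernel

T3 = np.array([[1, 1, 1],                        # an invertible ternary
               [1, 0, 1],                        # kernel over GF(2); the
               [0, 1, 1]], dtype=np.uint8)       # paper's choice may differ

def transform(kernel_sequence) -> np.ndarray:
    """G = K_1 kron K_2 kron ... (mod 2); the code length is the product
    of the kernel sizes, e.g. [T2, T3, T2] gives N = 12."""
    return reduce(np.kron, kernel_sequence) % 2

def encode(u: np.ndarray, kernel_sequence) -> np.ndarray:
    """Encode an input vector u of length N as x = u * G mod 2."""
    G = transform(kernel_sequence)
    assert u.size == G.shape[0]
    return u @ G % 2

if __name__ == "__main__":
    kernels = [T2, T3, T2]                       # N = 2 * 3 * 2 = 12
    rng = np.random.default_rng(2)
    u = rng.integers(0, 2, size=12, dtype=np.uint8)
    x = encode(u, kernels)
    print("N =", x.size, "codeword:", x)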