
    A Hardware Security Solution against Scan-Based Attacks

    Scan-based Design for Test (DfT) schemes have been widely used to achieve high fault coverage for integrated circuits. The scan technique provides full access to the internal nodes of the device under test, allowing them to be controlled and their responses to input test vectors observed. While such comprehensive access is highly desirable for testing, it is not acceptable for secure chips, as it can be exploited by various attacks. In this work, new methods are presented to protect critical information against scan-based attacks. In the proposed methods, access via the scan chain to the circuit containing secret information is severely restricted in order to reduce the risk of a security breach. To preserve the testability of the circuit, a built-in self-test (BIST) scheme that uses an LFSR as the test pattern generator (TPG) is proposed. The proposed schemes can serve as a countermeasure against side-channel attacks with low area overhead compared to existing solutions in the literature.
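
    As a rough illustration of the BIST idea (a sketch, not the paper's design), the following models a maximal-length Fibonacci LFSR used as a test pattern generator; the 8-bit width, tap positions, and seed are arbitrary choices for the example.

        # Minimal sketch of an LFSR-based test pattern generator (TPG) for BIST.
        # Taps correspond to x^8 + x^6 + x^5 + x^4 + 1, a maximal-length
        # polynomial for 8 bits; the nonzero seed is arbitrary (illustrative only).
        def lfsr_patterns(seed=0xA5, taps=(7, 5, 4, 3), width=8):
            """Yield pseudo-random test patterns from a Fibonacci LFSR."""
            state = seed
            while True:
                yield state
                fb = 0
                for t in taps:          # XOR the tapped bits to form the feedback bit
                    fb ^= (state >> t) & 1
                state = ((state << 1) | fb) & ((1 << width) - 1)

        gen = lfsr_patterns()
        for _ in range(5):              # first few patterns for the circuit under test
            print(f"{next(gen):08b}")

    Because the LFSR sequence is deterministic, the on-chip TPG reproduces the same pattern stream on every run, so responses can be compacted and compared against a precomputed signature without exposing the scan chain.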

    New Methods for Pseudoexhaustive Testing

    Pseudoexhaustive testing of combinational circuits has recently become of great importance. These methods retain most of the benefits of classical exhaustive testing, which checks every combination of the input signals, while requiring a considerably shorter sequence of test patterns. In this paper we give a survey of pseudoexhaustive testing. Two new code construction methods are presented: a systematic procedure to generate an effective exhaustive code for every two-dimensional subspace of the inputs, and an extension of the codes from the k-dimensional space to k+1 dimensions. The efficiency of the new methods is compared to that of the methods described in the literature.
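
    The defining property behind the two-dimensional construction can be stated directly in code: a pattern set is pseudoexhaustive for every two-dimensional input subspace when each pair of inputs is driven through all four bit combinations. A minimal checker, with a made-up 4-input pattern set for illustration:

        from itertools import combinations, product

        def covers_all_pairs(patterns, n_inputs):
            """Check pseudoexhaustive coverage of every two-dimensional input
            subspace: each input pair (i, j) must see all four combinations
            00, 01, 10, 11 somewhere in the pattern set."""
            for i, j in combinations(range(n_inputs), 2):
                seen = {(p[i], p[j]) for p in patterns}
                if seen != set(product((0, 1), repeat=2)):
                    return False
            return True

        # Five patterns suffice for 4 inputs, versus 2^4 = 16 for full
        # exhaustive testing (this pattern set is illustrative, not from the paper).
        patterns = [
            (0, 0, 0, 0),
            (0, 1, 1, 1),
            (1, 0, 1, 1),
            (1, 1, 0, 1),
            (1, 1, 1, 0),
        ]
        print(covers_all_pairs(patterns, 4))  # True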

    Further Results on Quadratic Permutation Polynomial-Based Interleavers for Turbo Codes

    An interleaver is a critical component for the channel coding performance of turbo codes. Algebraic constructions are of particular interest because they admit analytical designs and simple, practical hardware implementations. The recently proposed quadratic permutation polynomial (QPP) based interleavers by Sun and Takeshita (IEEE Trans. Inf. Theory, Jan. 2005) provide excellent performance for short-to-medium block lengths and have been selected for the 3GPP LTE standard. In this work, we derive upper bounds on the best achievable minimum distance dmin of QPP-based conventional binary turbo codes (with tailbiting termination, or dual termination when the interleaver length N is sufficiently large) that are tight for larger block sizes. In particular, we show that the minimum distance is at most 2(2^{\nu+1} + 9), independent of the interleaver length, when the QPP has a QPP inverse, where \nu is the degree of the primitive feedback and monic feedforward polynomials. However, allowing the QPP to have a larger-degree inverse may give strictly larger minimum distances (and lower multiplicities). In particular, we provide several QPPs with an inverse degree of at least three for some of the 3GPP LTE interleaver lengths, giving a dmin with the 3GPP LTE constituent encoders that is strictly larger than 50. For instance, we have found a QPP for N = 6016 which gives an estimated dmin of 57. Furthermore, we provide the exact minimum distance and the corresponding multiplicity for all 3GPP LTE turbo codes (with dual termination), which shows that the best minimum distance is 51. Finally, we compute the best achievable minimum distance with QPP interleavers for all 3GPP LTE interleaver lengths N <= 4096, and compare it with the minimum distance obtained when using the 3GPP LTE polynomials.

    Comment: Submitted to IEEE Trans. Inf. Theory
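
    For concreteness, a QPP interleaver is just the index map pi(i) = (f1*i + f2*i^2) mod N; the sketch below builds the permutation and verifies that it is one. The coefficients used are the 3GPP LTE values for the shortest block length, N = 40 (f1 = 3, f2 = 10).

        def qpp_interleaver(N, f1, f2):
            """Quadratic permutation polynomial interleaver:
            pi(i) = (f1*i + f2*i^2) mod N."""
            return [(f1 * i + f2 * i * i) % N for i in range(N)]

        # 3GPP LTE coefficients for block length N = 40: f1 = 3, f2 = 10.
        pi = qpp_interleaver(40, 3, 10)

        # The polynomial is only a valid interleaver if it permutes 0..N-1.
        assert sorted(pi) == list(range(40))
        print(pi[:8])  # [0, 13, 6, 19, 12, 25, 18, 31]

    Whether the inverse permutation is itself expressible as a QPP, or only as a higher-degree polynomial, is exactly the property the distance bounds above hinge on.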

    Testing of leakage current failure in ASIC devices exposed to total ionizing dose environment using design for testability techniques

    Due to advancements in technology, electronic devices are increasingly relied upon to operate under harsh conditions. Radiation is one of the main causes of failure in electronic devices. Depending on the operating environment, the radiation sources can be terrestrial or extraterrestrial. In terrestrial settings, devices may be used in nuclear reactors or biomedical equipment, where the radiation is man-made; in extraterrestrial settings, devices may be used in satellites, the International Space Station, or spacecraft, where the radiation comes from sources such as the Sun. The effects of radiation differ according to the operating environment and fall into two categories: total ionizing dose (TID) effects and single event effects (SEEs). TID effects can negatively affect the delay and leakage current of CMOS circuits and can therefore hinder the operation of integrated circuits. Before such circuits are deployed, particularly in radiation-heavy critical applications such as military and space systems, they must be tested under radiation to avoid failures during operation. The standard approach to testing electronic devices is to generate worst-case test vectors (WCTVs) and test the circuits under radiation using these vectors. However, generating WCTVs has proven very challenging, so this approach is rarely used for TID effects. Design for testability (DFT) has been widely used in industry for digital circuit testing. DFT is usually combined with automatic test pattern generation (ATPG) software to generate test vectors against fault models of manufacturing defects in application-specific integrated circuits (ASICs). However, it has never been used to generate test vectors for leakage current testing in ASICs exposed to a TID radiation environment. The purpose of this thesis is to use DFT to identify WCTVs for leakage current failures in sequential circuits of ASIC devices exposed to TID. A novel methodology was devised to identify these test vectors. The methodology is validated, compared to previous non-DFT methods, and shown to overcome their limitations.
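
    As background on the scan mechanism such a DFT methodology relies on, the sketch below models loading a test vector into a scan chain in test mode; the chain length and vector are arbitrary illustration values, not from the thesis.

        def scan_shift(chain, vector):
            """Model one scan-load: each clock shifts a test-vector bit in at
            scan-in while the previous state drains out at scan-out.
            Returns (new chain state, bits captured at scan-out)."""
            chain = list(chain)
            captured = []
            for bit in vector:
                captured.append(chain[-1])    # scan-out sees the last flip-flop
                chain = [bit] + chain[:-1]    # each flip-flop takes its neighbour's bit
            return chain, captured

        # Shift a 6-bit candidate worst-case vector into a 6-flip-flop chain.
        state, out = scan_shift([0, 0, 0, 0, 0, 0], [1, 1, 0, 1, 0, 0])
        print(state)  # [0, 0, 1, 0, 1, 1] -- the first bit shifted in sits deepest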

    Simple Quantum Error Correcting Codes

    Methods of finding good quantum error correcting codes are discussed, and many example codes are presented. The recipe C_2^{\perp} \subseteq C_1, where C_1 and C_2 are classical codes, is used to obtain codes for up to 16 information qubits with correction of small numbers of errors. The results are tabulated. More efficient codes are obtained by allowing C_1 to have reduced distance, and introducing sign changes among the code words in a systematic manner. This systematic approach leads to single-error correcting codes for 3, 4 and 5 information qubits with block lengths of 8, 10 and 11 qubits respectively.

    Comment: Submitted to Phys. Rev. A in May 1996. 21 pages, no figures. Further information at http://eve.physics.ox.ac.uk/ASGhome.htm
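
    The recipe C_2^{\perp} \subseteq C_1 reduces to linear algebra over GF(2): every generator of C_2^{\perp} (i.e. every row of a parity-check matrix of C_2) must satisfy the parity checks of C_1. The sketch below verifies the condition for C_1 = C_2 = the [7,4] Hamming code, the classical ingredient of the 7-qubit Steane code; this standard example is chosen for illustration and is not a code from the paper's tables.

        import numpy as np

        # Parity-check matrix H and a generator matrix G of the [7,4] Hamming code.
        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])
        G = np.array([[1, 1, 1, 0, 0, 0, 0],
                      [1, 0, 0, 1, 1, 0, 0],
                      [0, 1, 0, 1, 0, 1, 0],
                      [1, 1, 0, 1, 0, 0, 1]])

        # Sanity check over GF(2): G generates the code that H checks.
        assert not (G @ H.T % 2).any()

        # CSS condition with C1 = C2: C2^perp is spanned by the rows of H, so it
        # suffices that every row of H passes C1's checks, i.e. H @ H.T = 0 (mod 2).
        # It holds here because the dual (simplex) code lies inside the Hamming code.
        print(not (H @ H.T % 2).any())  # True -> C2^perp is contained in C1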

    Input variable selection in time-critical knowledge integration applications: A review, analysis, and recommendation paper

    This is the post-print version of the final paper published in Advanced Engineering Informatics. The published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. Copyright © 2013 Elsevier B.V.

    The purpose of this research is twofold: first, to undertake a thorough appraisal of existing Input Variable Selection (IVS) methods within the context of time-critical and computation-resource-limited dimensionality reduction problems; second, to demonstrate improvements to, and the application of, a recently proposed time-critical sensitivity analysis method called EventTracker in an environmental science industrial use case, namely sub-surface drilling. Producing time-critical, accurate knowledge about the state of a system (effect) under computational and data acquisition (cause) constraints is a major challenge, especially when the required knowledge is critical to system operation and the safety of operators or the integrity of costly equipment is at stake. Understanding and interpreting a chain of interrelated events, predicted or unpredicted, that may or may not result in a specific system state is the core challenge of this research. The main objective is then to identify which set of input data signals has a significant impact on the set of system state information (i.e., the output). Through a cause-effect analysis technique, the proposed technique supports the filtering of unsolicited data that could otherwise clog up the communication and computational capabilities of a standard supervisory control and data acquisition system. The paper analyzes the performance of input variable selection techniques from a series of perspectives. It then expands the categorization and assessment of sensitivity analysis methods in a structured framework that takes into account the relationship between inputs and outputs, the nature of their time series, and the computational effort required. The outcome of this analysis is that established methods have limited suitability for time-critical variable selection applications. By way of a geological drilling monitoring scenario, the suitability of the proposed EventTracker sensitivity analysis method for high-volume and time-critical input variable selection problems is demonstrated.
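
    To convey the event-based cause-effect idea in miniature (a simplified stand-in, not the published EventTracker algorithm), one can score each input channel by how often its trigger events are followed, within a time window, by an output event; low-scoring channels are candidates for filtering.

        def event_sensitivity(input_events, output_events, window):
            """Fraction of input events followed by an output event within
            `window` time units. A crude event-coincidence score, used here
            only to illustrate trigger-based input variable selection."""
            if not input_events:
                return 0.0
            hits = sum(
                any(0 <= t_out - t_in <= window for t_out in output_events)
                for t_in in input_events
            )
            return hits / len(input_events)

        # Toy drilling-style timestamps (all values are made up).
        vibration = [1.0, 4.2, 7.9, 12.5]     # input signal 1 event times
        ambient_temp = [2.0, 9.0]             # input signal 2 event times
        state_change = [1.3, 8.1, 12.8]       # system-state (output) event times

        print(event_sensitivity(vibration, state_change, window=0.5))     # 0.75
        print(event_sensitivity(ambient_temp, state_change, window=0.5))  # 0.0

    Because such a score needs only event timestamps rather than full time-series statistics, it can be evaluated online, which is the property that matters in the time-critical setting discussed above.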

    LEDAcrypt: QC-LDPC Code-Based Cryptosystems with Bounded Decryption Failure Rate

    We consider the QC-LDPC code-based cryptosystems named LEDAcrypt, which are under consideration by NIST for the second round of the post-quantum cryptography standardization initiative. LEDAcrypt is the result of the merger of the key encapsulation mechanism LEDAkem and the public-key cryptosystem LEDApkc, which were submitted to the first round of the same competition. We provide a detailed quantification of the quantum and classical computational efforts needed to foil the cryptographic guarantees of these systems. To this end, we take into account the best known attacks that can be mounted against them employing both classical and quantum computers, and compare their computational complexities with the ones required to break AES, coherently with the NIST requirements. Assuming the original LEDAkem and LEDApkc parameters as a reference, we introduce an algorithmic optimization procedure to design new sets of parameters for LEDAcrypt. These novel sets match the security levels in the NIST call and make the C reference implementation of the systems exhibit significantly improved figures of merit in terms of both running times and key sizes. As a further contribution, we develop a theoretical characterization of the decryption failure rate (DFR) of LEDAcrypt cryptosystems, which allows new instances of the systems with guaranteed low DFR to be designed. Such a characterization is crucial to withstand recent attacks exploiting the reactions of the legitimate recipient upon decrypting multiple ciphertexts with the same private key, and consequently it ensures a lifecycle of the corresponding key pairs that is sufficient for the wide majority of practical purposes.
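
    The "QC" in QC-LDPC is what keeps the keys small: the matrices involved are built from p x p circulant blocks, each fully described by its first row. A minimal sketch of expanding a binary circulant from its first row (toy size and weight, not actual LEDAcrypt parameters):

        import numpy as np

        def circulant(first_row):
            """Expand a p x p binary circulant block: row i is the first row
            cyclically right-shifted by i positions. QC codes store only the
            first rows, which keeps key sizes compact."""
            first_row = np.asarray(first_row, dtype=int)
            return np.array([np.roll(first_row, i) for i in range(len(first_row))])

        # Toy p = 7 block with a weight-3 first row.
        C = circulant([1, 0, 1, 0, 0, 1, 0])
        print(C.sum(axis=1))  # every row keeps weight 3 -- the block stays sparse

    Sums and products of circulants are again circulant (they behave like polynomials modulo x^p - 1), so arithmetic on the full matrices can be carried out on first rows alone.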