
    Formal Verification of Arithmetic Masking in Hardware and Software

    Masking is a popular secret-sharing technique that is used to protect cryptographic implementations against physical attacks like differential power analysis. So far, most research in this direction has focused on finding efficient Boolean masking schemes for well-known symmetric cryptographic algorithms like AES and Keccak. However, especially with the advent of post-quantum cryptography (PQC), arithmetic masking has received increasing attention from the research community. In practice, many PQC algorithms require a combination of arithmetic and Boolean masking, which makes the search for secure and efficient conversion algorithms between these domains (A2B/B2A) an interesting but very challenging research topic. While many tools already exist that can help with the formal verification of Boolean masked implementations, the same cannot be said about arithmetic masking and the accompanying mask conversion algorithms. In this work, we demonstrate the first formal verification approach for (any-order) Boolean and arithmetic masking that can be applied to both hardware and software, while considering side effects such as glitches and transitions. First, we show how a formal verification approach for Boolean masking can be used in the context of arithmetic masking such that we can verify A2B/B2A conversions for arbitrary masking orders. We investigate various conversion algorithms in hardware and software, and point out several new findings such as glitch-based issues for straightforward implementations of [CGV14]-A2B in hardware, transition-based leakage in Goubin-A2B in software, and more general implementation pitfalls when utilizing common optimization techniques in PQC. We provide the first formal analysis of table-based A2Bs from a probing-security perspective and point out that they might not be easy to implement securely on processors that use memory buffers or caches.
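
    As a concrete illustration of the two masking domains discussed above, the Python sketch below shows first-order Boolean sharing of a secret and Goubin's classic first-order Boolean-to-arithmetic (B2A) conversion. It is a functional sketch only: the 32-bit word size and variable names are illustrative assumptions, and plain Python gives none of the constant-time or glitch-related guarantees the paper is concerned with.

```python
import secrets

MASK = 0xFFFFFFFF  # assume 32-bit words (illustrative choice)

def boolean_share(x):
    """Split secret x into first-order Boolean shares (x1, r) with x = x1 ^ r."""
    r = secrets.randbits(32)
    return x ^ r, r

def goubin_b2a(x1, r):
    """First-order Boolean-to-arithmetic conversion (Goubin, CHES 2001).

    Input:  Boolean shares with x = x1 ^ r.
    Output: arithmetic share A with x = (A + r) mod 2^32.
    Relies on (x1 ^ r) - r = ((x1 ^ g) - g) ^ x1 ^ ((x1 ^ (r ^ g)) - (r ^ g))
    for any random g, so the secret x is never recombined in the clear.
    """
    g = secrets.randbits(32)
    t = ((x1 ^ g) - g) & MASK
    a = ((x1 ^ (r ^ g)) - (r ^ g)) & MASK
    return (t ^ x1 ^ a) & MASK

if __name__ == "__main__":
    x = 0xCAFEBABE
    x1, r = boolean_share(x)
    A = goubin_b2a(x1, r)
    assert x1 ^ r == x                  # Boolean sharing holds
    assert (A + r) & MASK == x          # arithmetic sharing holds
    print(hex(A), hex(r))
```

    An A2B conversion goes the other way (from x = A + r to x = x1 ^ r); the paper's point is that formally verifying such conversions, for arbitrary order and in the presence of glitches and transitions, is the hard part.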

    Planck intermediate results. XLVI. Reduction of large-scale systematic effects in HFI polarization maps and estimation of the reionization optical depth

    This paper describes the identification, modelling, and removal of previously unexplained systematic effects in the polarization data of the Planck High Frequency Instrument (HFI) on large angular scales, including new mapmaking and calibration procedures, new and more complete end-to-end simulations, and a set of robust internal consistency checks on the resulting maps. These maps, at 100, 143, 217, and 353 GHz, are early versions of those that will be released in final form later in 2016. The improvements allow us to determine the cosmic reionization optical depth τ using, for the first time, the low-multipole EE data from HFI, significantly reducing the central value and uncertainty, and hence the upper limit. Two different likelihood procedures are used to constrain τ from two estimators of the CMB E- and B-mode angular power spectra at 100 and 143 GHz, after debiasing the spectra from a small remaining systematic contamination. These all give fully consistent results. A further consistency test is performed using cross-correlations derived from the Low Frequency Instrument maps of the Planck 2015 data release and the new HFI data. For this purpose, end-to-end analyses of systematic effects from the two instruments are used to demonstrate the near independence of their dominant systematic error residuals. The tightest result comes from the HFI-based τ posterior distribution using the maximum likelihood power spectrum estimator from EE data only, giving a value of 0.055 ± 0.009. In a companion paper these results are discussed in the context of the best-fit Planck ΛCDM cosmological model and recent models of reionization.
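
    As a rough, self-contained illustration of how a low-multipole EE measurement constrains τ (this is not the Planck likelihood), the sketch below evaluates a grid posterior for τ under the common approximation that the amplitude of the reionization bump in the EE spectrum scales as τ²; the measured amplitude, its uncertainty, and the normalization are invented placeholders.

```python
import numpy as np

# Toy model: the low-ell EE reionization bump amplitude scales roughly as tau^2.
# All numbers are invented placeholders; this is not the Planck likelihood.
def bump_amplitude(tau, a0=1.0):
    return a0 * tau**2

a_obs, sigma_a = 3.0e-3, 1.0e-3           # pretend measurement and its 1-sigma error

taus = np.linspace(0.01, 0.12, 1000)
dtau = taus[1] - taus[0]
chi2 = ((bump_amplitude(taus) - a_obs) / sigma_a) ** 2
post = np.exp(-0.5 * chi2)
post /= post.sum() * dtau                 # normalize the posterior on the grid

tau_mean = (taus * post).sum() * dtau
tau_std = np.sqrt(((taus - tau_mean) ** 2 * post).sum() * dtau)
print(f"tau = {tau_mean:.3f} +/- {tau_std:.3f}")
```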

    Innovative Techniques for Testing and Diagnosing SoCs

    We rely upon the continued functioning of many electronic devices for our everyday welfare, usually embedding integrated circuits that are becoming ever cheaper and smaller with improved features. Nowadays, microelectronics can integrate a working computer with CPU, memories, and even GPUs on a single die, namely a System-on-Chip (SoC). SoCs are also employed in automotive safety-critical applications, but they need to be tested thoroughly to comply with reliability standards, in particular ISO 26262 functional safety for road vehicles. The goal of this PhD thesis is to improve SoC reliability by proposing innovative techniques for testing and diagnosing its internal modules: CPUs, memories, peripherals, and GPUs. The proposed approaches, in the sequence in which they appear in this thesis, are described as follows:
    1. Embedded Memory Diagnosis: Memories are dense and complex circuits which are susceptible to design and manufacturing errors. Hence, it is important to understand fault occurrence in the memory array. In practice, the logical and physical array representations differ because of design optimizations that add enhancements to the device, namely scrambling. This part proposes an accurate memory diagnosis by presenting a software tool able to analyze test results, unscramble the memory array, map failing syndromes to cell locations, elaborate cumulative analyses, and formulate a final fault-model hypothesis. Several SRAM failing syndromes were analyzed as case studies, gathered on an industrial automotive 32-bit SoC developed by STMicroelectronics. The tool displayed defects virtually, and the results were confirmed by photographs taken with a microscope.
    2. Functional Test Pattern Generation: The key to a successful test is the pattern applied to the device. Patterns can be structural or functional; the former usually rely on embedded test modules targeting manufacturing errors and are effective only before shipping the component to the client. The latter, on the other hand, can be applied in mission mode with minimal impact on performance, but are penalized by high generation time. Functional test patterns can also serve different goals in functional mission mode. Part III of this thesis proposes three functional test pattern generation methods for CPU cores embedded in SoCs, each targeting a different test purpose: a. Functional Stress Patterns, suitable for maximizing functional stress during operational-life tests and burn-in screening, for an optimal device reliability characterization; b. Functional Power-Hungry Patterns, suitable for determining the functional peak power used to strictly limit the power of structural patterns during manufacturing tests, thus reducing premature device over-kill while delivering high test coverage; c. Software-Based Self-Test (SBST) Patterns, which combine the potential of structural patterns with functional ones, allowing periodic execution during mission mode. In addition, an external hardware module communicating with a devised SBST was proposed; it increases fault coverage by 3% by testing critical Hardly Functionally Testable Faults not covered by conventional SBST patterns. In all of the above approaches, an automatic functional test pattern generator based on an evolutionary algorithm, maximizing metrics related to stress, power, and fault coverage, was employed to quickly generate the desired patterns. The approaches were evaluated on two industrial cases developed by STMicroelectronics: an 8051-based SoC and a 32-bit Power Architecture SoC. Results show that generation time was reduced by up to 75% compared with older methodologies, while significantly increasing the desired metrics.
    3. Fault Injection in GPGPUs: Fault injection mechanisms in semiconductor devices are suitable for generating structural patterns, testing and activating mitigation techniques, and validating robust hardware and software applications. GPGPUs are known for the fast parallel computation used in high-performance computing and advanced driver assistance, where reliability is the key point. Moreover, GPGPU manufacturers do not provide design description code due to content secrecy; therefore, commercial fault injectors relying on a GPGPU model are unfeasible, leaving radiation tests as the only available, but costly, resource. In the last part of this thesis, we propose a software-implemented fault injector able to inject bit-flips into memory elements of a real GPGPU. It exploits a software debugger tool and combines it with the C-CUDA grammar to wisely determine fault spots and apply bit-flip operations to program variables. The goal is to validate robust parallel algorithms by studying fault propagation or activating the redundancy mechanisms they possibly embed. The effectiveness of the tool was evaluated on two robust applications: redundant parallel matrix multiplication and a floating-point Fast Fourier Transform.
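
    The thesis's injector drives a real GPGPU through a debugger; as a language-neutral illustration of the underlying idea, the Python sketch below simulates a single bit-flip in an input of a matrix multiplication and checks whether a duplication-with-comparison scheme detects it. The matrix sizes, the fault-site selection, and the detection scheme are illustrative assumptions, not the tool described above.

```python
import numpy as np

def matmul_with_bitflip(a, b, flip=None):
    """Compute a @ b, optionally flipping one bit of one element of a.

    flip = (row, col, bit) selects the fault site; None means a fault-free run.
    """
    a = a.copy()
    if flip is not None:
        r, c, bit = flip
        a[r, c] ^= (1 << bit)          # inject a single-event-upset-like bit-flip
    return a @ b

rng = np.random.default_rng(0)
a = rng.integers(0, 16, size=(8, 8))
b = rng.integers(0, 16, size=(8, 8))

golden = matmul_with_bitflip(a, b)     # fault-free (redundant) copy of the computation
site = tuple(rng.integers(0, 8, size=2)) + (int(rng.integers(0, 4)),)
faulty = matmul_with_bitflip(a, b, flip=site)

# Duplication with comparison: the redundant run is compared against the faulty one.
detected = not np.array_equal(golden, faulty)
print(f"bit-flip at {site}: {'detected' if detected else 'silently masked'}")
```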

    A resolved analysis of cold dust and gas in the nearby edge-on spiral NGC 891

    We investigate the connection between dust and gas in the nearby edge-on spiral galaxy NGC 891. High resolution Herschel PACS and SPIRE 70, 100, 160, 250, 350, and 500 μm images are combined with JCMT SCUBA 850 μm observations to trace the far-infrared/submillimetre spectral energy distribution (SED). Maps of the HI 21 cm line and CO(J=3-2) emission trace the atomic and molecular hydrogen gas, respectively. We fit one-component modified blackbody models to the integrated SED, finding a global dust mass of 8.5 × 10^7 M⊙ and an average temperature of 23 ± 2 K. We also fit the pixel-by-pixel SEDs to produce maps of the dust mass and temperature. The dust mass distribution correlates with the total stellar population as traced by the 3.6 μm emission. The derived dust temperature, which ranges from approximately 17 to 24 K, is found to correlate with the 24 μm emission. Allowing the dust emissivity index to vary, we find an average value of β = 1.9 ± 0.3. We confirm an inverse relation between the dust emissivity spectral index and dust temperature, but do not observe any variation of this relationship with vertical height from the mid-plane of the disk. A comparison of the dust properties with the gaseous components of the ISM reveals strong spatial correlations between the surface mass densities of dust and the molecular hydrogen and total gas surface densities. Observed asymmetries in the dust temperature, and in the H2-to-dust and total gas-to-dust ratios, hint that an enhancement in the star formation rate may be the result of larger quantities of molecular gas available to fuel star formation in the NE compared to the SW. Whilst the asymmetry likely arises from dust obscuration due to the geometry of the line-of-sight projection of the spiral arms, we cannot exclude an enhancement in the star formation rate in the NE side of the disk.
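
    The one-component modified blackbody fit mentioned above amounts to fitting S_ν ∝ ν^β B_ν(T_d) to the measured flux densities. A minimal curve-fitting sketch is shown below; the flux densities, uncertainties, and the fixed β = 1.9 are placeholders rather than the NGC 891 measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23      # SI constants
NU0 = c / 250e-6                               # reference frequency (250 um)

def planck(nu, T):
    """Planck function B_nu(T), W m^-2 Hz^-1 sr^-1."""
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def modified_blackbody(nu, s250, T, beta=1.9):
    """Modified blackbody normalized to the flux s250 at 250 um, with emissivity index beta."""
    return s250 * (nu / NU0)**beta * planck(nu, T) / planck(NU0, T)

# Placeholder flux densities (Jy) at the Herschel/SCUBA bands; not the NGC 891 data.
wavelengths_um = np.array([70, 100, 160, 250, 350, 500, 850])
nu = c / (wavelengths_um * 1e-6)               # Hz
flux_jy = np.array([90.0, 180.0, 240.0, 130.0, 60.0, 22.0, 4.0])
flux_err = 0.1 * flux_jy

popt, pcov = curve_fit(modified_blackbody, nu, flux_jy, p0=[130.0, 20.0],
                       sigma=flux_err, absolute_sigma=True)
s250_fit, T_fit = popt
print(f"best-fit dust temperature: {T_fit:.1f} K")
```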

    Systematic Characterization of Power Side Channel Attacks for Residual and Added Vulnerabilities

    Power side channel attacks continue to be a major threat to cryptographic devices. It is therefore useful for designers of cryptographic systems to systematically identify which types of power side channel attacks their designs remain vulnerable to after implementation, and which additional vulnerabilities they have exposed their devices to after implementing a countermeasure or a feature. The goal of this research is to develop a characterization of power side channel attacks on implementations of different encryption algorithms, and to create metrics and methods to evaluate their residual and added vulnerabilities. This research studies the characteristics that influence power side channel leakage, classifies them, and identifies both the residual vulnerabilities and the added vulnerabilities. Residual vulnerabilities are defined as the traits that leave the implementation of the algorithm still vulnerable to power side channel attacks (SCA), sometimes despite the designers' attempts at implementing countermeasures. Added vulnerabilities to power SCA are defined as vulnerabilities created or enhanced by the algorithm implementations and/or modifications. The encryption algorithm implementations are categorized into three buckets: i. countermeasures against power side channel attacks; ii. the impact of the IC power delivery network (including voltage regulators) on power leakage; iii. lightweight ciphers and applications for the Internet of Things (IoT). One example outcome from the characterization of masking countermeasures is that masking schemes, even when uniformly distributed random masks are used, remain vulnerable to collision power attacks. Another is that masked AES, when glitches occur, is still vulnerable to Differential Power Analysis (DPA). We have developed a characterization of power side-channel attacks on hardware implementations of different symmetric encryption algorithms to provide a detailed analysis of the effectiveness of state-of-the-art countermeasures against local and remote power side-channel attacks. The characterization is accomplished by studying the attributes that influence power side-channel leaks, classifying them, and identifying both residual and added vulnerabilities. The evaluated countermeasures include masking, hiding, and power delivery network scrambling. However, vulnerability to DPA depends largely on the quality of the leaked power signal, which is affected by the characteristics of the device's power delivery network. Countermeasures and deterrents to power side-channel attacks that alter or scramble the power delivery network have been shown to be effective against local attacks, where the malicious agent has physical access to the target system.
    However, remote attacks that capture the leaked information from within the IC power grid are shown herein to remain effective at uncovering the secret key in the presence of these countermeasures/deterrents. Theoretical studies and experimental analyses are carried out to define and quantify the impact of integrated voltage regulators, voltage noise injection, and on-package decoupling capacitors for both remote and local attacks. One outcome of these studies is that an integrated voltage regulator is an effective countermeasure against a local attack; remote attacks, however, remain effective and hence break the integrated voltage regulator countermeasure. From the experimental analysis, it is observed that, within the range of practical design values, the adoption of on-package decoupling capacitors provides only a 1.3x increase in the minimum number of traces required to discover the secret key, whereas the injection of noise in the IC power delivery network yields a 37x increase. Thus, for remote attack schemes, increasing the number of on-package decoupling capacitors or the impedance between the local probing site and the IC power grid should not be relied on as a countermeasure to power side-channel attacks. Noise injection should be considered instead, as it is more effective at scrambling the leaked signal and eliminating sensitive identifying information. However, the analysis and experiments carried out herein apply to regular symmetric ciphers, which are not suitable for protecting Internet of Things (IoT) devices. The protection of communications between IoT devices is of great concern because the information exchanged contains vital sensitive data, and malicious agents seek to exploit those data to extract secret information about the owners or the system. Power side channel attacks are of great concern for these devices because their power consumption unintentionally leaks information correlatable to the device's secret data. Several studies have demonstrated the effectiveness of authenticated encryption with associated data (AEAD) in protecting communications with these devices. In this research, we propose a comprehensive evaluation of the ten algorithm finalists of the National Institute of Standards and Technology (NIST) lightweight cipher competition for IoT. The study shows that some finalists nonetheless still present residual vulnerabilities to power side channel attacks (SCA). For five ciphers, we propose an attack methodology as well as the leakage function needed to perform correlation power analysis (CPA). We assert that the security vulnerability of Ascon, Sparkle, and PHOTON-Beetle can generally be assessed under the security assumptions of chosen-ciphertext attack with leakage in encryption only and a nonce-misuse-resilient adversary (CCAmL1), and chosen-ciphertext attack with leakage in encryption only and a nonce-respecting adversary (CCAL1), respectively. However, the security vulnerability of GIFT-COFB, Grain, Romulus, and TinyJambu can be evaluated more straightforwardly with publicly available leakage models and solvers; it can also be assessed simply by increasing the number of traces collected to launch the attack.
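
    Because the abstract leans on correlation power analysis (CPA) with a leakage function, the sketch below shows the core CPA step under a Hamming-weight leakage model on synthetic traces. The substitution box, trace simulation, trace count, and noise level are illustrative assumptions; a real attack on AES would use the AES S-box table and measured traces.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder substitution box; a real attack on AES would use the AES S-box table here.
SBOX = rng.permutation(256)

def hamming_weight(x):
    """Hamming weight of each byte in x."""
    x = np.atleast_1d(x).astype(np.uint8)
    return np.unpackbits(x[:, None], axis=1).sum(axis=1)

# --- Simulate leaky traces: HW of SBOX[plaintext ^ key_byte] plus Gaussian noise. ---
TRUE_KEY, N_TRACES, NOISE = 0x3C, 2000, 1.0
plaintexts = rng.integers(0, 256, size=N_TRACES)
traces = hamming_weight(SBOX[plaintexts ^ TRUE_KEY]) + rng.normal(0, NOISE, N_TRACES)

# --- CPA: correlate the predicted leakage for every key guess with the measured traces. ---
def cpa_ranking(plaintexts, traces):
    scores = np.empty(256)
    for guess in range(256):
        hyp = hamming_weight(SBOX[plaintexts ^ guess])        # leakage hypothesis
        scores[guess] = abs(np.corrcoef(hyp, traces)[0, 1])   # Pearson correlation
    return np.argsort(scores)[::-1]                           # best guess first

best = int(cpa_ranking(plaintexts, traces)[0])
print(f"recovered key byte: {best:#04x} (true: {TRUE_KEY:#04x})")
```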

    Proceedings of the 21st Conference on Formal Methods in Computer-Aided Design – FMCAD 2021

    The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum for researchers in academia and industry to present and discuss groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design, including verification, specification, synthesis, and testing.