    Radiation Tolerant Electronics, Volume II

    Research on radiation tolerant electronics has increased rapidly over the last few years, resulting in many interesting approaches to modelling radiation effects and designing radiation hardened integrated circuits and embedded systems. This research is strongly driven by the growing need for radiation hardened electronics for space applications, high-energy physics experiments such as those at the Large Hadron Collider at CERN, and many terrestrial nuclear applications, including nuclear energy and safety management. With the progressive scaling of integrated circuit technologies and the growing complexity of electronic systems, their susceptibility to ionizing radiation has raised many exciting challenges, which are expected to drive research in the coming decade. After the success of the first Special Issue on Radiation Tolerant Electronics, the current Special Issue features thirteen articles highlighting recent breakthroughs in radiation tolerant integrated circuit design, fault tolerance in FPGAs, radiation effects in semiconductor materials and advanced IC technologies, and modelling of radiation effects.

    Dynamic Laser Fault Injection Aided by Quiescent Photon Emissions in Embedded Microcontrollers: Apparatus, Methodology and Attacks

    The Internet of Things (IoT) is becoming more integrated into our daily lives as the number of interacting embedded electronic devices grows. These devices are often controlled by a Micro-Controller Unit (MCU); as an example, it is estimated that today's well-equipped automobile uses more than 50 MCUs. Some MCUs contain cryptographic co-processors to enhance the security of exchanged and stored data, with a common belief that the data is therefore secure and safe. However, many MCUs have been shown to be vulnerable to Fault Injection (FI) attacks. These attacks can reveal shared secrets, firmware, and other confidential information, and the information extracted by such attacks can lead to the identification of new vulnerabilities which may scale to attacks on many devices. In general, FI on MCUs corrupts data or instructions. Although it is assumed that only authorized personnel with access to cryptographic secrets will gain access to confidential information in MCUs, attackers in specialized labs may now have access to high-tech equipment that can be used to attack these MCUs. Laser Fault Injection (LFI) is gaining a reputation for its ability to inject local rather than global faults, owing to its precision, and therefore poses a greater risk of breaking security in many devices. Although publications have generally discussed the security of MCUs, attack techniques are diverse, and published LFI work gives few and only superficial details about the experimental setup and methodology used. Furthermore, limited research has examined the combination of LFI and Photo-Emission Microscopy (PEM), direct modification of instructions using LFI, control of embedded processor resets using LFI, and countermeasures which simultaneously thwart other threats, including decapsulation and reverse engineering (RE). This thesis contributes to the study of MCU security by analyzing their susceptibility to LFI attacks and PEM. The proposed research aims to build an LFI bench from scratch, allowing maximum control of laser parameters. In addition, a methodology for analysis of the Device Under Attack (DUA) in preparation for LFI is proposed, including frontside/backside decapsulation methods and visualization of the structure of the DUA. The attack viability of different targets on the DUA, including One-Time Programmable (OTP) memory, Flash memory and Static Random Access Memory (SRAM), was analyzed, and a realistic LFI attack on a cryptographic algorithm, the Advanced Encryption Standard (AES), was conducted. Countermeasures to the proposed attack techniques, including decapsulation/RE, LFI and PEM, are also discussed. This dissertation provides a summary of the necessary background and experimental setup to study the possibility of LFI and PEM on DUAs of two different technologies, specifically the PIC16F687 and the ARM Cortex-M0 LPC1114FN28102. Attacks performed on on-chip peripherals such as the Universal Asynchronous Receiver/Transmitter (UART) and debug circuitry reveal new vulnerabilities. This research is important for understanding attacks in order to design countermeasures for securing future hardware.
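
    An LFI campaign of the kind summarised above can be pictured as a parameter sweep over laser position and trigger delay, comparing each faulted response against a golden reference. The Python sketch below only illustrates such a control loop under assumed interfaces: the stage, laser and target objects and their methods (move_to, arm, run_encryption, power_cycle) are hypothetical placeholders, not the bench software developed in the thesis.

        # Hypothetical sketch of a laser fault injection (LFI) campaign loop.
        # All device interfaces below are assumed placeholders, not a real API.
        from itertools import product

        def lfi_campaign(stage, laser, target, plaintext, golden_ciphertext,
                         xs, ys, delays_ns, pulse_width_ns=50):
            """Sweep laser position and trigger delay, log any faulty outputs."""
            faults = []
            for x, y, delay in product(xs, ys, delays_ns):
                stage.move_to(x, y)                          # position spot over the die
                laser.arm(delay_ns=delay, width_ns=pulse_width_ns)
                ciphertext = target.run_encryption(plaintext)  # trigger fires the pulse
                if ciphertext is None:                       # device crashed or reset
                    faults.append((x, y, delay, "reset"))
                    target.power_cycle()
                elif ciphertext != golden_ciphertext:        # exploitable faulty result
                    faults.append((x, y, delay, ciphertext.hex()))
            return faults

    Faulty ciphertexts collected this way could then feed a fault analysis step against AES, in the spirit of the attack summarised above.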

    Smart card security

    Smart Card devices are commonly used in many secure applications where there is a need to identify the card holder in order to provide a personalised service. The value of access to locked data and services makes Smart Cards a desirable attack target for hackers of all sorts. The attacks a Smart Card and its environment can be subjected to range from social engineering to exploiting hardware and software bugs and features. This research has focused on several hardware-related attacks and potential threats: power glitch attacks, power analysis, laser attacks, the potential effect on security of memory power consumption reduction techniques, and the use of a re-configurable instruction set as a method to harden opcode interpretation. A semi-automated simulation environment to test designs against glitch attacks and power analysis has been developed. This simulation environment can be easily integrated within Atmel's design flow to give assurance of a design's behaviour and permeability to such attacks at an early development stage. Previous power analysis simulation work focused on testing the implementation of part of a cryptographic algorithm; this work targets the whole algorithm, allowing a wider range of countermeasures to be tested. A common glitch detection approach is to monitor the power supply for abnormal voltage values and fluctuations, but this approach can fail to detect some fast glitches. The alternative approach used in this research monitors the effects of a glitch on a mono-stable circuit sensitive to fault injection by glitch attacks. This work has resulted in a patented glitch detector that improves the overall glitch detection range. The use of radiation countermeasures as laser countermeasures and potential sensors has also been investigated. Radiation and laser attacks have similar effects on silicon devices; whilst several countermeasures against radiation have been developed over the years, almost no explicit mention of laser countermeasures was found. This research has demonstrated the suitability of using some radiation countermeasures as laser countermeasures. Memory partitioning is a static and dynamic power consumption reduction technique successfully used in various devices, but the nature of Smart Card devices restricts the applicability of some aspects of it. This research line has resulted in the proposal of a memory partitioning approach suitable for Smart Cards.
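
    Power analysis of the kind simulated above typically correlates a hypothetical leakage model with recorded or simulated power traces. The following Python sketch is a minimal, generic correlation power analysis (CPA) example, not the Atmel simulation environment described in the thesis: the trace and plaintext arrays are assumed inputs, and for brevity the leakage target is the Hamming weight of plaintext XOR key-byte guess (the AddRoundKey output) rather than an S-box output.

        # Minimal, generic correlation power analysis (CPA) sketch.
        # 'traces' (n_traces x n_samples, float) and 'plaintexts' (n_traces, uint8)
        # are assumed to come from a measurement or a power simulation.
        import numpy as np

        def hamming_weight(x):
            """Hamming weight of each byte in a 1-D uint8 array."""
            bits = np.unpackbits(np.atleast_1d(x).astype(np.uint8)[:, None], axis=1)
            return bits.sum(axis=1)

        def cpa_best_key(traces, plaintexts):
            """Return the key-byte guess with the highest absolute correlation peak."""
            best_guess, best_corr = None, 0.0
            tr_c = traces - traces.mean(axis=0)              # centre every sample column
            for guess in range(256):
                hyp = hamming_weight(plaintexts ^ guess).astype(np.float64)
                hyp_c = hyp - hyp.mean()
                # Pearson correlation of the hypothesis against every sample point
                corr = (hyp_c @ tr_c) / (np.linalg.norm(hyp_c) * np.linalg.norm(tr_c, axis=0))
                peak = np.max(np.abs(corr))
                if peak > best_corr:
                    best_guess, best_corr = guess, peak
            return best_guess, best_corr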

    Project and development of hardware accelerators for fast computing in multimedia processing

    2017 - 2018. The main aim of the present research work is to design and develop very large scale electronic integrated circuits, with particular attention to those devoted to image processing applications and related topics. In particular, the candidate has mainly investigated four topics, detailed in the following. First, the candidate has developed a novel multiplier circuit capable of producing floating point (FP32) results, given as inputs an integer value from a fixed integer range and a set of fixed point (FI) values. The result has been accomplished by exploiting a series of theorems and results on a number theory problem, known as Bachet's problem, which allows the development of a new Distributed Arithmetic (DA) based on 3's partitions. This kind of approach is very well suited to filtering applications working on a fixed integer input range, such as image processing applications in which the pixels are coded on 8 bits per channel. In these applications the main problem is the high area and power consumption due to the presence of many Multiply and Accumulate (MAC) units, which also compromises real-time requirements because of the complexity of FP32 operations. For these reasons, FI implementations are usually preferred, at the cost of lower accuracy. The results for the single multiplier and for a filter of dimensions 3x3 show delays of 2.456 ns and 4.7 ns respectively on an FPGA platform, and 2.18 ns and 4.426 ns on a TSMC 90 nm standard-cell implementation. Comparisons with state-of-the-art FP32 multipliers show a speed increase of up to 94.7% and an area reduction of 69.3% on the FPGA platform. ... [edited by Author]
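
    The accuracy trade-off mentioned above, between full FP32 arithmetic and cheaper fixed-point MAC units in a 3x3 image filter on 8-bit pixels, can be illustrated numerically with a few lines of Python. This sketch only shows the quantisation effect; it does not reproduce the DA/Bachet-based multiplier architecture itself, and the Q1.7 coefficient format is an arbitrary assumption made for the example.

        # Illustration of FP32 vs fixed-point accuracy for a 3x3 filter on 8-bit pixels.
        # The Q1.7 coefficient format (7 fractional bits) is an assumption of this sketch.
        import numpy as np

        def filter3x3_fp32(img, coeffs):
            """Reference 3x3 convolution in float32 ('valid' output, no padding)."""
            h, w = img.shape
            out = np.zeros((h - 2, w - 2), dtype=np.float32)
            for i in range(h - 2):
                for j in range(w - 2):
                    out[i, j] = np.sum(img[i:i+3, j:j+3].astype(np.float32) * coeffs)
            return out

        def filter3x3_fixed(img, coeffs, frac_bits=7):
            """Same filter with coefficients quantised to fixed point (integer MACs)."""
            q = np.round(coeffs * (1 << frac_bits)).astype(np.int32)   # quantise coefficients
            h, w = img.shape
            out = np.zeros((h - 2, w - 2), dtype=np.int32)
            for i in range(h - 2):
                for j in range(w - 2):
                    acc = np.sum(img[i:i+3, j:j+3].astype(np.int32) * q)  # integer MACs
                    out[i, j] = acc >> frac_bits                           # rescale result
            return out

        img = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
        coeffs = np.full((3, 3), 1.0 / 9.0, dtype=np.float32)             # mean filter
        err = np.abs(filter3x3_fp32(img, coeffs) - filter3x3_fixed(img, coeffs))
        print("max abs error (grey levels):", err.max())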

    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. The book introduces the most prominent reliability concerns from today's point of view and briefly recapitulates the progress in the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or the system level alone, this book addresses reliability challenges across different levels, from the physical level all the way to the system level (cross-layer approaches). It aims to demonstrate how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradation such as transistor aging, processor variation, temperature effects and soft errors. It provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that can leverage reliability through techniques that are proactively designed with respect to techniques at other layers; and explains run-time adaptation and concepts/means of self-organization, in order to achieve error resiliency in complex, future many-core systems.
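
    One classic software-level means of tolerating the soft errors mentioned above is redundant execution with majority voting (software triple modular redundancy). The sketch below is a generic illustration of that idea only, not a technique taken from the book.

        # Generic software TMR (triple modular redundancy) sketch: run a computation
        # three times and vote on the result, masking a single transient (soft) error.
        from collections import Counter

        def tmr(compute, *args):
            results = [compute(*args) for _ in range(3)]      # three redundant runs
            value, votes = Counter(results).most_common(1)[0]
            if votes < 2:
                raise RuntimeError("no majority: uncorrectable error")  # all three disagree
            return value

        # Example: a computation whose result could be corrupted by a bit flip.
        print(tmr(lambda a, b: a * b, 6, 7))   # -> 42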

    Null convention logic circuits for asynchronous computer architecture

    For most of its history, computer architecture has been able to benefit from rapid scaling in semiconductor technology, resulting in continuous improvements to CPU design. During that period, synchronous logic has dominated because of its inherent ease of design and abundant tools. However, with the scaling of semiconductor processes into deep sub-micron and then nano-scale dimensions, computer architecture is hitting a number of roadblocks such as high power and increased process variability. Asynchronous techniques can potentially offer many advantages compared to conventional synchronous design, including average-case rather than worst-case performance, robustness in the face of process and operating-point variability, and the ready availability of high-performance, fine-grained pipeline architectures. Of the many alternative approaches to asynchronous design, Null Convention Logic (NCL) has the advantage that its quasi-delay-insensitive behavior makes it relatively easy to set up complex circuits without the need for exhaustive timing analysis. This thesis examines the characteristics of an NCL-based asynchronous RISC-V CPU and analyses the problems of applying NCL to CPU design. While a number of university and industry groups have previously developed small 8-bit microprocessor architectures using NCL techniques, it is still unclear whether these offer any real advantages over conventional synchronous design. A key objective of this work has been to analyse the impact of larger word widths and more complex architectures on NCL CPU implementations. The research commenced by re-evaluating existing techniques for implementing NCL on programmable devices such as FPGAs. The little work undertaken previously on FPGA implementations of asynchronous logic has been inconclusive and seems to indicate that asynchronous systems cannot be easily implemented in these devices. However, most of this work related to an alternative technique called bundled data, which is not well suited to FPGA implementation because of the difficulty of controlling and matching delays in a 'bundle' of signals. This thesis clearly shows that such applications are not only possible with NCL, but that there are distinct advantages in being able to prototype complex asynchronous systems in a field-programmable technology such as the FPGA. A large part of the value of NCL derives from its architectural-level behavior, inherent pipelining, and optimization opportunities such as the merging of register and combinational logic functions. In this work, a number of NCL multiplier architectures have been analyzed to reveal the performance trade-offs between various non-pipelined, 1D and 2D organizations. Two-dimensional pipelining can easily be applied to regular architectures such as array multipliers in a way that is both high-performance and area-efficient. It was found that the performance of 2D pipelining for small networks such as multipliers is around 260% faster than the equivalent non-pipelined design; however, the design uses 265% more transistors, so the methodology is mainly of benefit where performance is strongly favored over area. A pipelined 32-bit x 32-bit signed Baugh-Wooley multiplier with Wallace-tree Carry Save Adders (CSA), which is representative of a real design used in CPUs and DSPs, was used to further explore this concept, as it is faster and has fewer pipeline stages than a normal array multiplier using Ripple-Carry Adders (RCA).
It was found that 1D pipelining with ripple-carry chains is an efficient implementation option but becomes less so for larger multipliers, due to the completion logic, whose delay depends largely on the number of bits involved in the completion network. The average-case performance of ripple-carry adders was explored using random input vectors; it offers little advantage in smaller multiplier blocks, but this timing characteristic of asynchronous design styles becomes increasingly important as word size grows. Finally, this research has resulted in the development of the first 32-bit asynchronous RISC-V CPU core. Called the Redback RISC, the architecture is a structure of pipeline rings composed of computational oscillations linked with flow-completeness relationships. It has been written using NELL, a commercial description/synthesis tool that outputs standard Verilog. The Redback has been analysed and compared to two approximately equivalent industry-standard 32-bit synchronous RISC-V cores (PicoRV32 and Rocket) that are already fabricated and used in industry. While the NCL implementation is larger than both commercial cores, it has similar performance and lower power compared to the PicoRV32. The implementation results were also compared against an existing NCL design tool flow (UNCLE), which showed how much the results of these implementation strategies differ. The Redback RISC achieved a similar level of throughput, 43% better power and 34% better energy compared to one of the synchronous cores under the same benchmark test and test conditions, such as input supply voltage. However, area remains the biggest drawback for NCL CPU design: the core is roughly 2.5× larger than the synchronous designs, although it is still 2.9× smaller than previous designs using the UNCLE tools. The area penalty is largely due to the unavoidable translation into a dual-rail topology when using the standard NCL cell library.
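
    The dual-rail encoding responsible for the area penalty quoted above represents each bit on two wires, with NULL = (0, 0), DATA0 = (0, 1) and DATA1 = (1, 0), and a word is considered complete when every bit has resolved to DATA. The following Python sketch is a behavioural toy model of that encoding and of word-level completion detection; it is a didactic illustration only, not the NELL/Redback design flow.

        # Behavioural toy model of dual-rail NCL signalling and completion detection.
        # A bit is a (t, f) rail pair: NULL=(0,0), DATA0=(0,1), DATA1=(1,0); (1,1) is illegal.

        NULL, DATA0, DATA1 = (0, 0), (0, 1), (1, 0)

        def encode(bit):
            """Encode a Boolean value as a dual-rail DATA token."""
            return DATA1 if bit else DATA0

        def is_data(dr):
            """A dual-rail pair carries DATA when exactly one rail is asserted."""
            t, f = dr
            assert (t, f) != (1, 1), "illegal dual-rail state"
            return (t ^ f) == 1

        def word_complete(word):
            """Completion detection: every bit of the word has resolved to DATA.
            In hardware this is a tree of threshold gates feeding the handshake;
            here it is simply an AND across the bits."""
            return all(is_data(b) for b in word)

        word = [encode(b) for b in (1, 0, 1, 1)]         # 4-bit DATA wavefront
        print(word_complete(word))                        # True: downstream may latch
        print(word_complete([NULL] + word[1:]))           # False: still waiting for bit 0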

    The 1991 3rd NASA Symposium on VLSI Design

    Papers from the symposium are presented from the following sessions: (1) featured presentations 1; (2) very large scale integration (VLSI) circuit design; (3) VLSI architectures 1; (4) featured presentations 2; (5) neural networks; (6) VLSI architectures 2; (7) featured presentations 3; (8) verification 1; (9) analog design; (10) verification 2; (11) design innovations 1; (12) asynchronous design; and (13) design innovations 2.

    Topical Workshop on Electronics for Particle Physics

    The purpose of the workshop was to present results and original concepts for electronics research and development relevant to particle physics experiments, as well as accelerator and beam instrumentation at future facilities; to review the status of electronics for the LHC experiments; to identify and encourage common efforts for the development of electronics; and to promote information exchange and collaboration in the relevant engineering and physics communities.