
    Simulation and Experimental Demonstration of the Importance of IR-Drops During Laser Fault-Injection

    Laser fault injection induces transient faults into ICs by locally generating transient currents that temporarily flip the outputs of the illuminated gates. Laser fault injection can be anticipated or studied using simulation tools at different abstraction levels: physical, electrical, or logical. At the electrical level, the classical laser-fault-injection model is based on the addition of current sources to the various sensitive nodes of CMOS transistors. However, this model does not take into account the large transient current components also induced between the VDD and GND rails of ICs designed with advanced CMOS technologies. These short-circuit currents provoke a significant IR-drop that contributes to the fault injection process. This paper describes our research on the assessment of this contribution. It shows through simulation and experiment that, during laser fault injection campaigns, laser-induced IR-drop is always present in circuits designed with deep-submicron technologies. It introduces an enhanced electrical fault model that takes the laser-induced IR-drop into account, and it proposes a methodology that allows the model to be used to simulate laser-induced faults at the electrical level in large-scale circuits. On the basis of further simulations and experimental results, we found that, depending on the laser pulse characteristics, the number of injected faults may be underestimated by a factor of up to 2.4 if the laser-induced IR-drop is ignored. This could lead to incorrect estimation of the fault injection threshold, which is especially relevant to the design of countermeasures for secure integrated systems.
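    The contrast between the two electrical fault models can be made concrete with a toy calculation. The sketch below is an illustrative assumption, not the paper's model: it treats the classical model as a photocurrent source at the gate's sensitive node, and the enhanced model as the same source plus a rail current whose IR-drop lowers the effective supply, so weaker laser pulses already flip the gate. All component values (supply voltage, grid resistance, node impedance) are invented for illustration.

        # Toy contrast between the classical current-source fault model and an
        # enhanced model that also accounts for laser-induced IR-drop. All
        # component values are illustrative assumptions, not the paper's data.
        import numpy as np

        VDD = 1.2        # assumed nominal supply voltage (V)
        VTH = VDD / 2    # logic threshold deciding whether the gate output flips
        R_GRID = 50.0    # assumed power-grid resistance seen by the gate (ohm)
        R_NODE = 1e3     # assumed impedance at the gate's sensitive node (ohm)

        def gate_output(i_photo, i_rail):
            """Perceived output voltage of an illuminated gate driving high.

            i_photo: photocurrent injected at the sensitive node (A)
            i_rail:  laser-induced short-circuit current between VDD and GND (A)
            """
            # Classical model: only the node photocurrent pulls the output down.
            v_classical = VDD - i_photo * R_NODE
            # Enhanced model: the rail current's IR-drop lowers the effective
            # supply, so a smaller photocurrent already flips the gate.
            v_enhanced = (VDD - i_rail * R_GRID) - i_photo * R_NODE
            return v_classical, v_enhanced

        for i_photo in np.linspace(0, 1e-3, 5):
            vc, ve = gate_output(i_photo, i_rail=4e-3)
            print(f"Iph={i_photo * 1e3:.2f} mA  "
                  f"classical={'FAULT' if vc < VTH else 'ok'}  "
                  f"with IR-drop={'FAULT' if ve < VTH else 'ok'}")

    Under these made-up numbers, the enhanced model reports a fault at a photocurrent where the classical model still sees a valid output, which is the underestimation effect the paper quantifies.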

    Towards trustworthy computing on untrustworthy hardware

    Historically, hardware was thought to be inherently secure and trusted due to its obscurity and the isolated nature of its design and manufacturing. In the last two decades, however, hardware trust and security have emerged as pressing issues. Modern hardware is surrounded by threats, manifested mainly in undesired modifications by untrusted parties in its supply chain, unauthorized and pirated selling, injected faults, and system- and microarchitectural-level attacks. These threats, if realized, can push hardware into abnormal and unexpected behaviour, causing real-life damage and significantly undermining our trust in the electronic and computing systems we use in our daily lives and in safety-critical applications. A large number of detective and preventive countermeasures have been proposed in the literature. However, given the limited number of real-life reports and the plethora of ways in which hardware trust can be undermined, our knowledge of the potential consequences of real-life threats to hardware trust remains limited. With this in mind, run-time monitoring of hardware combined with active mitigation of attacks, referred to as trustworthy computing on untrustworthy hardware, is proposed as the last line of defence. This last line of defence allows us to face live hardware mistrust rather than turning a blind eye to it or being helpless once it occurs.

    This thesis proposes three frameworks towards trustworthy computing on untrustworthy hardware. The frameworks are adaptable to different applications, independent of the design of the monitored elements, based on autonomous security elements, and computationally lightweight. The first framework is concerned with explicit violations and breaches of trust at run-time, with an untrustworthy on-chip communication interconnect presented as a potential offender; it is based on the guiding principles of component guarding, data tagging, and event verification. The second framework targets hardware elements with inherently variable and unpredictable operational latency and proposes a machine-learning-based characterization of these latencies to infer undesired latency extensions or denial-of-service attacks; it is implemented on a DDR3 DRAM after showing the memory's vulnerability to obscured latency-extension attacks (see the sketch after this abstract). The third framework studies the possible deployment of untrustworthy hardware elements in the analog front end and the integrity issues that might consequently arise at the analog-digital boundary of systems on chip; it uses machine-learning methods and the unique temporal and arithmetic features of signals at this boundary to monitor their integrity and assess their trust level.
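    The latency-characterization idea in the second framework can be sketched as a one-class anomaly detector over timing features. The code below is a hedged illustration, not the thesis's implementation: the latency distributions, window size, feature set, and the use of scikit-learn's IsolationForest are all assumptions standing in for whatever model the thesis actually trains.

        # Hedged sketch: characterize normal DRAM access-latency windows and
        # flag windows consistent with an obscured latency-extension attack.
        # Distributions, window size, features, and model choice are assumptions.
        import numpy as np
        from sklearn.ensemble import IsolationForest

        rng = np.random.default_rng(0)

        # Stand-in for measured benign access latencies (cycles): a mixture of
        # fast row-hit and slower row-miss timings.
        benign = np.concatenate([rng.normal(200, 10, 5000),   # row hits
                                 rng.normal(350, 15, 2000)])  # row misses

        def features(window):
            # Window-level timing statistics used as the feature vector.
            return [np.mean(window), np.std(window), np.percentile(window, 95)]

        # Train a one-class detector on feature vectors of benign windows.
        train = np.array([features(rng.choice(benign, 64)) for _ in range(500)])
        model = IsolationForest(contamination=0.01, random_state=0).fit(train)

        # Simulated attack: small per-access delays that hide inside normal
        # variation but shift the whole window's distribution upward.
        attacked = rng.choice(benign, 64) + rng.normal(30, 5, 64)

        for name, window in [("benign  ", rng.choice(benign, 64)),
                             ("attacked", attacked)]:
            verdict = model.predict([features(window)])[0]  # +1 normal, -1 anomaly
            print(name, "->", "anomalous" if verdict == -1 else "normal")

    The design point the abstract hints at is that per-access delays small enough to hide inside normal latency variation still shift window-level statistics, so a model trained only on benign windows can flag them.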

    System engineering toolbox for design-oriented engineers

    This system engineering toolbox is designed to provide tools and methodologies to the design-oriented systems engineer. A tool is defined as a set of procedures that accomplish a specific function; a methodology is defined as a collection of tools, rules, and postulates that accomplish a purpose. For each concept addressed in the toolbox, the following information is provided: (1) description, (2) application, (3) procedures, (4) examples, where practical, (5) advantages, (6) limitations, and (7) bibliography and/or references. The scope of the document includes concept development tools, system safety and reliability tools, design-related analytical tools, graphical data interpretation tools, a brief description of common statistical tools and methodologies, so-called total quality management tools, and trend analysis tools. Each tool's relationship to project phase and its primary functional usage are also delineated. The toolbox includes a case study for illustrative purposes. Fifty-five tools are delineated in the text.