2,129 research outputs found

    Ingress of threshold voltage-triggered hardware trojan in the modern FPGA fabric–detection methodology and mitigation

    The ageing phenomenon of negative bias temperature instability (NBTI) continues to challenge the dynamic thermal management of modern FPGAs. Increased transistor density leads to thermal accumulation and produces higher, non-uniform temperature variations across the FPGA. This aggravates the impact of NBTI on key PMOS transistor parameters such as threshold voltage and drain current. Besides ageing the transistors and progressively reducing FPGA lifetime and reliability, NBTI also threatens FPGA security. The ingress of a threshold voltage-triggered hardware Trojan, a stealthy and malicious electronic circuit, into the modern FPGA is one such threat that could exploit NBTI and severely degrade performance. Developing an effective and efficient countermeasure against it is therefore critical. Accordingly, we present a comprehensive FPGA security scheme, comprising novel elements of hardware Trojan infection, detection, and mitigation, to protect FPGA applications against such Trojans. Built around the threat model of a naval warship's integrated self-protection system (ISPS), we propose a threshold voltage-triggered hardware Trojan that operates in a threshold voltage region of 0.45 V to 0.998 V, consumes ultra-low power (10.5 nW), and remains stealthy with an area overhead as low as 1.5% for a 28 nm technology node. The hardware Trojan detection sub-scheme provides a lightweight threshold voltage-aware sensor with a detection sensitivity of 0.251 mV/nA. With fixed and dynamic ring oscillator-based sensor segments, it precisely measures frequency and delay variations in response to shifts in the threshold voltage of a PMOS transistor. Finally, the FPGA security scheme is reinforced with online transistor dynamic scaling (OTDS) to mitigate the impact of the hardware Trojan through run-time tolerant circuitry capable of identifying critical gates with worst-case drain current degradation.
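
    The detection idea above relies on ring oscillator frequency shifting as the PMOS threshold voltage shifts. As a rough illustration of that relationship only (not the paper's sensor model), the sketch below uses the generic alpha-power-law delay model; the supply voltage, alpha exponent and nominal frequency are assumed values.

        # Illustrative sketch: how a PMOS threshold voltage shift (from NBTI or a
        # Trojan trigger) changes ring oscillator frequency under the generic
        # alpha-power-law delay model. All parameter values are assumptions.
        VDD = 1.0        # supply voltage (V), assumed
        ALPHA = 1.3      # velocity-saturation index, typical deep-submicron value
        F_NOM = 1.0e9    # nominal ring oscillator frequency (Hz), assumed

        def ro_frequency(vth, vth_nom=0.45):
            # Stage delay ~ VDD / (VDD - Vth)**ALPHA; frequency is its inverse,
            # so it scales with ((VDD - vth) / (VDD - vth_nom))**ALPHA.
            return F_NOM * ((VDD - vth) / (VDD - vth_nom)) ** ALPHA

        for vth in (0.45, 0.55, 0.65, 0.75):
            print(f"Vth = {vth:.2f} V -> f = {ro_frequency(vth) / 1e6:.1f} MHz")

    The monotonic frequency drop with rising Vth is what a fixed/dynamic ring oscillator sensor pair can measure to flag a threshold-voltage anomaly.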

    An Artificial Neural Networks based Temperature Prediction Framework for Network-on-Chip based Multicore Platform

    Continuous improvement in silicon process technologies has made it possible to integrate hundreds of cores on a single chip. However, power and heat have become dominant constraints in designing these massive multicore chips, causing reliability issues, timing variations and reduced chip lifetime. Dynamic Thermal Management (DTM) is a solution to avoid high temperatures on the die. Typical DTM schemes only address core-level thermal issues. However, the Network-on-Chip (NoC) paradigm, which has emerged as an enabling methodology for integrating hundreds to thousands of cores on the same die, can also contribute significantly to the thermal issues. Moreover, typical DTM is triggered reactively based on temperature measurements from on-chip thermal sensors, requiring long reaction times, whereas predictive DTM methods estimate future temperature in advance, eliminating the chance of temperature overshoot. Artificial Neural Networks (ANNs) have been used in various domains for modeling and prediction with high accuracy due to their ability to learn and adapt. This thesis concentrates on designing an ANN prediction engine to predict the thermal profile of the cores and Network-on-Chip elements of the chip. This thermal profile is then used by a predictive DTM that combines both core-level and network-level DTM techniques. An on-chip wireless interconnect, recently envisioned to enable energy-efficient data exchange between cores in a multicore environment, is used to provide a broadcast-capable medium to efficiently distribute the thermal control messages that trigger and manage the DTM schemes.
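
    As an illustration of this kind of ANN prediction engine (the thesis's actual features, history window and network architecture are not given here, so everything below, including the synthetic data, is an assumed toy setup), a one-step-ahead temperature predictor can be trained on a per-core history of temperature and power samples:

        # Toy sketch: predict next-step core temperature from a short window of
        # past temperature and power samples. Data, window size and network
        # shape are illustrative assumptions.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        power = rng.uniform(0.5, 3.0, size=2000)                     # W, synthetic
        temp = 45.0 + 10 * np.convolve(power, np.ones(8) / 8, mode="same") \
               + rng.normal(0, 0.2, size=2000)                       # degrees C

        WINDOW = 4  # past samples of (temperature, power) used as features
        X = np.array([np.r_[temp[i - WINDOW:i], power[i - WINDOW:i]]
                      for i in range(WINDOW, len(temp) - 1)])
        y = temp[WINDOW + 1:]                                        # one step ahead

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(16,),
                                           max_iter=2000, random_state=0))
        model.fit(X[:1500], y[:1500])
        print("held-out R^2:", model.score(X[1500:], y[1500:]))

    A predictive DTM can then act on the forecast (e.g., throttle a core or reroute NoC traffic) before the measured temperature actually overshoots.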

    The Thermal-Constrained Real-Time Systems Design on Multi-Core Platforms -- An Analytical Approach

    Over the past decades, shrinking transistor sizes, enabled by advances in IC technology, have allowed more and more transistors to be integrated into an IC chip to achieve ever higher computing performance. However, the semiconductor industry is now reaching a saturation point of Moore's Law, largely due to soaring power consumption and heat dissipation, among other factors. High chip temperature not only significantly increases packaging/cooling cost, degrades system performance and reliability, and increases energy consumption, but can even damage the chip permanently. Although designing 2D and even 3D multi-core processors helps to lower the power/thermal barrier of single-core architectures by exploiting thread/process-level parallelism, the higher power density and longer heat removal path have made the thermal problem substantially more challenging, surpassing the heat dissipation capability of traditional cooling mechanisms such as cooling fans, heat sinks and heat spreaders in the design of new generations of computing systems. As a result, dynamic thermal management (DTM), i.e. controlling the thermal behavior by dynamically varying computing performance and workload allocation on an IC chip, has been well recognized as an effective strategy to deal with these thermal challenges. Different from many existing DTM heuristics that are based on simple intuitions, we seek to address the thermal problems through a rigorous analytical approach, to achieve the high predictability required in real-time system design. In this regard, we make a number of important contributions. First, we develop a series of lemmas and theorems that are general enough to uncover the fundamental principles and characteristics of the thermal model, peak temperature identification and peak temperature reduction, which are key to thermal-constrained real-time computer system design. Second, we develop a design-time frequency and voltage oscillating approach for multi-core platforms, which can greatly enhance system throughput and service capacity.
Third, different from the traditional workload-balancing approach, we develop a thermal-balancing approach that can substantially improve energy efficiency and task partitioning feasibility, especially when system utilization is high or the temperature constraint is tight. The significance of our research is that not only do our proposed algorithms for throughput maximization and energy conservation significantly outperform existing work, as demonstrated in extensive experimental results, but the theoretical results are also general enough to benefit other thermal-related research.
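
    For context, analytical DTM work of this kind typically builds on a lumped RC thermal model; the sketch below shows only that standard form, not the thesis's exact model or notation.

        \[
          C\,\frac{dT(t)}{dt} \;=\; P(t) \;-\; \frac{T(t)-T_{\mathrm{amb}}}{R},
          \qquad
          T_{\mathrm{ss}} \;=\; T_{\mathrm{amb}} + R\,P,
          \qquad
          P \;\approx\; c_{\mathrm{eff}}\,V^{2} f + P_{\mathrm{leak}}(T)
        \]

    A temperature cap T_max then bounds the sustainable power, R*P <= T_max - T_amb, which is what couples the frequency/voltage schedule and workload allocation to the thermal constraint.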

    Ultra Low Power Analog Circuits for Wireless Sensor Node System.

    This thesis discusses essential analog circuit blocks required in ultra-low-power wireless sensor node systems. A wireless sensor network system requires very high energy and power efficiency, which is difficult to achieve with traditional analog circuits. First, a 5.58 nW real-time clock using a DLL (delay-locked loop)-assisted pulse-driven crystal oscillator is discussed. In this circuit, the operational amplifier used in the traditional circuit is replaced with pulsed drivers, with the pulses generated at precise times by a DLL. The circuit blocks operate at different supply levels, generated on chip by a switched-capacitor network. The circuit was tested at different supply voltages and temperatures; its frequency characteristics and power consumption were measured and compared with the traditional circuit. Next, a Schmitt trigger-based pulse-driven crystal oscillator is discussed. In the first chapter, a DLL was used to generate a pulse with precise timing; however, testing results and a recent study showed that the crystal oscillator can sustain oscillation even with inaccurate pulse timing. In this chapter, the pulse location is determined by the Schmitt trigger. Simulation results show that this structure can still sustain oscillation across process corners and temperatures. In the next chapter, a sub-nW 8-bit SAR ADC (successive approximation analog-to-digital converter) using a transistor-stack DAC (digital-to-analog converter) is discussed. To reduce design effort and layout-dependent effects, the conventional capacitive DAC is replaced with a transistor-stack DAC driven by a 255:1 multiplexer. The control logic is designed with both TSPC (true single-phase clock) and CMOS logic to minimize transistor count. The ADC was implemented in a 65 nm CMOS process and tested at different sampling rates and input signal frequencies; its linearity and power consumption were measured. A similar design was also implemented and tested in a 180 nm CMOS process as part of a sensor node system. Lastly, a multiple-output-level voltage regulator using a switched-capacitor network for low-cost systems is discussed.
    PhD, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/111626/1/dmyoon_1.pd
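
    The SAR ADC mentioned above performs a binary search over DAC codes. The sketch below shows only that generic successive-approximation loop; the thesis's transistor-stack DAC, 255:1 multiplexer and TSPC control logic are not modeled.

        # Generic 8-bit successive-approximation search (illustrative only).
        def sar_convert(vin, vref=1.0, bits=8):
            """Return the digital code whose DAC voltage best approximates vin."""
            code = 0
            for bit in reversed(range(bits)):          # MSB first
                trial = code | (1 << bit)              # tentatively set this bit
                vdac = vref * trial / (1 << bits)      # DAC output for the trial code
                if vin >= vdac:                        # comparator decision
                    code = trial                       # keep the bit
            return code

        print(sar_convert(0.37))   # -> 94, since 0.37 V / (1.0 V / 256) = 94.7

    Each conversion thus needs only one comparator decision per bit, which is what keeps the digital control (and hence the power) of a SAR ADC so small.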

    Design-for-delay-testability techniques for high-speed digital circuits

    The importance of delay faults grows with the ever-increasing clock rates and shrinking geometries of today's circuits. This thesis focuses on the development of Design-for-Delay-Testability (DfDT) techniques for high-speed circuits and embedded cores. The rising costs of IC testing, and in particular the costs of automatic test equipment, are major concerns for the semiconductor industry. To reverse the trend of rising test costs, DfDT is becoming more and more important.

    Hierarchical Agent-based Adaptation for Self-Aware Embedded Computing Systems

    Transferred from the Doria repository.

    Adaptive Knobs for Resource Efficient Computing

    Performance demands of emerging domains such as artificial intelligence, machine learning and vision, and the Internet of Things continue to grow. Meeting such requirements on modern multi/many-core systems with higher power densities, fixed power and energy budgets, and thermal constraints exacerbates the run-time management challenge. This leaves an open problem: extracting the required performance within the power and energy limits while also ensuring thermal safety. Existing architectural solutions, including asymmetric and heterogeneous cores and custom acceleration, improve performance-per-watt in specific design-time and static scenarios. However, satisfying applications' performance requirements under dynamic and unknown workload scenarios, subject to varying system dynamics of power, temperature and energy, requires intelligent run-time management. Adaptive strategies are necessary for maximizing resource efficiency, considering i) the diverse requirements and characteristics of concurrent applications, ii) dynamic workload variation, iii) core-level heterogeneity and iv) power, thermal and energy constraints. This dissertation proposes such adaptive techniques for efficient run-time resource management to maximize performance within fixed budgets under unknown and dynamic workload scenarios. The resource management strategies proposed in this dissertation comprehensively consider application and workload characteristics and the variable effect of power actuation on performance to make proactive and appropriate allocation decisions. Specific contributions include i) a run-time mapping approach to improve power budgets for higher throughput, ii) thermal-aware performance boosting for efficient utilization of the power budget and higher performance, iii) approximation as a run-time knob exploiting accuracy-performance trade-offs to maximize performance under power caps with minimal loss of accuracy, and iv) coordinated approximation for heterogeneous systems through joint actuation of dynamic approximation and power knobs for performance guarantees with minimal power consumption. The approaches presented in this dissertation focus on adapting existing mapping techniques, performance boosting strategies, and software and dynamic approximations to meet performance requirements while simultaneously considering system constraints. The proposed strategies are compared against relevant state-of-the-art run-time management frameworks to qualitatively evaluate their efficacy.
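
    As a toy illustration of approximation as a run-time knob under a power cap (the names, thresholds and control policy below are assumptions for illustration, not the dissertation's framework), a per-epoch controller can spend the power budget first and dial up approximation only when the performance target is still not met:

        # Illustrative knob controller: frequency within the power cap first,
        # accuracy traded for performance only as a last resort.
        APPROX_LEVELS = [0, 1, 2, 3]          # 0 = exact, 3 = most approximate

        def control_step(perf, power, perf_target, power_cap, freq, approx):
            if power > power_cap:
                freq *= 0.9                               # back off to respect the cap
            elif perf < perf_target:
                if power < 0.95 * power_cap:
                    freq *= 1.05                          # headroom left: boost clock
                elif approx < max(APPROX_LEVELS):
                    approx += 1                           # cap reached: approximate more
            elif approx > 0:
                approx -= 1                               # target met: restore accuracy
            return freq, approx

        freq, approx = control_step(perf=0.8, power=0.96, perf_target=1.0,
                                    power_cap=1.0, freq=1.0e9, approx=0)
        print(freq, approx)   # near the cap and behind target -> approx raised to 1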

    Wide-Supply-Range All-Digital Leakage Variation Sensor for On-Chip Process and Temperature Monitoring

    Variation in process, voltage and temperature is a major obstacle to achieving energy-efficient operation of LSI. This paper proposes an all-digital on-chip circuit that monitors leakage current variations of the nMOSFET and pMOSFET independently. Since leakage current is highly sensitive to threshold voltage and temperature, the circuit is suitable for tracking process and temperature variation. The circuit uses reconfigurable inhomogeneity to obtain statistical properties from a single monitor instance, and a compact reconfigurable inverter topology is proposed to implement it. The compact and digital nature of the inverter enables cell-based design, which reduces design cost. Measurement results from a 65 nm test chip demonstrate the validity of the proposed circuit. For a sample size of 124 for both the nMOSFET and pMOSFET, the monitor area is 4500 Όm2 and the active power consumption is 76 nW at 0.8 V operation. The proposed technique enables area-efficient and low-cost implementation and can therefore be used in product chips for applications such as dynamic energy and thermal management, testing and post-silicon tuning.
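
    The monitor's premise, that leakage is a sharp probe of threshold voltage and temperature, follows from the standard subthreshold current model (textbook background, not an equation taken from the paper):

        \[
          I_{\mathrm{sub}} \;=\; I_{0}\,
            \exp\!\left(\frac{V_{GS}-V_{th}}{n\,v_{T}}\right)
            \left(1 - \exp\!\left(-\frac{V_{DS}}{v_{T}}\right)\right),
          \qquad v_{T} = \frac{kT}{q}
        \]

    Leakage therefore grows exponentially as V_th drops with process variation and rises steeply with temperature through v_T and the temperature dependence of V_th, which is what makes a leakage monitor an effective process and temperature sensor.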