235 research outputs found

    Reliability and security in low power circuits and systems

    With the massive deployment of mobile devices in sensitive areas such as healthcare and defense, hardware reliability and security have become hot research topics in recent years. These topics, although different in definition, are usually correlated. This dissertation offers an in-depth treatment of enhancing the reliability and security of low power circuits and systems. The first part of the dissertation deals with the reliability of sub-threshold designs, which use a supply voltage lower than the threshold voltage (Vth) of the transistors to reduce power. The exponential relationship between delay and Vth significantly jeopardizes their reliability through process-variation-induced timing violations. To address this problem, this dissertation proposes a novel selective body biasing scheme. In the first work, the selective body biasing problem is formulated as a linearly constrained statistical optimization model, and the adaptive filtering concept is borrowed from the signal processing community to develop an efficient solution. However, since the adaptive filtering algorithm lacks theoretical justification and a guaranteed convergence rate, the second work proposes a new approach based on semi-infinite programming with incremental hypercubic sampling, which demonstrates better solution quality with shorter runtime. The second part deals with the security of low power crypto-processors equipped with Random Dynamic Voltage Scaling (RDVS) in the presence of Correlation Power Analysis (CPA) attacks. This dissertation first demonstrates that the resistance of RDVS to CPA can be undermined by lowering the power supply voltage. Then, an alarm circuit is proposed to resist this attack. However, the alarm circuit can lead to potential denial of service through noise-triggered false alarms. A non-zero-sum game model is therefore formulated and its Nash equilibria are analyzed --Abstract, page iii
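The CPA attack at the center of the second part can be illustrated compactly. Below is a minimal sketch, assuming a simple Hamming-weight leakage model on the first key addition and synthetic single-byte traces; the dissertation's actual setting, with an RDVS-protected crypto-processor and a lowered supply voltage, is not modeled here.

```python
# A minimal sketch of Correlation Power Analysis (CPA), assuming a
# Hamming-weight leakage model on the first key addition; the S-box,
# RDVS countermeasure and supply-voltage manipulation from the
# dissertation are intentionally left out.
import numpy as np

def hamming_weight(x):
    # Popcount of each byte in a uint8 array.
    return np.unpackbits(x[:, None], axis=1).sum(axis=1)

def cpa_recover_byte(plaintexts, traces):
    """plaintexts: (N,) uint8; traces: (N, T) power samples."""
    tr_c = traces - traces.mean(axis=0)
    tr_norm = np.linalg.norm(tr_c, axis=0)
    best = (0, 0.0)
    for guess in range(256):
        # Hypothetical leakage under this key guess.
        hyp = hamming_weight(plaintexts ^ guess).astype(float)
        hyp_c = hyp - hyp.mean()
        # Pearson correlation against every time sample at once.
        corr = hyp_c @ tr_c / (np.linalg.norm(hyp_c) * tr_norm)
        peak = np.abs(corr).max()
        if peak > best[1]:
            best = (guess, peak)
    return best

# Synthetic demo: every trace leaks HW(pt ^ 0x3A) plus Gaussian noise.
rng = np.random.default_rng(0)
pts = rng.integers(0, 256, 2000, dtype=np.uint8)
traces = hamming_weight(pts ^ 0x3A)[:, None] + rng.normal(0, 1.0, (2000, 5))
print(hex(cpa_recover_byte(pts, traces)[0]))  # expect 0x3a
```

On clean traces like these, the guess with the highest correlation peak recovers the key byte; RDVS counters this by randomizing the supply voltage so that the correlation is smeared across voltage levels, which is exactly the resistance the dissertation shows can be undermined.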

    Chapter One – An Overview of Architecture-Level Power- and Energy-Efficient Design Techniques

    Power dissipation and energy consumption have become the primary design constraints for almost all computer systems over the last 15 years. Both computer architects and circuit designers intend to reduce power and energy (without performance degradation) at all design levels, as these are currently the main obstacles to further scaling according to Moore's law. The aim of this survey is to provide a comprehensive overview of state-of-the-art power- and energy-efficient techniques. We classify techniques by the component they apply to, which is the most natural way from a designer's point of view. We further divide the techniques by the component of power/energy they optimize (static or dynamic), covering in that way the complete low-power design flow at the architectural level. In the end, we conclude that only a holistic approach that assumes optimizations at all design levels can lead to significant savings.
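The static/dynamic split used in this classification follows the standard first-order CMOS power model, restated here for reference:

```latex
% Total power as the sum of the dynamic (switching) and static (leakage)
% components: \alpha is the activity factor, C_{\text{eff}} the switched
% capacitance, f the clock frequency and I_{\text{leak}} the leakage current.
P_{\text{total}} \approx
  \underbrace{\alpha\, C_{\text{eff}}\, V_{dd}^{2}\, f}_{\text{dynamic}}
  \;+\;
  \underbrace{V_{dd}\, I_{\text{leak}}}_{\text{static}}
```

Techniques such as clock gating and DVFS attack the first term, while power gating and body biasing attack the second.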

    Design Techniques for Energy-Quality Scalable Digital Systems

    Energy efficiency is one of the key design goals in modern computing. Increasingly complex tasks are being executed on mobile devices and Internet of Things end-nodes, which are expected to operate for long time intervals, on the order of months or years, with the limited energy budgets provided by small form-factor batteries. Fortunately, many such tasks are error resilient, meaning that they can tolerate some relaxation in the accuracy, precision or reliability of internal operations without a significant impact on the overall output quality. The error resilience of an application may derive from a number of factors. The processing of analog sensor inputs measuring quantities from the physical world may not always require maximum precision, as the amount of information that can be extracted is limited by the presence of external noise. Outputs destined for human consumption may also contain small or occasional errors, thanks to the limited capabilities of our vision and hearing systems. Finally, some computational patterns commonly found in domains such as statistics, machine learning and operational research naturally tend to reduce or eliminate errors. Energy-Quality (EQ) scalable digital systems systematically trade off the quality of computations for energy efficiency, by relaxing the precision, the accuracy, or the reliability of internal software and hardware components in exchange for energy reductions. This design paradigm is believed to offer one of the most promising solutions to the pressing need for low-energy computing. Despite these high expectations, the current state of the art in EQ scalable design suffers from important shortcomings. First, the great majority of techniques proposed in the literature focus only on processing hardware and software components. Nonetheless, for many real devices, processing contributes only a small portion of the total energy consumption, which is dominated by other components (e.g. I/O, memory or data transfers). Second, in order to fulfill its promises and become widespread in commercial devices, EQ scalable design needs to achieve industrial-level maturity. This involves moving from purely academic research based on high-level models and theoretical assumptions to engineered flows compatible with existing industry standards. Third, the time-varying nature of error tolerance, both among different applications and within a single task, should become more central in the proposed design methods. This involves designing “dynamic” systems in which the precision or reliability of operations (and consequently their energy consumption) can be dynamically tuned at runtime, rather than “static” solutions, in which the output quality is fixed at design time. This thesis introduces several new EQ scalable design techniques for digital systems that take the previous observations into account. Besides processing, the proposed methods apply the principles of EQ scalable design also to interconnects and peripherals, which are often relevant contributors to the total energy in sensor nodes and mobile systems respectively. Regardless of the target component, the presented techniques pay special attention to the accurate evaluation of the benefits and overheads deriving from EQ scalability, using industrial-level models, and to the integration with existing standard tools and protocols. Moreover, all the works presented in this thesis allow the dynamic reconfiguration of output quality and energy consumption.
    More specifically, the contribution of this thesis is divided into three parts. In a first body of work, the design of EQ scalable modules for processing hardware data paths is considered. Three design flows are presented, targeting different technologies and exploiting different ways to achieve EQ scalability, i.e. timing-induced errors and precision reduction. These works are inspired by previous approaches from the literature, namely Reduced-Precision Redundancy and Dynamic Accuracy Scaling, which are re-thought to make them compatible with standard Electronic Design Automation (EDA) tools and flows, providing solutions to overcome their main limitations. The second part of the thesis investigates the application of EQ scalable design to serial interconnects, which are the de facto standard for data exchanges between processing hardware and sensors. In this context, two novel bus encodings, called Approximate Differential Encoding and Serial-T0, are proposed, which exploit the statistical characteristics of data produced by sensors to reduce the energy consumption on the bus at the cost of controlled data approximations. The two techniques achieve different results for data of different origins, but share the common features of allowing runtime reconfiguration of the allowed error and of being compatible with standard serial bus protocols. Finally, the last part of the manuscript is devoted to the application of EQ scalable design principles to displays, which are often among the most energy-hungry components in mobile systems. The two proposals in this context leverage the emissive nature of Organic Light-Emitting Diode (OLED) displays to save energy by altering the displayed image, thus inducing an output quality reduction that depends on the amount of such alteration. The first technique implements an image-adaptive form of brightness scaling, whose outputs are optimized in terms of the balance between power consumption and similarity with the input. The second approach achieves concurrent power reduction and image enhancement by means of an adaptive polynomial transformation. Both solutions focus on minimizing the overheads associated with a real-time implementation of the transformations in software or hardware, so that these do not offset the savings in the display. For each of these three topics, results show that the aforementioned goal of building EQ scalable systems compatible with existing best practices and mature enough to be integrated in commercial devices can be effectively achieved. Moreover, they also show that very simple and similar principles can be applied to design EQ scalable versions of different system components (processing, peripherals and I/O), and to equip these components with knobs for the runtime reconfiguration of the energy versus quality tradeoff.
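To make the interconnect part concrete, here is a hedged sketch in the spirit of an approximate differential bus encoding: consecutive sensor samples are usually close, so a short signed delta frame suffices most of the time, and clamping the delta trades a bounded, runtime-tunable error for more short frames. The frame format, the `delta_bits` width and the `max_err` knob are illustrative assumptions, not the thesis's actual Approximate Differential Encoding or Serial-T0 definitions.

```python
# Illustrative approximate differential encoder/decoder for a serial bus.
# Not the thesis's scheme: frame layout and the error knob are assumptions.

def encode(samples, delta_bits=4, max_err=0):
    lim = (1 << (delta_bits - 1)) - 1        # largest signed delta payload
    prev = 0                                  # decoder-visible last value
    for s in samples:
        d = s - prev
        if abs(d) <= lim + max_err:           # close enough: delta frame
            d = max(-lim, min(lim, d))        # clamp -> error <= max_err
            prev += d
            yield (True, d)                   # short frame on the bus
        else:
            prev = s
            yield (False, s)                  # exact full-width frame

def decode(frames):
    prev = 0
    for is_delta, payload in frames:
        prev = prev + payload if is_delta else payload
        yield prev

samples = [100, 103, 104, 112, 40, 42]
print(list(decode(encode(samples, delta_bits=4, max_err=2))))
# [100, 103, 104, 111, 40, 42]: 112 is off by 1 (clamped), 40 re-sent in full
```

Because the encoder tracks the decoder's reconstructed value in `prev`, clamping errors stay bounded by `max_err` per sample instead of accumulating, and setting `max_err = 0` makes the scheme lossless, which mirrors the runtime error-reconfiguration knob described in the abstract.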

    Leakage Power Reduction for Deeply-Scaled FinFET Circuits Operating in Multiple Voltage Regimes Using Fine-Grained Gate-Length Biasing Technique

    With the aggressive downscaling of process technologies and the importance of battery-powered systems, reducing leakage power consumption has become one of the most crucial design challenges for IC designers. This paper presents a device-circuit cross-layer framework that utilizes fine-grained gate-length biased FinFETs for circuit leakage power reduction in the near- and super-threshold operation regimes. The impact of Gate-Length Biasing (GLB) on circuit speed and leakage power is first studied using one of the most advanced technology nodes, a 7nm FinFET technology. Then multiple standard cell libraries using different leakage reduction techniques, such as GLB and Dual-VT, are built in multiple operating regimes at this technology node. It is demonstrated that, compared to Dual-VT, GLB is a more suitable technique for the advanced 7nm FinFET technology due to its capability of delivering a finer-grained trade-off between leakage power and circuit speed, not to mention its lower manufacturing cost. The circuit synthesis results of a variety of ISCAS benchmark circuits using the presented GLB 7nm FinFET cell libraries show up to 70% leakage improvement with zero degradation in circuit speed in the near- and super-threshold regimes, compared to the standard 7nm FinFET cell library.
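The core trade-off GLB exploits, that a longer gate length buys lower leakage at the price of extra delay, can be sketched as a slack-driven cell swap. This is a minimal sketch under stated assumptions: the per-cell numbers and the single-pass greedy order are illustrative, and a real flow like the paper's builds actual 7nm libraries and re-times the design after each swap.

```python
# Slack-driven sketch of fine-grained gate-length biasing (GLB): swap
# cells with timing slack to a longer-gate, lower-leakage variant as
# long as the extra delay fits in the slack. Illustrative only; it does
# not update downstream slacks after a swap as a real timer would.

def apply_glb(cells):
    """cells: dicts with 'slack', 'delay_penalty' (> 0), 'leak_saving'.
    Returns total leakage saved; marks chosen cells with 'biased'."""
    saved = 0.0
    # Best saving per unit of delay penalty first.
    for c in sorted(cells, key=lambda c: c['leak_saving'] / c['delay_penalty'],
                    reverse=True):
        if c['slack'] >= c['delay_penalty']:   # no timing violation allowed
            c['biased'] = True
            c['slack'] -= c['delay_penalty']
            saved += c['leak_saving']
    return saved

cells = [{'slack': 5.0, 'delay_penalty': 2.0, 'leak_saving': 8.0},
         {'slack': 0.5, 'delay_penalty': 2.0, 'leak_saving': 9.0},  # near-critical
         {'slack': 9.0, 'delay_penalty': 1.0, 'leak_saving': 3.0}]
print(apply_glb(cells))  # 11.0: the near-critical cell stays unbiased
```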

    Exploiting Adaptive Techniques to Improve Processor Energy Efficiency

    Rapid device miniaturization continues to pose challenges in building energy efficient microprocessors. As transistor sizes keep decreasing, more uncertainties emerge in their operation. On the other hand, integrating more and more transistors on a single chip accentuates the need to lower the supply voltage. This dissertation investigates one of the primary device uncertainties, timing errors, which become a microprocessor performance bottleneck in the near-threshold computing (NTC) era. It then proposes various innovative techniques to maintain processor energy efficiency in the context of these emerging challenges. Evaluated with a cross-layer methodology, the proposed approaches achieve substantial improvements in processor energy efficiency compared to other state-of-the-art techniques

    Low Power Memory/Memristor Devices and Systems

    This reprint focuses on achieving low-power computation using memristive devices. It is designed as a convenient reference point: it contains a mix of techniques, starting from the fundamental manufacturing of memristive devices all the way to applications such as physically unclonable functions, and also covers perspectives on, e.g., in-memory computing, which is inextricably linked with emerging memory devices such as memristors. Finally, the reprint contains a few articles representing how other communities (from typical CMOS design to photonics) are fighting on their own fronts in the quest towards low-power computation, as a comparison with the memristor literature. We hope that readers will enjoy discovering the articles within

    Optimizing SIMD execution in HW/SW co-designed processors

    SIMD accelerators are ubiquitous in microprocessors from different computing domains. Their high compute power and hardware simplicity improve overall performance in an energy efficient manner. Moreover, their replicated functional units and simple control mechanism make them amenable to scaling to higher vector lengths. However, code generation for these accelerators has been a challenge from the days of their inception. Compilers generate vector code conservatively to ensure correctness. As a result they lose significant vectorization opportunities and fail to extract maximum benefit from SIMD accelerators. This thesis proposes to vectorize the program binary at runtime in a speculative manner, in addition to compile-time static vectorization. Different environments provide the runtime profiling and optimization support required for dynamic vectorization, the most prominent ones being: 1) Dynamic Binary Translators and Optimizers (DBTO) and 2) Hardware/Software (HW/SW) Co-designed Processors. A HW/SW co-designed environment provides several advantages over DBTOs, such as transparent incorporation of new hardware features and binary compatibility. Therefore, we use a HW/SW co-designed environment to assess the potential of speculative dynamic vectorization. Furthermore, we analyze vector code generation for wider vector units and find that, even though SIMD accelerators are amenable to scaling from the hardware point of view, vector code generation at higher vector lengths is even more challenging. The two major factors impeding vectorization for wider SIMD units are: 1) reduced dynamic instruction stream coverage for vectorization and 2) a large number of permutation instructions. To solve the first problem we propose Variable Length Vectorization, which iteratively vectorizes for multiple vector lengths to improve dynamic instruction stream coverage. Secondly, to reduce the number of permutation instructions we propose Selective Writing, which selectively writes to different parts of a vector register and avoids permutations. Finally, we tackle the problem of leakage energy in SIMD accelerators. Since SIMD accelerators consume a significant amount of real estate on the chip, they become a principal source of leakage if not utilized judiciously. Power gating is one of the most widely used techniques to reduce the leakage energy of functional units. However, power gating has its own energy and performance overheads. We propose to selectively devectorize the vector code when the higher SIMD lanes are used intermittently. This selective devectorization keeps the higher SIMD lanes idle and power gated for the maximum duration, resulting in overall leakage energy reduction.
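The idea behind Variable Length Vectorization can be sketched with a toy loop: instead of vectorizing only at the widest width and leaving a long scalar remainder, the iteration space is covered with progressively narrower vector operations. In this minimal sketch, numpy slices stand in for SIMD instructions of the corresponding width; the thesis applies the idea speculatively to program binaries inside a HW/SW co-designed translator, which this toy does not model.

```python
# Cover an iteration space with progressively narrower "vector ops"
# (numpy slices standing in for SIMD instructions of each width).
import numpy as np

def vadd_variable_length(a, b, widths=(8, 4, 2)):
    out = np.empty_like(a)
    i, n = 0, len(a)
    for w in widths:                              # widest vectors first
        while n - i >= w:
            out[i:i + w] = a[i:i + w] + b[i:i + w]  # one w-wide vector op
            i += w
    while i < n:                                  # scalar leftovers, if any
        out[i] = a[i] + b[i]
        i += 1
    return out

x = np.arange(15)
print(vadd_variable_length(x, x))  # covered as 8 + 4 + 2 + 1 scalar
```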

    Manufacturability Aware Design

    The aim of this work is to provide solutions that optimize the tradeoffs among design, manufacturability, and cost of ownership posed by technology scaling and sub-wavelength lithography. These solutions may take the form of robust circuit designs, cost-effective resolution technologies, accurate modeling considering process variations, and design rules assessment. We first establish a framework for assessing the impact of process variation on circuit performance, product value and return on investment for alternative processes. Key features include comprehensive modeling and distinct handling of die-to-die and within-die variation, accurate models of correlations of variation, realistic and quantified projection to future process nodes, and performance sensitivity analysis with respect to improved control of individual device parameters and variation sources. Then we describe a novel minimum cost of correction methodology which determines the level of correction of each layout feature such that the prescribed parametric yield is attained with minimum RET (Resolution Enhancement Technology) cost. This timing-driven OPC (Optical Proximity Correction) insertion flow uses a mathematical-programming-based slack budgeting algorithm to determine the OPC level for all polysilicon gate geometries. Designs adopting this methodology show up to 20% MEBES (Manufacturing Electron Beam Exposure System) data volume reduction and 39% OPC runtime improvement. When systematic correction residual errors become unavoidable, we analyze their impact on a state-of-the-art microprocessor's speedpath skew. A platform is created for diagnosing and improving OPC quality on gates with specific functionality, such as critical gates or matching transistors. Significant changes in full-chip timing analysis indicate the necessity of a post-OPC performance verification design flow. Finally, we quantify the performance, manufacturability and mask cost impact of globally applying several common restrictive design rules. Novel approaches such as locally adapting FDRs (flexible design rules) based on image parameter ranges, and DRC Plus (preferred design rule enforcement with 2D pattern matching) are also described.
    Ph.D. dissertation, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/57676/2/jiey_1.pd
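The slack budgeting step admits a compact mathematical-programming sketch; the symbols below are illustrative assumptions, not the dissertation's exact formulation:

```latex
% Choose an OPC level x_i for every polysilicon gate geometry i so that
% total RET cost is minimal while every timing path p meets its slack
% budget S_p at the prescribed parametric yield. Higher levels cost more
% (mask data volume, runtime) but contribute less delay penalty d_i(x_i).
\min_{x} \; \sum_{i} c_i(x_i)
\quad \text{s.t.} \quad
\sum_{i \in p} d_i(x_i) \;\le\; S_p \quad \forall \text{ paths } p,
\qquad x_i \in \{1, \dots, L\}
```

Budgeting slack across paths this way is what lets non-critical geometries receive cheap, low-accuracy correction, which is consistent with the reported MEBES data volume and OPC runtime reductions.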

    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. It introduces the most prominent reliability concerns from today's point of view and roughly recapitulates the progress in the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or system level alone, the focus of this book is to deal with reliability challenges across different levels, starting from the physical level all the way to the system level (cross-layer approaches). The book aims at demonstrating how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradations such as transistor aging, process variation, temperature effects, soft errors, etc. It provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that leverage reliability through techniques that are pro-actively designed with respect to techniques at other layers; and explains run-time adaptation and concepts/means of self-organization, in order to achieve error resiliency in complex, future many-core systems