Probabilistic Circuit Architecture Using Statistical Learning
The main achievement of this project is the generalization of the probabilistic circuit architecture. In mapping a Markov Random Field (MRF) onto CMOS circuitry, one must fulfill two requirements: first, a bistable storage element for each logic state, and second, a feedback network for belief propagation.
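To make the belief-propagation requirement concrete, here is a minimal sketch of the sum-product computation such a feedback network realizes; the three-node chain, potentials, and variable names are illustrative assumptions, not the project's circuit.

```python
import numpy as np

# Minimal sum-product belief propagation on a 3-node binary chain MRF
# x0 -- x1 -- x2 (structure and potentials are illustrative assumptions).
unary = [np.array([0.9, 0.1]),   # node potentials phi_i(x_i)
         np.array([0.5, 0.5]),
         np.array([0.2, 0.8])]
pair = np.array([[0.8, 0.2],     # shared edge potential psi(x_i, x_j)
                 [0.2, 0.8]])

# Forward and backward messages along the chain.
m_fwd = [np.ones(2)] * 3         # m_fwd[i]: message arriving at x_i from the left
m_fwd[1] = pair.T @ unary[0]
m_fwd[2] = pair.T @ (unary[1] * m_fwd[1])
m_bwd = [np.ones(2)] * 3         # m_bwd[i]: message arriving at x_i from the right
m_bwd[1] = pair @ unary[2]
m_bwd[0] = pair @ (unary[1] * m_bwd[1])

# Belief at each node: local potential times incoming messages, normalized.
for i in range(3):
    b = unary[i] * m_fwd[i] * m_bwd[i]
    print(f"P(x{i}) ~= {b / b.sum()}")
```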
Electrical and Computer Engineering Research Report 2008
In-memory computing with emerging memory devices: Status and outlook
Supporting data for "In-memory computing with emerging memory devices: status and outlook", submitted to APL Machine Learning
Heat Dissipation Bounds for Nanocomputing: Methodology and Applications
Heat dissipation is a critical challenge facing the realization of emerging nanocomputing technologies. This dissipation has several components, one of which comes from the unavoidable cost of implementing logically irreversible operations. This stems from the fact that information is physical, and manipulating it irreversibly requires energy. The unavoidable dissipative cost of irreversible information loss sets the fundamental lower limit on the energy cost of computational strategies that rely on ubiquitous irreversible information processing.
A relation between the amount of irreversible information loss in a circuit and the associated energy dissipation was formulated by Landauer's Principle in a technology-independent form. In a computing circuit, in addition to this information-theoretic dissipation, other physical processes that take place in association with irreversible information loss may also have an unavoidable thermodynamic cost that originates from the structure and operation of the circuit. In conventional CMOS circuits such unavoidable costs constitute only a minute fraction of the total power budget; in nanocircuits, however, they may be of critical significance due to the high density and operating speeds required. Lower bounds on energy, when obtained by considering the irreversible information cost as well as the unavoidable costs associated with the operation of the underlying computing paradigm, may provide insight into the fundamental limitations of emerging technologies. This motivates us to study the problem of determining the heat dissipation of computation in a way that reveals fundamental lower bounds on the energy cost for circuits realized in new computing paradigms.
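Stated explicitly (in our notation, not necessarily the thesis's): if a computation irreversibly loses ΔI bits of information at temperature T, Landauer's Principle bounds the dissipated energy by

```latex
E_{\text{diss}} \;\ge\; \Delta I \, k_B T \ln 2 ,
```

where k_B is Boltzmann's constant; at T = 300 K this floor is roughly 2.9 × 10⁻²¹ J per bit erased.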
In this work, we propose a physical-information-theoretic methodology that enables us to obtain such bounds on the minimum energy requirements of computation for concrete circuits realized within specific paradigms, and we illustrate its application via prominent nanocomputing proposals. We begin by introducing the unavoidable heat dissipation problem and emphasizing the significance of the limitations it imposes on emerging technologies. We present the methodology developed to obtain lower bounds on the unavoidable dissipation cost of computation for nanoelectronic circuits. We demonstrate our methodology by applying it to various non-transistor-based (e.g. QCA) and transistor-based (e.g. NASIC) nanocomputing circuits. We also employ two CMOS circuits in order to provide further insight into the application of our methodology using this well-known conventional paradigm. We expand our methodology to modularize the dissipation analysis for the QCA and NASIC paradigms, and discuss prospects for automation. We also revisit key concepts in the thermodynamics of computation by focusing on the criticisms raised against the validity of Landauer's Principle. We address these arguments and discuss their implications for our methodology. We conclude by elaborating on possible directions in which this work can be extended.
Reliable chip design from low powered unreliable components
The pace of technological improvement in the semiconductor market is driven by Moore's Law, which enables chip transistor density to double every two years. Transistors continue to decline in cost and size, while power density keeps rising. Continued transistor scaling under extremely tight power constraints in modern Very Large Scale Integrated (VLSI) chips can potentially negate the benefits of technology shrinking due to reliability issues. As VLSI technology scales into the nanoscale regime, fundamental physical limits are approached, and higher levels of variability, performance degradation, and higher rates of manufacturing defects are experienced. Soft errors, which traditionally affected only memories, now also degrade logic circuit reliability. A solution to these limitations is to integrate reliability assessment techniques into the Integrated Circuit (IC) design flow. This thesis investigates four aspects of reliability-driven circuit design: a) reliability estimation; b) reliability optimization; c) fault-tolerant techniques; and d) delay degradation analysis.

To guide the reliability-driven synthesis and optimization of combinational circuits, a highly accurate probability-based reliability estimation methodology, christened the Conditional Probabilistic Error Propagation (CPEP) algorithm, is developed to compute the impact of gate failures on the circuit output. CPEP guides the proposed rewriting-based logic optimization algorithm, which employs local transformations. The main idea behind this methodology is to replace parts of the circuit with functionally equivalent but more reliable counterparts chosen from a precomputed subset of Negation-Permutation-Negation (NPN) classes of 4-variable functions. Cut enumeration and Boolean matching, driven by a reliability-aware optimization algorithm, are used to identify the best possible replacement candidates. Experiments on a set of MCNC benchmark circuits and 8051 functional microcontroller units indicate that the proposed framework can achieve up to 75% reduction in output error probability. On average, about 14% Soft Error Rate (SER) reduction is obtained at the expense of a very low area overhead of 6.57%, which results in 13.52% higher power consumption.

The next contribution of the research is a novel methodology to design fault-tolerant circuitry by employing error correction codes, known as the Codeword Prediction Encoder (CPE). Traditional fault-tolerant techniques analyze circuit reliability from a static point of view, neglecting dynamic errors. In the context of communication and storage, the study of novel methods for reliable data transmission over unreliable hardware is an increasing priority. The idea of CPE is adapted from the field of forward error correction for telecommunications, focusing on both encoding aspects and error correction capabilities. The proposed Augmented Encoding solution consists of computing an augmented codeword that contains both the codeword to be transmitted on the channel and extra parity bits. A Computer Aided Development (CAD) framework known as the CPE simulator is developed, providing a unified platform that comprises a novel encoder and fault-tolerant LDPC decoders. Experiments on a set of encoders with different coding rates and different decoders indicate that the proposed framework can correct all errors under specific scenarios. On average, about a 1000× improvement in SER reduction is achieved.
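As a rough illustration of the gate-level error propagation that an estimator like CPEP quantifies (CPEP itself is analytical; this Monte Carlo sketch, its toy netlist, and the failure probability are our assumptions):

```python
import random

EPS = 0.05  # assumed probability that any single gate inverts its output

def nand(a, b, faulty):
    out = 1 - (a & b)
    # A faulty evaluation flips the gate output with probability EPS.
    if faulty and random.random() < EPS:
        out ^= 1
    return out

def circuit(a, b, c, faulty=False):
    # Toy example netlist: y = NAND(NAND(a, b), c)
    n1 = nand(a, b, faulty)
    return nand(n1, c, faulty)

def output_error_probability(trials=100_000):
    # Estimate P(faulty output != golden output) over random input vectors.
    errors = 0
    for _ in range(trials):
        a, b, c = (random.randint(0, 1) for _ in range(3))
        if circuit(a, b, c, faulty=True) != circuit(a, b, c, faulty=False):
            errors += 1
    return errors / trials

if __name__ == "__main__":
    print(f"Estimated output error probability: {output_error_probability():.4f}")
```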
The last part of the research is an Inverse Gaussian Distribution (IGD) based delay model applicable to both combinational and sequential elements in sub-powered circuits. The Probability Density Function (PDF) based delay model accurately captures the delay behavior of all the basic gates in the library database. The IGD model employs these necessary parameters, and its delay estimation accuracy is demonstrated by evaluating multiple circuits. Experimental results indicate that the IGD-based approach matches HSPICE Monte Carlo simulation results closely, with average errors below 1.9% for the 8-bit Ripple Carry Adder (RCA) and below 1.2% for the 8-bit De-Multiplexer (DEMUX) and Multiplexer (MUX).
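For reference, the probability density function of the Inverse Gaussian Distribution with mean μ and shape parameter λ (standard notation, not necessarily the thesis's parameterization of gate delay t) is

```latex
f(t;\mu,\lambda) \;=\; \sqrt{\frac{\lambda}{2\pi t^{3}}}\,
\exp\!\left(-\frac{\lambda\,(t-\mu)^{2}}{2\mu^{2} t}\right),
\qquad t > 0 .
```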
Variability-aware architectures based on hardware redundancy for nanoscale reliable computation
During the last decades, human beings have experienced a significant enhancement in quality of life, thanks in large part to the fast evolution of Integrated Circuits (ICs). This unprecedented technological race, along with its significant economic impact, has been grounded in the production of complex processing systems from highly reliable component devices. However, the fundamental assumption of nearly ideal devices, which held throughout past CMOS technology generations, today seems to be coming to an end. In fact, as MOSFET technology scales into the nanoscale regime, it approaches fundamental physical limits and starts experiencing higher levels of variability, performance degradation, and higher rates of manufacturing defects. On the other hand, ICs with increasing numbers of transistors require a decrease in the failure rate per device in order to maintain overall chip reliability. As a result, the development of circuit architectures capable of providing reliable computation while tolerating high levels of variability and defects is becoming increasingly important.
The main objective of this thesis is to analyze and propose new fault-tolerant architectures based on redundancy for future technologies. Our research is founded on the principles of redundancy established by von Neumann in the 1950s and extends them along three new dimensions:
1. Heterogeneity: Most works on fault-tolerant architectures based on redundancy assume homogeneous variability across the replicas, as in von Neumann's original work. Instead, we explore the possibilities of redundancy when heterogeneity between replicas is taken into account. In this sense, we propose compensating mechanisms that select the weighting of the redundant information so as to maximize overall reliability (a sketch of such reliability-weighted voting follows this list).
2. Asynchrony: Each replica of a redundant system may have a different processing delay due to variability and degradation, especially in future nanotechnologies. If we design the system to work locally in asynchronous mode, then we can consider different voting policies for dealing with the redundant information. Depending on how many replica outputs we collect before taking a decision, we obtain different trade-offs between processing delay and reliability. We propose a mechanism providing these facilities and analyze and simulate its operation.
3. Hierarchy: Finally, we explore the possibilities of redundancy applied at different hierarchy layers of complex processing systems. We propose to distribute redundancy across the various hierarchy layers and analyze the benefits that can be obtained.
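As a hedged illustration of the heterogeneity point above (our sketch, not the thesis's mechanism): when replica i is correct with probability p_i, a classical choice for weighted majority voting over independent voters assigns each replica the log-odds weight w_i = ln(p_i / (1 - p_i)), so more reliable replicas dominate the decision. The same weighting applies whether the votes arrive synchronously or are collected incrementally under an asynchronous policy.

```python
import math

def weighted_vote(bits, reliabilities):
    """Majority decision over replica outputs, weighting each replica
    by the log-odds of its (assumed known) probability of being correct."""
    score = 0.0
    for bit, p in zip(bits, reliabilities):
        w = math.log(p / (1.0 - p))
        score += w if bit == 1 else -w
    return 1 if score > 0 else 0

# Three redundant replicas with heterogeneous reliability (illustrative
# values): the two weaker replicas agree on 0, but the strong replica's
# vote for 1 outweighs them (4.60 vs. 2 * 0.41 in log-odds weight).
outputs = [1, 0, 0]
p = [0.99, 0.60, 0.60]
print(weighted_vote(outputs, p))  # -> 1
```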
Drawing on the scenario of future IC technologies, we push the concept of redundancy to its fullest expression through the study of realistic nano-device architectures. Most of the redundant architectures considered so far do not properly address the era of Terascale Computing and current nanotechnology trends. Since von Neumann first applied redundancy to electronic circuits, effects as common in nanoelectronics as degradation and interconnection failures have never been treated directly from the standpoint of redundancy. In this thesis, we address in a comprehensive manner the reliability of digital processing systems in the upcoming technology generations.
- …