
    Design of variability compensation architectures of digital circuits with adaptive body bias

    The most critical concern in circuit design is to achieve a high level of performance under a very tight power constraint. As high-performance circuits have moved beyond the 45nm technology node, one of the major issues is parameter variation, i.e. the deviation of process, voltage and temperature (PVT) values from nominal specifications. A key process parameter subject to variation is the transistor threshold voltage (Vth), which impacts two important metrics: frequency and leakage power. Although the degradation can be compensated by a worst-case, over-design approach, this induces considerable power and performance overhead, which is undesirable in tightly constrained designs. Dynamic voltage scaling (DVS) is a more power-efficient approach, but its coarse granularity makes it difficult to handle fine-grained variations. These factors have contributed to the growing interest in power-aware robust circuit design. We propose a variability compensation architecture with adaptive body bias for low-power applications in 28nm FDSOI technology. The approach is based on the dynamic prediction and prevention of possible circuit timing errors. Our proposal uses a canary logic technique that enables typical-case design. The body bias generation is based on a DLL-type method that uses an external reference generator and a voltage-controlled delay line (VCDL) to generate the forward body bias (FBB) control signals. The adaptive technique dynamically detects and corrects path failures in digital designs caused by PVT variations. Instead of tuning the supply voltage, the key idea is to tune the body bias voltage by monitoring the error rate during operation; FBB increases operating speed at the cost of additional leakage power.
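    The control loop described above can be summarized in pseudocode. The sketch below is only an illustration of the general idea, not the thesis implementation: the monitor interface, step size, thresholds, and bias range are all hypothetical.

        def tune_fbb(read_canary_error_rate, set_fbb_mv,
                     target_rate=0.01, step_mv=25, fbb_min_mv=0, fbb_max_mv=300,
                     iterations=1000):
            """Adjust forward body bias (FBB) based on a canary error-rate monitor."""
            fbb_mv = fbb_min_mv
            for _ in range(iterations):
                rate = read_canary_error_rate()                   # canary flip-flop failure rate
                if rate > target_rate:
                    fbb_mv = min(fbb_max_mv, fbb_mv + step_mv)    # speed up: more FBB, more leakage
                elif rate < 0.1 * target_rate:
                    fbb_mv = max(fbb_min_mv, fbb_mv - step_mv)    # relax FBB to save leakage
                set_fbb_mv(fbb_mv)                                # apply the new body bias setting
            return fbb_mv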

    Monitor and control strategies to reduce the impact of process variations in digital circuits

    As CMOS technology scales down, Process, Voltage, Temperature and Ageing (PVTA) variations have an increasing impact on the performance and power consumption of electronic devices, and may hold back the continuous improvement of these devices in the near future. There are several ways to face the variability problem: increasing the operating margin on the maximum clock frequency, implementing lithography-friendly layout styles, and, the focus of this thesis, adapting the circuit to its actual manufacturing and environmental conditions by tuning some of its adjustable parameters after the circuit has been manufactured. The main challenge of this thesis is to develop a low-area variability compensation mechanism that automatically mitigates PVTA variations at run-time, i.e. while the integrated circuit is running. This implies the development of a sensor that obtains the most accurate possible picture of the variability, and the implementation of a control block that adjusts some of the electrical parameters of the circuit.
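    As an illustration only (none of the names below come from the thesis), a run-time compensation loop of this kind can be sketched as a sensor read followed by a bounded adjustment of a tuning knob; the sensor interface, knob encoding, and loop gain are assumptions.

        def compensate(read_sensor_count, write_knob_code, ref_count,
                       gain=0.05, code=128, code_min=0, code_max=255, steps=200):
            """Steer a variability-sensor reading toward its reference value."""
            for _ in range(steps):
                error = ref_count - read_sensor_count()    # positive -> circuit slower than nominal
                code = int(round(code + gain * error))     # proportional adjustment of the knob
                code = max(code_min, min(code_max, code))  # keep the setting within its DAC range
                write_knob_code(code)                      # apply (e.g., a body-bias DAC code)
            return code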

    Variability-aware design of CMOS nanopower reference circuits

    This work belongs to the field of analog microelectronic circuit design in scaled technologies, where the sensitivity of circuit quantities to process variations is an increasingly important problem. It addresses the design of very precise reference generators based on devices that are also available in standard CMOS technologies and that are intrinsically more robust against process variations. This made it possible to obtain low process sensitivity together with extremely low power consumption, with the main drawback being a large area occupation. All results were obtained in a 0.18 μm CMOS technology. In particular, we designed a voltage reference, obtaining a relative standard deviation of the reference voltage of 0.18% and a power consumption below 70 nW, based on measurements of a set of 20 samples from a single batch. Results on inter-batch variability are also available, showing a cumulative relative standard deviation of the reference voltage of 0.35%. We then designed a current reference, again obtaining a process sensitivity of the reference current of 1.4% with a power consumption below 300 nW (these are experimental results obtained from measurements of 20 samples from a single batch). The proposed voltage and current references were then used to design a low-frequency relaxation oscillator, which combines a low process sensitivity, below 2%, with a low power consumption of about 300 nW, obtained from circuit simulations. Finally, in the design of the blocks mentioned above, we applied a method for determining the stability of operating points, based on the use of the standard CAD tools employed in microelectronic design. This approach allowed us to determine the stability of the desired operating points, and also allowed us to establish that start-up circuits are often unnecessary.
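    For context, the relative standard deviation figures quoted above (e.g. 0.18% over 20 single-batch samples) are simply sigma/mu of the measured reference voltages; the snippet below shows that computation on made-up sample values, purely as an illustration.

        import statistics

        # Hypothetical reference-voltage measurements in mV (20 samples, one batch)
        samples_mv = [498.9, 500.2, 499.5, 500.8, 499.1, 500.4, 499.8, 500.1,
                      499.3, 500.6, 499.7, 500.0, 499.4, 500.3, 499.9, 500.5,
                      499.2, 500.7, 499.6, 500.9]
        mu = statistics.mean(samples_mv)
        sigma = statistics.stdev(samples_mv)               # sample standard deviation
        print(f"relative sigma = {100 * sigma / mu:.2f}%")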

    Reliability and security in low power circuits and systems

    With the massive deployment of mobile devices in sensitive areas such as healthcare and defense, hardware reliability and security have become hot research topics in recent years. These topics, although different in definition, are usually correlated. This dissertation offers an in-depth treatment of enhancing the reliability and security of low power circuits and systems. The first part of the dissertation deals with the reliability of sub-threshold designs, which use a supply voltage lower than the threshold voltage (Vth) of the transistors to reduce power. The exponential relationship between delay and Vth significantly jeopardizes their reliability due to process-variation-induced timing violations. To address this problem, this dissertation proposes a novel selective body biasing scheme. In the first work, the selective body biasing problem is formulated as a linearly constrained statistical optimization model, and the adaptive filtering concept is borrowed from the signal processing community to develop an efficient solution. However, since the adaptive filtering algorithm lacks theoretical justification and a guaranteed convergence rate, in the second work a new approach based on semi-infinite programming with incremental hypercubic sampling is proposed, which demonstrates better solution quality with shorter runtime. The second part of the dissertation deals with the security of low power crypto-processors, equipped with Random Dynamic Voltage Scaling (RDVS), in the presence of Correlation Power Analysis (CPA) attacks. The dissertation first demonstrates that the resistance of RDVS to CPA can be undermined by lowering the power supply voltage. An alarm circuit is then proposed to resist this attack. However, the alarm circuit can lead to potential denial-of-service due to noise-triggered false alarms; a non-zero-sum game model is therefore formulated and its Nash equilibria are analyzed.
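    For readers unfamiliar with CPA, the textbook attack step correlates a hypothesized leakage value with the measured power traces for every key guess; the sketch below illustrates that generic step only. The trace format and leakage model are assumed inputs, and this is not the dissertation's code.

        import numpy as np

        def cpa_best_guess(traces, plaintexts, leakage_model, n_guesses=256):
            """traces: (n_traces, n_samples) array; plaintexts: (n_traces,) sequence."""
            best_guess, best_corr = None, 0.0
            t = traces - traces.mean(axis=0)                  # center every time sample
            for guess in range(n_guesses):
                hyp = np.array([leakage_model(p, guess) for p in plaintexts], float)
                hyp -= hyp.mean()
                # Pearson correlation of the hypothesis with every time sample at once
                corr = (hyp @ t) / (np.linalg.norm(hyp) * np.linalg.norm(t, axis=0) + 1e-12)
                peak = np.max(np.abs(corr))
                if peak > best_corr:
                    best_guess, best_corr = guess, peak
            return best_guess, best_corr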

    inSense: A Variation and Fault Tolerant Architecture for Nanoscale Devices

    Transistor technology scaling has been the driving force in improving the size, speed, and power consumption of digital systems. As devices approach atomic size, however, their reliability and performance are increasingly compromised due to reduced noise margins, difficulties in fabrication, and emergent nano-scale phenomena. Scaled CMOS devices, in particular, suffer from process variations such as random dopant fluctuation (RDF) and line edge roughness (LER), transistor degradation mechanisms such as negative-bias temperature instability (NBTI) and hot-carrier injection (HCI), and increased sensitivity to single event upsets (SEUs). Consequently, future devices may exhibit reduced performance, diminished lifetimes, and poor reliability. This research proposes a variation and fault tolerant architecture, the inSense architecture, as a circuit-level solution to the problems induced by the aforementioned phenomena. The inSense architecture entails augmenting circuits with introspective and sensory capabilities which are able to dynamically detect and compensate for process variations, transistor degradation, and soft errors. This approach creates "smart" circuits able to function despite the use of unreliable devices and is applicable to current CMOS technology as well as next-generation devices using new materials and structures. Furthermore, this work presents an automated prototype implementation of the inSense architecture targeted to CMOS devices and is evaluated via implementation in ISCAS '85 benchmark circuits. The automated prototype implementation is functionally verified and characterized: it is found that error detection capability (with error windows from approximately 30-400 ps) can be added for less than 2% area overhead for circuits of non-trivial complexity. Single event transient (SET) detection capability (configurable with target set-points) is found to be functional, although it generally tracks the standard DMR implementation with respect to overheads.
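    As a rough illustration of what an error-detection window means in practice (hypothetical numbers and interface, not the inSense tool flow), one can classify path arrival times against the clock edge and a window of the size quoted above.

        def classify_paths(arrival_ps, clock_period_ps=1000, window_ps=300):
            """Split paths into safe / flagged-by-window / true violations."""
            safe, flagged, violating = [], [], []
            for t in arrival_ps:
                if t <= clock_period_ps - window_ps:
                    safe.append(t)           # finishes well before the detection window opens
                elif t <= clock_period_ps:
                    flagged.append(t)        # lands inside the window: detected, correctable
                else:
                    violating.append(t)      # misses the clock edge outright
            return safe, flagged, violating

        print(classify_paths([450, 720, 910, 1015]))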

    Design and debugging of multi-step analog to digital converters

    With the fast advancement of CMOS fabrication technology, more and more signal-processing functions are implemented in the digital domain for lower cost, lower power consumption, higher yield, and higher re-configurability. The trend of increasing integration level for integrated circuits has forced the A/D converter interface to reside on the same silicon in complex mixed-signal ICs containing mostly digital blocks for DSP and control. However, specifications of the converters in various applications emphasize high dynamic range and low spurious spectral performance. It is nontrivial to achieve this level of linearity in a monolithic environment where post-fabrication component trimming or calibration is cumbersome to implement for certain applications and/or for cost and manufacturability reasons. Additionally, as CMOS integrated circuits reach unprecedented integration levels, potential problems associated with device scaling, the short-channel effects, are also looming large as technology strides into the deep-submicron regime. The A/D conversion process involves sampling the applied analog input signal and quantizing it to its digital representation by comparing it to reference voltages before further signal processing in subsequent digital systems. Depending on how these functions are combined, different A/D converter architectures can be implemented with different requirements on each function. Practical realizations show that, to first order, converter power is directly proportional to the sampling rate; however, the required power dissipation grows nonlinearly as the speed capabilities of a process technology are pushed to the limit. Pipeline and two-step/multi-step converters tend to be the most efficient at achieving a given resolution and sampling rate specification. This thesis is, in a sense, a unique work, as it covers the whole spectrum of design, test, debugging and calibration of multi-step A/D converters; it incorporates the development of circuit techniques and algorithms to enhance the resolution and attainable sample rate of an A/D converter, and to enhance the testing and debugging potential to detect errors dynamically, to isolate and confine faults, and to recover from and compensate for the errors continuously. The power efficiency achievable at high resolution in a multi-step converter, by combining parallelism and calibration and exploiting low-voltage circuit techniques, is demonstrated with a 1.8 V, 12-bit, 80 MS/s, 100 mW analog-to-digital converter fabricated in a five-metal-layer 0.18-µm CMOS process. Lower power supply voltages significantly reduce noise margins and increase variations in process, device and design parameters. Consequently, it is steadily more difficult to control the fabrication process precisely enough to maintain uniformity. Microscopic particles present in the manufacturing environment and slight variations in the parameters of manufacturing steps can all cause the geometrical and electrical properties of an IC to deviate from those defined at the end of the design process. Such defects can cause various types of malfunctioning, depending on the IC topology and the nature of the defect. To relieve the burden placed on IC design and manufacturing by the ever-increasing costs associated with testing and debugging complex mixed-signal electronic systems, several circuit techniques and algorithms are developed and incorporated into the proposed ATPG, DfT and BIST methodologies.
    Process variation cannot be solved by improving manufacturing tolerances; variability must be reduced by new device technology or managed by design in order for scaling to continue. Similarly, within-die performance variation also imposes new challenges on test methods. With the use of dedicated sensors, which exploit knowledge of the circuit structure and of the specific defect mechanisms, the method described in this thesis facilitates early and fast identification of excessive process parameter variation effects. The expectation-maximization algorithm makes the estimation problem more tractable and also yields good estimates of the parameters for small sample sizes. To guide testing with the information obtained by monitoring process variations, an adjusted support vector machine classifier is implemented that simultaneously minimizes the empirical classification error and maximizes the geometric margin. On a positive note, the use of digitally enhanced calibration techniques reduces the need for expensive technologies with special fabrication steps. Indeed, the extra cost of digital processing is normally affordable, as the use of submicron mixed-signal technologies allows for efficient usage of silicon area even for relatively complex algorithms. The adaptive filtering algorithm employed for error estimation requires only a small number of operations per iteration and needs neither correlation function calculations nor matrix inversions. The presented foreground calibration algorithm does not need any dedicated test signal and does not take up a part of the conversion time; it works continuously and with every signal applied to the A/D converter. The feasibility of the method for on-line and off-line debugging and calibration has been verified by experimental measurements on a silicon prototype fabricated in a standard single-poly, six-metal 0.09-µm CMOS process.
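    The adaptive-filtering property mentioned above (constant work per iteration, no correlation matrices or inversions) is characteristic of LMS-style updates. The sketch below shows such an update estimating a single gain-error coefficient; the one-tap model and the signal names are assumptions for illustration, not the thesis algorithm.

        def lms_gain_estimate(stage_out, reference, mu=0.05, w=1.0):
            """One-tap LMS: estimate the gain relating stage_out to reference."""
            for x, d in zip(stage_out, reference):
                e = d - w * x          # instantaneous estimation error
                w += mu * e * x        # stochastic-gradient update, O(1) per sample
            return w

        # Toy check: recover a 2% gain error after repeated passes over the data
        data = [0.1 * k for k in range(-10, 11)]
        ref = [1.02 * x for x in data]
        w = 1.0
        for _ in range(200):
            w = lms_gain_estimate(data, ref, w=w)
        print(round(w, 4))             # converges toward 1.02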

    Ultra-Low Power Circuit Design for Cubic-Millimeter Wireless Sensor Platform.

    Modern daily life is surrounded by smaller and smaller computing devices. As Bell's Law predicts, the research community is now looking at tiny computing platforms, and mm3-scale sensor systems are drawing an increasing amount of attention since they can create a whole new computing environment. Designing mm3-scale sensor nodes raises various circuit- and system-level challenges, and we have addressed many of them with novel solutions to create the first complete 1.0mm3 sensor system including a commercial microprocessor. We demonstrate a 1.0mm3 form factor sensor whose modular die-stacked structure allows maximum volume utilization. Low-power I2C communication enables inter-layer serial communication without losing compatibility with the standard I2C communication protocol. A dual microprocessor enables concurrent computation for sensor node control and measurement data processing. A multi-modal power management unit allows energy harvesting from various harvesting sources. An optical communication scheme is provided for initial programming, synchronization, and re-programming after recovery from battery discharge. Standby power reduction techniques are investigated: a super cut-off power gating scheme with an ultra-low power charge pump reduces the standby power of logic circuits by 2-19× and of memory by 30%. Different approaches for designing low-power memory for mm3-scale sensor nodes are also presented in this work. A dual-threshold-voltage gain-cell eDRAM design achieves the lowest eDRAM retention power, and a 7T SRAM design based on hetero-junction tunneling transistors reduces the standby power of SRAM by 9-19× with only 15% area overhead. We have paid special attention to the timer for mm3-scale sensor systems and propose a multi-stage gate-leakage-based timer that limits the standard deviation of the error in an hourly measurement to 196ms; a temperature compensation scheme reduces the temperature dependency to 31ppm/°C. These techniques for designing ultra-low power circuits for a mm3-scale sensor enable the implementation of a 1.0mm3 sensor node, which can be used as a skeleton for future micro-sensor systems in a variety of applications. These microsystems imply the continuation of Bell's Law, which also predicts the massive deployment of mm3-scale computing systems and the emergence of even smaller and more powerful computing systems in the near future. Ph.D. Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/91438/1/sori_1.pd
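    As a back-of-the-envelope check on the quoted timer figure (31ppm/°C after compensation), the systematic drift accumulated over an hour for an assumed temperature swing can be computed directly; the temperature excursion below is purely an example value.

        PPM = 1e-6
        SECONDS_PER_HOUR = 3600
        temp_coeff_ppm_per_c = 31    # quoted post-compensation temperature dependency
        delta_t_c = 5                # hypothetical on-node temperature swing in degC
        drift_s = temp_coeff_ppm_per_c * PPM * delta_t_c * SECONDS_PER_HOUR
        print(f"temperature-induced drift over 1 h: {drift_s * 1e3:.0f} ms")   # 558 ms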

    Robust Design With Increasing Device Variability In Sub-Micron Cmos And Beyond: A Bottom-Up Framework

    My Ph.D. research develops a tiered systematic framework for designing process-independent and variability-tolerant integrated circuits. This bottom-up approach starts from designing self-compensated circuits as accurate building blocks, and moves up to sub-systems with negative feedback loops and full system-level calibration.
    a. Design methodology for self-compensated circuits. My collaborators and I proposed a novel design methodology that offers designers intuitive insights to create new topologies that are self-compensated and intrinsically process-independent without an external reference. It is the first systematic approach to creating "correct-by-design" low-variation circuits, and it can scale beyond sub-micron CMOS nodes and extend to emerging non-silicon nano-devices. We demonstrated this methodology with an addition-based current source in both 180nm and 90nm CMOS that has 2.5x improved process variation and 6.7x improved temperature sensitivity, and with a GHz ring oscillator (RO) in 90nm CMOS with a 65% reduction in frequency variation and 85 ppm/°C temperature sensitivity. Compared to previous designs, our RO exhibits the lowest temperature sensitivity and process variation while consuming the least power in the GHz range. Another self-compensated low-noise amplifier (LNA) we designed also exhibits a 3.5x improvement in both process and temperature variation and enhanced supply voltage regulation. As part of the effort to improve the accuracy of the building blocks, I also demonstrated experimentally that, due to a "diversification effect", the upper bound of circuit accuracy can be better than the minimum tolerance of the on-chip devices (MOSFET, R, C, and L), which allows circuit designers to achieve better accuracy with less chip area and power consumption.
    b. Negative-feedback-loop-based sub-system. I explored the feasibility of using high-accuracy DC blocks as low-variation "rulers-on-chip" to regulate high-speed, high-variation blocks (e.g. GHz oscillators). In this way, the trade-off between speed (which can be translated to power) and variation can be effectively decoupled. I demonstrated this proposed structure in an integrated GHz ring oscillator that achieves 2.6% frequency accuracy and 5x improved temperature sensitivity in 90nm CMOS.
    c. Power-efficient system-level calibration. To enable full system-level calibration and further reduce power consumption in active feedback loops, I implemented a successive-approximation-based calibration scheme in a tunable GHz VCO for low-power impulse radio in 65nm CMOS. Events such as power-up and temperature drift are monitored by the circuits and used to trigger need-based frequency calibration. With the proposed scheme and circuitry, the calibration can be performed for under 135 pJ, and the oscillator can operate between 0.8 and 2 GHz at merely 40 µW, which is ideal for extremely power- and cost-constrained applications such as implantable biomedical devices and wireless sensor networks.
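    A successive-approximation frequency search of the kind described in part (c) can be illustrated with a short sketch; the code width, the monotonic frequency-versus-code assumption, and the toy VCO model below are hypothetical, not the 65nm implementation.

        def sar_calibrate(measure_freq_hz, target_hz, bits=8):
            """Binary-search an N-bit tuning code so the VCO frequency approaches target_hz."""
            code = 0
            for b in reversed(range(bits)):
                trial = code | (1 << b)                  # tentatively set the next bit
                if measure_freq_hz(trial) <= target_hz:  # assumes frequency increases with code
                    code = trial                         # keep the bit, otherwise drop it
            return code

        # Toy linear VCO model spanning roughly 0.8-2.0 GHz over an 8-bit code
        print(sar_calibrate(lambda c: 0.8e9 + c * 4.7e6, 1.5e9))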