22 research outputs found

    Low-power high-performance SAR ADC with redundancy and digital background calibration

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 195-199).

    As technology scales, the improved speed and energy efficiency make the successive-approximation-register (SAR) architecture an attractive alternative for applications that require high-speed and high-accuracy analog-to-digital converters (ADCs). In SAR ADCs, the key linearity and speed limiting factors are capacitor mismatch and incomplete digital-to-analog converter (DAC)/reference voltage settling. In this thesis, a sub-radix-2 SAR ADC is presented with several new contributions. The main contributions include: investigation of digital error correction (redundancy) in SAR ADCs for dynamic error correction and speed improvement; development of two new calibration algorithms to digitally correct for manufacturing mismatches; design of a new architecture that incorporates redundancy within the architecture itself while achieving 94% better energy efficiency than the conventional switching algorithm; development of a new capacitor DAC structure that improves the SNR by four times with improved matching; joint design of the analog and digital circuits to create an asynchronous platform that reaches the targeted performance; and analysis of key circuit blocks to enable the design to meet noise, power, and timing requirements. The design is fabricated in a standard 1P9M 65nm CMOS technology with a 1.2V supply. The active die area is 0.083mm² with a full rail-to-rail input swing of 2.4Vp-p. A 67.4dB SNDR, 78.1dB SFDR, +1.0/-0.9 LSB₁₂ INL, and +0.5/-0.7 LSB₁₂ DNL are achieved at 50MS/s at the Nyquist rate. The total power consumption, including the estimated calibration and reference power, is 2.1mW, corresponding to a 21.9fJ/conv.-step FoM. This ADC achieves the best FoM of any ADC with greater than 10b ENOB and 10MS/s sampling rate.

    by Albert Hsu Ting Chang, Ph.D.
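    The sub-radix-2 conversion with redundancy summarised above can be illustrated with a short sketch: the bit weights follow a radix below 2, so the remaining steps can absorb an early comparator error (e.g. from incomplete DAC settling), and the output is reconstructed digitally as a weighted sum of the raw decisions. The radix, step count, and weights below are illustrative assumptions, not the values used in the thesis.

        # Minimal sketch of a sub-radix-2 SAR conversion with redundancy.
        # Radix, number of steps, and weights are illustrative assumptions.

        def sub_radix2_sar(vin, vref=1.2, radix=1.86, steps=14):
            """Convert vin (0..vref) with a sub-radix-2 successive approximation.

            Because radix < 2, consecutive bit weights overlap, so a wrong early
            comparison can still be absorbed by the remaining steps and corrected
            in the digital reconstruction.
            """
            # Nominal bit weights, largest first, normalised so they sum to full scale.
            weights = [radix ** -(i + 1) for i in range(steps)]
            total = sum(weights)
            dac = 0.0
            bits = []
            for w in weights:
                trial = dac + w * vref / total      # candidate DAC level for this step
                bit = 1 if vin >= trial else 0      # comparator decision
                if bit:
                    dac = trial
                bits.append(bit)
            # Digital reconstruction: weighted sum of the raw decisions.
            # Calibration would replace the nominal weights with measured ones.
            code = sum(b * w for b, w in zip(bits, weights)) / total
            return bits, code * vref

        bits, vest = sub_radix2_sar(0.731)
        print(bits, round(vest, 4))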

    DESIGN OF LOW-POWER LOW-VOLTAGE SUCCESSIVE-APPROXIMATION ANALOG-TO-DIGITAL CONVERTERS

    Ph.D. (Doctor of Philosophy)

    New Views for Stochastic Computing: From Time-Encoding to Deterministic Processing

    University of Minnesota Ph.D. dissertation. July 2018. Major: Electrical/Computer Engineering. Advisor: David Lilja. 1 computer file (PDF); xi, 149 pages.

    Stochastic computing (SC), a paradigm first introduced in the 1960s, has received considerable attention in recent years as a potential paradigm for emerging technologies and "post-CMOS" computing. Logical computation is performed on random bitstreams where the signal value is encoded by the probability of obtaining a one versus a zero. This unconventional representation of data offers some intriguing advantages over conventional weighted binary. Implementing complex functions with simple hardware (e.g., multiplication using a single AND gate), tolerating soft errors (i.e., bit flips), and progressive precision are the primary advantages of SC. The obvious disadvantage, however, is latency: a stochastic representation is exponentially longer than conventional binary radix. Long latencies translate into high energy consumption, often higher than that of the binary counterpart. Generating bitstreams is also costly; factoring in the cost of the bitstream generators, the overall hardware cost of an SC implementation is often comparable to a conventional binary implementation.

    This dissertation begins by proposing a highly unorthodox idea: performing computation with digital constructs on time-encoded analog signals. We introduce a new, energy-efficient, high-performance, and much less costly approach to SC using time-encoded pulse signals. We explore the design and implementation of arithmetic operations on time-encoded data and discuss the advantages, challenges, and potential applications. Experimental results on image processing applications show up to 99% performance speedup, 98% savings in energy dissipation, and 40% area reduction compared to prior stochastic implementations. We further introduce a low-cost approach for synthesizing sorting-network circuits based on deterministic unary bitstreams. Synthesis results show more than 90% area and power savings compared to the costs of the conventional binary implementation. Time-based encoding of data is then exploited for fast and energy-efficient processing of data with the developed sorting circuits.

    Poor progressive precision is the main challenge with the recently developed deterministic methods of SC. We propose a high-quality down-sampling method which significantly improves the processing time and the energy consumption of these deterministic methods by pseudo-randomizing bitstreams. We also propose two novel deterministic methods of processing bitstreams by using low-discrepancy sequences. We further introduce a new advantage of the SC paradigm: the skew tolerance of SC circuits. We exploit this advantage in developing polysynchronous clocking, a design strategy for optimizing the clock distribution network of SC systems. Finally, as the first study of its kind to the best of our knowledge, we rethink the memory system design for SC. We propose a seamless stochastic system, StochMem, which features analog memory to trade the energy and area overhead of data conversion for computation accuracy.
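    The AND-gate multiplication mentioned above is the canonical SC example. A minimal sketch follows, assuming the standard unipolar encoding (a value p in [0, 1] represented by a bitstream whose probability of a '1' is p); the stream length and seeds are arbitrary illustrations.

        # Minimal sketch of stochastic-computing multiplication with unipolar
        # encoding: P(x AND y) = P(x) * P(y) when the streams are uncorrelated.
        import random

        def to_bitstream(p, length=1024, rng=None):
            """Encode value p in [0, 1] as a random bitstream of the given length."""
            rng = rng or random.Random()
            return [1 if rng.random() < p else 0 for _ in range(length)]

        def from_bitstream(bits):
            """Decode a bitstream back to a value: the fraction of ones."""
            return sum(bits) / len(bits)

        def sc_multiply(xs, ys):
            """Multiply two independent stochastic bitstreams with one AND per bit pair."""
            return [a & b for a, b in zip(xs, ys)]

        # Independent seeds keep the two streams uncorrelated.
        x = to_bitstream(0.5, rng=random.Random(1))
        y = to_bitstream(0.3, rng=random.Random(2))
        print(from_bitstream(sc_multiply(x, y)))   # approximately 0.15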

    Circumventing the fuzzy type reduction for autonomous vehicle controller

    Fuzzy type-2 controllers can easily deal with system nonlinearity and utilise human expertise to solve many complex control problems; they are also very good at processing uncertainty, which exists in many robotic systems such as autonomous vehicles. However, their computational cost is high, especially at the type-reduction stage. This research aims to reduce the computational cost of the type-reduction stage, thereby enabling faster execution and increasing the number of behaviours that can run on one microprocessor. Proposed here are adaptive integration principles combined with a binary successive search technique that locates the straight or semi-straight segments of a fuzzy set and uses them to achieve faster weighted-average computation. This computation is important because it runs frequently in many type reductions. A variable adaptation rate is suggested during the type-reduction iterations to reduce the computational cost further. The influence of the proposed approaches on the fuzzy type-2 controller's error has been mathematically analysed and then experimentally measured using a wall-following behaviour, which is the most important action for many autonomous vehicles. The resulting execution time-gain of the proposed technique reaches 200%, evaluated with respect to the execution time of the original, unmodified type-reduction procedure.

    This study also develops a new accelerated version of the enhanced Karnik-Mendel type reducer using better initialisations and a better indexing scheme; the resulting time-gain reaches 170% with respect to the original version. A further cut in type-reduction time is achieved by a proposed One-Go type-reduction procedure, which reduces multiple sets together in one pass, eliminating much of the redundant calculation needed to reduce them individually. All the proposed type-reduction enhancements were evaluated in terms of their execution time-gain and performance error using every possible combination of fuzzy firing levels. Tests were then performed using a real autonomous vehicle navigating a relatively complex arena with acute, right, obtuse, and reflex-angled corners, to ensure a wide variety of operating conditions was evaluated. A simplified state-hold technique using Schmitt-trigger principles and dynamic sense-pattern control was proposed and implemented to keep the rule base small and to obtain a more accurate evaluation of the type-reduction stages.
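    For context, below is a minimal sketch of the standard Karnik-Mendel type-reduction iterations that the accelerated reducer above builds on. The sampled output domain and the lower/upper membership grades are hypothetical, and the enhancements described in the abstract (better initialisation, indexing, One-Go reduction) are not reproduced here.

        # Minimal sketch of the standard Karnik-Mendel (KM) type reducer:
        # iteratively find one endpoint (y_l or y_r) of the type-reduced interval.

        def km_endpoint(x, lmf, umf, left=True):
            """x: sorted output samples; lmf/umf: lower/upper membership grades;
            left=True returns y_l, left=False returns y_r."""
            theta = [(l + u) / 2 for l, u in zip(lmf, umf)]
            y = sum(xi * t for xi, t in zip(x, theta)) / sum(theta)
            while True:
                # Switch point k: last index with x[k] <= y.
                k = max(i for i, xi in enumerate(x) if xi <= y)
                if left:
                    theta = umf[:k + 1] + lmf[k + 1:]   # weight the left side heavily
                else:
                    theta = lmf[:k + 1] + umf[k + 1:]   # weight the right side heavily
                y_new = sum(xi * t for xi, t in zip(x, theta)) / sum(theta)
                if abs(y_new - y) < 1e-9:               # converged: endpoint found
                    return y_new
                y = y_new

        # Hypothetical fired output set: five samples with lower/upper grades.
        x   = [0.0, 1.0, 2.0, 3.0, 4.0]
        lmf = [0.1, 0.4, 0.6, 0.3, 0.1]
        umf = [0.3, 0.7, 0.9, 0.6, 0.2]
        y_l = km_endpoint(x, lmf, umf, True)
        y_r = km_endpoint(x, lmf, umf, False)
        print(round(y_l, 3), round(y_r, 3))             # type-reduced interval [y_l, y_r]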