
    Design and debugging of multi-step analog to digital converters

    With the fast advancement of CMOS fabrication technology, more and more signal-processing functions are implemented in the digital domain for lower cost, lower power consumption, higher yield, and higher re-configurability. The trend toward ever higher integration levels has forced the A/D converter interface to reside on the same silicon in complex mixed-signal ICs containing mostly digital blocks for DSP and control. However, converter specifications in many applications emphasize high dynamic range and low spurious spectral content. It is nontrivial to achieve this level of linearity in a monolithic environment where post-fabrication component trimming or calibration is cumbersome to implement for certain applications and/or for cost and manufacturability reasons. Additionally, as CMOS integrated circuits reach unprecedented integration levels, potential problems associated with device scaling, namely short-channel effects, loom large as technology strides into the deep-submicron regime.
    The A/D conversion process involves sampling the applied analog input signal and quantizing it to its digital representation by comparing it to reference voltages before further processing in subsequent digital systems. Depending on how these functions are combined, different A/D converter architectures can be implemented, each with different requirements on the individual functions. Practical realizations show that, to first order, converter power is directly proportional to sampling rate; however, the required power dissipation grows nonlinearly as the speed capabilities of a process technology are pushed to their limit. Pipeline and two-step/multi-step converters tend to be the most efficient at achieving a given resolution and sampling-rate specification.
    This thesis covers the whole spectrum of design, test, debugging and calibration of multi-step A/D converters; it develops circuit techniques and algorithms that enhance the resolution and attainable sample rate of an A/D converter and strengthen its testing and debugging capability, so that errors can be detected dynamically, faults isolated and confined, and errors recovered from and compensated continuously. The power efficiency attainable at high resolution in a multi-step converter, obtained by combining parallelism and calibration and exploiting low-voltage circuit techniques, is demonstrated with a 1.8 V, 12-bit, 80 MS/s, 100 mW analog-to-digital converter fabricated in a five-metal-layer 0.18-µm CMOS process.
    Lower power-supply voltages significantly reduce noise margins and increase variations in process, device and design parameters. Consequently, it is increasingly difficult to control the fabrication process precisely enough to maintain uniformity. Microscopic particles present in the manufacturing environment and slight variations in the parameters of manufacturing steps can cause the geometrical and electrical properties of an IC to deviate from those defined at the end of the design process. These defects can produce various types of malfunction, depending on the IC topology and the nature of the defect. To relieve the burden placed on IC design and manufacturing by the ever-increasing costs of testing and debugging complex mixed-signal electronic systems, several circuit techniques and algorithms are developed and incorporated into the proposed ATPG, DfT and BIST methodologies.
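
For readers unfamiliar with the multi-step conversion principle referred to above, the following is a minimal behavioral sketch in Python (illustrative parameter values, not taken from the thesis) of a 1.5-bit-per-stage pipeline: each stage compares the signal against reference levels, subtracts its coarse estimate, and passes an amplified residue to the next stage.

```python
import numpy as np

def pipeline_stage(vin, vref=1.0, gain=2.0):
    """One 1.5-bit pipeline stage (behavioral sketch): coarse decision against
    +/- vref/4 thresholds plus an amplified residue for the next stage."""
    d = np.where(vin > vref / 4, 1, np.where(vin < -vref / 4, -1, 0))
    residue = gain * vin - d * vref      # MDAC: amplify and subtract the coarse estimate
    return d, residue

def pipeline_adc(vin, n_stages=11, vref=1.0):
    """Cascade of 1.5-bit stages; stage decisions are combined with binary weights."""
    code, res = 0.0, vin
    for k in range(n_stages):
        d, res = pipeline_stage(res, vref=vref)
        code = code + d * 2.0 ** (-(k + 1))   # each stage contributes ~1 effective bit
    return code                               # normalized estimate of vin / vref

print(np.round(pipeline_adc(np.linspace(-0.9, 0.9, 7)), 3))
```
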
    Process variation cannot be solved by improving manufacturing tolerances; variability must be reduced by new device technology or managed by design in order for scaling to continue. Similarly, within-die performance variation imposes new challenges for test methods. With the use of dedicated sensors, which exploit knowledge of the circuit structure and of the specific defect mechanisms, the method described in this thesis facilitates early and fast identification of excessive process-parameter variation effects. The expectation-maximization algorithm makes the estimation problem more tractable and yields good parameter estimates even for small sample sizes. To guide testing with the information obtained from monitoring process variations, an adjusted support vector machine classifier is implemented that simultaneously minimizes the empirical classification error and maximizes the geometric margin.
    On a positive note, the use of digitally enhanced calibration techniques reduces the need for expensive technologies with special fabrication steps. Indeed, the extra cost of digital processing is normally affordable, as submicron mixed-signal technologies allow efficient use of silicon area even for relatively complex algorithms. The adaptive filtering algorithm employed for error estimation requires only a small number of operations per iteration and needs neither correlation-function calculation nor matrix inversions. The presented foreground calibration algorithm does not need any dedicated test signal and does not consume any part of the conversion time; it works continuously with every signal applied to the A/D converter. The feasibility of the method for on-line and off-line debugging and calibration has been verified by experimental measurements on a silicon prototype fabricated in a standard single-poly, six-metal 0.09-µm CMOS process.
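
The abstract does not name the adaptive filtering algorithm, but the stated properties (few operations per iteration, no correlation-function calculation, no matrix inversions) match an LMS-type update. The sketch below is written under that assumption, with illustrative signal names.

```python
import numpy as np

def lms_error_estimate(x, d, n_taps=8, mu=0.01):
    """LMS adaptive filter used as an error estimator (illustrative sketch):
    x is the reference sequence, d the observed converter output; returns the
    tap weights and the per-sample estimation error."""
    w = np.zeros(n_taps)
    e = np.zeros(len(d))
    for n in range(n_taps, len(d)):
        u = x[n - n_taps + 1:n + 1][::-1]   # most recent n_taps reference samples
        y = w @ u                           # filter output
        e[n] = d[n] - y                     # instantaneous error
        w += 2 * mu * e[n] * u              # O(n_taps) stochastic-gradient update
    return w, e
```
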

    Architectural Improvements Towards an Efficient 16-18 Bit 100-200 MSPS ADC

    As data-conversion systems continue to improve in speed and resolution, increasing demands are placed on the performance of high-speed analog-to-digital conversion systems. This work surveys these demands and proposes a suitable architecture to achieve the target specification of 100-200 MS/s with 16-18 bits of resolution. The main architecture is based on parallel structures in order to achieve a high sampling rate and, at the same time, high resolution. To address the problems associated with time-interleaved architectures, an advanced randomization method is introduced that combines randomization with spectral shaping of the mismatches. With a simple low-pass filter, the method can improve the SFDR as well as the SINAD compared with conventional randomization algorithms. The main advantage of this technique over previous ones is that the algorithm only requires the ADCs to be ordered according to their timing mismatches; the absolute accuracy of the mismatch identification method therefore does not matter, and the requirements on timing-mismatch identification are very low. In addition, the correction system uses very simple algorithms able to correct not only timing but also gain and offset mismatches.
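
The sketch below is an illustrative behavioral model in Python, not the paper's exact algorithm (which additionally orders the channels by timing mismatch and spectrally shapes the channel selection); it only shows why randomizing the channel assignment turns fixed gain/offset mismatch spurs into a noise-like floor.

```python
import numpy as np

rng = np.random.default_rng(0)

def ti_adc_sequential(x, gains, offsets):
    """Round-robin interleaving: mismatches repeat with period M and
    produce discrete spurs at multiples of fs/M."""
    ch = np.arange(len(x)) % len(gains)
    return gains[ch] * x + offsets[ch]

def ti_adc_randomized(x, gains, offsets):
    """Random channel selection: the same mismatches no longer repeat
    periodically, so their energy is spread across the spectrum."""
    ch = rng.integers(0, len(gains), size=len(x))
    return gains[ch] * x + offsets[ch]

gains = np.array([1.0, 1.02, 0.98, 1.01])
offsets = np.array([0.0, 2e-3, -1e-3, 3e-3])
x = np.sin(2 * np.pi * 0.111 * np.arange(4096))
y_seq = ti_adc_sequential(x, gains, offsets)   # spurs at fs/4 offsets
y_rnd = ti_adc_randomized(x, gains, offsets)   # mismatch energy spread as noise
```
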

    Analog‐to‐Digital Conversion for Cognitive Radio: Subsampling, Interleaving, and Compressive Sensing

    This chapter explores different analog-to-digital conversion techniques that are suitable for implementation in cognitive radio receivers. It details the fundamentals, advantages, and drawbacks of three promising techniques: subsampling, interleaving, and compressive sensing. Owing to their greater maturity, subsampling- and interleaving-based systems are described in further detail, whereas compressive sensing-based systems are presented as a complement to the previous techniques for underutilized-spectrum applications. The feasibility of these techniques as part of software-defined radio, multistandard, and spectrum-sensing receivers is demonstrated by proposing different architectures with reduced complexity at the circuit level, depending on the application requirements. Additionally, the chapter proposes different solutions for integrating the advantages of these techniques into a single analog-to-digital conversion process.
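
As a quick illustration of the subsampling principle discussed in the chapter, the small helper below (hypothetical, for illustration only) computes where a band-pass input tone lands after being sampled below the Nyquist rate of its carrier.

```python
def subsampling_alias(f_in, f_s):
    """First-Nyquist-zone image of a tone at f_in when sampled at rate f_s."""
    f = f_in % f_s
    return f if f <= f_s / 2 else f_s - f

# e.g. a 2.44 GHz tone sampled at 100 MS/s appears at 40 MHz
print(subsampling_alias(2.44e9, 100e6) / 1e6, "MHz")
```
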

    Bi-Linear Homogeneity Enforced Calibration for Pipelined ADCs

    Pipelined analog-to-digital converters (ADCs) are key enablers in many state-of-the-art signal processing systems with high sampling rates. In addition to high sampling rates, such systems often demand high linearity. To meet these challenging linearity requirements, ADC calibration techniques have been heavily investigated over the past decades. One limitation in ADC calibration is the need for a precisely known test signal. In our previous work, we proposed the homogeneity enforced calibration (HEC) approach, which circumvents this need by consecutively feeding a test signal and a scaled version of it into the ADC. The calibration itself is performed using only the corresponding output samples, such that the test signal can remain unknown. On the downside, the HEC approach requires the ability to accurately scale the test signal, impeding an on-chip implementation. In this work, we provide a thorough analysis of the HEC approach, including the effects of an inaccurately scaled test signal. Furthermore, the bi-linear homogeneity enforced calibration (BL-HEC) approach is introduced and suggested to account for inaccurate scaling and, therefore, to facilitate an on-chip implementation. In addition, a comprehensive stability and convergence analysis of the BL-HEC approach is carried out. Finally, we verify our concept with simulations.
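
The following is a minimal sketch of the homogeneity-enforcement idea as read from the abstract, not the authors' published algorithm: with y1 and y2 the ADC output records for the unknown test signal and its version scaled by a known factor a, fit a polynomial post-correction p, with its linear term pinned, such that p(y2) ≈ a·p(y1).

```python
import numpy as np

def hec_polynomial(y1, y2, a, order=5, iters=5000, mu=1e-2):
    """Gradient-descent fit of polynomial post-correction coefficients that
    enforce homogeneity between the two output records (illustrative sketch;
    assumes the outputs are normalized to roughly [-1, 1])."""
    c = np.zeros(order + 1)
    c[1] = 1.0                                   # pin the gain to avoid the trivial all-zero solution
    p1 = np.vstack([y1 ** k for k in range(order + 1)])   # polynomial basis on y1
    p2 = np.vstack([y2 ** k for k in range(order + 1)])   # polynomial basis on y2
    for _ in range(iters):
        e = c @ p2 - a * (c @ p1)                # homogeneity error per sample
        grad = (p2 @ e - a * (p1 @ e)) / len(e)  # gradient of the squared error
        grad[1] = 0.0                            # keep the pinned coefficient fixed
        c -= mu * grad
    return c
```
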

    Methodology for testing high-performance data converters using low-accuracy instruments

    There has been explosive growth in the consumer electronics market during the last decade. As the IC industry shifts from PC-centric to consumer-electronics-centric products, digital technologies are no longer solving all the problems. Electronic devices integrating mixed-signal, RF and other not purely digital functions pose new challenges to the industry. While digital testing has been studied for a long time, testing of analog and mixed-signal circuits is still in its development stage. Existing solutions have two major problems. First, high-performance mixed-signal test equipment is expensive, and it is difficult to integrate its functions on chip. Second, it is challenging to improve the test capability of existing methods so that it keeps up with the fast-evolving performance of mixed-signal products demanded by the market. The International Technology Roadmap for Semiconductors identified mixed-signal testing as one of the most daunting system-on-a-chip challenges.
    My work has focused on developing new strategies for testing the analog-to-digital converter (ADC) and digital-to-analog converter (DAC). Unlike conventional methods, which require the test instruments to have better performance than the device under test, our algorithms allow the use of medium- and low-accuracy instruments in testing. Therefore, we can provide practical and accurate test solutions for high-performance data converters, while the test cost is dramatically reduced because of the low price of such instruments. These algorithms have the potential for built-in self-test and can be generalized to other mixed-signal circuits. When incorporated with self-calibration, they can enable new design techniques for mixed-signal integrated circuits. The following topics are covered in the dissertation: (1) a general stimulus error identification and removal (SEIR) algorithm that can test high-resolution ADCs using two low-linearity signals with a constant offset between them; (2) a center-symmetric interleaving (CSI) strategy for generating test signals to be used with the SEIR algorithm; (3) an architecture-based test algorithm for high-performance pipelined or cyclic ADCs using a single nonlinear stimulus; (4) the use of a Kalman filter to improve the efficiency of ADC testing; and (5) a testing algorithm for high-speed, high-resolution DACs using low-resolution ADCs with dithering.
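
For context, the conventional histogram (code-density) linearity test that methods such as SEIR build on, and whose stimulus-linearity requirement they remove, looks roughly like the sketch below (illustrative only; this is not the SEIR algorithm itself).

```python
import numpy as np

def dnl_inl_from_ramp(codes, n_bits=12):
    """DNL/INL in LSB estimated from output codes collected with a slow,
    nominally linear full-scale ramp stimulus."""
    hist = np.bincount(codes, minlength=2 ** n_bits).astype(float)
    hist = hist[1:-1]                    # discard the clipped end codes
    dnl = hist / hist.mean() - 1.0       # deviation of each code width from 1 LSB
    inl = np.cumsum(dnl)                 # accumulated transition-level error
    return dnl, inl
```
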

    Post Conversion Correction of Non-Linear Mismatches for Time Interleaved Analog-to-Digital Converters

    Time-interleaved analog-to-digital converters (TI-ADCs) use an architecture that enables conversion rates well beyond the capabilities of a single converter while preserving most or all of the other performance characteristics of the converters on which the architecture is based. Most of the approaches discussed here are independent of architecture; some solutions take advantage of specific architectures. Chapter 1 provides the problem formulation and reviews the errors found in ADCs, along with a brief literature review of available TI-ADC error-correction solutions. Chapter 2 presents the methods and materials used in the implementation and extends the state of the art in post-conversion correction. Chapter 3 presents the simulation results of this work, and Chapter 4 concludes the work. The contribution of this research is threefold. A new behavioral model was developed in Simulink™ and MATLAB™ to model and test linear and nonlinear mismatch errors, emulating the performance data of actual converters. The details of this model are presented, together with the results of cumulant statistical calculations of the mismatch errors, followed by a detailed explanation and performance evaluation of the extension developed in this research effort. Leading post-conversion correction methods are presented, along with an extension and its derivations. It is shown that the data-converter subsystem architecture developed is capable of realizing better performance than those currently reported in the literature while having a more efficient implementation.
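
A much-simplified analogue of the kind of behavioral model described (written here in Python rather than Simulink/MATLAB, with purely illustrative parameter values): each channel adds its own offset, gain, timing-skew and third-order nonlinearity error to the samples it converts.

```python
import numpy as np

def ti_adc_with_mismatch(x_of_t, n, fs, gains, offsets, skews, a3):
    """Behavioral M-channel TI-ADC with per-channel linear and nonlinear
    mismatches (all mismatch values illustrative)."""
    m = len(gains)
    y = np.empty(n)
    for i in range(n):
        ch = i % m                        # round-robin channel assignment
        t = i / fs + skews[ch]            # skewed sampling instant
        v = x_of_t(t)
        y[i] = offsets[ch] + gains[ch] * v + a3[ch] * v ** 3
    return y

# 4 channels digitizing a sine; the mismatches create interleaving spurs
y = ti_adc_with_mismatch(lambda t: np.sin(2 * np.pi * 0.1234 * t), 4096, 1.0,
                         gains=[1.0, 1.01, 0.99, 1.005],
                         offsets=[0.0, 1e-3, -2e-3, 5e-4],
                         skews=[0.0, 2e-3, -1e-3, 1.5e-3],
                         a3=[0.0, 1e-3, 2e-3, -1e-3])
```
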

    Nonlinear models and algorithms for RF systems digital calibration

    Focusing on the receiving side of a communication system, the current trend of pushing the digital domain ever closer to the antenna places heavy constraints on the accuracy and linearity of the analog front-end and the conversion devices. Moreover, mixed-signal implementations of systems-on-chip in nanoscale CMOS processes result in overall poorer analog performance and reduced yield. To cope with the impairments of the low-performance analog section in this "dirty RF" scenario, two solutions exist: designing more complex analog processing architectures, or identifying the errors and correcting them in the digital domain using DSP algorithms. In the latter case, the precision constraints on the analog circuits can be offloaded to a digital signal processor. This thesis develops a methodology for the analysis, modeling and compensation of the analog impairments arising in different stages of a receiving chain using digital calibration techniques. Both single-channel and multi-channel architectures are addressed, exploiting the capability of the calibration algorithm to homogenize all the channel responses of a multi-channel system in addition to compensating the nonlinearities in each response. The systems targeted for digital post-compensation are a pipeline ADC, a digital-IF sub-sampling receiver and a 4-channel TI-ADC.
    The research focuses on post-distortion methods that use nonlinear dynamic models to approximate the post-inverse of the nonlinear system and to correct the distortions arising from static and dynamic errors. The Volterra model is used because of its general approximation capability for the compensation of nonlinear systems with memory. Digital calibration is applied to a sample-and-hold and to a pipeline ADC simulated in a 45-nm process, demonstrating a large linearity improvement even in the presence of incomplete-settling errors, which enables the use of faster clock speeds. An extended model based on the baseband Volterra series is proposed and applied to the compensation of a digital-IF sub-sampling receiver. In this architecture, frequency selectivity is carried out at IF by an active band-pass CMOS filter, which causes in-band and out-of-band nonlinear distortion. The improved performance of the proposed model is demonstrated with circuit simulations of a 10th-order band-pass filter, realized as a five-stage Gm-C biquad cascade, and validated using out-of-sample sinusoidal and QAM signals. The same technique is extended to an array receiver with mismatched channel responses, showing that digital calibration can compensate for the loss of directivity and enhance the overall system SFDR. Iterative backward pruning is applied to the Volterra models, showing that complexity can be reduced without impacting linearity and yielding state-of-the-art accuracy/complexity performance.
    Calibration of time-interleaved ADCs, widely used in RF-to-digital wideband receivers, is carried out with ad hoc models, because the sharp discontinuities generated by the imperfect cancellation of aliasing would require a huge number of terms in a polynomial approximation. A closed-form solution is derived for a 4-channel TI-ADC affected by gain errors and timing skews by solving the perfect-reconstruction equations. A background calibration technique based on a cyclostationary filter-bank architecture is presented. The convergence speed and accuracy of the recursive algorithm are discussed, and complexity-reduction techniques are applied.
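
A compact sketch of the post-distortion approach described above, using a memory polynomial (a heavily pruned Volterra structure) fitted by least squares against a reference signal; function names, orders and memory depths are illustrative, not the thesis' exact models.

```python
import numpy as np

def mp_regressors(y, K=3, M=3):
    """Memory-polynomial basis: delayed copies of y raised to powers 1..K."""
    cols = []
    for m in range(M):
        ym = np.concatenate([np.zeros(m), y[:len(y) - m]])
        for k in range(1, K + 1):
            cols.append(ym ** k)
    return np.column_stack(cols)

def fit_post_distorter(y_distorted, x_reference, K=3, M=3):
    """Least-squares fit of the post-inverse mapping: distorted output -> reference."""
    phi = mp_regressors(y_distorted, K, M)
    coeffs, *_ = np.linalg.lstsq(phi, x_reference, rcond=None)
    return coeffs

def apply_post_distorter(y_distorted, coeffs, K=3, M=3):
    """Apply the fitted correction to new distorted samples."""
    return mp_regressors(y_distorted, K, M) @ coeffs
```
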