
    Equalization-Based Digital Background Calibration Technique for Pipelined ADCs

    In this paper, we present a digital background calibration technique for pipelined analog-to-digital converters (ADCs). In this scheme, the capacitor mismatch, residue gain error, and amplifier nonlinearity are measured and then corrected in the digital domain. It is based on error estimation with nonprecision calibration signals in foreground mode, and an adaptive linear prediction structure is used to convert the foreground scheme into a background one. The proposed foreground technique uses the LMS algorithm to estimate the error coefficients without requiring high-accuracy calibration signals. Several simulation results in the context of a 12-b 100-MS/s pipelined ADC are provided to verify the usefulness of the proposed calibration technique. Circuit-level simulation results show that, compared with the uncalibrated ADC, the calibrated ADC achieves improvements of 28 dB in signal-to-noise-and-distortion ratio and 41 dB in spurious-free dynamic range.
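
    As a rough illustration of the LMS-based coefficient estimation described above, the sketch below adapts a single digital gain-correction weight for a 1.5-bit first stage against a reference value of the calibration signal. The stage model, the step size, and the assumption that a reference estimate of the input is available are all simplifications; the paper's adaptive-linear-prediction step that turns this into a background scheme is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Behavioural model of a 1.5-bit first stage with residue gain error
# (hypothetical numbers, not taken from the paper).
G_ACTUAL = 1.92                      # actual residue gain; the ideal value is 2.0
MU = 0.01                            # LMS step size

def stage(vin):
    """Coarse decision d in {-1, 0, +1} and the analog residue it produces."""
    d = np.where(vin > 0.25, 1.0, np.where(vin < -0.25, -1.0, 0.0))
    return d, G_ACTUAL * (vin - 0.5 * d)

w = 0.5                              # digital correction weight, initialised to the ideal 1/2
cal = 0.9 * (2 * rng.random(20000) - 1)   # calibration signal; precision is not required

for x in cal:
    d, res = stage(x)
    backend = np.round(res * 2**10) / 2**10   # ideal 10-bit backend quantiser
    vhat = 0.5 * d + w * backend              # digitally reconstructed sample
    e = x - vhat                              # error against the reference value
    w += MU * e * backend                     # LMS update of the correction weight

print(f"estimated 1/G = {w:.4f}   true 1/G = {1/G_ACTUAL:.4f}")
```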

    Transistor-Level Synthesis of Pipeline Analog-to-Digital Converters Using a Design-Space Reduction Algorithm

    A novel transistor-level synthesis procedure for pipeline ADCs is presented. This procedure directly maps high-level converter specifications onto transistor sizes and biasing conditions. It is based on the combination of behavioral models for performance evaluation, optimization routines to minimize the power and area consumption of the circuit solution, and an algorithm that efficiently constrains the converter design space. This algorithm avoids the cost of lengthy bottom-up verifications and speeds up the synthesis task. The approach is demonstrated via the design of a 0.13 μm CMOS 10 bits@60 MS/s pipeline ADC with an energy consumption per conversion of only 0.54 pJ@1 MHz, making it one of the most energy-efficient 10-bit video-rate pipeline ADCs reported to date. The computational cost of this design is only 25 min of CPU time, including the evaluation of 13 different pipeline architectures potentially feasible for the targeted specifications. The optimum design derived from the synthesis procedure has been fine-tuned to support PVT variations, laid out together with other auxiliary blocks, and fabricated. The experimental results show a power consumption of 23 [email protected] V and an effective resolution of 9.47-bit@1 MHz. Bearing in mind that no specific power-reduction strategy has been applied, these results confirm the reliability of the proposed approach. Ministerio de Ciencia e Innovación TEC2009-08447; Junta de Andalucía TIC-0281
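
    A toy sketch of the design-space-reduction idea, under made-up feasibility and cost expressions: per-stage resolution assignments that cannot meet the target specification are discarded by a cheap analytical check before any expensive behavioural evaluation, and the surviving candidates are ranked by an estimated cost. None of the expressions below come from the paper, which sizes transistors from detailed behavioural models.

```python
from itertools import product

# Toy illustration of design-space reduction: cheap feasibility checks prune
# architectures before any expensive evaluation. The expressions below are
# placeholders, not the paper's behavioural models.
TARGET_BITS = 10

def feasible(stages):
    """Assume 0.5-bit redundancy per stage and a 2-bit final flash (hypothetical)."""
    return sum(b - 0.5 for b in stages) + 2 >= TARGET_BITS

def power_estimate(stages):
    """Toy cost model: per-stage cost grows with resolution, early stages weigh more."""
    return sum((2 ** b) / (i + 1) for i, b in enumerate(stages))

candidates = [s for n in range(3, 7)
                for s in product((1.5, 2.5, 3.5), repeat=n)
                if feasible(s)]
best = min(candidates, key=power_estimate)
print(f"{len(candidates)} feasible architectures; lowest estimated cost:", best)
```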

    A Simple Technique for Fast Digital Background Calibration of A/D Converters

    A modification of the background digital calibration procedure for A/D converters by Li and Moon is proposed, based on a method to improve the speed of convergence and the accuracy of the calibration. The procedure exploits a colored random sequence in the calibration algorithm and can be applied both to narrowband input signals and to baseband signals, with a slight penalty on the analog bandwidth of the converter. By improving the signal-to-calibration-noise ratio of the statistical estimation of the error parameters, the proposed technique can be employed either to improve linearity or to make the calibration procedure faster. A practical method to generate the random sequence with minimum overhead with respect to a simple PRBS is also presented. Simulations have been performed on a 14-bit pipeline A/D converter in which the first 4 stages have been calibrated, showing a 15 dB improvement in THD and SFDR for the same calibration time with respect to the original technique.
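
    A minimal sketch of the kind of sequence generation described above: a ±1 PRBS from a 15-bit LFSR is coloured with a simple first-difference filter so that most of the injected calibration power sits away from a low-frequency signal band. The polynomial, the shaping filter, and the power measure are illustrative assumptions; the paper's actual low-overhead generator is not reproduced.

```python
import numpy as np

# Generate a +/-1 PRBS from a 15-bit LFSR (x^15 + x^14 + 1), then colour it with a
# simple (1 - z^-1) high-pass so the calibration energy is pushed away from the
# low-frequency signal band. The shaping used in the paper may differ.
def prbs15(n, seed=0x1FFF):
    state, out = seed, np.empty(n)
    for i in range(n):
        bit = ((state >> 14) ^ (state >> 13)) & 1
        state = ((state << 1) | bit) & 0x7FFF
        out[i] = 2.0 * bit - 1.0
    return out

n = 1 << 16
white = prbs15(n)
colored = np.diff(np.concatenate(([0.0], white)))   # first-difference colouring
colored /= np.sqrt(np.mean(colored**2))             # renormalise injected power

for name, seq in (("white PRBS", white), ("coloured", colored)):
    spec = np.abs(np.fft.rfft(seq))**2
    low = spec[: n // 16].sum() / spec.sum()         # fraction of power below fs/16
    print(f"{name:>10}: {100*low:.1f}% of calibration power below fs/16")
```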

    Design and debugging of multi-step analog to digital converters

    With the fast advancement of CMOS fabrication technology, more and more signal-processing functions are implemented in the digital domain for lower cost, lower power consumption, higher yield, and higher re-configurability. The trend of increasing integration level has forced the A/D converter interface to reside on the same silicon in complex mixed-signal ICs containing mostly digital blocks for DSP and control. However, specifications of the converters in various applications emphasize high dynamic range and low spurious spectral performance. It is nontrivial to achieve this level of linearity in a monolithic environment where post-fabrication component trimming or calibration is cumbersome to implement for certain applications and/or for cost and manufacturability reasons. Additionally, as CMOS integrated circuits reach unprecedented integration levels, potential problems associated with device scaling, the short-channel effects, loom large as technology strides into the deep-submicron regime.

The A/D conversion process involves sampling the applied analog input signal and quantizing it to its digital representation by comparing it to reference voltages before further signal processing in subsequent digital systems. Depending on how these functions are combined, different A/D converter architectures can be implemented, with different requirements on each function. Practical realizations show that, to first order, converter power is directly proportional to sampling rate; however, the required power dissipation grows nonlinearly as the speed capabilities of a process technology are pushed to the limit. Pipeline and two-step/multi-step converters tend to be the most efficient at achieving a given resolution and sampling rate specification.

This thesis covers the whole spectrum of design, test, debugging, and calibration of multi-step A/D converters. It incorporates the development of circuit techniques and algorithms to enhance the resolution and attainable sample rate of an A/D converter, and to enhance testing and debugging capabilities: detecting errors dynamically, isolating and confining faults, and recovering from and compensating for errors continuously. The power efficiency achievable at high resolution by combining parallelism and calibration and by exploiting low-voltage circuit techniques is demonstrated with a 1.8 V, 12-bit, 80 MS/s, 100 mW analog-to-digital converter fabricated in a five-metal-layer 0.18-µm CMOS process.

Lower power supply voltages significantly reduce noise margins and increase variations in process, device, and design parameters. Consequently, it is steadily more difficult to control the fabrication process precisely enough to maintain uniformity. Microscopic particles present in the manufacturing environment and slight variations in the parameters of manufacturing steps can all cause the geometrical and electrical properties of an IC to deviate from those generated at the end of the design process. Such defects can cause various types of malfunctioning, depending on the IC topology and the nature of the defect. To relieve the burden placed on IC design and manufacturing by the ever-increasing costs associated with testing and debugging of complex mixed-signal electronic systems, several circuit techniques and algorithms are developed and incorporated in the proposed ATPG, DfT, and BIST methodologies.
Process variation cannot be solved by improving manufacturing tolerances; variability must be reduced by new device technology or managed by design in order for scaling to continue. Similarly, within-die performance variation imposes new challenges for test methods. With the use of dedicated sensors, which exploit knowledge of the circuit structure and the specific defect mechanisms, the method described in this thesis facilitates early and fast identification of excessive process-parameter-variation effects. The expectation-maximization algorithm makes the estimation problem more tractable and also yields good parameter estimates for small sample sizes. To guide testing with the information obtained by monitoring process variations, an adjusted support vector machine classifier is implemented that simultaneously minimizes the empirical classification error and maximizes the geometric margin.

On a positive note, the use of digital enhancement and calibration techniques reduces the need for expensive technologies with special fabrication steps. Indeed, the extra cost of digital processing is normally affordable, since submicron mixed-signal technologies allow efficient use of silicon area even for relatively complex algorithms. The adaptive filtering algorithm employed for error estimation requires only a small number of operations per iteration and needs neither correlation-function calculation nor matrix inversions. The presented calibration algorithm does not need any dedicated test signal and does not take away part of the conversion time: it works continuously and with every signal applied to the A/D converter. The feasibility of the method for on-line and off-line debugging and calibration has been verified by experimental measurements from a silicon prototype fabricated in a standard single-poly, six-metal 0.09-µm CMOS process.
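
    As a generic illustration of the adaptive-filtering property mentioned above (a handful of multiply-adds per iteration, no correlation matrices and no matrix inversions, in contrast with RLS-type estimators), the sketch below identifies a small set of hypothetical error coefficients with a plain LMS update. It is not the thesis's specific error-estimation filter.

```python
import numpy as np

# Generic LMS identification loop: each iteration costs O(N) operations and
# needs neither a correlation matrix nor its inverse (unlike RLS).
rng = np.random.default_rng(1)
N = 4                                   # number of error coefficients being tracked
w_true = rng.normal(size=N)             # hypothetical ADC error coefficients
w_hat = np.zeros(N)
mu = 0.05                               # step size

for _ in range(20000):
    x = rng.normal(size=N)              # regressor built from conversion data
    d = w_true @ x                      # observed error measure
    e = d - w_hat @ x                   # a-priori estimation error
    w_hat += mu * e * x                 # LMS update: O(N) work per sample

print("max coefficient error:", np.max(np.abs(w_true - w_hat)))
```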

    A Highly Digital VCO-Based ADC With Lookup-Table-Based Background Calibration

    CMOS technology scaling has enabled dramatic improvement for digital circuits both in terms of speed and power efficiency. However, most traditional analog-to-digital converter (ADC) architectures are challenged by ever-decreasing supply voltage. The improvement in time resolution enabled by increased digital speeds drives design towards time-domain architectures such as voltage-controlled-oscillator (VCO) based ADCs. The main challenge in VCO-based ADC design is mitigating the nonlinearity of VCO Voltage-to-frequency (V-to-f) characteristics. Achieving signal-to-noise ratio (SNR) performance better than 40dB requires some form of calibration, which can be realized by analog or digital techniques, or some combination. This dissertation proposes a highly digital, reconfigurable VCO-based ADC with lookup-table (LUT) based background calibration based on split ADC architecture. Each of the two split channels, ADC A and B , contains two VCOs in a differential configuration. This helps alleviate even-order distortions as well as increase the dynamic range. A digital controller on chip can reconfigure the ADCs\u27 sampling rates and resolutions to adapt to various application scenarios. Different types of input signals can be used to train the ADC’s LUT parameters through the simple, anti-aliasing continuous-time input to achieve target resolution. The chip is fabricated in a 180 nm CMOS process, and the active area of analog and digital circuits is 0.09 and 0.16mm^2, respectively. Power consumption of the core ADC function is 25 mW. Measured results for this prototype design with 12-b resolution show ENOB improves from uncorrected 5-b to 11.5-b with calibration time within 200 ms (780K conversions at 5 MSps sample rate)
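
    A minimal sketch of the correction path only, under assumed numbers: a VCO channel with a cubic V-to-f distortion is digitised by counting edges per sample period, and each raw count is replaced by a LUT entry built here from a one-off ramp measurement. The split-ADC background training of the LUTs, the differential VCO pair, and the noise-shaping phase residue are all left out.

```python
import numpy as np

# LUT correction of a VCO-based quantiser (assumed numbers): the VCO frequency is
# a distorted function of the input, the per-sample edge count is the raw code,
# and the LUT maps each raw count back to an input-referred value.
FS, F0, KV, A3 = 5e6, 200e6, 150e6, -0.2    # sample rate, rest freq, gain, cubic term

def vco_count(vin):
    """Edges counted in one sample period for a cubically distorted V-to-f curve."""
    freq = F0 + KV * (vin + A3 * vin**3)
    return int(freq / FS)

# Build the LUT from a slow ramp sweep (a foreground-style measurement).
ramp = np.linspace(-1, 1, 4096)
counts = np.array([vco_count(v) for v in ramp])
lut = np.zeros(counts.max() + 1)
for c in range(counts.min(), counts.max() + 1):
    lut[c] = ramp[counts == c].mean()        # input voltage associated with each raw count

vin = 0.7                                    # example conversion: raw count in, LUT value out
print("raw count:", vco_count(vin), " LUT-corrected value:", round(float(lut[vco_count(vin)]), 3))
```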

    Electronics for Sensors

    The aim of this Special Issue is to explore new advanced solutions in electronic systems and interfaces to be employed in sensors, describing best practices, implementations, and applications. The selected papers concern, in particular, interfaces and applications of photomultiplier tubes (PMTs) and silicon photomultipliers (SiPMs), techniques for monitoring radiation levels, electronics for biomedical applications, design and applications of time-to-digital converters, interfaces for image sensors, and general-purpose theory and topologies for electronic interfaces.

    Digital Background Self-Calibration Technique for Compensating Transition Offsets in Reference-less Flash ADCs

    This dissertation focuses on proving that background calibration using adaptive algorithms is a low-cost, stable, and effective method for obtaining high accuracy in flash A/D converters. An integrated reference-less 3-bit flash ADC circuit has been designed and taped out in UMC 180 nm CMOS technology in order to prove the efficiency of the proposed background calibration. The references for the ADC transitions are implemented virtually, built into the comparators' dynamic-latch topology by a controlled mismatch added to each comparator input front-end. A very simple external DAC block (the calibration bank) allows the amount of mismatch added to each comparator front-end to be controlled and, therefore, the offset of its effective transition with respect to the nominal value to be compensated. To assist in the estimation of the offsets of the prototype comparators, an auxiliary A/D converter with higher resolution and lower conversion speed than the flash ADC is used: a 6-bit capacitive-DAC SAR type. Special care has been taken to synchronize the analogue sampling instants of the two ADCs. In this thesis, a criterion to identify the optimum parameters of the flash ADC design with adaptive background calibration has been established; with this criterion, the best choices for the dynamic-latch architecture, the calibration-bank resolution, and the flash ADC resolution are selected. The performance of the calibration algorithm has been tested, providing extensive programmability to the digital processor that implements the algorithm and allowing the algorithm limits, accuracy, and arithmetic quantization errors to be chosen. Furthermore, a systematic controlled offset can be forced in the comparators of the flash ADC in order to test the calibration more exhaustively.
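
    A behavioural sketch of the kind of background loop described above, with assumed offsets, trim step, and update rule: each comparator's decision is checked against the slower 6-bit SAR measurement of the same sampled input, and that comparator's calibration-bank code is stepped by one whenever the two disagree. The dynamic-latch details and the actual calibration-bank resolution of the taped-out design are not reproduced.

```python
import numpy as np

# Background offset trimming of a 3-bit flash ADC assisted by a 6-bit SAR.
# All numerical values below are assumptions for illustration.
rng = np.random.default_rng(3)
N_CMP, CAL_LSB, CAL_RANGE = 7, 0.01, 31            # 7 comparators, +/-31-code trim bank
nominal = np.linspace(-0.75, 0.75, N_CMP)          # nominal transition levels
offset = rng.normal(scale=0.08, size=N_CMP)        # random comparator offsets
trim = np.zeros(N_CMP, dtype=int)                  # calibration-bank codes

def comparator(vin, i):
    """Decision with the effective (offset- and trim-shifted) threshold."""
    return vin > nominal[i] + offset[i] - trim[i] * CAL_LSB

for _ in range(50000):
    vin = 2 * rng.random() - 1                     # normal conversions keep running
    sar = np.round(vin * 32) / 32                  # 6-bit SAR estimate of the same sample
    for i in range(N_CMP):
        want = sar > nominal[i]                    # decision implied by the SAR value
        if comparator(vin, i) != want:             # disagreement: step the trim one code
            trim[i] = int(np.clip(trim[i] + (1 if want else -1), -CAL_RANGE, CAL_RANGE))

print("max threshold error before:", round(float(np.max(np.abs(offset))), 3),
      " after:", round(float(np.max(np.abs(offset - trim * CAL_LSB))), 3))
```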