    Architectural Improvements Towards an Efficient 16-18 Bit 100-200 MSPS ADC

    As data-conversion systems continue to improve in speed and resolution, increasing demands are placed on the performance of high-speed analog-to-digital conversion systems. This work surveys these demands and proposes a suitable architecture to achieve the target specification of 100-200 MS/s at 16-18 bits of resolution. The main architecture is based on parallel structures in order to achieve a high sampling rate and high resolution at the same time. To solve the problems inherent to time-interleaved architectures, an advanced randomization method is introduced that combines randomization with spectral shaping of the mismatches. With a simple low-pass filter, the method improves both the SFDR and the SINAD compared to conventional randomization algorithms. Its main advantage over previous techniques is that the algorithm only requires the ADCs to be ordered according to their timing mismatches, so the absolute accuracy of the mismatch identification does not matter and the requirements on timing-mismatch identification are very low. In addition, the correction system uses very simple algorithms that correct not only timing but also gain and offset mismatches.
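
    A minimal sketch of the principle the proposed method builds on: replacing the fixed round-robin channel order of a time-interleaved ADC with a pseudo-random order spreads mismatch spurs into broadband noise. Only plain randomization with gain/offset mismatch is shown here, not the paper's ordered, spectrally shaped variant, and all numeric values are invented for the demo.

        import numpy as np

        # Plain channel randomization (illustration only, not the paper's algorithm):
        # with a fixed round-robin order, gain/offset mismatch creates discrete spurs
        # at multiples of fs/M; choosing the sub-ADC at random spreads them into noise.
        M, n, f0 = 4, 4096, 0.1013                      # channels, samples, tone (x fs)
        x = np.sin(2 * np.pi * f0 * np.arange(n))
        offset = np.array([0.0, 2e-3, -1e-3, 1.5e-3])   # made-up per-channel offsets
        gain = np.array([1.0, 1.002, 0.999, 1.001])     # made-up per-channel gains

        def worst_spur_db(y, tone_bin, guard=4):
            spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
            carrier = spec[tone_bin]
            spec[tone_bin - guard:tone_bin + guard + 1] = 0.0   # mask the test tone
            return 20 * np.log10(spec.max() / carrier)

        rng = np.random.default_rng(0)
        fixed_order = np.arange(n) % M                  # conventional interleaving
        random_order = rng.integers(0, M, n)            # randomized channel selection
        tone_bin = int(round(f0 * n))
        for name, ch in (("fixed", fixed_order), ("randomized", random_order)):
            y = gain[ch] * x + offset[ch]
            print(name, "worst mismatch spur:", round(worst_spur_db(y, tone_bin), 1), "dBc")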

    A 16-b 10Msample/s Split-Interleaved Analog to Digital Converter

    This work describes the integrated-circuit design of a 16-bit, 10 Msample/s, combined ‘split’ and interleaved analog-to-digital converter. Time interleaving of analog-to-digital converters has been used successfully for many years as a technique to achieve faster speeds using multiple identical converters. However, efforts to achieve higher resolutions with this technique have been hampered by the precise matching required of the converter channels. The most troublesome errors in these converters are gain, offset and timing differences between channels. The ‘split ADC’ is a new concept that allows the use of a deterministic, digital, self-calibrating algorithm. In this approach, an ADC is split into two paths, producing two output codes from the same input sample. The difference of these two codes is used as the calibration signal for an LMS error-estimation algorithm that drives the difference error to zero. The ADC is calibrated when the codes are equal, and the output is taken as the average of the two codes. The ‘split’ ADC concept and the interleaved architecture are combined in this IC design to form the core of a high-speed, high-resolution, self-calibrating ADC system. The dual outputs drive a digital calibration engine that corrects the channel mismatch errors. This system has the speed benefits of interleaving while maintaining high resolution. The hardware for the algorithm as well as the ADC can be implemented in a standard 0.25 µm CMOS process, resulting in a relatively inexpensive solution. This work is supported by grants from Analog Devices Incorporated (ADI) and the National Science Foundation (NSF).
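
    The error-feedback skeleton of the ‘split’ idea can be sketched in a few lines: two half-ADC code streams from the same sample are compared, the difference drives LMS updates, and the output is the average of the two codes. The sketch below calibrates half B against half A with made-up gain/offset errors; the actual design calibrates internal converter parameters rather than one half against the other.

        import numpy as np

        rng = np.random.default_rng(1)
        gA, oA = 1.004, 3e-3            # unknown errors of half A (invented values)
        gB, oB = 0.997, -2e-3           # unknown errors of half B (invented values)
        g_hat, o_hat = 1.0, 0.0         # digital correction applied to half B's codes
        mu = 1e-3                       # LMS step size

        for _ in range(20000):
            x = rng.uniform(-1.0, 1.0)          # both halves see the same sample
            dA = gA * x + oA                    # code from half A
            dB = gB * x + oB                    # code from half B
            dB_corr = g_hat * dB + o_hat
            e = dA - dB_corr                    # calibration signal = code difference
            g_hat += mu * e * dB                # LMS updates drive e toward zero
            o_hat += mu * e
            y = 0.5 * (dA + dB_corr)            # converter output = average of codes

        print("residual gain/offset disagreement:",
              abs(gA - g_hat * gB), abs(oA - (g_hat * oB + o_hat)))

    Once the difference has been driven to zero for all inputs, the two code streams agree and the averaged output inherits the corrected characteristic.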

    A Novel Iterative Structure for Online Calibration of M-Channel Time-Interleaved ADCs

    Analog‐to‐Digital Conversion for Cognitive Radio: Subsampling, Interleaving, and Compressive Sensing

    This chapter explores different analog-to-digital conversion techniques that are suitable for implementation in cognitive radio receivers. It details the fundamentals, advantages, and drawbacks of three promising techniques: subsampling, interleaving, and compressive sensing. Owing to their greater maturity, subsampling- and interleaving-based systems are described in further detail, whereas compressive sensing-based systems are described as a complement to the previous techniques for underutilized-spectrum applications. The feasibility of these techniques as part of software-defined radio, multistandard, and spectrum-sensing receivers is demonstrated by proposing different architectures with reduced circuit-level complexity, depending on the application requirements. Additionally, the chapter proposes different solutions for integrating the advantages of these techniques into a single analog-to-digital conversion process.
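
    As a small numerical aid to the subsampling discussion, the sketch below computes where a band-limited RF carrier lands after bandpass sampling; the carrier and sampling frequencies are arbitrary examples, not values from the chapter.

        # Bandpass (sub-Nyquist) sampling: a carrier at fc sampled at fs < 2*fc
        # aliases to a predictable frequency inside the first Nyquist zone [0, fs/2],
        # provided the whole signal band fits within one Nyquist zone.
        def alias_frequency(fc, fs):
            f = fc % fs
            return f if f <= fs / 2 else fs - f

        fc, fs = 2.44e9, 100e6                        # example: 2.44 GHz carrier, 100 MS/s
        print(alias_frequency(fc, fs) / 1e6, "MHz")   # -> 40.0 MHz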

    A Statistic-Based Calibration Method for TIADC System

    The time-interleaved technique is widely used to increase the sampling rate of analog-to-digital converters (ADCs). However, channel mismatches degrade the performance of the time-interleaved ADC (TIADC). Therefore, a statistic-based calibration method for TIADCs is proposed in this paper. The average value of the sampling points is used to calculate the offset error, and the summation of the sampling points is used to calculate the gain error. Once the offset and gain errors are obtained, they are calibrated by offset and gain adjustment elements in the ADC. Timing skew is calibrated by an iterative method, using the product of sampling points from two adjacent subchannels as the calibration metric. The proposed method is employed to calibrate mismatches in a four-channel 5 GS/s TIADC system. Simulation results show that the proposed method estimates mismatches accurately over a wide frequency range, and that an accurate estimate is obtained even when the signal-to-noise ratio (SNR) of the input signal is 20 dB. Furthermore, results obtained from a real four-channel 5 GS/s TIADC system demonstrate the effectiveness of the proposed method: the spectral spurs due to the mismatches are effectively eliminated after calibration.
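
    A toy version of the statistical estimation step is sketched below for a four-channel model with gain and offset errors only: each channel's mean gives its offset, its mean magnitude gives its relative gain, and the mean product of adjacent-channel samples is computed as the kind of metric an iterative timing-skew loop can act on. The exact statistics, the skew loop, and all numeric error values here are not taken from the paper.

        import numpy as np

        M, n = 4, 1 << 16
        t = np.arange(n)
        x = np.sin(2 * np.pi * 0.0731 * t)                  # zero-mean test input
        offset = np.array([1e-3, -2e-3, 3e-3, 0.5e-3])      # invented channel errors
        gain = np.array([1.000, 1.004, 0.997, 1.002])
        ch = t % M
        y = gain[ch] * x + offset[ch]                       # interleaved output (no skew modelled)

        est_offset = np.array([y[ch == m].mean() for m in range(M)])
        mag = np.array([np.abs(y[ch == m] - est_offset[m]).mean() for m in range(M)])
        est_gain = mag / mag.mean()                         # gains relative to the array average

        # Mean product of samples from adjacent sub-channels: timing skew makes these
        # averages unequal, which is the sort of metric an iterative skew loop can null.
        skew_metric = []
        for m in range(M):
            idx = np.where(ch == m)[0][:-1]
            skew_metric.append(np.mean(y[idx] * y[idx + 1]))

        print("offset estimates:", np.round(est_offset, 4))
        print("relative gain estimates:", np.round(est_gain, 4))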

    Design and debugging of multi-step analog to digital converters

    With the fast advancement of CMOS fabrication technology, more and more signal-processing functions are implemented in the digital domain for lower cost, lower power consumption, higher yield, and higher reconfigurability. The trend toward higher integration levels has forced the A/D converter interface to reside on the same silicon in complex mixed-signal ICs containing mostly digital blocks for DSP and control. However, converter specifications in many applications emphasize high dynamic range and low spurious spectral content, and it is nontrivial to achieve this level of linearity in a monolithic environment where post-fabrication component trimming or calibration is cumbersome to implement, whether for practical or for cost and manufacturability reasons. Additionally, as CMOS integrated circuits reach unprecedented integration levels, the problems associated with device scaling – the short-channel effects – loom large as technology strides into the deep-submicron regime.

    The A/D conversion process involves sampling the applied analog input signal and quantizing it to its digital representation by comparing it to reference voltages before further processing in subsequent digital systems. Depending on how these functions are combined, different A/D converter architectures can be implemented, with different requirements on each function. Practical realizations show that, to first order, converter power is directly proportional to sampling rate; however, the required power dissipation becomes nonlinear as the speed capabilities of a process technology are pushed to the limit. Pipeline and two-step/multi-step converters tend to be the most efficient at achieving a given resolution and sampling-rate specification.

    This thesis covers the whole spectrum of design, test, debugging, and calibration of multi-step A/D converters. It develops circuit techniques and algorithms that enhance the resolution and attainable sample rate of an A/D converter, and that improve testing and debugging: detecting errors dynamically, isolating and confining faults, and recovering from and compensating for errors continuously. The power efficiency attainable at high resolution by combining parallelism and calibration and by exploiting low-voltage circuit techniques is demonstrated with a 1.8 V, 12-bit, 80 MS/s, 100 mW analog-to-digital converter fabricated in a five-metal-layer 0.18 µm CMOS process.

    Lower power-supply voltages significantly reduce noise margins and increase variations in process, device, and design parameters; consequently, it is steadily more difficult to control the fabrication process precisely enough to maintain uniformity. Microscopic particles present in the manufacturing environment and slight variations in the parameters of manufacturing steps can cause the geometrical and electrical properties of an IC to deviate from those intended at the end of the design process, leading to various types of malfunction depending on the IC topology and the nature of the defect. To relieve the burden that the ever-increasing cost of testing and debugging complex mixed-signal electronic systems places on IC design and manufacturing, several circuit techniques and algorithms are developed and incorporated into the proposed ATPG, DfT and BIST methodologies.
    Process variation cannot be solved by improving manufacturing tolerances; variability must be reduced through new device technology or managed by design in order for scaling to continue. Similarly, within-die performance variation imposes new challenges for test methods. With the use of dedicated sensors that exploit knowledge of the circuit structure and of the specific defect mechanisms, the method described in this thesis facilitates early and fast identification of excessive process-parameter variation effects. An expectation-maximization algorithm makes the estimation problem more tractable and yields good parameter estimates even for small sample sizes. To guide testing with the information obtained from monitoring process variations, an adjusted support vector machine classifier is implemented that simultaneously minimizes the empirical classification error and maximizes the geometric margin. On a positive note, the use of digital calibration techniques reduces the need for expensive technologies with special fabrication steps; the extra cost of digital processing is normally affordable, since submicron mixed-signal technologies allow efficient use of silicon area even for relatively complex algorithms. The adaptive filtering algorithm employed for error estimation requires only a small number of operations per iteration and needs neither correlation-function calculation nor matrix inversion. The presented foreground calibration algorithm does not need a dedicated test signal and does not consume part of the conversion time: it works continuously, with every signal applied to the A/D converter. The feasibility of the method for on-line and off-line debugging and calibration has been verified by experimental measurements from a silicon prototype fabricated in a standard single-poly, six-metal 0.09 µm CMOS process.
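
    For readers unfamiliar with the multi-step principle the thesis revolves around, the following sketch models an idealized two-step conversion: a coarse quantizer resolves the upper bits, the residue is amplified by the inter-stage gain, and a fine quantizer resolves the lower bits. The bit split and full-scale range are arbitrary example choices, and none of the mismatch, calibration, or test circuitry from the thesis is modelled.

        import numpy as np

        def two_step_adc(x, coarse_bits=6, fine_bits=6, vref=1.0):
            # Ideal two-step conversion of x in [0, vref): coarse decision,
            # residue amplification by 2**coarse_bits, then fine decision.
            lsb_coarse = vref / 2**coarse_bits
            coarse = np.clip(np.floor(x / lsb_coarse), 0, 2**coarse_bits - 1)
            residue = (x - coarse * lsb_coarse) * 2**coarse_bits
            lsb_fine = vref / 2**fine_bits
            fine = np.clip(np.floor(residue / lsb_fine), 0, 2**fine_bits - 1)
            return (coarse.astype(int) << fine_bits) | fine.astype(int)

        x = np.linspace(0.0, 1.0, 9, endpoint=False)
        print(two_step_adc(x))    # 12-bit codes assembled from two 6-bit decisions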

    New iterative framework for frequency response mismatch correction in time-interleaved ADCs: Design and performance analysis

    This paper proposes a new iterative framework for the correction of frequency-response mismatch in time-interleaved analog-to-digital converters. Based on a general time-varying linear system model of the mismatch, we treat the reconstruction problem as a linear inverse problem and establish a flexible iterative framework for practical implementation. It encompasses a number of efficient iterative correction algorithms and simplifies their design, implementation, and performance analysis. In particular, an efficient Gauss-Seidel iteration is studied in detail to illustrate how the correction problem can be solved iteratively and how the proposed structure can be efficiently implemented using Farrow-based variable digital filters with few general-purpose multipliers. We also study important issues such as the sufficient convergence condition and the reconstructed signal spectrum, derive a new lower bound on the signal-to-distortion-and-noise ratio in order to ensure stable operation, and predict the performance of the proposed structure. Furthermore, we propose an extended iterative structure that is able to cope with systems involving more than one type of mismatch. Finally, the theoretical results and the effectiveness of the proposed approach are validated by means of computer simulations. © 2011 IEEE.
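
    To give a flavour of the iterative reconstruction idea (in a simplified Richardson/fixed-point form rather than the paper's Gauss-Seidel iteration with Farrow-based filters), the sketch below models timing skew only, writes the acquired signal as y = x + E{x} with E a time-varying derivative operator, and iterates x_{k+1} = y - E{x_k}. The channel skews and test signal are invented.

        import numpy as np

        M, n = 4, 4096
        skew = np.array([0.0, 0.03, -0.02, 0.015])     # skews in sample periods (invented)
        t = np.arange(n, dtype=float)
        x = np.sin(2 * np.pi * 0.11 * t)               # ideal uniform samples
        y = np.sin(2 * np.pi * 0.11 * (t + skew[np.arange(n) % M]))   # skewed acquisition

        def apply_E(v):
            # First-order model of the skew error: E{v}[n] = skew[n mod M] * dv/dn,
            # with the derivative approximated by central differences.
            return skew[np.arange(len(v)) % M] * np.gradient(v)

        xk = y.copy()
        for k in range(5):
            xk = y - apply_E(xk)                       # fixed-point iteration
            err = (xk - x)[10:-10]
            print(f"iteration {k}: rms error = {np.sqrt(np.mean(err**2)):.2e}")

    The error shrinks geometrically as long as the mismatch operator is a contraction, which is the kind of sufficient convergence condition the paper analyses for its Gauss-Seidel variant.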

    Iterative correction of frequency response mismatches in time-interleaved ADCs: A novel framework and case study in OFDM systems

    In this paper, we study a versatile iterative framework for the correction of frequency-response mismatch in time-interleaved ADCs. Based on a general time-varying linear system model, we establish a flexible iterative framework that enables the development of various efficient iterative correction algorithms. In particular, we study the Gauss-Seidel iteration in detail to illustrate how the correction problem can be solved iteratively, and show that the iterative structure can be efficiently implemented using Farrow-based variable digital filters with few general-purpose multipliers. Simulation results show that the proposed iterative structure performs better than conventional compensation structures. Moreover, a preliminary study of the effect of TI ADC mismatch on the BER performance of OFDM systems is conducted. © 2010 IEEE. The 1st International Conference on Green Circuits and Systems (ICGCS 2010), Shanghai, China, 21-23 June 2010. In Proceedings of the 1st ICGCS, 2010, p. 253-25
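
    Since both this paper and the journal version above rely on Farrow-based variable digital filters, a minimal Farrow fractional-delay filter is sketched below using cubic Lagrange subfilters (a textbook choice, not the papers' optimized coefficients): a bank of fixed FIRs is combined by powers of the fractional delay mu, so only mu has to change from channel to channel.

        import numpy as np

        # Farrow coefficient matrix of the cubic Lagrange interpolator:
        # rows are the taps on x[n-1], x[n], x[n+1], x[n+2]; columns are mu^0..mu^3.
        C = np.array([
            [0.0, -1/3,  1/2, -1/6],
            [1.0, -1/2, -1.0,  1/2],
            [0.0,  1.0,  1/2, -1/2],
            [0.0, -1/6,  0.0,  1/6],
        ])

        def farrow_delay(x, mu):
            """Approximate x(n + mu), 0 <= mu < 1, with fixed subfilters and powers of mu."""
            taps = np.stack([np.roll(x, 1), x, np.roll(x, -1), np.roll(x, -2)])
            v = C.T @ taps                            # outputs of the four fixed subfilters
            return v.T @ np.array([1.0, mu, mu**2, mu**3])

        t = np.arange(64, dtype=float)
        x = np.sin(2 * np.pi * 0.05 * t)
        y = farrow_delay(x, 0.3)
        print(np.max(np.abs(y[4:-4] - np.sin(2 * np.pi * 0.05 * (t[4:-4] + 0.3)))))  # ~2e-4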