23 research outputs found

    Design of a Cost-Efficient Reconfigurable Pipeline ADC

    The power budget is critical in the design of battery-powered implantable biomedical instruments. High speed, high resolution and low power usually cannot be achieved at the same time, so a tradeoff must be made among these requirements. As the main component of the bioinstrument, a high-conversion-rate, high-resolution ADC consumes most of the power. Fortunately, based on the operating modes of the bioinstrument, a reconfigurable ADC can be used to address this problem. The reconfigurable ADC operates at 10-bit, 40 MSPS in the diagnosis mode and at 8-bit, 2.5 MSPS in the monitor mode, and is turned off completely if no active signal comes from the sensors or if an off command is received from the antenna. By turning off the sample-and-hold stage and the first two stages of the pipeline ADC, a significant power saving is achieved. However, the reconfigurable ADC suffers from two drawbacks. First, leakage signals through the extra off-state switches in the third stage degrade the performance of the data converter, and this effect becomes worse for high-speed, high-resolution applications. An interference-elimination technique is proposed in this work to solve this problem; simulation results show significant attenuation of the spurious tones. Second, the transistors in the OTA tend to operate in the weak-inversion region because of the scaled bias current, and a transistor in subthreshold is slow due to its small transit frequency. To obtain a better tradeoff between transconductance efficiency and transit frequency, reconfigurable OTAs and a scalable bias technique are devised to move the operating point from weak inversion to moderate inversion. The figure of merit of the reconfigurable ADC is comparable to that of previously published conventional pipeline ADCs. In the 10-bit, 40 MSPS mode, the ADC attains a 56.9 dB SNDR at 35.4 mW power consumption; in the 8-bit, 2.5 MSPS mode, it attains a 49.2 dB SNDR at 7.9 mW. The core layout area is 1.9 mm² in a 0.35 µm process.
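    The abstract does not state which figure-of-merit definition it uses; as a minimal sketch, assuming the conventional Walden FOM, FOM = P / (2^ENOB · f_s), with ENOB derived from the reported SNDR, the two operating modes can be compared from the quoted numbers as follows:

```python
# Walden figure of merit for the two reported operating modes, assuming
# FOM = P / (2**ENOB * fs) with ENOB = (SNDR - 1.76) / 6.02. The abstract does
# not say which FOM definition it uses, so this is one plausible reading only.

def walden_fom(power_w: float, sndr_db: float, fs_hz: float) -> float:
    """Return the Walden figure of merit in joules per conversion step."""
    enob = (sndr_db - 1.76) / 6.02
    return power_w / (2.0 ** enob * fs_hz)

modes = {
    "10-bit, 40 MSPS (diagnosis)": (35.4e-3, 56.9, 40e6),
    "8-bit, 2.5 MSPS (monitor)": (7.9e-3, 49.2, 2.5e6),
}

for name, (power_w, sndr_db, fs_hz) in modes.items():
    enob = (sndr_db - 1.76) / 6.02
    fom_pj = walden_fom(power_w, sndr_db, fs_hz) * 1e12
    print(f"{name}: ENOB = {enob:.2f} bit, FOM = {fom_pj:.2f} pJ/step")
```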

    NEGATIVE BIAS TEMPERATURE INSTABILITY STUDIES FOR ANALOG SOC CIRCUITS

    Negative Bias Temperature Instability (NBTI) is one of the recent reliability issues in subthreshold CMOS circuits. Analog circuits require matched device pairs, and NBTI-induced mismatch can cause circuit failure. This work assesses the NBTI effect while considering voltage and temperature variations, and provides working knowledge of NBTI to the circuit-design community for reliable design of SOC analog circuits. There have been numerous studies to date on the NBTI effect on analog circuits. However, the implications of NBTI stress on analog circuits built around a bandgap reference circuit have not been studied. The reliability performance of all matched-pair circuits, particularly the bandgap reference, depends on the aging differential between the matched devices. Reliability simulation is mandatory to obtain a realistic risk evaluation for circuit-design reliability qualification, and it is applicable to all circuit aging problems, both analog and digital. The failure rate varies as a function of voltage and temperature. It is shown that the PMOS is the reliability-susceptible device and that NBTI is the most important failure mechanism for analog circuits in sub-micrometer CMOS technology. This study provides a complete reliability simulation analysis of the on-die thermal sensor and digital-to-analog converter (DAC) circuits and analyzes the effect of NBTI using a reliability simulation tool. To check the robustness of the NBTI-stressed SOC circuit design, a burn-in experiment was conducted on the DAC circuits. The NBTI degradation observed in the reliability simulation analysis indicates that, under severe stress conditions, a threshold-voltage mismatch beyond the 2 mV limit is recorded. The burn-in experimental results on the DAC confirm the sensitivity of the DAC circuitry to NBTI.
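    As a rough illustration of how the aging differential of a matched PMOS pair can be screened against a mismatch limit such as the 2 mV figure above, the sketch below uses a generic power-law NBTI degradation model; the model form and every coefficient are illustrative assumptions, not parameters from this work:

```python
import math

# Illustrative power-law NBTI model: dVth = A * exp(-Ea/(k*T)) * Vg**gamma * t**n.
# A, Ea, gamma and n below are placeholder values, not parameters fitted in this study.
K_B = 8.617e-5  # Boltzmann constant in eV/K

def nbti_dvth(t_s: float, vg: float, temp_k: float,
              a: float = 3.0e-3, ea: float = 0.1,
              gamma: float = 2.0, n: float = 0.16) -> float:
    """Threshold-voltage shift (in volts) after t_s seconds of stress."""
    return a * math.exp(-ea / (K_B * temp_k)) * vg ** gamma * t_s ** n

# Matched PMOS pair whose devices see slightly different stress voltages,
# e.g. due to an operating-point asymmetry in the bandgap core (hypothetical numbers).
ten_years = 10 * 365 * 24 * 3600
dvth_a = nbti_dvth(ten_years, vg=1.20, temp_k=398.0)
dvth_b = nbti_dvth(ten_years, vg=1.15, temp_k=398.0)

mismatch_mv = abs(dvth_a - dvth_b) * 1e3
verdict = "exceeds" if mismatch_mv > 2.0 else "is within"
print(f"aging differential = {mismatch_mv:.2f} mV, which {verdict} the 2 mV limit")
```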

    Error modeling, self-calibration and design of pipelined analog to digital converters

    As the field of signal processing accelerates toward the use of high-performance digital techniques, there is a growing need for increasingly fast and accurate analog-to-digital converters. Three highly visible examples of this trend originated in the last decade. The advent of the compact disc revolutionized the way high-fidelity audio is stored, reproduced, recorded and processed. Digital communication links, fiber-optic cables and, in the near future, ISDN (Integrated Services Digital Network) networks are steadily replacing major portions of telephone systems. Finally, video conferencing, multimedia computing and the emerging high-definition television (HDTV) systems rely more and more on real-time digital data compression and image-enhancement techniques. All of these applications rely on analog-to-digital conversion. In digital audio, the required conversion accuracy is high but the conversion speed limited (16 bits, 2 x 20 kHz signal bandwidth). In image processing, the required accuracy is lower but the conversion speed higher (8-10 bits, 5-20 MHz bandwidth). New applications keep pushing for higher conversion rates and, simultaneously, higher accuracies. This dissertation discusses new analog-to-digital converter architectures that could accomplish this. As a consequence of the trend toward digital processing, prominent analog designers throughout the world have engaged in very active research on data conversion. Unfortunately, the literature has not always kept up: at the time of this writing, it was difficult to find detailed fundamental publications on analog-to-digital converter design. This dissertation is a modest attempt to remedy that situation. It is hoped that anyone with a background in analog design can work through it and pick up the fundamentals of converter operation as well as a number of more advanced design techniques.
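    To make the two application corners concrete, here is a small worked example assuming the ideal quantization-noise relation SNR ≈ 6.02·N + 1.76 dB and the Nyquist criterion f_s ≥ 2·B; these standard formulas are used for illustration and are not taken from the dissertation itself:

```python
# Ideal quantization SNR and minimum sampling/bit rates for the two application
# corners quoted above. SNR = 6.02*N + 1.76 dB assumes a full-scale sine input
# and uniform quantization; fs >= 2*B is the Nyquist criterion.

def ideal_snr_db(bits: int) -> float:
    return 6.02 * bits + 1.76

def min_sample_rate_hz(bandwidth_hz: float) -> float:
    return 2.0 * bandwidth_hz

cases = {
    "digital audio (16 bit, 20 kHz)": (16, 20e3),
    "video / imaging (10 bit, 20 MHz)": (10, 20e6),
}

for name, (bits, bw_hz) in cases.items():
    fs = min_sample_rate_hz(bw_hz)
    print(f"{name}: ideal SNR = {ideal_snr_db(bits):.1f} dB, "
          f"fs >= {fs / 1e6:.3g} MS/s, raw rate >= {bits * fs / 1e6:.3g} Mbit/s")
```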

    A built-in self-test technique for high speed analog-to-digital converters

    Fundação para a Ciência e a Tecnologia (FCT) - PhD grant (SFRH/BD/62568/2009)

    Design and debugging of multi-step analog to digital converters

    With the fast advancement of CMOS fabrication technology, more and more signal-processing functions are implemented in the digital domain for lower cost, lower power consumption, higher yield and higher reconfigurability. The trend of increasing integration level for integrated circuits has forced the A/D converter interface to reside on the same silicon in complex mixed-signal ICs containing mostly digital blocks for DSP and control. However, converter specifications in various applications emphasize high dynamic range and low spurious spectral content. It is nontrivial to achieve this level of linearity in a monolithic environment where post-fabrication component trimming or calibration is cumbersome to implement for certain applications and/or for cost and manufacturability reasons. Additionally, as CMOS integrated circuits reach unprecedented integration levels, potential problems associated with device scaling, the short-channel effects, loom large as technology strides into the deep-submicron regime. The A/D conversion process involves sampling the applied analog input signal and quantizing it to its digital representation by comparing it to reference voltages before further signal processing in subsequent digital systems. Depending on how these functions are combined, different A/D converter architectures can be implemented with different requirements on each function. Practical realizations show that, to first order, converter power is directly proportional to sampling rate; however, the required power dissipation becomes nonlinear as the speed capabilities of a process technology are pushed to the limit. Pipeline and two-step/multi-step converters tend to be the most efficient at achieving a given resolution and sampling-rate specification. This thesis is, in a sense, unique in that it covers the whole spectrum of design, test, debugging and calibration of multi-step A/D converters; it develops circuit techniques and algorithms to enhance the resolution and attainable sample rate of an A/D converter and to enhance testing and debugging potential to detect errors dynamically, to isolate and confine faults, and to recover from and compensate for the errors continuously. The power efficiency achievable at high resolution in a multi-step converter, obtained by combining parallelism and calibration and by exploiting low-voltage circuit techniques, is demonstrated with a 1.8 V, 12-bit, 80 MS/s, 100 mW analog-to-digital converter fabricated in a five-metal-layer 0.18-µm CMOS process. Lower power-supply voltages significantly reduce noise margins and increase variations in process, device and design parameters; consequently, it is steadily more difficult to control the fabrication process precisely enough to maintain uniformity. Microscopic particles present in the manufacturing environment and slight variations in the parameters of manufacturing steps can cause the geometrical and electrical properties of an IC to deviate from those generated at the end of the design process, and such defects can cause various types of malfunction depending on the IC topology and the nature of the defect. To relieve the burden placed on IC design and manufacturing by the ever-increasing costs associated with testing and debugging complex mixed-signal electronic systems, several circuit techniques and algorithms are developed and incorporated in the proposed ATPG, DfT and BIST methodologies.
Process variation cannot be solved by improving manufacturing tolerances; variability must be reduced by new device technology or managed by design in order for scaling to continue. Similarly, within-die performance variation imposes new challenges for test methods. With the use of dedicated sensors, which exploit knowledge of the circuit structure and of the specific defect mechanisms, the method described in this thesis facilitates early and fast identification of excessive process-parameter variation effects. The expectation-maximization algorithm makes the estimation problem more tractable and also yields good parameter estimates for small sample sizes. To guide testing with the information obtained by monitoring process variations, an adjusted support-vector-machine classifier is implemented that simultaneously minimizes the empirical classification error and maximizes the geometric margin. On a positive note, the use of digitally enhanced calibration techniques reduces the need for expensive technologies with special fabrication steps; indeed, the extra cost of digital processing is normally affordable, as submicron mixed-signal technologies allow efficient use of silicon area even for relatively complex algorithms. The adaptive filtering algorithm employed for error estimation requires only a small number of operations per iteration and needs neither correlation-function calculations nor matrix inversions. The presented foreground calibration algorithm does not need any dedicated test signal and does not take up a part of the conversion time; it works continuously and with every signal applied to the A/D converter. The feasibility of the method for on-line and off-line debugging and calibration has been verified by experimental measurements from a silicon prototype fabricated in a standard single-poly, six-metal 0.09-µm CMOS process.
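    The adaptive filtering algorithm is characterized above only by its properties (a small number of operations per iteration, no correlation-function calculation, no matrix inversion), which match an LMS-style stochastic-gradient update; the following is a minimal sketch of such an estimator under that assumption, with all signal names, the tap count and the step size chosen purely for illustration:

```python
import numpy as np

# LMS-style adaptive error estimator: each iteration costs one multiply-accumulate
# per tap and uses neither correlation-function estimates nor matrix inversions,
# matching the properties quoted above. Signal names, the tap count and the step
# size mu are illustrative placeholders, not values from the thesis.

def lms_estimate(raw: np.ndarray, reference: np.ndarray,
                 n_taps: int = 8, mu: float = 1e-3) -> np.ndarray:
    """Adapt FIR correction weights w so that (w * raw) tracks the reference."""
    w = np.zeros(n_taps)
    for k in range(n_taps - 1, len(raw)):
        x = raw[k - n_taps + 1:k + 1][::-1]   # raw[k], raw[k-1], ..., raw[k-n_taps+1]
        e = reference[k] - np.dot(w, x)       # instantaneous estimation error
        w += mu * e * x                       # stochastic-gradient weight update
    return w

# Toy usage: estimate the post-inverse of a mild, hypothetical gain/memory error.
rng = np.random.default_rng(0)
ideal = rng.standard_normal(20_000)
distorted = 1.02 * ideal + 0.05 * np.roll(ideal, 1)
w = lms_estimate(distorted, ideal)
corrected = np.convolve(distorted, w)[:ideal.size]
rms = lambda v: float(np.sqrt(np.mean(v ** 2)))
print("rms error before:", rms(distorted - ideal), "after:", rms(corrected - ideal))
```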

    Low Power Signal Processing Techniques for Radiometric Partial Discharge Detection in Wireless Sensor Network.

    High-voltage infrastructure condition monitoring and diagnostics are essential for the continuous and uninterrupted supply of electricity to the grid. A common way of establishing the condition of HV plant equipment is the identification and subsequent monitoring of insulation faults known as partial discharge. Traditional techniques for this require the physical connection of sensors to the plant equipment to be monitored, leading to potentially high cost and system complexity. A non-invasive and significantly less costly approach is the use of wireless receivers to measure the electromagnetic waves produced by partial-discharge faults; hence, a large-scale monitoring network can be realized to monitor equipment within a substation compound at relatively low cost. An issue with the use of commercially available wireless sensor technology based around high-speed data conversion is ensuring that each node is low-cost and low-power while still being capable of detecting and measuring partial-discharge faults within a reasonable distance. To reduce power consumption, two signal-processing techniques are proposed, a transistor-reset integrator and a gated high-speed analogue-to-digital converter, which provide a low-power method of radiometric partial-discharge measurement by avoiding continuous high-speed sampling. The measured results show that the transistor-reset-integrator-based system is capable of measuring partial discharge over a distance of 10 m with an error within 1 m, and of performing location estimation to within 0.1 m of the estimate obtained with high-speed sampling, at a fraction of the power consumption. The folding-ADC-based system is able to sample a PD-like signal but requires additional development to improve its performance and to integrate it fully into a system. The overall results prove the operating principle of the partial-discharge monitoring system, which has the potential to be developed into a viable solution.
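    As a purely illustrative model of why an integrate-and-reset front end relaxes the sampling-rate requirement, the sketch below accumulates the energy of a simulated PD-like pulse between low-rate read-outs instead of sampling the pulse itself at high speed; the waveform, rates and detection threshold are assumptions rather than values from the thesis:

```python
import numpy as np

# Toy model of the gated / integrate-and-reset idea: the energy of a fast PD-like
# burst is accumulated between low-rate read-outs, so the event can be detected
# without continuously sampling at high speed. All rates, the pulse shape and the
# detection threshold below are illustrative assumptions, not values from the thesis.

FS_FAST = 100e6                      # rate the burst itself would require (Hz)
FS_READ = 1e6                        # low-rate read-out of the integrator (Hz)
DECIM = int(FS_FAST / FS_READ)       # fast samples accumulated per read-out slot

rng = np.random.default_rng(1)
t = np.arange(int(1e-3 * FS_FAST)) / FS_FAST                          # 1 ms window
noise = 1e-3 * rng.standard_normal(t.size)
pulse = 0.2 * np.exp(-np.clip(t - 400e-6, 0.0, None) / 50e-9) * (t >= 400e-6)
v_in = noise + pulse                                                  # event at 400 us

# Integrate-and-reset behaviour: accumulate v^2 within each slot, then reset.
readout = (v_in.reshape(-1, DECIM) ** 2).sum(axis=1) / FS_FAST

threshold = 10.0 * np.median(readout)        # crude noise-referenced threshold
hits = np.nonzero(readout > threshold)[0]
print("event detected in read-out slot(s):", hits,
      "=> t approx.", hits * DECIM / FS_FAST * 1e6, "us")
```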

    Nonlinear models and algorithms for RF systems digital calibration

    Focusing on the receiving side of a communication system, the current trend of pushing the digital domain ever closer to the antenna places heavy constraints on the accuracy and linearity of the analog front-end and of the conversion devices. Moreover, mixed-signal implementations of systems-on-chip in nanoscale CMOS processes result in overall poorer analog performance and reduced yield. To cope with the impairments of the low-performance analog section in this "dirty RF" scenario, two solutions exist: designing more complex analog processing architectures, or identifying the errors and correcting them in the digital domain using DSP algorithms. In the latter case, precision constraints on the analog circuits can be offloaded to a digital signal processor. This thesis aims at developing a methodology for the analysis, modeling and compensation of the analog impairments arising in different stages of a receiving chain using digital calibration techniques. Both single-channel and multi-channel architectures are addressed, exploiting the capability of the calibration algorithm to homogenize all the channel responses of a multi-channel system in addition to compensating the nonlinearities in each response. The systems targeted for digital post-compensation are a pipeline ADC, a digital-IF sub-sampling receiver and a 4-channel TI-ADC. The research focuses on post-distortion methods that use nonlinear dynamic models to approximate the post-inverse of the nonlinear system and to correct the distortions arising from static and dynamic errors. The Volterra model is used because of its general approximation capability for the compensation of nonlinear systems with memory. Digital calibration is applied to a sample-and-hold and to a pipeline ADC simulated in a 45 nm process, demonstrating a large linearity improvement even in the presence of incomplete-settling errors and thereby enabling faster clock speeds. An extended model based on the baseband Volterra series is proposed and applied to the compensation of a digital-IF sub-sampling receiver. This architecture performs frequency selectivity at IF with an active band-pass CMOS filter, which causes in-band and out-of-band nonlinear distortion. The improved performance of the proposed model is demonstrated with circuit simulations of a 10th-order band-pass filter, realized as a five-stage Gm-C biquad cascade, and validated using out-of-sample sinusoidal and QAM signals. The same technique is extended to an array receiver with mismatched channel responses, showing that digital calibration can compensate for the loss of directivity and enhance the overall system SFDR. Iterative backward pruning is applied to the Volterra models, showing that complexity can be reduced without degrading linearity and yielding state-of-the-art accuracy/complexity performance. Calibration of time-interleaved ADCs, widely used in wideband RF-to-digital receivers, is carried out with dedicated models, because the steep discontinuities generated by imperfect cancellation of aliasing would require a huge number of terms in a polynomial approximation. A closed-form solution is derived for a 4-channel TI-ADC affected by gain errors and timing skews by solving the perfect-reconstruction equations. A background calibration technique based on a cyclostationary filter-bank architecture is presented. The convergence speed and accuracy of the recursive algorithm are discussed, and complexity-reduction techniques are applied.
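    The thesis identifies Volterra-based post-inverses for the nonlinear stages; as a minimal sketch of that general idea, the example below uses a memory-polynomial subset of the Volterra series, identified by least squares against a training reference. The model orders, the toy distortion and all signal names are assumptions made only for illustration:

```python
import numpy as np

# Post-distortion with a memory-polynomial subset of the Volterra series: the
# post-inverse y[n] = sum_{m,p} c[m,p] * x[n-m]**p is identified by least squares
# so that the corrected output matches a known training reference. The orders,
# memory depth and the toy nonlinearity below are illustrative assumptions only.

def mp_matrix(x: np.ndarray, memory: int, order: int) -> np.ndarray:
    """Build the memory-polynomial regression matrix for input x."""
    cols = []
    for m in range(memory):
        xm = np.roll(x, m)
        xm[:m] = 0.0                      # causal delayed copies of the input
        for p in range(1, order + 1):
            cols.append(xm ** p)
    return np.column_stack(cols)

rng = np.random.default_rng(0)
ref = 0.5 * rng.standard_normal(50_000)            # training reference signal
dist = ref + 0.1 * ref ** 2 + 0.05 * ref ** 3      # toy static nonlinearity
dist = dist + 0.02 * np.roll(dist, 1)              # plus a little memory

H = mp_matrix(dist, memory=3, order=3)
coeffs, *_ = np.linalg.lstsq(H, ref, rcond=None)   # LS identification of the post-inverse
corrected = H @ coeffs

rms = lambda v: float(np.sqrt(np.mean(v ** 2)))
print(f"rms error before: {rms(dist - ref):.4f}, after post-distortion: {rms(corrected - ref):.4f}")
```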

    Contribution à l'étude et à la réalisation d'un générateur de signaux radiofréquences analogiques pour la radio logicielle intégrale

    The increasing density of wireless devices and the associated communication flows sharing the same air interface will require a smart and agile use of frequency resources. This thesis proposes a flexible, low-cost and low-power disruptive transmitter architecture. It uses a differentiating coding scheme which leverages a mathematical and technological reduction of the energy cost of information conversion. The design of a DAC suited to this architecture is developed and its performance is assessed for RF signal generation. The measurements of a demonstrator designed in 65 nm CMOS technology provide a proof of concept.
