
    Systematic Figure of Merit Computation for the Design of Pipeline ADC

    Submitted on behalf of EDAA (http://www.edaa.com/). The emerging concept of SoC-AMS calls for new top-down methodologies to help system designers size analog and mixed-signal devices. This work applies this idea to the high-level optimization of pipeline ADCs. For a given technology, it compares different configurations according to their imperfections and their architectures, without FFT computation or time-consuming simulations. The final selection is based on a figure of merit.
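    The figure-of-merit comparison described above can be sketched with a standard ADC metric. The Walden FoM below is a common choice, not necessarily the exact figure defined in the paper, and the candidate configurations are invented for illustration:

```python
def walden_fom(power_w, sndr_db, fs_hz):
    """Walden figure of merit in joules per conversion step.

    A widely used ADC metric; the paper may define its own FoM.
    """
    enob = (sndr_db - 1.76) / 6.02          # effective number of bits
    return power_w / (2 ** enob * fs_hz)    # energy per conversion step

# Compare hypothetical pipeline configurations without FFTs or transient
# simulation: lower FoM means a more energy-efficient configuration.
configs = {
    "10b_1.5b/stage": {"power_w": 12e-3, "sndr_db": 58.0, "fs_hz": 100e6},
    "12b_2.5b/stage": {"power_w": 25e-3, "sndr_db": 68.0, "fs_hz": 100e6},
}
best = min(configs, key=lambda k: walden_fom(**configs[k]))
```

Ranking configurations on such a scalar metric is what lets the selection run in seconds instead of simulation hours.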

    Design methodology for low-jitter differential clock recovery circuits in high performance ADCs

    This paper presents a design methodology for the simultaneous optimization of jitter and power consumption in ultra-low-jitter clock recovery circuits (< 100 fs rms) for high-performance ADCs. The key ideas of the methodology are: a) a smart parameterization of transistor sizes so that the specifications depend smoothly on the design variables; b) based on this parameterization, sub-sampling the design space, which captures the whole circuit performance while reducing computation resources and time during optimization. The proposed methodology, which can easily incorporate process, voltage, and temperature (PVT) variations, has been used to perform a systematic design space exploration that yields sub-100 fs jitter clock recovery circuits in two commercial CMOS processes at different technology nodes (1.8 V 0.18 μm and 1.2 V 90 nm). Post-layout simulation results for a case study with a typical jitter of 68 fs, for a 1.8 V 80 dB-SNDR 100 Msps pipeline ADC application, are also shown as a demonstrator.
    Funding: Gobierno de España TEC2015-68448-R; European Space Agency 4000108445-13-NL-R
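    The design-space sub-sampling idea can be illustrated with a toy model. The jitter and power expressions below are invented placeholders with the smooth dependence the parameterization is meant to provide; they are not the paper's transistor-level models:

```python
import itertools

# Hypothetical smooth performance models in two normalized design
# variables (e.g. a bias current and a device width). Illustrative only.
def jitter_fs(bias, width):
    return 40.0 + 60.0 / (bias * width)   # toy jitter model, in fs

def power_mw(bias, width):
    return 2.0 * bias * width             # toy power model, in mW

# Sub-sample the design space on a coarse grid instead of an exhaustive
# sweep, then keep only the points meeting the sub-100 fs jitter spec.
grid = [0.5, 1.0, 2.0, 4.0]
candidates = [
    (b, w) for b, w in itertools.product(grid, grid)
    if jitter_fs(b, w) < 100.0
]
# Among the feasible points, pick the lowest-power design.
best = min(candidates, key=lambda bw: power_mw(*bw))
```

In the real flow each grid point would be a circuit simulation, so the coarseness of the grid directly controls the optimization cost.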

    Data Streams from the Low Frequency Instrument On-Board the Planck Satellite: Statistical Analysis and Compression Efficiency

    The expected data rate produced by the Low Frequency Instrument (LFI), planned to fly on the ESA Planck mission in 2007, is more than a factor of 8 larger than the bandwidth allowed by the spacecraft transmission system for downloading the LFI data. We discuss the application of lossless compression to Planck/LFI data streams in order to reduce the overall data flow. We perform both theoretical analysis and experimental tests, using realistically simulated data streams, to establish the statistical properties of the signal and the maximal compression rate achievable with several lossless compression algorithms. We studied the influence of signal composition and of acquisition parameters on the compression rate Cr and developed a semiempirical formalism to account for it. The best-performing compressor tested so far is the order-1 arithmetic compressor, designed to optimize the compression of white-noise-like signals, which achieves an overall compression rate Cr = 2.65 +/- 0.02. We find that this result is not improved by other lossless compressors, since the signal is almost entirely dominated by white noise. Lossless compression algorithms alone will not solve the bandwidth problem but need to be combined with other techniques.
    Comment: May 3, 2000 release, 61 pages, 6 figures coded as eps, 9 tables (4 included as eps), LaTeX 2.09 + assms4.sty, style file included, submitted for publication in PASP
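    The compression-rate measurement can be mimicked on simulated data. Here zlib stands in for the order-1 arithmetic coder studied in the paper, and the quantized Gaussian stream is a crude stand-in for real LFI telemetry, so the resulting Cr is only indicative:

```python
import random
import struct
import zlib

# Simulated data stream: Gaussian noise quantized to 16-bit samples,
# roughly mimicking a white-noise-dominated instrument signal.
random.seed(0)
nsamp = 10000
samples = [max(-32768, min(32767, int(random.gauss(0, 1000))))
           for _ in range(nsamp)]
raw = struct.pack(f"<{nsamp}h", *samples)   # little-endian int16 stream

# Compression rate Cr = uncompressed size / compressed size.
compressed = zlib.compress(raw, 9)
cr = len(raw) / len(compressed)
```

Because a white-noise-like signal is close to its entropy limit, any lossless coder hits a hard ceiling on Cr, which is why the paper concludes compression alone cannot close the factor-of-8 bandwidth gap.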

    Contribution to the modelling and design of continuous-time sigma-delta modulators with low oversampling ratio and low power consumption

    Continuous-Time Sigma-Delta modulators are often employed as analog-to-digital converters. These modulators are an attractive approach to implementing high-speed converters in VLSI systems because they have low sensitivity to circuit imperfections compared to other solutions. This work is a contribution to the analysis, modelling and design of high-speed Continuous-Time Sigma-Delta modulators. The resolution and the stability of these modulators are limited by two main factors: excess loop delay and sampling uncertainty. Both factors, among others, have been carefully analysed and modelled. A new design methodology is also proposed. It can be used to obtain an optimum high-speed Continuous-Time Sigma-Delta modulator in terms of dynamic range, stability and sensitivity to sampling uncertainty. Based on the proposed design methodology, a software tool that covers the main steps has been developed. The methodology has been validated by using the tool to design a 30 Megabits-per-second Continuous-Time Sigma-Delta modulator with 11 bits of dynamic range. The modulator has been integrated in a 0.13-µm CMOS technology and has a measured peak SNR of 62.5 dB.
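    A minimal sketch of the basic mechanism, assuming an ideal first-order loop with a 1-bit quantizer; this is far simpler than the high-speed modulators analysed in the thesis and ignores the excess loop delay and clock jitter it models:

```python
def ct_sigma_delta(x, steps_per_ts=16):
    """First-order continuous-time sigma-delta modulator (idealized).

    The loop filter is a forward-Euler integrator run on a finer time
    grid than the sampling clock; the 1-bit quantizer is clocked once
    per input sample. Returns the +/-1 output bitstream.
    """
    dt = 1.0 / steps_per_ts
    integ, v = 0.0, 1.0
    out = []
    for u in x:
        for _ in range(steps_per_ts):       # continuous-time loop filter
            integ += (u - v) * dt           # integrate input minus DAC feedback
        v = 1.0 if integ >= 0 else -1.0     # clocked 1-bit quantizer
        out.append(v)
    return out

# A DC input of 0.5 should yield a bitstream whose average approaches 0.5.
bits = ct_sigma_delta([0.5] * 4096)
mean = sum(bits) / len(bits)
```

The feedback loop pushes the quantization error to high frequencies, so the low-frequency average of the coarse bitstream tracks the input; the digital decimation filter that recovers the multi-bit output is omitted here.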

    Planck 2013 results. II. Low Frequency Instrument data processing

    We describe the data processing pipeline of the Planck Low Frequency Instrument (LFI) data processing centre (DPC) to create and characterize full-sky maps based on the first 15.5 months of operations at 30, 44, and 70 GHz. In particular, we discuss the various steps involved in reducing the data, from telemetry packets through to the production of cleaned, calibrated timelines and calibrated frequency maps. Data are continuously calibrated using the modulation induced on the mean temperature of the cosmic microwave background radiation by the proper motion of the spacecraft. Sky signals other than the dipole are removed by an iterative procedure based on simultaneous fitting of calibration parameters and sky maps. Noise properties are estimated from time-ordered data after the sky signal has been removed, using a generalized least squares map-making algorithm. A destriping code (Madam) is employed to combine radiometric data and pointing information into sky maps, minimizing the variance of correlated noise. Noise covariance matrices, required to compute statistical uncertainties on LFI and Planck products, are also produced. Main beams are estimated down to the ≈−20 dB level using Jupiter transits, which are also used for the geometrical calibration of the focal plane.
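    The destriping idea can be sketched as an alternating least-squares toy: model each time-ordered sample as a sky pixel value plus a per-chunk noise baseline, then jointly estimate the baselines and the map. Madam's actual GLS solver is far more general; this offset-only noise model and the synthetic data are a simplification for illustration:

```python
def destripe(tod, pixels, npix, chunk_len, n_iter=50):
    """Toy destriper: jointly fit a binned sky map and per-chunk offsets.

    Assumes len(tod) is a multiple of chunk_len. Illustrative only.
    """
    nchunks = len(tod) // chunk_len
    offsets = [0.0] * nchunks
    for _ in range(n_iter):                 # alternating least squares
        # 1) Bin the offset-corrected data into a sky map.
        sums, hits = [0.0] * npix, [0] * npix
        for i, (d, p) in enumerate(zip(tod, pixels)):
            sums[p] += d - offsets[i // chunk_len]
            hits[p] += 1
        sky = [s / h if h else 0.0 for s, h in zip(sums, hits)]
        # 2) Re-estimate each chunk offset from the map residuals.
        for c in range(nchunks):
            seg = range(c * chunk_len, (c + 1) * chunk_len)
            offsets[c] = sum(tod[i] - sky[pixels[i]] for i in seg) / chunk_len
    return sky, offsets

# Synthetic check: a two-pixel sky [0, 1] observed in two chunks whose
# noise baselines are +0.5 and -0.5.
pixels = [0, 1, 0, 1, 0, 1, 0, 1]
tod = [0.5, 1.5, 0.5, 1.5, -0.5, 0.5, -0.5, 0.5]
sky, offs = destripe(tod, pixels, npix=2, chunk_len=4)
```

The crossings between chunks that revisit the same pixels are what break the degeneracy between baselines and sky signal, which is why scanning-strategy overlap matters for destriping.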

    Planck 2015 results. II. Low Frequency Instrument data processing

    We present an updated description of the Planck Low Frequency Instrument (LFI) data processing pipeline, associated with the 2015 data release. We point out the places where our results and methods have remained unchanged since the 2013 paper and we highlight the changes made for the 2015 release, describing the products (especially timelines) and the ways in which they were obtained. We demonstrate that the pipeline is self-consistent (principally based on simulations) and report all null tests. For the first time, we present LFI maps in Stokes Q and U polarization. We refer to other related papers where more detailed descriptions of the LFI data processing pipeline may be found if needed.