
    Xampling: Signal Acquisition and Processing in Union of Subspaces

    We introduce Xampling, a unified framework for signal acquisition and processing of signals in a union of subspaces. The framework has two main functions: analog compression, which narrows down the input bandwidth prior to sampling with commercial devices, and a nonlinear algorithm that detects the input subspace prior to conventional signal processing. A representative union model of spectrally-sparse signals serves as a test case to study these Xampling functions. We adopt three metrics for the choice of analog compression: robustness to model mismatch, required hardware accuracy, and software complexity. We conduct a comprehensive comparison between two sub-Nyquist acquisition strategies for spectrally-sparse signals, the random demodulator and the modulated wideband converter (MWC), in terms of these metrics and draw operative conclusions regarding the choice of analog compression. We then address low-rate signal processing and develop an algorithm that enables convenient signal processing at sub-Nyquist rates from samples obtained by the MWC. We conclude by showing that a variety of other sampling approaches for different union classes fit nicely into our framework.
    Comment: 16 pages, 9 figures, submitted to IEEE for possible publication
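    To make the union-of-subspaces model concrete, the multiband formulation and the per-channel MWC sampling relation can be sketched as below. This is a hedged sketch in the commonly used notation (N bands of width B with unknown carriers a_i, periodic mixing waveforms p_i(t) with Fourier coefficients c_il, mixing period T_p with f_p = 1/T_p, low sampling rate f_s = 1/T_s), not necessarily the exact symbols of the paper itself.

        \[
          \mathcal{M} \;=\; \Bigl\{\, x(t) \;:\; \operatorname{supp} X(f) \subseteq
            \bigcup_{i=1}^{N} [a_i,\, a_i + B],\ \text{with the } a_i \text{ unknown} \Bigr\}
        \]
        \[
          Y_i\!\left(e^{j 2\pi f T_s}\right) \;=\; \sum_{l} c_{il}\, X(f - l f_p),
          \qquad |f| \le \tfrac{f_s}{2}
        \]

    Each MWC channel mixes x(t) with a T_p-periodic waveform p_i(t), low-pass filters, and samples at the low rate f_s, so the samples observe a c_il-weighted superposition of spectrum slices; identifying which slices are active is the nonlinear subspace-detection step referred to in the abstract.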

    Exploring the PowerDAC: an asymmetric multilevel approach for high-precision power amplification


    Data Conversion Within Energy Constrained Environments

    Within scientific research, engineering, and consumer electronics, there is a multitude of new discrete sensor-interfaced devices. Maintaining high accuracy in signal quantization while staying within the strict power budget of these devices is a very challenging problem. Traditional paths to solving this problem include researching more energy-efficient digital topologies as well as digital scaling. This work offers an alternative path to lower energy expenditure in the quantization stage: content-dependent sampling of a signal. Instead of sampling at a constant rate, this work explores techniques that allow sampling based upon features of the signal itself through the use of application-dependent analog processing. This work presents an asynchronous sampling paradigm based on the use of floating-gate-enabled analog circuitry. The basis of this work is developed through the mathematical models necessary for asynchronous sampling, as well as the SPICE-compatible models necessary for simulating floating-gate-enabled analog circuitry. These base techniques and circuits are then extended to systems and applications utilizing novel analog-to-digital converter topologies capable of leveraging the non-constant sampling rates for significant sample and power savings.
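    To illustrate what content-dependent sampling means in practice, the Python sketch below implements level-crossing sampling, one common asynchronous scheme: a sample is emitted only when the signal moves by more than a threshold. The abstract does not name this particular scheme, so the function and the delta threshold here are illustrative assumptions, not the thesis's floating-gate circuits.

        import numpy as np

        def level_crossing_sample(t, x, delta=0.05):
            """Emit a (time, value) pair only when the signal has moved
            by more than `delta` since the last emitted sample.

            Software model of content-dependent sampling; a hardware
            realization performs the comparison with analog circuitry
            rather than on a uniformly sampled trace.
            """
            times, values = [t[0]], [x[0]]
            last = x[0]
            for ti, xi in zip(t[1:], x[1:]):
                if abs(xi - last) >= delta:
                    times.append(ti)
                    values.append(xi)
                    last = xi
            return np.array(times), np.array(values)

        # A bursty signal: mostly quiet, so few samples are emitted outside
        # the active region, which is where the power saving comes from.
        t = np.linspace(0.0, 1.0, 10_000)
        x = np.exp(-((t - 0.5) ** 2) / 0.002) * np.sin(2 * np.pi * 80 * t)
        ts, xs = level_crossing_sample(t, x, delta=0.05)
        print(f"kept {len(ts)} of {len(t)} samples")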

    EMI characterization for power supplies and machine learning based modeling in EMC/SI

    Signal integrity (SI) and electromagnetic interference (EMI) are essential considerations for consumer electronics and high-speed server applications, so EMI and SI modeling must be done at the design stage. In this research, several modeling approaches for EMI and SI problems are proposed. Using a measurement-based modeling method, the mechanism of conducted emissions (CE) from AC-to-DC power supplies in an LED TV is analyzed, and a system-level transient simulation model is built to predict the CE. To characterize the equivalent dipole-moment sources in RFI and EMI problems, a dipole source reconstruction method based on machine learning is proposed: an image of the electromagnetic field is fed to a convolutional neural network (CNN), which performs multi-label classification to determine all types of dominant dipole moments. The CNN also generates a class activation map indicating the location of each type of dipole moment present. A PCB stack-up design method based on integer programming is also proposed. In high-speed PCB designs, a design with 30 layers or more is not uncommon, and there are many logical design constraints that need to be considered. The constraints are converted to mathematical inequalities using the integer programming technique, and an integer program solver is then called to enumerate all feasible combinations. The number of possible combinations grows rapidly as the number of layers increases, and the proposed method is much more efficient than brute-force searching. After a nominal design is picked, a searching algorithm based on integer programming is further used to find corner cases by considering manufacturing variations.
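    The multi-label CNN step can be sketched in Python as below: a field-scan image goes in, and one independent sigmoid output per dipole-moment type comes out. The layer sizes, the 64x64 input, and the six dipole classes are assumptions chosen for illustration, not the dissertation's actual architecture; the global-average-pooling plus fully-connected structure is what also makes class-activation-map generation possible.

        import torch
        import torch.nn as nn

        NUM_DIPOLE_TYPES = 6  # assumed classes: Px, Py, Pz, Mx, My, Mz

        class DipoleCNN(nn.Module):
            """Minimal multi-label classifier for dominant dipole-moment types."""
            def __init__(self, num_classes=NUM_DIPOLE_TYPES):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.pool = nn.AdaptiveAvgPool2d(1)   # global average pooling (enables CAMs)
                self.fc = nn.Linear(32, num_classes)  # fc weights also weight the feature maps in a CAM

            def forward(self, x):
                f = self.features(x)
                return self.fc(self.pool(f).flatten(1))

        model = DipoleCNN()
        scan = torch.randn(4, 1, 64, 64)                       # batch of 64x64 field-scan images
        labels = torch.randint(0, 2, (4, NUM_DIPOLE_TYPES)).float()
        loss = nn.BCEWithLogitsLoss()(model(scan), labels)     # multi-label objective
        loss.backward()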

    Design and debugging of multi-step analog to digital converters

    With the fast advancement of CMOS fabrication technology, more and more signal-processing functions are implemented in the digital domain for lower cost, lower power consumption, higher yield, and higher re-configurability. The trend of increasing integration level for integrated circuits has forced the A/D converter interface to reside on the same silicon in complex mixed-signal ICs containing mostly digital blocks for DSP and control. However, the specifications of the converters in various applications emphasize high dynamic range and low spurious spectral performance. It is nontrivial to achieve this level of linearity in a monolithic environment where post-fabrication component trimming or calibration is cumbersome to implement for certain applications and/or for cost and manufacturability reasons. Additionally, as CMOS integrated circuits reach unprecedented integration levels, potential problems associated with device scaling (the short-channel effects) loom large as technology strides into the deep-submicron regime.
    The A/D conversion process involves sampling the applied analog input signal and quantizing it to its digital representation by comparing it to reference voltages before further signal processing in subsequent digital systems. Depending on how these functions are combined, different A/D converter architectures can be implemented, with different requirements on each function. Practical realizations show that, to first order, converter power is directly proportional to sampling rate; however, the required power dissipation grows nonlinearly as the speed capabilities of a process technology are pushed to the limit. Pipeline and two-step/multi-step converters tend to be the most efficient at achieving a given resolution and sampling-rate specification.
    This thesis covers the whole spectrum of design, test, debugging, and calibration of multi-step A/D converters. It develops circuit techniques and algorithms that enhance the resolution and attainable sample rate of an A/D converter, and that improve testing and debugging by detecting errors dynamically, isolating and confining faults, and recovering from and compensating for errors continuously. The power efficiency attainable at high resolution by combining parallelism and calibration and by exploiting low-voltage circuit techniques is demonstrated with a 1.8 V, 12-bit, 80 MS/s, 100 mW analog-to-digital converter fabricated in a five-metal-layer 0.18-µm CMOS process.
    Lower power-supply voltages significantly reduce noise margins and increase variations in process, device, and design parameters. Consequently, it is steadily more difficult to control the fabrication process precisely enough to maintain uniformity. Microscopic particles present in the manufacturing environment and slight variations in the parameters of manufacturing steps can cause the geometrical and electrical properties of an IC to deviate from those produced at the end of the design process, and such defects can cause various types of malfunction depending on the IC topology and the nature of the defect. To relieve the burden placed on IC design and manufacturing by the ever-increasing costs of testing and debugging complex mixed-signal electronic systems, several circuit techniques and algorithms are developed and incorporated into the proposed ATPG, DfT, and BIST methodologies.
    Process variation cannot be solved by improving manufacturing tolerances; variability must be reduced by new device technology or managed by design in order for scaling to continue. Similarly, within-die performance variation also imposes new challenges for test methods. With the use of dedicated sensors, which exploit knowledge of the circuit structure and the specific defect mechanisms, the method described in this thesis facilitates early and fast identification of excessive process-parameter variation effects. The expectation-maximization algorithm makes the estimation problem more tractable and also yields good estimates of the parameters for small sample sizes. To guide testing with the information obtained by monitoring process variations, an adjusted support vector machine classifier is implemented that simultaneously minimizes the empirical classification error and maximizes the geometric margin.
    On a positive note, the use of digitally enhanced calibration techniques reduces the need for expensive technologies with special fabrication steps. Indeed, the extra cost of digital processing is normally affordable, as the use of submicron mixed-signal technologies allows efficient use of silicon area even for relatively complex algorithms. The adaptive filtering algorithm employed for error estimation requires only a small number of operations per iteration and needs neither correlation-function calculations nor matrix inversions. The presented foreground calibration algorithm does not need any dedicated test signal and does not take up a part of the conversion time; it works continuously with every signal applied to the A/D converter. The feasibility of the method for on-line and off-line debugging and calibration has been verified by experimental measurements from a silicon prototype fabricated in a standard single-poly, six-metal 0.09-µm CMOS process.
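    As background for the multi-step architecture and the digital gain calibration described above, the Python sketch below models an idealized two-step converter with an inter-stage gain error and shows how re-weighting the second-stage code with an estimate of the true gain restores accuracy. The stage resolutions, the exaggerated 5 % gain error, and the function names are illustrative assumptions, not details of the fabricated prototype.

        import numpy as np

        N1, N2 = 6, 7                        # per-stage resolutions (illustrative assumptions)
        TRUE_GAIN = 2**N1 * 0.95             # residue amplifier with an exaggerated -5% gain error

        def two_step_adc(x, digital_gain=2**N1):
            """Idealized two-step conversion of x in [0, 1).

            Stage 1 quantizes coarsely, the residue is amplified by the imperfect
            inter-stage amplifier and quantized by stage 2, and the two codes are
            combined digitally using `digital_gain`, the digital estimate of the
            amplifier gain.  Supplying the true gain here stands in for gain
            calibration; a real converter must estimate it on-chip.
            """
            d1 = np.floor(x * 2**N1)                           # coarse (first-stage) code
            residue = x - d1 / 2**N1                           # analog residue
            v2 = np.clip(residue * TRUE_GAIN, 0.0, 1 - 1e-12)  # amplified residue
            d2 = np.floor(v2 * 2**N2)                          # fine (second-stage) code
            return d1 / 2**N1 + (d2 / 2**N2) / digital_gain    # digital recombination

        x = np.random.default_rng(0).uniform(0.0, 1.0, 100_000)
        err_nominal = np.abs(two_step_adc(x) - x).max()                        # assumes ideal gain
        err_calibrated = np.abs(two_step_adc(x, digital_gain=TRUE_GAIN) - x).max()
        # uncalibrated error spans several fine LSBs; calibrated error is about one fine LSB
        print(err_nominal, err_calibrated)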