An Evaluation of the S2Ia switched-current architecture for ΔΣ modulator ADCs
Switched-Current (SI) is a design methodology by which discrete-time, current-mode analog circuits can be implemented in standard digital CMOS processes, allowing analog signal-processing circuits, analog-to-digital converters (ADCs), digital-to-analog converters (DACs), and other mixed-signal circuits to be added to otherwise digital-only microchips without the expense of extra fabrication steps. SI circuits exploit a secondary effect in CMOS circuits, a transistor's gate capacitance, to store charge and thus form a current memory cell. The current memory cell is a basic building block of most SI circuits and is usually the distinguishing feature among the various approaches to SI circuit design. Delta-Sigma Modulators (DSMs) are discrete-time, mixed-signal circuits, making them well suited to SI implementation. These circuits can form the basis of either an ADC or a DAC and thus provide a good vehicle for evaluating the SI technique with a particular current memory cell implementation. For this work, a first-order DSM-based ADC was designed and simulated to verify the feasibility of a variant of the S2I switched-current memory cell architecture, the S2Ia cell, in a low-voltage, digital, 0.5 µm CMOS process. The ADC design was targeted at voiceband (4 kHz bandwidth) applications, over which it achieved 6-bit resolution, and separately attained a bandwidth greater than 80 kHz. Extending the first-order DSM used in this design to a second-order DSM would increase the resolution to at least 8 bits without sacrificing bandwidth. Although potentially less accurate than the S2I cell, an S2Ia cell requires only two clock signals to the S2I cell's four. Further, a cascade of S2Ia cells still requires only two clock signals, while an S2I cell cascade requires six.
S2Ia-based circuits therefore require less complex clock-generation circuitry and fewer clock lines.
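The loop being evaluated is easy to sketch behaviorally. The Python model below is a generic first-order ΔΣ loop with a 1-bit quantizer, not the SI cell-level design itself: it only shows why the bitstream average tracks the input while the quantization error is pushed out of band.

```python
import numpy as np

def first_order_dsm(x):
    """Behavioral first-order delta-sigma modulator with a 1-bit quantizer:
    v[n] = v[n-1] + x[n] - y[n-1],  y[n] = sign(v[n]).
    The SI current-memory-cell implementation is abstracted away."""
    v, y = 0.0, 0.0
    out = np.empty(len(x))
    for n, xn in enumerate(x):
        v += xn - y                      # integrate the error
        y = 1.0 if v >= 0 else -1.0      # 1-bit quantizer / DAC feedback
        out[n] = y
    return out

# For a DC input the integrator forces the bitstream density to track the
# input level; the shaped quantization error sits at high frequencies,
# where a decimation filter removes it.
bits = first_order_dsm(np.full(8192, 0.3))
print(abs(bits.mean() - 0.3))  # small residual
```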
Digital background calibration algorithm and its FPGA implementation for timing mismatch correction of time-interleaved ADC
Sample-time error can degrade the performance of time-interleaved analog-to-digital converters (TIADCs). This paper presents a fully digital background algorithm, together with its hardware implementation, to estimate and correct the timing mismatch between four interleaved channels. The proposed algorithm combines a low computational burden with high performance; it is based on a simplified representation of the coefficients of the Lagrange interpolator. Simulation results show that it suppresses error tones across the entire Nyquist band. For a four-channel TIADC with 10-bit resolution and an input frequency of 0.09fs, the algorithm improves the signal-to-noise-and-distortion ratio (SNDR) and spurious-free dynamic range (SFDR) by 19.27 dB and 35.2 dB, respectively; at an input frequency of 0.45fs, the improvements are 33.06 dB in SNDR and 43.14 dB in SFDR. In addition to the simulation, the algorithm was implemented in hardware for real-time evaluation. Its low computational burden allowed an FPGA implementation with low logic-resource usage and a high system clock speed (926.95 MHz for the four-channel implementation). The proposed architecture can thus be used as a post-processing algorithm in host processors for data-acquisition systems to improve TIADC performance.
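The paper's simplified coefficient representation and its background skew estimator are not reproduced here, but the correction step can be sketched with textbook Lagrange fractional-delay coefficients, assuming the per-channel skew is already known:

```python
import numpy as np

def lagrange_fd(delay, order=3):
    """FIR taps of an order-`order` Lagrange fractional-delay filter:
    h[k] = prod_{m != k} (delay - m) / (k - m)."""
    h = np.ones(order + 1)
    for k in range(order + 1):
        for m in range(order + 1):
            if m != k:
                h[k] *= (delay - m) / (k - m)
    return h

# A channel sampling 0.05*Ts too late is corrected by filtering it with a
# fractional delay of 1.5 + 0.05 samples (1.5 is the 4-tap filter's bulk
# delay); the reference path gets the plain 1.5-sample bulk delay.
skew = 0.05
w = 2 * np.pi * 0.02
n = np.arange(400)
late  = np.sin(w * (n + skew))                      # skewed channel
fixed = np.convolve(late, lagrange_fd(1.5 + skew), mode="valid")
ref   = np.convolve(np.sin(w * n), lagrange_fd(1.5), mode="valid")
print(np.max(np.abs(fixed - ref)))                  # residual after correction
```

The residual is limited only by the order-3 interpolation error, which is tiny for signals well below Nyquist; the paper's contribution is doing this cheaply and estimating the skew in the background.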
Multi-band Oversampled Noise Shaping Analog to Digital Conversion
Oversampled noise shaping analog to digital (A/D) converters, which are commonly known as delta-sigma (ΔΣ) converters, have the ability to convert relatively low bandwidth signals with very high resolution. Such converters achieve their high resolution by oversampling, as well as processing the signal and quantization noise with different transfer functions. The signal transfer function (STF) is typically a delay over the signal band while the noise transfer function (NTF) is designed to attenuate quantization noise in the signal band. A side effect of the NTF is an amplification of the noise outside the signal band. Thus, a digital filter subsequently attenuates the out-of-band quantization noise.
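For the simplest first-order loop, Y(z) = z⁻¹X(z) + (1 − z⁻¹)E(z), this split can be checked numerically. This is a generic textbook example, not one of the multi-band NTFs designed in the thesis:

```python
import numpy as np

# STF(z) = z^-1 is a pure delay; NTF(z) = 1 - z^-1 has a zero at DC.
# On the unit circle z = exp(j*2*pi*f), |NTF| = 2*sin(pi*f): quantization
# noise is attenuated in a narrow low-frequency band and amplified
# (up to +6 dB) towards fs/2 -- the out-of-band noise that the
# subsequent digital filter must remove.
f = np.linspace(0.0, 0.5, 1001)                  # normalized frequency f/fs
ntf_mag = np.abs(1.0 - np.exp(-2j * np.pi * f))  # zero at DC, 2 at fs/2
```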
The focus of this thesis is the investigation of ΔΣ architectures that increase the bandwidth where high resolution conversion can be achieved. It uses parallel architectures exploiting frequency or time slicing to meet this objective. Frequency slicing involves quantizing different portions of the signal frequency spectrum using several quantizers in parallel and then combining the results of the quantizers to form an overall result. Time slicing involves quantizing various groups of time domain signal samples with different quantizers in parallel and then combining the results of the quantizers to form an overall output.
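Time slicing in its idealized form is easy to picture: route successive samples round-robin to K parallel quantizers and re-interleave their outputs. A minimal sketch with identical, ideal uniform quantizers follows; the effects studied in the thesis arise precisely when the parallel paths are not this well matched:

```python
import numpy as np

def time_slice_quantize(x, k, bits=8):
    """Send sample n to quantizer n mod k, then re-interleave.
    With k ideal, identical quantizers the result equals a single
    quantizer; path mismatches would break that equivalence."""
    step = 2.0 / (1 << bits)                 # uniform step over [-1, 1]
    out = np.empty_like(x)
    for ch in range(k):
        out[ch::k] = np.round(x[ch::k] / step) * step
    return out

x = np.linspace(-1.0, 1.0, 1000)
y = time_slice_quantize(x, k=4)              # error bounded by step/2
```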
Several interesting observations can be made from this general perspective of frequency and time slicing. Although the time- and frequency-domain representations of a signal are completely equivalent, the thesis shows that this is not the case for known frequency- and time-sliced A/D architectures. The performance of such systems under ideal conditions is compared for PCM as well as for ΔΣ A/D converters. A multi-band frequency-sliced architecture for delta-sigma conversion is proposed and its performance is included in the above comparison. The architecture uses modulators which realize different NTFs for different portions of the signal band. Each band is converted in parallel. A bank of FIR filters attenuates the out-of-band noise for each band and achieves perfect reconstruction of the signal component. A design procedure is provided for the design of the filter bank with reduced computational complexity. The use of complex NTFs in the multi-band ΔΣ architecture is also proposed, and the performance of real and complex NTFs is compared. Performance evaluations are made for ideal systems as well as for systems suffering from circuit implementation imperfections such as finite op-amp gain and mismatched capacitor ratios.
K-Delta-1-Sigma Modulators for Wideband Analog-to-Digital Conversion
As CMOS technology scales, transistor speed increases, enabling higher-speed communications and more complex systems. These benefits come at the cost of decreasing inherent device gain, increased transistor leakage currents, and additional mismatches due to process variations. All of these drawbacks affect the design of high-resolution analog-to-digital converters (ADCs) in nano-CMOS processes. To move towards an ADC topology useful in these small processes, a first-order K-Delta-1-Sigma (KD1S) modulator-based ADC was proposed. The KD1S topology employs inherent time-interleaving with a shared integrator and K quantizing feedback paths, and can potentially achieve significantly higher conversion bandwidths than traditional switched-capacitor delta-sigma ADCs. The shared integrator in the KD1S modulator settles within half the clock period, and the op-amp is designed to operate at the base clock frequency.
In this dissertation, the first-order KD1S modulator topology is analyzed for the effects of the non-idealities introduced by the K-path operation of the switched-capacitor integrator. Then, the concept of the KD1S modulator is extended to higher-order modulators in order to achieve superior noise-shaping performance. A systematic synthesis method has been developed to design and simulate higher-order KD1S modulators at the system level. In order to demonstrate the developed theory, a prototype second-order KD1S modulator has been designed and fabricated in a 500-nm CMOS technology. The second-order KD1S modulator exhibits wideband noise-shaping with an SNDR of 42.7 dB, or 6.81 bits of resolution, for Kpath = 8 paths, an effective sampling rate of ƒs,new = 800 MHz, an effective oversampling ratio Kpath•OSR = 64, and a signal bandwidth of 6.25 MHz. The second-order KD1S modulator consumes an average current of 3.0 mA from the 5 V supply and occupies an area of 0.55 mm².
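The reported figures are tied together by standard ADC relations. The 100 MHz per-path base clock below is inferred from ƒs,new/K rather than stated in the abstract:

```python
# Standard ADC relations behind the reported KD1S numbers.
K, osr_eff = 8, 64
fs_new = 800e6                  # effective sampling rate with K = 8 paths
f_clk = fs_new / K              # per-path base clock -> 100 MHz (inferred)
bw = fs_new / (2 * osr_eff)     # signal bandwidth -> 6.25 MHz
enob = (42.7 - 1.76) / 6.02     # ENOB from SNDR -> about 6.8 bits
```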
Python modelling of the SX1257 transceiver
The main purpose of this project is to create a Python model of the SX1257 transceiver by Semtech, and everything has been prepared so that the model can be compared with the real component in the near future. The ultimate goal is to have a simulation model that can be used to develop FPGA SDR designs, for example by cosimulating the HDL code with the Python model using the cocotb cosimulation framework. Universidad de Sevilla, Grado en Ingeniería de las Tecnologías de Telecomunicación (Bachelor's Degree in Telecommunication Technologies Engineering).
ADC- and TDC-based front-ends for LiDAR (Front-ends para LiDAR baseados em ADC e TDC)
Autonomous vehicles are a promising technology for saving the more than one million
lives lost each year in road accidents. However, bringing safe autonomous vehicles
to market requires massive development, starting with vision sensors. LiDAR is a
fundamental vision sensor for autonomous vehicles, as it enables high resolution
3D vision. However, automotive LiDAR is not yet a mature technology and also
requires massive development in many aspects.
This thesis aims to contribute to the maturity of LiDAR, focusing on sampling
architectures for LiDAR front-ends. Two architectures were developed.
The first is based on a pipelined ADC, available from an AD-FMCDAQ2-EBZ
board. The ADC is synchronized with the emitted pulse and able to sample
at 1 Gsample/s. The second architecture is based on a TDC that is directly
implemented in an FPGA. It relies on a tapped delay line topology comprising 45
delay elements and on a mux-based decoder, resulting in a resolution of 50 ps.
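A behavioral sketch of such a TDC, assuming ideal, identical 50 ps elements (a real delay line needs calibration for element mismatch):

```python
TAU = 50e-12    # per-element delay: 50 ps
N_TAPS = 45     # tapped-delay-line length -> 2.25 ns full scale

def tdc_thermometer(interval):
    """Thermometer code: tap k is set if the start edge has propagated
    through k+1 delay elements before the stop edge freezes the line."""
    return [1 if interval >= (k + 1) * TAU else 0 for k in range(N_TAPS)]

def tdc_decode(taps):
    """Mux-based priority decode reduced to counting set taps."""
    return sum(taps) * TAU

# A 0.73 ns start-stop interval quantizes down to 14 taps = 0.70 ns,
# i.e. one 50 ps LSB of error at most.
t_hat = tdc_decode(tdc_thermometer(0.73e-9))
```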
Preliminary test results show that both implementations operate correctly,
and are both suitable for sampling short pulses typically used by LiDARs. When
comparing both architectures, we conclude that an ADC consumes a significant
amount of power, and uses many FPGA resources. However, it samples the LiDAR
waveform without any loss of information, therefore enabling maximum range and
precision. The TDC is just the opposite: it consumes little power and uses fewer
FPGA resources. However, it only captures one sample per pulse.
Mestrado em Engenharia Eletrónica e Telecomunicações (MSc in Electronics and Telecommunications Engineering).