Design and debugging of multi-step analog to digital converters
With the fast advancement of CMOS fabrication technology, more and more signal-processing functions are implemented in the digital domain for lower cost, lower power consumption, higher yield, and higher re-configurability. The trend of increasing integration has forced the A/D converter interface to reside on the same silicon in complex mixed-signal ICs containing mostly digital blocks for DSP and control. However, converter specifications in various applications emphasize high dynamic range and low spurious spectral performance. It is nontrivial to achieve this level of linearity in a monolithic environment where post-fabrication component trimming or calibration is cumbersome to implement for certain applications and/or for cost and manufacturability reasons. Additionally, as CMOS integrated circuits achieve unprecedented integration levels, potential problems associated with device scaling – the short-channel effects – loom large as technology strides into the deep-submicron regime. The A/D conversion process involves sampling the applied analog input signal and quantizing it to its digital representation by comparing it to reference voltages before further signal processing in subsequent digital systems. Depending on how these functions are combined, different A/D converter architectures can be implemented, each placing different requirements on each function. Practical realizations show that, to first order, converter power is directly proportional to sampling rate; however, the required power dissipation grows nonlinearly as the speed capabilities of a process technology are pushed to the limit. Pipeline and two-step/multi-step converters tend to be the most efficient at achieving a given resolution and sampling-rate specification. 
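The sampling-and-comparison process described here can be modeled in a few lines; this ideal-quantizer sketch (function name and values are illustrative, not from the thesis) makes the comparison against reference levels explicit:

```python
def adc_ideal(vin, n_bits, vref):
    """Ideal A/D conversion: quantize vin against 2**n_bits reference levels.

    A flash converter realizes the 2**n_bits - 1 decision thresholds as
    physical comparators; other architectures reach the same code through
    a sequence of coarser comparisons.
    """
    lsb = vref / 2 ** n_bits                   # quantization step
    code = int(vin / lsb)                      # compare against thresholds
    return max(0, min(2 ** n_bits - 1, code))  # clamp to valid codes

# 3-bit converter, 1.0 V reference: LSB = 125 mV, so 0.30 V -> code 2
print(adc_ideal(0.30, 3, 1.0))  # -> 2
```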
This thesis is in a sense unique work, as it covers the whole spectrum of design, test, debugging, and calibration of multi-step A/D converters; it develops circuit techniques and algorithms to enhance the resolution and attainable sample rate of an A/D converter and to enhance testing and debugging capability to detect errors dynamically, to isolate and confine faults, and to recover from and compensate for the errors continuously. The power efficiency attainable at high resolution in a multi-step converter by combining parallelism and calibration and exploiting low-voltage circuit techniques is demonstrated with a 1.8 V, 12-bit, 80 MS/s, 100 mW analog-to-digital converter fabricated in a five-metal-layer 0.18-µm CMOS process. Lower power-supply voltages significantly reduce noise margins and increase variations in process, device, and design parameters. Consequently, it is steadily more difficult to control the fabrication process precisely enough to maintain uniformity. Microscopic particles present in the manufacturing environment and slight variations in the parameters of manufacturing steps can all cause the geometrical and electrical properties of an IC to deviate from those generated at the end of the design process. Such defects can cause various types of malfunction, depending on the IC topology and the nature of the defect. To relieve the burden placed on IC design and manufacturing by the ever-increasing costs of testing and debugging complex mixed-signal electronic systems, several circuit techniques and algorithms are developed and incorporated in the proposed ATPG, DfT, and BIST methodologies. Process variation cannot be solved by improving manufacturing tolerances; variability must be reduced by new device technology or managed by design in order for scaling to continue. Similarly, within-die performance variation also imposes new challenges for test methods. 
With the use of dedicated sensors that exploit knowledge of the circuit structure and the specific defect mechanisms, the method described in this thesis facilitates early and fast identification of excessive process-parameter-variation effects. The expectation-maximization algorithm makes the estimation problem more tractable and also yields good parameter estimates for small sample sizes. To guide testing with the information obtained by monitoring process variations, an adjusted support vector machine classifier is implemented that simultaneously minimizes the empirical classification error and maximizes the geometric margin. On a positive note, the use of digitally enhanced calibration techniques reduces the need for expensive technologies with special fabrication steps. Indeed, the extra cost of digital processing is normally affordable, as the use of submicron mixed-signal technologies allows for efficient use of silicon area even for relatively complex algorithms. The adaptive filtering algorithm employed for error estimation requires only a small number of operations per iteration and needs neither correlation-function calculation nor matrix inversions. The presented foreground calibration algorithm does not need any dedicated test signal and does not consume any of the conversion time; it works continuously and with every signal applied to the A/D converter. The feasibility of the method for on-line and off-line debugging and calibration has been verified by experimental measurements from the silicon prototype fabricated in a standard single-poly, six-metal 0.09-µm CMOS process
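The abstract's description of the error-estimation filter (few operations per iteration, no correlation functions, no matrix inversions) matches the classic LMS update; a minimal sketch under that assumption, with illustrative names and data:

```python
def lms_step(weights, x, desired, mu):
    """One LMS iteration: O(N) multiply-adds, no matrices inverted."""
    y = sum(w * xi for w, xi in zip(weights, x))   # filter output
    e = desired - y                                # estimation error
    return [w + 2 * mu * e * xi for w, xi in zip(weights, x)], e

# Identify a single-tap gain error of 0.9 from repeated observations
w = [0.0]
for _ in range(200):
    w, err = lms_step(w, [1.0], 0.9, 0.1)
print(round(w[0], 3))  # -> 0.9
```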
Design of high speed folding and interpolating analog-to-digital converter
High-speed and low resolution analog-to-digital converters (ADC) are key elements in
the read channel of optical and magnetic data storage systems. The required resolution is
about 6-7 bits while the sampling rate and effective resolution bandwidth requirements
increase with each generation of storage system. Folding is an analog preprocessing
technique that significantly reduces the number of comparators required by the flash
architecture.
Folding architectures exhibit low power and low latency as well as the ability to run at high
sampling rates. Folding ADCs employing interpolation schemes to generate extra folding
waveforms are called "Folding and Interpolating ADC" (F&I ADC).
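The comparator saving can be made concrete with a back-of-the-envelope count; the coarse/fine split below is an illustrative assumption, not the exact architecture of this work:

```python
def flash_comparators(n_bits):
    """A full flash ADC needs one comparator per decision level."""
    return 2 ** n_bits - 1

def folding_comparators(n_bits, fold, interp):
    """Rough count for a folding/interpolating (F&I) ADC.

    The folding factor 'fold' resolves the coarse (MSB) range, so the
    fine channel only needs 2**n_bits / fold comparators; interpolation
    further divides the number of input preamplifiers by 'interp'.
    """
    coarse = fold - 1              # coarse-channel comparators
    fine = (2 ** n_bits) // fold   # fine comparators after folding
    preamps = fine // interp       # interpolation shrinks the preamp array
    return coarse + fine, preamps

print(flash_comparators(7))          # -> 127
print(folding_comparators(7, 8, 4))  # -> (23, 4)
```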
The aim of this research is to increase the input bandwidth of high speed conversion, and
low latency F&I ADC. Behavioral models are developed to analyze the bandwidth
limitation at the architecture level. A front-end sample-and-hold unit is employed to tackle
the frequency multiplication problem, which is intrinsic for all F&I ADCs. Current-mode
signal processing is adopted to increase the bandwidth of the folding amplifiers and
interpolators, which are the bottleneck of the whole system. An operational
transconductance amplifier (OTA) based folding amplifier, current mirror-based
interpolator, very low impedance fast current comparator are proposed and designed to
carry out the current-mode signal processing. A new bit synchronization scheme is
proposed to correct the error caused by the delay difference between the coarse and fine
channels.
A prototype chip was designed and fabricated to verify these ideas: the S/H and F&I ADC
prototype is realized in a 0.35-µm double-poly CMOS process (only one poly is used).
Integral nonlinearity (INL) is 1.0 LSB and differential nonlinearity (DNL) is 0.6 LSB at
110 kHz. The ADC occupies 1.2 mm² of active area and dissipates 200 mW (excluding 70 mW
for the S/H) from a 3.3 V supply. At a 300 MSPS sampling rate, the ADC achieves no less
than 6 ENOB for input signals up to 60 MHz. This is the highest input bandwidth reported
in the literature for this type of CMOS ADC with similar resolution and sample rate
Department of Electrical Engineering
Sensor systems advance as sensor technologies develop. The performance of a sensor system can be improved by using Internet of Things (IoT) communication technology and artificial neural networks (ANNs) for data processing and computation. Sensors and systems exchange data over this wireless connectivity, making a variety of systems and applications possible to implement, and computing the collected data with an ANN can further improve the efficiency of the system.
Gas monitoring is widely needed, from daily life to hazardous workplaces. Harmful gases can cause respiratory disease, some contain carcinogenic components, and some can even cause explosions. There are many kinds of hazardous gas, and each affects the human body differently. Optimal design of a gas monitoring system is necessary because each gas has different criteria, such as permissible concentration and exposure time. Therefore, in this thesis, the configuration, operation, and limitations of conventional sensor systems are described, and a gas monitoring system with wireless connectivity and a neural network is proposed to improve the overall efficiency.
As mentioned above, dangerous concentrations and permissible exposure times differ by gas type. During gas monitoring, the concentration is below the permissible level most of the time, so low-resolution monitoring is sufficient and saves power. When a gas is detected, high resolution is required to measure the concentration accurately. If the gas type also varies, the amount of computation increases sharply. Consequently, in conventional systems, the target specifications are set by the most demanding requirement across all situations, which increases the cost and complexity of the readout integrated circuit (ROIC) and the system. To optimize the specifications, an ANN and an adaptive ROIC are used to handle the complex situations and the large volume of data processing.
Thus, a gas monitoring system with a learning-based algorithm is proposed to improve efficiency. To optimize operation for each situation, a dual-mode ROIC with a monitoring mode and a precision mode is implemented. While the present gas concentration is judged safe, monitoring mode operates with minimal detection accuracy to save power; the system switches to precision mode when high resolution is needed or a hazardous situation is detected. High resolution normally requires additional calibration circuits, which add power consumption and design complexity, and a high-resolution analog-to-digital converter (ADC) is challenging to design efficiently. Therefore, to reduce the effective resolution of the ADC and its power consumption, a zooming correlated double sampling (CDS) circuit and a prediction successive approximation register (SAR) ADC are proposed to optimize the performance of the precision mode.
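The idea behind a prediction SAR ADC can be sketched behaviorally: when the input changes slowly, the predicted code's upper bits can be reused and only the low bits searched, saving conversion cycles. This is a hypothetical model of the concept, not the circuit of this thesis:

```python
def sar_convert(sample, n_bits):
    """Standard SAR binary search; 'sample' is in LSB units [0, 2**n_bits)."""
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)
        if sample >= trial:      # comparator decision vs. DAC output
            code = trial
    return code

def predictive_sar(sample, n_bits, predicted, window_bits):
    """Reuse the predicted upper bits; search only the lower window.

    Returns (code, cycles): window_bits cycles on a hit, and a full
    n_bits + window_bits cycles on a miss (fallback conversion).
    """
    upper = predicted >> window_bits
    residue = sample - (upper << window_bits)
    if 0 <= residue < (1 << window_bits):       # prediction was close
        return (upper << window_bits) | sar_convert(residue, window_bits), window_bits
    return sar_convert(sample, n_bits), n_bits + window_bits

print(predictive_sar(9.3, 4, predicted=8, window_bits=2))  # -> (9, 2)
```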
A microelectromechanical systems (MEMS) gas sensor offers high integration and high sensitivity, but calibration is needed to compensate for its low selectivity. Conventionally, principal component analysis (PCA) is used to classify gas types, but this method is less accurate in some cases and hard to apply in real time. Alternatively, an ANN is a powerful algorithm for accurate sensing: after data collection and training, it can identify the gas type and concentration in real time. The ROIC was fabricated in a 180-nm complementary metal-oxide-semiconductor (CMOS) process, and the efficiency of the system with the adaptive ROIC and the ANN algorithm was experimentally verified in a gas monitoring system prototype. Bluetooth provides wireless connectivity to a PC and a mobile device, and the pattern recognition and the prediction code for the SAR ADC run in MATLAB. Real-time gas information is monitored by an Android-based application on a smartphone. The dual-mode operation, performance optimization, and prediction code are coordinated by a microcontroller unit (MCU). Monitoring mode improves the figure of merit (FoM) by 2.6× compared with the previous resistive interface.
Design of a Class-AB Amplifier for a 1.5 Bit MDAC of a 12 Bit 100MSPS Pipeline ADC
The basic building block of a pipeline analog-to-digital converter (ADC) is the multiplying digital-to-analog converter (MDAC). The performance of the MDAC significantly depends on the performance of the operational amplifier and calibration techniques. To reduce the complexity of calibration, the operational amplifier needs to have high-linearity, high bandwidth and moderate gain.
In this work, the op-amp specifications were derived from the pipeline ADC requirements. A novel class-AB bias scheme with feed-forward compensation is proposed, which provides high linearity and bandwidth while consuming low power. The advantages of the new topology over the Monticelli bias scheme and Miller-compensated amplifiers are explained. The amplifier is implemented in IBM 130-nm technology, and the MDAC design is used as a test bench to characterize the op-amp performance. The performance of the proposed architecture is compared with class-A and class-AB output-stage amplifiers with Miller compensation reported in the literature. The proposed class-AB amplifier with feed-forward compensation provides an open-loop gain of 47 dB, a unity-gain bandwidth of 1040 MHz, and an IM3 of 75 dB while consuming 3.88 mA. The amplifier provides the required linearity and bandwidth at much lower power consumption than amplifiers using conventional class-AB bias schemes
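The 1.5-bit MDAC named in the title implements a standard residue transfer (comparator thresholds at ±Vref/4, interstage gain of 2); a behavioral sketch shows the output the op-amp must settle to each cycle:

```python
def mdac_1p5bit(vin, vref):
    """1.5-bit pipeline stage: coarse decision d in {-1, 0, +1},
    then residue = 2*vin - d*vref, which stays within +/-vref
    for in-range inputs."""
    if vin > vref / 4:
        d = 1
    elif vin < -vref / 4:
        d = -1
    else:
        d = 0
    return d, 2 * vin - d * vref

d, r = mdac_1p5bit(0.3, 1.0)
print(d, round(r, 3))  # -> 1 -0.4
```

The ±Vref/4 thresholds give the stage its error tolerance: comparator offsets up to Vref/4 still leave the residue inside the next stage's range, which is why the op-amp, not the comparators, dominates the accuracy requirement.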
Electronics for Sensors
The aim of this Special Issue is to explore new advanced solutions in electronic systems and interfaces to be employed in sensors, describing best practices, implementations, and applications. The selected papers in particular concern photomultiplier tubes (PMTs) and silicon photomultipliers (SiPMs) interfaces and applications, techniques for monitoring radiation levels, electronics for biomedical applications, design and applications of time-to-digital converters, interfaces for image sensors, and general-purpose theory and topologies for electronic interfaces
Design Techniques for High-Performance SAR A/D Converters
The design of electronics needs to account for the non-ideal characteristics of the device technologies used to realize practical circuits. This is particularly important in mixed analog-digital design since the best device technologies are very different for digital compared to analog circuits. One solution for this problem is to use a calibration correction approach to remove the errors introduced by devices, but this adds complexity and power dissipation, as well as reducing operation speed, and so must be optimised. This thesis addresses such an approach to improve the performance of certain types of analog-to-digital converter (ADC) used in advanced telecommunications, where speed, accuracy and power dissipation currently limit applications. The thesis specifically focuses on the design of compensation circuits for use in successive approximation register (SAR) ADCs.
ADCs are crucial building blocks in communication systems, in general, and for mobile networks, in particular. The recently launched fifth generation of mobile networks (5G) has required new ADC circuit techniques to meet the higher speed and lower power dissipation requirements for 5G technology. The SAR has become one of the most favoured architectures for designing high-performance ADCs, but the successive nature of the circuit operation makes it difficult to reach ∼GS/s sampling rates at reasonable power consumption.
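The "successive nature" bottleneck is, to first order, a timing budget: each bit costs one comparison plus one DAC settling, performed serially. A sketch with made-up cycle times (the numbers are illustrative, not measured values from this work):

```python
def max_sample_rate(n_bits, t_compare_ps, t_dac_ps):
    """First-order SAR speed limit: one comparison + one DAC
    settling per resolved bit, executed serially."""
    t_conversion = n_bits * (t_compare_ps + t_dac_ps) * 1e-12  # seconds
    return 1.0 / t_conversion                                   # samples/s

# A 10-bit SAR with 40 ps comparison and 60 ps DAC settling per cycle
print(f"{max_sample_rate(10, 40, 60) / 1e9:.1f} GS/s")  # -> 1.0 GS/s
```

This is why single-channel SAR ADCs struggle near GS/s rates and why time interleaving, discussed below, is the usual escape route.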
Here, two calibration techniques for high-performance SAR ADCs are presented. The first uses an on-chip stochastic mismatch calibration technique that is able to accurately compute and compensate for the mismatch of the capacitive DAC (CAP-DAC) in a SAR ADC. The stochastic nature of the proposed calibration method enables determination of the mismatch of the CAP-DAC with a resolution much better than that of the DAC. This allows the unit capacitor to scale down to as low as 280 aF for a 9-bit DAC. Since the CAP-DAC accounts for a large part of the overall dynamic power consumption and directly determines the sizes of the driving and sampling switches, the input capacitive load of the ADC, and the kT/C noise power, a small CAP-DAC helps the power efficiency. To validate the proposed calibration idea, a 10-bit asynchronous SAR ADC was fabricated in 28-nm CMOS. Measurement results show that the proposed stochastic calibration improves the ADC’s SFDR and SNDR by 14.9 dB and 11.5 dB, respectively. After calibration, the fabricated SAR ADC achieves an ENOB of 9.14 bit at a sampling rate of 85 MS/s, resulting in a Walden FoM of 10.9 fJ/conversion-step.
The second calibration technique is a timing-skew calibration for a time-interleaved (TI) SAR ADC that simultaneously computes and calibrates the inter-channel timing and offset mismatch. Simulation results show the effectiveness of this calibration method. When used together, the proposed mismatch calibration technique and the timing-skew calibration technique enable a TI SAR ADC to be designed that can achieve a sampling rate of ∼GS/s with 10-bit resolution and a power consumption as low as ∼10 mW; specifications that satisfy the requirements of 5G technology
High-Speed Analog-to-Digital Converters for Broadband Applications
Flash Analog-to-Digital Converters (ADCs), targeting optical
communication standards, have been reported in SiGe BiCMOS
technology. CMOS implementation of such designs faces two
challenges. The first is to achieve a high sampling speed, given the
lower gain-bandwidth (lower ft) of CMOS technology. The second
challenge is to handle the wide bandwidth of the input signal with a
certain accuracy. Although the first problem can be relaxed by using
the time-interleaved architecture, the second problem remains as a
main obstacle to CMOS implementation. As a result, the feasibility
of the CMOS implementation of ADCs for such applications, or other
wide band applications, depends primarily on achieving a very small
input capacitance (large bandwidth) at the
desired accuracy.
In the flash architecture, the input capacitance is traded off for
the achievable accuracy. This tradeoff becomes tighter with
technology scaling. An effective way to ease this tradeoff is to use
resistive offset averaging. This permits the use of smaller area
transistors, leading to a reduction in the ADC input capacitance. In
addition, interpolation can be used to decrease the input
capacitance of flash ADCs. In an interpolating architecture, the
number of ADC input preamplifiers is reduced significantly, and a
resistor network interpolates
the missing zero-crossings needed for an N-bit conversion. The resistive network also averages
out the preamplifier offsets. Consequently, an interpolating network also works as an averaging network.
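The interpolation described above amounts to resistive division between two adjacent preamplifier outputs; each tap supplies one of the missing zero crossings. A minimal sketch (values rounded for display; names are illustrative):

```python
def interpolate(v_low, v_high, factor):
    """Resistive interpolation: a ladder between two preamplifier
    outputs yields (factor - 1) intermediate signals, whose zero
    crossings serve as extra decision thresholds."""
    return [round(v_low + (v_high - v_low) * k / factor, 6)
            for k in range(1, factor)]

# Two preamp outputs straddling a code transition, 4x interpolation
print(interpolate(-0.2, 0.2, 4))  # -> [-0.1, 0.0, 0.1]
```

Because each interpolated signal is a weighted sum of two preamplifier outputs, the random offset of each preamplifier is attenuated in the taps, which is the averaging effect noted in the text.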
The resistor network used for averaging or interpolation causes a
systematic non-linearity at the ADC transfer characteristics edges.
The common solution to this problem is to extend the preamplifiers
array beyond the input signal voltage range by using dummy
preamplifiers. However, this demands a corresponding extension of
the flash ADC reference-voltage resistor ladder. Since the voltage
headroom of the reference ladder is considered to be a main
bottleneck in the implementation of flash ADCs in deep-submicron
technologies with reduced supply voltage, extending the reference
voltage beyond the input voltage range is highly undesirable.
The principal objective of this thesis is to develop a new circuit
technique to enhance the bandwidth-accuracy product of flash ADCs.
Thus, first, a rigorous analysis of the accuracy-bandwidth tradeoff of flash ADC architectures is presented.
It is demonstrated that the interpolating architecture achieves a superior accuracy compared
to that of a full flash architecture for the same input capacitance, and hence would lead to
a higher bandwidth-accuracy product, especially in deep-submicron technologies that use low power supplies. Also, the
gain obtained, when interpolation is employed, is quantified. In addition, the limitations of a previous
claim, which suggests that an interpolating architecture is equivalent to an averaging
full flash architecture that trades off accuracy for the input capacitance, are presented. Secondly, a termination
technique for the averaging/interpolation network of flash ADC preamplifiers is devised. The proposed technique maintains the linearity of the ADC at the transfer
characteristics edges and cancels out the over-range voltage, consumed by the dummy preamplifiers. This makes flash ADCs more amenable for integration in deep-submicron CMOS technologies. In addition, the
elimination of this over-range voltage allows a larger
least-significant bit. As a result, a higher input-referred offset
is tolerated, and significant reductions in the ADC input
capacitance and
power dissipation are achieved at the same accuracy. Unlike a previous solution, the proposed
technique does not introduce negative transconductance at flash ADC preamplifiers array edges.
As a result, the offset averaging technique can be used efficiently.
To prove the resulting saving in the ADC input capacitance and power
dissipation that is attained by the proposed termination technique,
a 6-bit 1.6-GS/s flash ADC test chip is designed and implemented in
0.13-µm CMOS technology. The ADC consumes 180 mW from a 1.5-V
supply and achieves a Signal-to-Noise-plus-Distortion Ratio (SNDR)
of 34.5 dB and 30 dB at 50-MHz and 1450-MHz input signal frequency,
respectively. The measured peak Integral-Non-Linearity (INL) and
Differential-Non-Linearity (DNL) are 0.42 LSB and 0.49 LSB,
respectively
Integrated Circuits for Medical Ultrasound Applications: Imaging and Beyond
Medical ultrasound has become a crucial part of modern society and continues to play a vital role in the diagnosis and treatment of illnesses. Over the past decades, the development of medical ultrasound has seen extraordinary progress as a result of the tremendous research advances in microelectronics, transducer technology and signal processing algorithms. However, medical ultrasound still faces many challenges including power-efficient driving of transducers, low-noise recording of ultrasound echoes, effective beamforming in a non-linear, high-attenuation medium (human tissues) and reduced overall form factor. This paper provides a comprehensive review of the design of integrated circuits for medical ultrasound applications. The most important and ubiquitous modules in a medical ultrasound system are addressed: i) transducer driving circuit, ii) low-noise amplifier, iii) beamforming circuit and iv) analog-to-digital converter. Within each ultrasound module, some representative research highlights are described followed by a comparison of the state-of-the-art. This paper concludes with a discussion and recommendations for future research directions
Circuit techniques for low-voltage and high-speed A/D converters
The increasing digitalization in all spheres of electronics applications, from telecommunications systems to consumer electronics appliances, requires analog-to-digital converters (ADCs) with a higher sampling rate, higher resolution, and lower power consumption. The evolution of integrated circuit technologies partially helps in meeting these requirements by providing faster devices and allowing for the realization of more complex functions in a given silicon area, but simultaneously it brings new challenges, the most important of which is the decreasing supply voltage.
Based on the switched capacitor (SC) technique, the pipelined architecture has most successfully exploited the features of CMOS technology in realizing high-speed high-resolution ADCs. An analysis of the effects of the supply voltage and technology scaling on SC circuits is carried out, and it shows that benefits can be expected at least for the next few technology generations. The operational amplifier is a central building block in SC circuits, and thus a comparison of the topologies and their low voltage capabilities is presented.
It is well-known that the SC technique in its standard form is not suitable for very low supply voltages, mainly because of insufficient switch control voltage. Two low-voltage modifications are investigated: switch bootstrapping and the switched opamp (SO) technique. Improved circuit structures are proposed for both. Two ADC prototypes using the SO technique are presented, while bootstrapped switches are utilized in three other prototypes.
An integral part of an ADC is the front-end sample-and-hold (S/H) circuit. At high signal frequencies its linearity is predominantly determined by the switches utilized. A review of S/H architectures is presented, and switch linearization by means of bootstrapping is studied and applied to two of the prototypes. Another important parameter is sampling clock jitter, which is analyzed and then minimized with carefully-designed clock generation and buffering.
The throughput of ADCs can be increased by using parallelism. This is demonstrated on the circuit level with the double-sampling technique, which is applied to S/H circuits and a pipelined ADC. An analysis of nonidealities in double-sampling is presented. At the system level parallelism is utilized in a time-interleaved ADC. The mismatch of parallel signal paths produces errors, for the elimination of which a timing skew insensitive sampling circuit and a digital offset calibration are developed.
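The inter-channel mismatch errors mentioned here are easy to reproduce behaviorally: per-channel offsets in an M-way time-interleaved ADC turn even a DC input into a pattern that repeats every M samples (a spur at fs/M). A minimal sketch with made-up offset values:

```python
def interleave(dc_input, channel_offsets, n_samples):
    """M-way time-interleaved sampling: channels take turns, and each
    adds its own offset, so a DC input becomes periodic with period M."""
    m = len(channel_offsets)
    return [dc_input + channel_offsets[k % m] for k in range(n_samples)]

# Two channels, 20 mV of offset mismatch on the second channel
out = interleave(0.5, [0.00, 0.02], 6)
print(out)  # -> [0.5, 0.52, 0.5, 0.52, 0.5, 0.52]
```

A digital offset calibration of the kind developed in the thesis would estimate each channel's mean and subtract it, removing the periodic pattern; timing skew produces an analogous signal-dependent error that is harder to estimate.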
A total of seven prototypes are presented: two double-sampled S/H circuits, a time-interleaved ADC, an IF-sampling self-calibrated pipelined ADC, a current-steering DAC with a deglitcher, and two pipelined ADCs employing the SO technique.