
    Electronics for Sensors

    The aim of this Special Issue is to explore new advanced solutions in electronic systems and interfaces for sensors, describing best practices, implementations, and applications. The selected papers concern, in particular, interfaces and applications of photomultiplier tubes (PMTs) and silicon photomultipliers (SiPMs), techniques for monitoring radiation levels, electronics for biomedical applications, the design and applications of time-to-digital converters, interfaces for image sensors, and general-purpose theory and topologies for electronic interfaces.

    Design and debugging of multi-step analog to digital converters

    With the fast advancement of CMOS fabrication technology, more and more signal-processing functions are implemented in the digital domain for lower cost, lower power consumption, higher yield, and higher re-configurability. The trend of increasing integration in integrated circuits has forced the A/D converter interface to reside on the same silicon in complex mixed-signal ICs containing mostly digital blocks for DSP and control. However, converter specifications in various applications emphasize high dynamic range and low spurious spectral performance. It is nontrivial to achieve this level of linearity in a monolithic environment where post-fabrication component trimming or calibration is cumbersome to implement, whether for certain applications or for cost and manufacturability reasons. Additionally, as CMOS integrated circuits reach unprecedented integration levels, potential problems associated with device scaling, the short-channel effects, loom large as technology strides into the deep-submicron regime.

    The A/D conversion process involves sampling the applied analog input signal and quantizing it to its digital representation by comparing it to reference voltages before further signal processing in subsequent digital systems. Depending on how these functions are combined, different A/D converter architectures can be implemented with different requirements on each function. Practical realizations show that, to first order, converter power is directly proportional to sampling rate. However, the required power dissipation becomes nonlinear as the speed capabilities of a process technology are pushed to the limit. Pipeline and two-step/multi-step converters tend to be the most efficient at achieving a given resolution and sampling-rate specification.

    This thesis is unique in that it covers the whole spectrum of design, test, debugging, and calibration of multi-step A/D converters; it incorporates the development of circuit techniques and algorithms that enhance the resolution and attainable sample rate of an A/D converter, and that enhance testing and debugging potential to detect errors dynamically, to isolate and confine faults, and to recover from and compensate for the errors continuously. The power efficiency achievable at high resolution in a multi-step converter, by combining parallelism and calibration and exploiting low-voltage circuit techniques, is demonstrated with a 1.8 V, 12-bit, 80 MS/s, 100 mW analog-to-digital converter fabricated in a five-metal-layer 0.18-µm CMOS process.

    Lower power supply voltages significantly reduce noise margins and increase variations in process, device, and design parameters. Consequently, it is increasingly difficult to control the fabrication process precisely enough to maintain uniformity. Microscopic particles present in the manufacturing environment and slight variations in the parameters of manufacturing steps can all cause the geometrical and electrical properties of an IC to deviate from those specified at the end of the design process. Such defects can cause various types of malfunctioning, depending on the IC topology and the nature of the defect. To relieve the burden placed on IC design and manufacturing by the ever-increasing costs of testing and debugging complex mixed-signal electronic systems, several circuit techniques and algorithms are developed and incorporated in the proposed ATPG, DfT, and BIST methodologies.
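    As background to the multi-step conversion principle described above, the following minimal sketch models an idealized pipeline/multi-step ADC in which each stage resolves a few bits, subtracts its local DAC estimate, and amplifies the residue for the next stage. All names and parameter values are illustrative assumptions, and the model deliberately ignores the gain errors, offsets, and comparator non-idealities that the thesis's calibration techniques address.

```python
def pipeline_adc(vin, n_stages=6, bits_per_stage=2, vref=1.0):
    """Idealized multi-step/pipeline ADC model (assumes 0 <= vin < vref).

    Each stage coarse-quantizes the residue, subtracts its DAC estimate,
    and amplifies the remainder for the next stage. Non-idealities such
    as gain error and offset are intentionally omitted.
    """
    levels = 2 ** bits_per_stage
    word = 0
    residue = vin
    for _ in range(n_stages):
        # Sub-ADC: coarse-quantize the current residue.
        code = max(0, min(int(residue / vref * levels), levels - 1))
        # MDAC: subtract the DAC estimate, amplify by 2**bits_per_stage.
        residue = (residue - code * vref / levels) * levels
        word = (word << bits_per_stage) | code
    return word  # e.g., 6 stages x 2 bits -> a 12-bit output word
```

    With six 2-bit stages this resolves 12 bits, matching the resolution of the demonstrated converter; a practical pipeline additionally uses redundancy (for example, 1.5-bit stages) so that comparator offsets can be absorbed digitally.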
    Process variation cannot be solved by improving manufacturing tolerances; variability must be reduced by new device technology or managed by design in order for scaling to continue. Similarly, within-die performance variation imposes new challenges for test methods. With the use of dedicated sensors, which exploit knowledge of the circuit structure and the specific defect mechanisms, the method described in this thesis facilitates early and fast identification of the effects of excessive process parameter variation. The expectation-maximization algorithm makes the estimation problem more tractable and also yields good parameter estimates for small sample sizes. To guide testing with the information obtained by monitoring process variations, an adjusted support vector machine classifier is implemented that simultaneously minimizes the empirical classification error and maximizes the geometric margin. On a positive note, the use of digital calibration techniques reduces the need for expensive technologies with special fabrication steps. Indeed, the extra cost of digital processing is normally affordable, as the use of submicron mixed-signal technologies allows efficient use of silicon area even for relatively complex algorithms. The adaptive filtering algorithm employed for error estimation requires only a small number of operations per iteration and needs neither correlation function calculations nor matrix inversions. The presented calibration algorithm does not need any dedicated test signal and does not consume any part of the conversion time; it works continuously and with every signal applied to the A/D converter. The feasibility of the method for on-line and off-line debugging and calibration has been verified by experimental measurements from a silicon prototype fabricated in a standard single-poly, six-metal 0.09-µm CMOS process.
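    The abstract's description of the error-estimation filter (few operations per iteration, no correlation function calculations or matrix inversions) matches the least-mean-squares (LMS) family of adaptive filters. The sketch below is a generic LMS update, not the thesis's specific algorithm; the tap count and step size are illustrative assumptions, and the inputs are assumed to be NumPy arrays.

```python
import numpy as np

def lms_estimate(x, d, n_taps=8, mu=0.01):
    """Least-mean-squares adaptive filter: learns weights mapping the
    input signal x to the desired signal d, one cheap update per sample."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]   # most recent n_taps input samples
        e[n] = d[n] - w @ u         # instantaneous estimation error
        w += mu * e[n] * u          # stochastic-gradient weight update
    return w, e
```

    Each iteration costs on the order of n_taps multiply-accumulates, which is what makes this class of algorithm attractive for continuous on-chip error estimation.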

    Novel Transistor Resistance Variation-based Physical Unclonable Functions with On-Chip Voltage-to-Digital Converter Designed for Use in Cryptographic and Authentication Applications

    Security mechanisms such as encryption, authentication, and feature activation depend on the integrity of embedded secret keys. Currently, this keying material is stored as digital bitstrings in non-volatile memory on FPGAs and ASICs. However, secrets stored this way are not secure against a determined adversary, who can use specialized probing attacks to uncover them. Furthermore, storing these pre-determined bitstrings has the disadvantage that the key cannot be generated only when needed. Physical Unclonable Functions (PUFs) have emerged as a superior alternative. A PUF is an embedded Integrated Circuit (IC) structure designed to leverage random variations in the physical parameters of on-chip components as the source of entropy for generating random and unique bitstrings. PUFs also incorporate an on-chip infrastructure for measuring and digitizing these variations in order to produce bitstrings, and they are designed to reproduce a bitstring on demand, thereby eliminating the need for on-chip key storage.

    In this work, two novel PUFs are presented that leverage the random variations observed in the resistance of transistors. A thorough analysis of the randomness, uniqueness, and stability characteristics of the bitstrings generated by these PUFs is presented. All results are based on exhaustive testing of a set of 63 chips, each designed with numerous copies of the PUFs and fabricated in a 90 nm nine-metal-layer technology. An on-chip voltage-to-digital conversion technique is also presented and tested on the same set of 63 chips. Statistical results of the bitstrings generated by the on-chip digitization technique are compared with those of the voltage-derived bitstrings to evaluate the efficacy of the digitization technique. One of the most important quality metrics of the PUF and the on-chip voltage-to-digital converter, stability, is evaluated through lengthy temperature-voltage testing over the range -40 °C to +85 °C with voltage variations of +/- 10% of the nominal supply voltage. The stability of both the bitstrings and the underlying physical parameters is evaluated using data collected from the hardware experiments, supported by software simulations of the devices. Several novel techniques are proposed and successfully tested that address known issues related to the instability of PUFs under changing temperature and voltage conditions, rendering our PUFs more resilient to the changing conditions faced in practical use. Lastly, an analysis of the stability under temperature and voltage variations of a third PUF, which leverages random variations in the resistance of the metal wires in the power and ground grids of a chip, is also presented.
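    Uniqueness and stability of PUF bitstrings are commonly quantified with fractional Hamming distances: the inter-chip distance should approach 50%, while the intra-chip distance across temperature and voltage corners should approach 0%. The sketch below computes these standard metrics; it assumes bitstrings are supplied as equal-length NumPy boolean arrays and is not tied to the specific evaluation flow used in this work.

```python
import numpy as np
from itertools import combinations

def uniqueness(bitstrings):
    """Mean pairwise fractional Hamming distance between bitstrings of
    different chips; ideally close to 0.5 for a good PUF."""
    dists = [np.mean(a != b) for a, b in combinations(bitstrings, 2)]
    return float(np.mean(dists))

def stability(nominal, corners):
    """Fraction of bits that never flip across temperature/voltage
    corners, relative to the nominal-condition bitstring; ideally 1.0."""
    flips = np.zeros(len(nominal), dtype=bool)
    for c in corners:
        flips |= (c != nominal)
    return 1.0 - float(np.mean(flips))
```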

    Synthesis and monolithic integration of analogue signal processing networks

    Data traffic in future 5G telecommunication systems is projected to increase 10 000-fold compared to current rates. 5G fronthaul links are therefore expected to operate in the mm-wave spectrum, with preliminary International Telecommunication Union specifications set for the 71-76 and 81-86 GHz bands. Processing 5 GHz as a single contiguous band in real time, using existing digital signal processing (DSP) systems, is exceedingly challenging. A similar challenge exists in radio astronomy, with the Square Kilometer Array project expecting data throughput rates of 15 Tbit/s at its completion. Speed improvements of 2-3 orders of magnitude over existing state-of-the-art DSPs are therefore required to meet future demands. One possible mitigating approach to processing wideband data in real time is to replace some DSP blocks with analogue signal processing (ASP) equivalents, since analogue devices outperform their digital counterparts in terms of cost, power consumption, and maximum attainable bandwidth.

    The fundamental building block of any ASP is an all-pass network of prescribed response, which can always be synthesized from cascaded first- and second-order all-pass sections (two cascaded first-order sections being a special case of the latter). The monolithic integration of all-pass networks in commercial CMOS and BiCMOS technology nodes is a key consideration for commercial adoption of ASPs, since it supports mass production at reduced cost and operating power, making the ASP approach feasible. However, this integration has presented a number of as-yet-unsolved challenges. Firstly, the state-of-the-art methods for synthesizing quasi-arbitrary group delay functions using all-pass elements lack a theoretical synthesis procedure that guarantees minimum-order networks. In this work, an analytically based solution to the synthesis problem is presented that produces an all-pass network whose response approximates the required group delay to within an arbitrary minimax error. This method is shown to work for any physical realization of second-order all-pass elements, is guaranteed to converge to a globally optimal solution without requiring seed values as input, and allows synthesis of pre-defined networks described either analytically or numerically. Secondly, second-order all-pass networks are currently implemented primarily in off-chip planar media, which are unsuited to high-volume production. Component sensitivity, process tolerances, and on-chip parasitics often make proposed on-chip designs impractical. Consequently, to date, no measured results of a dispersive on-chip second-order all-pass network suitable for ASP applications (delay Q-value (QD) larger than 1) have been presented in either CMOS or BiCMOS technology nodes. In this work, the first on-chip CMOS second-order all-pass network with a measured QD-value larger than 1 is proposed. Measurements indicate a post-tuning bandwidth of 280 MHz, a peak-to-nominal delay variation of 10 ns, a QD-value of 1.15, and a magnitude variation of 3.1 dB. An active on-chip mm-wave second-order all-pass network is further demonstrated in a 130 nm SiGe BiCMOS technology node, with a bandwidth of 40 GHz, a peak-to-nominal delay of 62 ps, a QD-value of 3.6, and a magnitude ripple of 1.4 dB. This is the first time that measurement results of a mm-wave-bandwidth second-order all-pass network have been reported.
    This work therefore presents a first step towards monolithically integrating ASP solutions to conventional DSP problems, thereby enabling ultra-wideband signal processing on-chip in commercial technology nodes. Thesis (PhD), University of Pretoria, 2018.
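    To illustrate the building block named above, the sketch below evaluates the group delay of a cascade of second-order all-pass sections, using the standard continuous-time form H(s) = (s^2 - (w0/Q)s + w0^2) / (s^2 + (w0/Q)s + w0^2), whose magnitude is unity at all frequencies while its delay peaks near w0. The section parameters are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from scipy import signal

def second_order_allpass(w0, q):
    """Coefficients of a second-order all-pass section: |H(jw)| = 1,
    with a group-delay peak near w0 whose sharpness is set by q."""
    num = [1.0, -w0 / q, w0 ** 2]
    den = [1.0, w0 / q, w0 ** 2]
    return num, den

# Cascade two sections and evaluate the total group delay numerically.
w = np.linspace(0.1, 5.0, 2000)          # rad/s, normalized frequency
delay = np.zeros_like(w)
for w0, q in [(1.0, 1.2), (2.0, 0.8)]:   # assumed section parameters
    num, den = second_order_allpass(w0, q)
    _, h = signal.freqs(num, den, worN=w)
    phase = np.unwrap(np.angle(h))
    delay += -np.gradient(phase, w)      # group delay = -dphi/dw
```

    Synthesis methods such as the one described choose the (w0, Q) pairs of such sections so that the summed delay approximates a prescribed group-delay function to within a minimax error.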

    Millimeter-Precision Laser Rangefinder Using a Low-Cost Photon Counter

    In this book we successfully demonstrate a millimeter-precision laser rangefinder using a low-cost photon counter. An application-specific integrated circuit (ASIC) comprises the timing circuitry and single-photon avalanche diodes (SPADs) as the photodetectors. For the timing circuitry, a novel binning architecture for sampling the received signal is proposed, which mitigates non-idealities inherent to a system with SPADs and timing circuitry on one chip.
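    In a direct time-of-flight rangefinder of this kind, photon arrival times from the SPADs are accumulated into a timing histogram, and the round-trip time of the peak bin is converted to a distance. The following sketch shows that basic computation with assumed names and bin width; it uses a simple peak pick, whereas a millimeter-precision system would additionally interpolate within the peak (for example, by centroiding), since a 50 ps bin alone corresponds to roughly 7.5 mm of range.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def estimate_range(timestamps_s, bin_width_s=50e-12):
    """Histogram photon arrival times (TDC-style binning) and convert
    the peak bin's round-trip time to a one-way distance in meters."""
    n_bins = int(np.ceil(np.max(timestamps_s) / bin_width_s))
    hist, edges = np.histogram(timestamps_s, bins=n_bins,
                               range=(0.0, n_bins * bin_width_s))
    peak = np.argmax(hist)                       # strongest return
    t_round_trip = (edges[peak] + edges[peak + 1]) / 2.0
    return C * t_round_trip / 2.0                # one-way distance
```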

    Analysis and Design of Resilient VLSI Circuits

    The reliable operation of Integrated Circuits (ICs) has become increasingly difficult to achieve in the deep sub-micron (DSM) era. With continuously decreasing device feature sizes, combined with lower supply voltages and higher operating frequencies, the noise immunity of VLSI circuits is decreasing alarmingly. Thus, VLSI circuits are becoming more vulnerable to noise effects such as crosstalk, power supply variations, and radiation-induced soft errors. Among these noise sources, soft errors (errors caused by radiation particle strikes) have become an increasingly troublesome issue for memory arrays as well as combinational logic circuits. Also, in the DSM era, process variations are increasing at an alarming rate, making it more difficult to design reliable VLSI circuits. Hence, it is important to efficiently design robust VLSI circuits that are resilient to radiation particle strikes and process variations. The work presented in this dissertation offers several analysis and design techniques with the goal of realizing VLSI circuits that are tolerant to radiation particle strikes and process variations.

    This dissertation consists of two parts. The first part proposes four analysis and two design approaches to address radiation particle strikes. The analysis techniques for radiation particle strikes include: an approach to analytically determine the pulse width and the pulse shape of a radiation-induced voltage glitch in combinational circuits, a technique to model the dynamic stability of SRAMs, and a 3D device-level analysis of the radiation tolerance of voltage-scaled circuits. Experimental results demonstrate that the proposed techniques for analyzing radiation particle strikes in combinational circuits and SRAMs are fast and accurate compared to SPICE. These analysis approaches can therefore be easily integrated in a VLSI design flow to analyze the radiation tolerance of such circuits and to harden them early in the design flow. From the 3D device-level analysis of the radiation tolerance of voltage-scaled circuits, several non-intuitive observations are made, and a corresponding set of guidelines is proposed that is important to consider when realizing radiation-hardened circuits. Two circuit-level hardening approaches are also presented to harden combinational circuits against a radiation particle strike. These approaches significantly improve the tolerance of combinational circuits against low- and very-high-energy radiation particle strikes, respectively, with modest area and delay overheads.

    The second part of this dissertation addresses process variations. A technique is developed to perform sensitizable statistical timing analysis of a circuit, thereby improving the accuracy of timing analysis under process variations. Experimental results demonstrate that this technique significantly reduces the pessimism due to two sources of inaccuracy that plague current statistical static timing analysis (SSTA) tools. Two design approaches are also proposed to improve the process variation tolerance of combinational circuits and of voltage level shifters (which are used in circuits with multiple interacting power supply domains), respectively. The variation-tolerant design approach for combinational circuits significantly improves the resilience of these circuits to random process variations, with a reduction in the worst-case delay and a low area penalty.
    The proposed voltage level shifter is faster, requires less dynamic power and area, has lower leakage currents, and is more tolerant to process variations than the best previously known approach. In summary, this dissertation presents several analysis and design techniques that significantly augment the existing work in the area of resilient VLSI circuit design.
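    A common first-order abstraction in the soft-error literature (not necessarily the analytical method of this dissertation) models a particle strike as a double-exponential current pulse injected at the struck node, with the resulting voltage glitch shaped by the node's equivalent resistance and capacitance. The sketch below implements that textbook model; the collected charge, time constants, and RC values are illustrative assumptions.

```python
import numpy as np

def seu_current(t, q_coll=100e-15, tau_a=200e-12, tau_b=50e-12):
    """Double-exponential model of the transient current induced by a
    particle strike depositing charge q_coll at a circuit node."""
    return (q_coll / (tau_a - tau_b)) * (np.exp(-t / tau_a)
                                         - np.exp(-t / tau_b))

def node_glitch(t, c_node=10e-15, r_eq=10e3, **pulse):
    """Voltage glitch at the struck node: the current pulse drives the
    node's equivalent RC (simple forward-Euler integration; the time
    step must be small compared to r_eq * c_node)."""
    v = np.zeros_like(t)
    dt = t[1] - t[0]
    i = seu_current(t, **pulse)
    for k in range(1, len(t)):
        v[k] = v[k - 1] + (i[k] - v[k - 1] / r_eq) * dt / c_node
    return v
```

    For example, v = node_glitch(np.arange(0, 2e-9, 1e-12)) traces a glitch whose width and amplitude depend on the deposited charge and the node RC, which is exactly the kind of quantity a glitch pulse-width analysis has to predict.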

    Circuit Design

    Circuit Design = Science + Art! Designers need a skilled "gut feeling" about circuits and related analytical techniques, plus creativity, to solve all problems and to adhere to the specifications, both the written and the unwritten ones. You must anticipate a large number of influences, like temperature effects, supply-voltage changes, offset voltages, layout parasitics, and numerous kinds of technology variations, to end up with a circuit that works. This is challenging for analog, custom-digital, mixed-signal, or RF circuits, and researching new design methods in relevant journals, conference proceedings, and design tools often gives the impression that just a "wild bunch" of "advanced techniques" exists. On the other hand, state-of-the-art tools nowadays offer a good cockpit to steer the design flow, including clever statistical methods and optimization techniques. Actually, this almost amounts to a second breakthrough, like the introduction of circuit simulators 40 years ago! Users can now conveniently analyse all the problems (discover, quantify, verify) and even exploit them, for example for optimization purposes. Most designers are caught up in everyday problems, so we fit that "wild bunch" into a systematic approach for variation-aware design, a designer's field guide, and more. That is where this book can help!

    Circuit Design: Anticipate, Analyze, Exploit Variations starts with best-practice manual methods and links them tightly to up-to-date automation algorithms. We provide many tractable examples and explain key techniques you have to know. We then enable you to select and set up suitable methods for each design task, knowing their prerequisites, advantages and, as is too often overlooked, their limitations as well. The good thing about computers is that you can often verify amazing things yourself with little effort, and you can use software not only to your direct advantage in solving a specific problem, but also to become a better-skilled, more experienced engineer. Unfortunately, EDA design environments are not good at all for learning about advanced numerics, so with this book we also provide two apps for learning about statistics and optimization directly with circuit-related examples, in real time and without long simulation times. This helps to develop a healthy statistical gut feeling for circuit design. The book is written for engineers, students in engineering, and CAD/methodology experts. Readers should have some background in standard design techniques, like entering a design in a schematic capture tool and simulating it, and should also know about major technology aspects.
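    The kind of statistical gut feeling the book advocates can be built with quick Monte Carlo experiments. As a minimal illustration (not an example taken from the book), the sketch below sweeps an RC low-pass cutoff frequency under assumed, normally distributed component tolerances and estimates the yield against a +/-10% specification window.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 10_000

# Nominal RC low-pass with assumed 3-sigma tolerances of 10% (R)
# and 15% (C), modeled as normally distributed process variation.
r_nom, c_nom = 10e3, 1e-9
fc_nom = 1.0 / (2 * np.pi * r_nom * c_nom)
r = rng.normal(r_nom, r_nom * 0.10 / 3, n)
c = rng.normal(c_nom, c_nom * 0.15 / 3, n)
fc = 1.0 / (2 * np.pi * r * c)

print(f"fc: nominal {fc_nom/1e3:.2f} kHz, "
      f"mean {fc.mean()/1e3:.2f} kHz, sigma {fc.std()/1e3:.2f} kHz")
print(f"yield within +/-10% of nominal: "
      f"{np.mean(np.abs(fc - fc_nom) <= 0.1 * fc_nom):.1%}")
```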

    NASA Tech Briefs, February 2001

    The topics include: 1) Application Briefs; 2) National Design Engineering Show Preview; 3) Marketing Inventions to Increase Income; 4) A Personal-Computer-Based Physiological Training System; 5) Reconfigurable Arrays of Transistors for Evolvable Hardware; 6) Active Tactile Display Device for Reading by a Blind Person; 7) Program Automates Management of IBM VM Computer Systems; 8) System for Monitoring the Environment of a Spacecraft Launch; 9) Measurement of Stresses and Strains in Muscles and Tendons; 10) Optical Measurement of Temperatures in Muscles and Tendons; 11) Small Low-Temperature Thermometer With Nanokelvin Resolution; 12) Heterodyne Interferometer With Phase-Modulated Carrier; 13) Rechargeable Batteries Based on Intercalation in Graphite; 14) Signal Processor for Doppler Measurements in Icing Research; 15) Model Optimizes Drying of Wet Sheets; 16) High-Performance POSS-Modified Polymeric Composites; 17) Model Simulates Semi-Solid Material Processing; 18) Modular Cryogenic Insulation; 19) Passive Venting for Alleviating Helicopter Tail-Boom Loads; 20) Computer Program Predicts Rocket Noise; 21) Process for Polishing Bare Aluminum to High Optical Quality; 22) External Adhesive Pressure-Wall Patch; 23) Java Implementation of Information-Sharing Protocol; 24) Electronic Bulletin Board Publishes Schedules in Real Time; 25) Apparatus Would Extract Water From the Martian Atmosphere; 26) Review of Research on Supercritical vs Subcritical Fluids; 27) Hybrid Regenerative Water-Recycling System; 28) Study of Fusion-Driven Plasma Thruster With Magnetic Nozzle; 29) Liquid/Vapor-Hydrazine Thruster Would Produce Small Impulses; and 30) Thruster Based on Sublimation of Solid Hydrazine.

    Biomedical Engineering

    Biomedical engineering is currently a relatively wide scientific field that has constantly brought innovations aimed at supporting and improving all areas of medicine, such as therapy, diagnostics, and rehabilitation. It also holds a strong position in the natural and biological sciences. In terms of application, biomedical engineering is present at almost all technical universities, some of which focus on research and development in this area. This book presents selected outputs and results of research and development tasks, often supported by major international or European framework programs or grant agencies. The knowledge and findings from the areas of biomaterials, bioelectronics, bioinformatics, biomedical devices and tools, and computer support in the processes of diagnostics and therapy are presented in a way that provides both basic information for the reader and specific results with potential for further use in research and development.