    Digital Background Self-Calibration Technique for Compensating Transition Offsets in Reference-less Flash ADCs

    This dissertation focuses on showing that background calibration using adaptive algorithms is a low-cost, stable and effective method for obtaining high accuracy in flash A/D converters. An integrated reference-less 3-bit flash ADC circuit has been designed and taped out in UMC 180 nm CMOS technology in order to prove the efficiency of the proposed background calibration. The references for the ADC transitions are implemented virtually, built into the comparators' dynamic-latch topology through a controlled mismatch added to each comparator input front-end. A very simple external DAC block (calibration bank) allows the amount of mismatch added to each comparator front-end to be controlled and, therefore, the offset of its effective transition with respect to the nominal value to be compensated. To assist in the estimation of the offsets of the prototype comparators, an auxiliary A/D converter with higher resolution and lower conversion speed than the flash ADC is used: a 6-bit capacitive-DAC SAR type. Special care has been taken to synchronize the analogue sampling instants of the two ADCs. In this thesis, a criterion to identify the optimum parameters of a flash ADC design with adaptive background calibration has been established. With this criterion, the best choices of dynamic-latch architecture, calibration-bank resolution and flash ADC resolution are selected. The performance of the calibration algorithm has been tested, with great programmability provided to the digital processor that implements it, allowing the algorithm limits, accuracy and arithmetic quantization errors to be chosen. Furthermore, a systematic controlled offset can be forced in the comparators of the flash ADC in order to test the calibration more exhaustively.
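
    A minimal sketch of how such an adaptive background calibration loop can operate is shown below. It uses a sign-based (LMS-like) update in which the decision of one flash comparator is checked against the decision implied by the higher-resolution SAR ADC, and the calibration-bank code is nudged whenever the two disagree. All names and numerical values (full scale, calibration step, bank resolution) are illustrative assumptions, not figures taken from the dissertation.

    # Illustrative sketch only: sign-based background calibration of one flash
    # comparator against a higher-resolution SAR ADC reference. All names and
    # numbers are assumptions for illustration, not taken from the dissertation.
    import random

    V_FS      = 1.0          # assumed full-scale input range [V]
    T_NOMINAL = 0.5 * V_FS   # nominal transition of the comparator under calibration
    CAL_STEP  = 2e-3         # assumed offset correction per calibration-bank LSB [V]
    CAL_BITS  = 5            # assumed calibration-bank resolution
    SAR_BITS  = 6            # auxiliary SAR ADC resolution (from the abstract)

    true_offset = 0.030      # unknown comparator offset to be compensated [V]
    cal_code = 0             # signed calibration-bank code

    def flash_compare(v_in, cal_code):
        # Comparator decision including its offset and the injected correction.
        return v_in > (T_NOMINAL + true_offset - cal_code * CAL_STEP)

    def sar_convert(v_in):
        # Ideal SAR conversion used as the calibration reference.
        code = round(v_in / V_FS * (2**SAR_BITS - 1))
        return max(0, min(2**SAR_BITS - 1, code))

    for _ in range(20000):                 # background: runs on normal input samples
        v_in = random.uniform(0.0, V_FS)
        flash_bit = flash_compare(v_in, cal_code)
        sar_bit = sar_convert(v_in) >= sar_convert(T_NOMINAL)   # reference decision
        if flash_bit != sar_bit:           # disagreement -> nudge the calibration bank
            cal_code += 1 if sar_bit else -1
            cal_code = max(-(2**(CAL_BITS - 1)), min(2**(CAL_BITS - 1) - 1, cal_code))

    print("estimated offset [mV]:", round(cal_code * CAL_STEP * 1e3, 1),
          " true offset [mV]:", round(true_offset * 1e3, 1))

    With these assumed values the loop settles at a calibration code of about 15 LSBs, i.e. roughly the 30 mV of injected offset, illustrating how the estimation runs in the background on ordinary input samples rather than on a dedicated test signal.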

    A 10-b Fourth-Order Quadrature Bandpass Continuous-Time ΣΔ Modulator With 33-MHz Bandwidth for a Dual-Channel GNSS Receiver

    This document is the Accepted Manuscript version of the following article: Junfeng Zhang, Yang Xu, Zehong Zhang, Yichuang Sun, Zhihua Wang, and Baoyong Chi, 'A 10-b Fourth-Order Quadrature Bandpass Continuous-Time ΣΔ Modulator With 33-MHz Bandwidth for a Dual-Channel GNSS Receiver', IEEE Transactions on Microwave Theory and Techniques, Vol. 65 (4): 1303-1314, first published online 16 February 2017. The version of record is available online at DOI: 10.1109/TMTT.2017.266237, published by IEEE. © 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

    A fourth-order quadrature bandpass continuous-time sigma-delta modulator for a dual-channel global navigation satellite system (GNSS) receiver is presented. With a bandwidth (BW) of 33 MHz, the modulator is able to digitize the downconverted GNSS signals in two adjacent signal bands simultaneously, realizing dual-channel GNSS reception with one receiver channel instead of two independent receiver channels. To maintain the loop stability of the high-order architecture, any extra loop phase shift should be minimized. In the system architecture, a hybrid feedback/feedforward architecture is used to implement the fourth-order loop filter, and a return-to-zero (RZ) feedback after the discrete-time differentiation is introduced at the input of the final integrator to realize excess-loop-delay compensation, saving a spare summing amplifier. In the circuit implementation, power-efficient amplifiers with high-frequency active feedforward and antipole-splitting techniques are employed in the active RC integrators, and self-calibrated comparators are used to implement the low-power 3-b quantizers. These power-saving techniques help the presented modulator achieve a superior figure of merit. With a sampling rate of 460 MHz, current-steering digital-to-analog converters are chosen to guarantee high conversion speed. Implemented in 180-nm CMOS, the modulator achieves a 62.1-dB peak signal-to-noise-and-distortion ratio, 64-dB dynamic range, and 59.3-dB image rejection ratio over the 33-MHz BW, and consumes 54.4 mW from a 1.8-V power supply. Peer reviewed.
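
    As a quick sanity check of the quoted power efficiency, the conventional Walden figure of merit can be estimated from the numbers given in the abstract. Treating the 33-MHz quadrature bandwidth as the effective signal bandwidth is an assumption here; figure-of-merit conventions for quadrature modulators vary.

    # Back-of-the-envelope Walden FOM from the figures quoted in the abstract.
    sndr_db = 62.1      # peak SNDR [dB]
    bw_hz = 33e6        # signal bandwidth [Hz] (assumed as the effective BW)
    p_watt = 54.4e-3    # power consumption [W]

    enob = (sndr_db - 1.76) / 6.02          # effective number of bits, ~10.0
    fom = p_watt / (2**enob * 2 * bw_hz)    # Walden FOM [J/conversion-step]
    print(round(enob, 1), "bits,", round(fom * 1e15), "fJ/conversion-step")

    With these numbers the estimate comes out at roughly 0.8 pJ (about 800 fJ) per conversion step.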

    A facility to Search for Hidden Particles (SHiP) at the CERN SPS

    A new general-purpose fixed-target facility is proposed at the CERN SPS accelerator, aimed at exploring the domain of hidden particles and making measurements with tau neutrinos. Hidden particles are predicted by a large number of models beyond the Standard Model. The high intensity of the SPS 400 GeV beam allows probing a wide variety of models containing light long-lived exotic particles with masses below O(10) GeV/c^2, including very weakly interacting low-energy SUSY states. The experimental programme of the proposed facility is capable of being extended in the future, e.g. to include direct searches for Dark Matter and Lepton Flavour Violation. Comment: Technical Proposal

    Design and Development of a Multi-Purpose Input Output Controller Board for the SPES Control System

    This PhD work has been carried out at the Legnaro National Laboratories (LNL), one of the four national labs of the National Institute for Nuclear Physics (INFN). The mission of LNL is to perform research in the field of nuclear physics and nuclear astrophysics together with emerging technologies. Technological research and innovation are key to promoting excellence in science, fostering competitive industries and building a better society. The research activities concerning electronics and computer science are an essential base on which to develop the control system of the Selective Production of Exotic Species (SPES) project. Nowadays, SPES is the most important project commissioned at LNL and represents the future of the laboratory. It is a second-generation Isotope Separation On-Line (ISOL) radioactive ion beam facility intended for fundamental nuclear physics research as well as experimental applications in different fields of science, such as nuclear medicine and radiopharmaceutical production for therapy and diagnostics. The design of the SPES control system demands innovative technologies to embed the control of several appliances with different requirements, performing different tasks that span data sharing and visualization, data acquisition and storage, networking, security and surveillance operations, and beam transport and diagnostics. The real-time applications and fast peripheral control commonly found in the distributed control networks of particle accelerators are accompanied by the challenge of developing custom embedded systems. In this context, the proposed PhD work describes the design and development of a multi-purpose Input Output Controller (IOC) board capable of embedding the control of the typical accelerator instrumentation involved in the automatic beam transport system foreseen for the SPES project. The idea behind this work is to extend the control reach to the single-device level without losing modularity and standardization. The outcome of the research work is a general-purpose embedded computer that will be the basis for standardizing the hardware layer of the front-end computers in the SPES distributed control system. The IOC board is a Computer-on-Module (COM) carrier board designed to host any COM Express type 6 module and is equipped with a Field Programmable Gate Array (FPGA) and application-specific I/O connection solutions not found in a desktop PC. All the generic PC functionalities are readily available in off-the-shelf modules, and the result is a custom motherboard that bridges the gap between custom developments and commercial personal computers. The end user deals with a general-purpose PC with a high level of hardware abstraction, while also being able to exploit the on-board FPGA for fast peripheral control and real-time digital data processing. This document opens with an introductory chapter about the SPES project and its control system architecture and technology before describing the IOC board design, prototyping and characterization. The thesis ends by describing the field installation of the IOC board, which is the core of the new diagnostics data read-out and signal-processing system.
    The results of the tests performed under real beam conditions prove that the new hardware extends the current-measurement sensitivity to the pA range, addressing the SPES requirements, and that the IOC board is a reliable solution for standardizing the control of several appliances in the SPES accelerator complex, where it will be embedded into physical equipment, or installed in its proximity, to control and monitor its operation, replacing the legacy VME technology. The field installation of the IOC board represents a great personal reward and crowns these busy years, during which I turned what was just an idea in 2014 into a working embedded computer today.
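
    As an illustration of the kind of single-device read-out such a board is meant to standardize, the sketch below converts a raw ADC code from a transimpedance diagnostics front-end (e.g. a Faraday-cup amplifier) into a beam current in picoamperes. The register layout is not shown, and the resolution, reference voltage and feedback resistance are hypothetical values, not figures taken from the thesis.

    # Hypothetical diagnostics read-out conversion on an FPGA-based IOC:
    # raw ADC counts from a transimpedance front-end -> beam current in pA.
    # ADC parameters and gain are illustrative assumptions only.
    ADC_BITS = 24        # assumed resolution of the acquisition ADC
    V_REF = 2.5          # assumed ADC reference voltage [V]
    R_FEEDBACK = 1e9     # assumed transimpedance gain of the front-end [ohm]

    def counts_to_current_pa(raw_counts: int) -> float:
        # Convert a raw ADC code into beam current in picoamperes.
        v_out = raw_counts / (2**ADC_BITS - 1) * V_REF   # ADC code -> voltage
        i_in = v_out / R_FEEDBACK                        # V = I * R_f  ->  I = V / R_f
        return i_in * 1e12                               # amperes -> picoamperes

    print(counts_to_current_pa(2**23))   # mid-scale reading, ~1250 pA with these values

    A gigaohm-class feedback resistance is what maps currents in the pA range onto a conveniently digitizable voltage span (1 pA across 1 GΩ gives 1 mV), which is the sensitivity regime mentioned above.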

    Circuit Design

    Circuit Design = Science + Art! Designers need a skilled "gut feeling" about circuits and related analytical techniques, plus creativity, to solve all problems and to adhere to the specifications, the written and the unwritten ones. You must anticipate a large number of influences, like temperature effects, supply voltage changes, offset voltages, layout parasitics, and numerous kinds of technology variations, to end up with a circuit that works. This is challenging for analog, custom-digital, mixed-signal or RF circuits, and researching new design methods in the relevant journals, conference proceedings and design tools unfortunately often gives the impression that just a "wild bunch" of "advanced techniques" exists. On the other hand, state-of-the-art tools nowadays offer a good cockpit from which to steer the design flow, including clever statistical methods and optimization techniques.

    Actually, this almost amounts to a second breakthrough, like the introduction of circuit simulators 40 years ago! Users can now conveniently analyse all the problems (discover, quantify, verify), and even exploit them, for example for optimization purposes. Most designers are caught up in everyday problems, so we fit that "wild bunch" into a systematic approach for variation-aware design, a designer's field guide and more. That is where this book can help! Circuit Design: Anticipate, Analyze, Exploit Variations starts with best-practice manual methods and links them tightly to up-to-date automation algorithms. We provide many tractable examples and explain key techniques you have to know. We then enable you to select and set up suitable methods for each design task, knowing their prerequisites and advantages and, as is too often overlooked, their limitations as well. The good thing with computers is that you yourself can often verify amazing things with little effort, and you can use software not only to your direct advantage in solving a specific problem, but also to become a more skilled, more experienced engineer.

    Unfortunately, EDA design environments are not well suited to learning about advanced numerics. So with this book we also provide two apps for learning about statistics and optimization directly with circuit-related examples, in real time and without long simulation times. This helps to develop a healthy statistical gut feeling for circuit design. The book is written for engineers, students in engineering, and CAD/methodology experts. Readers should have some background in standard design techniques, like entering a design in a schematic capture tool and simulating it, and should also know about the major technology aspects.
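
    To make the idea of a statistical "gut feeling" concrete, here is a minimal Monte Carlo sketch in the spirit of variation-aware design (it is not one of the book's apps): it estimates the input-referred offset spread of a differential pair when the threshold-voltage mismatch follows a Pelgrom-style model. The mismatch coefficient and device sizes are assumed example values, and current-factor mismatch is ignored for simplicity.

    # Minimal Monte Carlo illustration of variation-aware design (not from the book):
    # offset spread of a differential pair from threshold-voltage mismatch only.
    import random, statistics

    A_VT = 3.5e-3        # assumed Pelgrom mismatch coefficient [V*um]
    W, L = 10.0, 0.5     # assumed device width and length [um]
    N_RUNS = 10000

    sigma_dvt = A_VT / (W * L) ** 0.5    # per-pair sigma of delta-Vth [V]
    offsets = [random.gauss(0.0, sigma_dvt) for _ in range(N_RUNS)]

    sigma_vos = statistics.stdev(offsets)
    print("sigma(Vos) ~", round(sigma_vos * 1e3, 2), "mV,",
          "+/-3 sigma ~", round(3 * sigma_vos * 1e3, 2), "mV")

    With these assumed values the spread is about 1.6 mV (one sigma); the same few lines make it easy to play with sizing or mismatch coefficients and watch the 3-sigma corner move, which is exactly the kind of quick numerical experiment the text above advocates.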

    Belle II Technical Design Report

    The Belle detector at the KEKB electron-positron collider has collected almost 1 billion Y(4S) events in its decade of operation. Super-KEKB, an upgrade of KEKB, is under construction to increase the luminosity by two orders of magnitude during a three-year shutdown, with an ultimate goal of 8×10^35 cm^-2 s^-1. To exploit the increased luminosity, an upgrade of the Belle detector has been proposed. A new international collaboration, Belle II, is being formed. The Technical Design Report presents the physics motivation, the basic methods of the accelerator upgrade, as well as the key improvements of the detector. Comment: Edited by Z. Doležal and S. Uno

    The LHCb Upgrade I

    The LHCb upgrade represents a major change of the experiment. The detectors have been almost completely renewed to allow running at an instantaneous luminosity five times larger than that of the previous running periods. Readout of all detectors into an all-software trigger is central to the new design, facilitating the reconstruction of events at the maximum LHC interaction rate and their selection in real time. The experiment's tracking system has been completely upgraded with a new pixel vertex detector, a silicon tracker upstream of the dipole magnet and three scintillating-fibre tracking stations downstream of the magnet. The whole photon detection system of the RICH detectors has been renewed and the readout electronics of the calorimeter and muon systems have been fully overhauled. The first stage of the all-software trigger is implemented on a GPU farm. The output of the trigger provides a combination of fully reconstructed physics objects, such as tracks and vertices, ready for final analysis, and of entire events which need further offline reprocessing. This scheme required a complete revision of the computing model and a rewriting of the experiment's software.
