Demonstrating Quantum Error Correction that Extends the Lifetime of Quantum Information
The remarkable discovery of Quantum Error Correction (QEC), which can
overcome the errors experienced by a bit of quantum information (qubit), was a
critical advance that gives hope for eventually realizing practical quantum
computers. In principle, a system that implements QEC can actually pass a
"break-even" point and preserve quantum information for longer than the
lifetime of its constituent parts. Reaching the break-even point, however, has
thus far remained an outstanding and challenging goal. Several previous works
have demonstrated elements of QEC in NMR, ions, nitrogen vacancy (NV) centers,
photons, and superconducting transmons. However, these works primarily
illustrate the signatures or scaling properties of QEC codes rather than test
the capacity of the system to extend the lifetime of quantum information over
time. Here we demonstrate a QEC system that reaches the break-even point by
suppressing the natural errors due to energy loss for a qubit logically encoded
in superpositions of coherent states, or cat states of a superconducting
resonator. Moreover, the experiment implements a full QEC protocol by using
real-time feedback to encode, monitor naturally occurring errors, decode, and
correct. As measured by full process tomography, the enhanced lifetime of the
encoded information is 320 microseconds without any post-selection. This is 20
times greater than that of the system's transmon, over twice as long as an
uncorrected logical encoding, and 10% longer than the highest quality element
of the system (the resonator's 0, 1 Fock states). Our results illustrate the
power of novel, hardware efficient qubit encodings over traditional QEC
schemes. Furthermore, they advance the field of experimental error correction
from confirming the basic concepts to exploring the metrics that drive system
performance and the challenges in implementing a fault-tolerant system.
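The lifetime figures above can be cross-checked with simple arithmetic; a minimal sketch, assuming simple exponential decay and deriving the other lifetimes from the quoted ratios:

```python
# Lifetimes implied by the quoted ratios (illustrative arithmetic only;
# single-time-constant exponential decay is an assumption).
tau_corrected = 320e-6                    # corrected logical lifetime (s)
tau_transmon = tau_corrected / 20         # "20 times greater" -> ~16 us
tau_uncorrected = tau_corrected / 2       # "over twice as long" -> <160 us
tau_best_element = tau_corrected / 1.10   # "10% longer" -> ~291 us

# Break-even: the corrected encoding outlives the best uncorrected
# element of the system (the resonator's 0/1 Fock-state encoding).
break_even = tau_corrected > tau_best_element
print(break_even)  # True
```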
Data-driven Channel Learning for Next-generation Communication Systems
University of Minnesota Ph.D. dissertation. October 2019. Major: Electrical/Computer Engineering. Advisor: Georgios Giannakis. 1 computer file (PDF); x, 116 pages. The turn of the decade has established the 'global society' as an information society, in which the creation, distribution, integration, and manipulation of information have significant political, economic, technological, academic, and cultural implications. Its main drivers are digital information and communication technologies, which have produced a "data deluge" as the number of smart, Internet-capable devices rapidly increases. Unfortunately, establishing the information infrastructure to collect data becomes more challenging as communication networks for those devices grow larger, denser, and more heterogeneous to meet users' quality-of-service (QoS) requirements. Furthermore, scarcity of spectral resources, driven by increased demand for mobile devices, urges the development of new methodologies for wireless communications, possibly facing unprecedented constraints in both hardware and software. At the same time, recent advances in machine learning enable statistical inference with efficiency and scalability on par with the volume and dimensionality of the data. These considerations justify the pressing need for machine learning tools that are amenable to new hardware and software constraints and can scale with network size, to facilitate the advanced operation of next-generation communication systems. The present thesis is centered on analytical and algorithmic foundations enabling statistical inference of critical information under practical hardware/software constraints to design and operate wireless communication networks.
The vision is to establish a unified and comprehensive framework, based on state-of-the-art data-driven learning and Bayesian inference tools, to learn channel-state information that is accurate yet efficient and undemanding in terms of resources. The central goal is to demonstrate theoretically, algorithmically, and experimentally how valuable insights from data-driven learning can lead to solutions that markedly advance the state-of-the-art performance in inference of channel-state information. To this end, the thesis investigates two main research thrusts: i) channel-gain cartography leveraging low rank and sparsity; and ii) Bayesian approaches to channel-gain cartography for spatially heterogeneous environments. These research thrusts introduce novel algorithms that aim to tackle the issues of next-generation communication networks. The potential of the proposed algorithms is showcased by rigorous theoretical results and extensive numerical tests.
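As a toy illustration of thrust (i), a channel-gain map can be split into a low-rank background plus sparse anomalies by alternating a truncated SVD with soft thresholding. This is a minimal sketch, not the thesis's actual algorithm; the function name, rank, and threshold `lam` are illustrative assumptions:

```python
import numpy as np

def lowrank_sparse_split(M, rank=2, lam=0.5, iters=50):
    """Split a channel-gain map M into a low-rank background L plus a
    sparse anomaly part S by alternating a truncated SVD with
    elementwise soft thresholding (a toy stand-in for low-rank-plus-
    sparsity cartography)."""
    S = np.zeros_like(M)
    for _ in range(iters):
        # Low-rank update: best rank-`rank` approximation of M - S.
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse update: soft-threshold the remaining residual.
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S
```

By construction, the final residual M - L - S is bounded elementwise by `lam`, so the split accounts for the whole map up to the threshold.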
Design and debugging of multi-step analog to digital converters
With the fast advancement of CMOS fabrication technology, more and more signal-processing functions are implemented in the digital domain for lower cost, lower power consumption, higher yield, and higher re-configurability. The trend of increasing integration level for integrated circuits has forced the A/D converter interface to reside on the same silicon in complex mixed-signal ICs containing mostly digital blocks for DSP and control. However, the specifications of the converters in various applications emphasize high dynamic range and low spurious spectral performance. It is nontrivial to achieve this level of linearity in a monolithic environment, where post-fabrication component trimming or calibration is cumbersome to implement for certain applications and/or for cost and manufacturability reasons. Additionally, as CMOS integrated circuits reach unprecedented integration levels, potential problems associated with device scaling, the short-channel effects, loom large as technology strides into the deep-submicron regime. The A/D conversion process involves sampling the applied analog input signal and quantizing it to its digital representation, by comparison with reference voltages, before further signal processing in subsequent digital systems. Depending on how these functions are combined, different A/D converter architectures can be implemented, with different requirements on each function. Practical realizations show that, to first order, converter power is directly proportional to sampling rate. However, the required power dissipation grows nonlinearly as the speed capabilities of a process technology are pushed to the limit. Pipeline and two-step/multi-step converters tend to be the most efficient at achieving a given resolution and sampling-rate specification. 
This thesis is unusual in that it covers the whole spectrum of design, test, debugging, and calibration of multi-step A/D converters; it develops circuit techniques and algorithms to enhance the resolution and attainable sample rate of an A/D converter and to enhance testing and debugging potential, so as to detect errors dynamically, to isolate and confine faults, and to recover from and compensate for the errors continuously. The power efficiency attainable at high resolution in a multi-step converter, by combining parallelism and calibration and exploiting low-voltage circuit techniques, is demonstrated with a 1.8-V, 12-bit, 80-MS/s, 100-mW analog-to-digital converter fabricated in a five-metal-layer 0.18-µm CMOS process. Lower power-supply voltages significantly reduce noise margins and increase variations in process, device, and design parameters. Consequently, it is steadily more difficult to control the fabrication process precisely enough to maintain uniformity. Microscopic particles present in the manufacturing environment and slight variations in the parameters of manufacturing steps can all cause the geometrical and electrical properties of an IC to deviate from those specified at the end of the design process. Such defects can cause various types of malfunction, depending on the IC topology and the nature of the defect. To relieve the burden placed on IC design and manufacturing by the ever-increasing costs of testing and debugging complex mixed-signal electronic systems, several circuit techniques and algorithms are developed and incorporated into the proposed ATPG, DfT, and BIST methodologies. Process variation cannot be solved by improving manufacturing tolerances; variability must be reduced by new device technology or managed by design in order for scaling to continue. Similarly, within-die performance variation also imposes new challenges for test methods. 
With the use of dedicated sensors, which exploit knowledge of the circuit structure and the specific defect mechanisms, the method described in this thesis facilitates early and fast identification of the effects of excessive process-parameter variation. The expectation-maximization algorithm makes the estimation problem more tractable and also yields good parameter estimates for small sample sizes. To guide testing with the information obtained by monitoring process variations, an adjusted support-vector-machine classifier is implemented that simultaneously minimizes the empirical classification error and maximizes the geometric margin. On a positive note, the use of digital enhancement and calibration techniques reduces the need for expensive technologies with special fabrication steps. Indeed, the extra cost of digital processing is normally affordable, as the use of submicron mixed-signal technologies allows efficient use of silicon area even for relatively complex algorithms. The adaptive filtering algorithm employed for error estimation requires only a small number of operations per iteration and needs neither correlation-function calculation nor matrix inversions. The presented foreground calibration algorithm needs no dedicated test signal and does not consume any part of the conversion time; it works continuously with every signal applied to the A/D converter. The feasibility of the method for on-line and off-line debugging and calibration has been verified by experimental measurements from a silicon prototype fabricated in a standard single-poly, six-metal 0.09-µm CMOS process.
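The properties cited for the error-estimation filter (few operations per iteration, no correlation functions, no matrix inversions) are characteristic of the least-mean-squares family; a minimal sketch, with LMS assumed as a stand-in for the thesis's exact algorithm:

```python
import numpy as np

def lms_filter(x, d, taps=4, mu=0.05):
    """Least-mean-squares adaptive filter: O(taps) operations per
    sample, with no correlation-function estimates and no matrix
    inversions. Tap count and step size mu are illustrative."""
    w = np.zeros(taps)
    y = np.zeros(len(x))
    for n in range(taps, len(x)):
        u = x[n - taps:n][::-1]   # most recent input samples first
        y[n] = w @ u              # filter output
        e = d[n] - y[n]           # estimation error
        w += mu * e * u           # stochastic-gradient weight update
    return w, y
```

Given a reference signal d correlated with the input x, the weights converge toward the underlying filter without ever forming or inverting a correlation matrix.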
Markov chain Monte Carlo analyses of longitudinal biomedical magnetic resonance data
Markov chain Monte Carlo simulation was used in an analysis of the data acquired in three longitudinal biomedical magnetic resonance studies. The first of these investigations uses a Bayesian nonlinear hierarchical random coefficients model to examine the longitudinal extracellular direct current (DC) potential and apparent diffusion coefficient (ADC) responses to focal ischaemia in the rat. The purpose is to perform a formal analysis of the temporal relationship between the two responses, and thus to examine the data for compatibility with a common latent (driving) process and, alternatively, the existence of an ADC threshold for anoxic depolarisation. The DC-potential and ADC transition parameter posterior probability distributions were generated, paying particular attention to the within-subject differences between the DC-potential and ADC transition characteristics. The results indicate that the DC-potential and ADC changes are not driven by a common latent process and, in addition, provide no evidence for a consistent ADC threshold associated with anoxic depolarisation.
The second analysis uses data acquired in a nuclear magnetic resonance spectroscopic study into the effects of intestinal ischaemia and subsequent reperfusion on liver metabolism in the rat. The purpose of the analysis is to examine the temporal relationship between energy status [inorganic phosphate to adenosine triphosphate ratio (PAR)] and the pH response, the former of which is an indicator of liver energy failure. The posterior distribution obtained for the PAR-pH onset time difference indicates that the pH response precedes the change in PAR, suggesting that intracellular acidosis cannot be ruled out as a contributing factor to the observed liver failure.
The third dataset was acquired in an electron spin resonance study of the Arrhenius behaviour of the rabbit muscle sarcoplasmic reticulum membrane. An MCMC Arrhenius-plot changepoint analysis is used to estimate the order parameter 'transition' temperature.
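A minimal version of such a changepoint analysis can be sketched with a Metropolis sampler. This toy uses constant segment means rather than the Arrhenius-plot line fits of the actual study, and all names and settings are illustrative:

```python
import numpy as np

def changepoint_mcmc(y, n_samples=5000, sigma=1.0, seed=0):
    """Metropolis sampler for a single changepoint index k: y is
    modeled with one mean before k and another after, with known
    Gaussian noise level sigma."""
    rng = np.random.default_rng(seed)

    def loglik(k):
        left, right = y[:k], y[k:]
        resid = np.concatenate([left - left.mean(), right - right.mean()])
        return -0.5 * np.sum(resid**2) / sigma**2

    k, ll = len(y) // 2, loglik(len(y) // 2)
    samples = []
    for _ in range(n_samples):
        k_new = k + rng.integers(-3, 4)              # symmetric random walk
        if 2 <= k_new <= len(y) - 2:                 # keep 2+ points per side
            ll_new = loglik(k_new)
            if np.log(rng.random()) < ll_new - ll:   # Metropolis acceptance
                k, ll = k_new, ll_new
        samples.append(k)
    return np.array(samples)
```

The returned samples approximate the changepoint posterior; its median or mode estimates the transition location, in the spirit of the 'transition' temperature estimate above.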
Energy-efficient data converter design in scaled CMOS technology
Data converters bridge the physical and digital worlds. They have been crucial building blocks in modern electronic systems and are expected to grow in significance in the booming era of Internet-of-Things (IoT) and 5G communications. These applications raise energy-efficiency requirements for both low-speed and high-speed converters, since the converters are widely deployed in wireless sensor nodes and portable devices. To explore solutions, this work pursues three directions: 1) techniques to improve the efficiency of low-speed converters, including the comparator; 2) techniques for high-speed data converters, including reference stabilization; 3) a new architecture to improve the efficiency of the capacitance-to-digital converter (CDC). In the first part, a power-efficient 10-bit SAR ADC featuring a gain-boosted dynamic comparator is presented. In energy-constrained applications, the converter is usually run from a low supply voltage (e.g., 0.3 V-0.5 V), which reduces the gain of the comparator pre-amplifier (pre-amp) and results in higher noise. A novel comparator topology with a dynamic common-gate stage is proposed to increase the pre-amplification gain, thereby reducing noise and offset. In addition, statistical estimation and load-switching techniques are combined to further improve energy efficiency. A 40-nm CMOS prototype achieves a Walden FoM of 1.5 fJ/conversion-step while operating at 100 kS/s from a 0.5-V supply. To further improve the energy efficiency of the comparator, a novel dynamic pre-amp is proposed. By using an inverter-based input pair powered by a floating reservoir capacitor, the pre-amp realizes both current reuse and dynamic bias, thereby significantly boosting g_m/I_D and reducing noise. Moreover, it greatly reduces the influence of the input common-mode (CM) voltage on comparator performance, including noise, offset, and delay. 
A prototype comparator in 180-nm CMOS achieves 46-μV input-referred noise while consuming only 1 pJ per comparison from a 1.2-V supply, a greater than 7× energy-efficiency improvement over a Strong-Arm (SA) latch. The second part of this dissertation focuses on high-speed data converter techniques. A 10-bit high-speed two-stage loop-unrolled SAR ADC is presented. To reduce SAR logic delay and power, each bit uses a dedicated comparator to store its output and generate an asynchronous clock for the next comparison. To suppress the non-linearity induced by comparator offset mismatch, a shared pre-amp is employed in the second (fine) stage, implemented with a dynamic latch to avoid static power consumption. The prototype ADC in 40-nm CMOS achieves 55-dB peak SNDR at a 200-MS/s sampling rate without any calibration. A key factor limiting a SAR ADC from simultaneously achieving high speed and high resolution is the reference-ripple settling problem caused by DAC switching. Unlike prior techniques that aim to minimize the reference ripple, which require large reference-buffer power or on-chip decoupling-capacitance area, this work proposes a new perspective: it provides an extra path for the full-sized reference ripple to couple to the comparator with opposite polarity, so that the effect of the reference ripple is canceled out, ensuring an accurate conversion result. The prototype 10-bit 120-MS/s SAR ADC is fabricated in a 40-nm CMOS process and achieves an SNDR of 55 dB with only a 3-pF reference decoupling capacitor. Finally, this dissertation also presents the design of an incremental time-domain two-step CDC. Unlike the classic two-step CDC, this work replaces the OTA-based active-RC integrator with a VCO-based integrator and performs time-domain (TD) ΔΣ modulation. The VCO is mostly digital and consumes little power. 
Featuring infinite DC gain in the phase domain and intrinsic spatial phase quantization, this TD ΔΣ modulator enables a CDC design achieving 85-dB SQNR with only a 4-bit quantizer, a first-order loop, and a low OSR of 15. The prototype, fabricated in 40-nm CMOS, achieves a resolution of 0.29 fF while dissipating only 0.083 nJ per conversion, improving energy efficiency by more than 2× compared to state-of-the-art CDCs.
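The Walden figure of merit quoted for these prototypes relates power, speed, and effective resolution as FoM = P / (2^ENOB · f_s); a small sketch using the 55-dB, 200-MS/s ADC above, where the 1-mW power is a hypothetical placeholder (the abstract reports FoM and SNDR, not power):

```python
def walden_fom(power_w, enob_bits, fs_hz):
    """Walden FoM: energy per effective conversion step, in joules."""
    return power_w / (2 ** enob_bits * fs_hz)

# 55-dB SNDR corresponds to ENOB = (SNDR - 1.76) / 6.02 ~ 8.8 bits.
enob = (55 - 1.76) / 6.02
# The 1-mW power below is a made-up value for illustration only.
fom = walden_fom(1e-3, enob, 200e6)
print(f"{fom * 1e15:.1f} fJ/conversion-step")
```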
Benelux meeting on systems and control, 23rd, March 17-19, 2004, Helvoirt, The Netherlands
Book of abstracts
Millimeter and sub-millimeter wave radiometers for atmospheric remote sensing from CubeSat platforms
Fall 2018. Includes bibliographical references. To view the abstract, please see the full text of the document.
Autonomous driving support systems suited to a 4-wheel skid-steer robotic platform: perception, motion, and simulation
Mobile robotics competitions play an important role in bringing science and engineering to the general public. They are also a venue dedicated to testing and comparing different strategies and approaches to the many challenges of mobile robotics. Among the formats that have attracted the most interest from the promoters of such initiatives and from the general public are the autonomous driving competitions. 
Typically, Autonomous Driving Competitions (CCAs) attempt to replicate an environment similar to a traditional road structure, in which autonomous systems must respond to a wide variety of challenges, ranging from lane detection to interaction with the distinct elements of a typical road structure, and from trajectory planning to localization. The aim of this master's thesis is to document the process of designing and equipping a 4-wheel skid-steer mobile robotic platform to carry out autonomous driving tasks in a structured environment, on a track that replicates a roadway with basic signage and some obstacles. In parallel, the dissertation also makes a qualitative comparison between the simulation process and the transposition of the developed algorithms to a physical robotic platform, analyzing the differences in performance and behavior.
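The platform's skid-steer geometry admits a simple kinematic sketch relating wheel-side speeds to body motion; the wheel speeds and track width below are illustrative values, and real skid-steer motion also involves wheel slip, which this idealization ignores:

```python
def skid_steer_twist(v_left, v_right, track_width):
    """Body twist of a skid-steer platform from left/right wheel-side
    speeds (m/s) and track width (m), under an ideal no-slip model."""
    v = (v_left + v_right) / 2.0              # forward speed (m/s)
    omega = (v_right - v_left) / track_width  # yaw rate (rad/s)
    return v, omega

# Hypothetical wheel speeds (m/s) and a 0.4-m track width:
v, omega = skid_steer_twist(0.8, 1.0, track_width=0.4)
```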