22 research outputs found

    Representation, Characterization, and Mitigation of Noise in Quantum Processors

    Quantum computers have the potential to outperform classical computers on several families of important problems and to revolutionize our understanding of computational models. However, noise deteriorates the output quality of near-term quantum computers and may even offset their advantage over classical computers. The study of noise in these near-term quantum devices has thus become an important field of research in recent years. This thesis addresses several topics related to this subject, including representing, quantifying, and mitigating noise in quantum processors.

    To study noise in quantum processors, one must first ask how noise can be accurately represented. This is the subject of Chapter 2. The conventional approach uses a gate-set, which assigns a mathematical object to each component of a quantum processor, and compares individual gate-set elements to their ideal counterparts. Here, we present some clarifications on this approach, pointing out that a gauge freedom exists in this representation. We demonstrate with experimentally relevant examples that there exist equally valid descriptions of the same experiment which distribute errors differently among the objects in a gate-set, leading to different error rates. This leads us to rethink the operational meaning of figures of merit for individual gate-set elements. We propose an alternative operational figure of merit for a gate-set, the mean variation error, and develop a protocol for measuring it. Numerical simulations of the mean variation error illustrate how it points to a potential issue with conventional randomized benchmarking approaches.

    Next, we study whether there exist assumptions sufficient to remove the gauge ambiguity, allowing one to obtain error rates of individual gate-set elements in the conventional manner. We focus on state preparation and measurement (SPAM) errors, both of which are subject to the gauge ambiguity. In Chapter 3, we provide a sufficient assumption that allows a separate SPAM error characterization and propose a protocol that achieves this in the case of ideal quantum gates. In reality, where quantum gates are imperfect, we derive bounds on the estimated SPAM error rates based on gate error measures that can be estimated independently of SPAM processes. We test the protocol on a publicly available quantum processor and demonstrate its validity by comparing our results with simulations.

    In Chapter 4, we present another protocol capable of separately characterizing SPAM errors, based on a different principle: algorithmic cooling (AC). We propose an alternative AC method called measurement-based algorithmic cooling (MBAC), which assumes the ability to perform (potentially imperfect) projective measurements on individual qubits and is available on various modern quantum computing platforms. Cooling reduces the error on initial states while leaving the measurement operations untouched, thereby breaking the gauge symmetry between the two. We demonstrate that MBAC can significantly reduce state preparation error under realistic assumptions, with a small overhead that can be upper-bounded by measurable quantities. Our results are thus a valuable tool not only for benchmarking near-term quantum processors but also for improving the quality of state preparation processes in an algorithmic manner.
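    The gauge freedom described above can be illustrated with a minimal numpy sketch: an invertible gauge matrix B transforms the state, gates, and measurement of a gate-set without changing any observable probability, yet it redistributes the apparent error among the elements. All matrices below are toy illustrations, not data from the thesis.

```python
# Minimal sketch of the gauge freedom in gate-set representations
# (illustrative matrices; not the thesis's actual data).
import numpy as np

rng = np.random.default_rng(0)

# Toy gate-set in a 2-dimensional "superoperator" picture:
# state vector |rho>>, measurement row vector <<E|, one gate G.
rho = np.array([1.0, 0.1])          # slightly imperfect preparation
E   = np.array([0.95, 0.9])         # slightly imperfect measurement
G   = np.array([[0.99, 0.00],
                [0.01, 0.98]])      # noisy implementation of the identity

p = E @ G @ rho                      # the only observable quantity

# Any invertible gauge B yields an equally valid description ...
B = np.eye(2) + 0.05 * rng.standard_normal((2, 2))
Binv = np.linalg.inv(B)
rho_g, E_g, G_g = B @ rho, E @ Binv, B @ G @ Binv

assert np.isclose(E_g @ G_g @ rho_g, p)   # identical predictions

# ... but it redistributes error among the gate-set elements:
ideal = np.eye(2)
print(np.linalg.norm(G - ideal), np.linalg.norm(G_g - ideal))
```

    The two printed "error rates" differ even though both descriptions predict exactly the same experimental outcomes, which is why figures of merit for individual gate-set elements require care.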
    The capability of AC to improve initial-state quality inspired us to carry out a parallel study on the thermodynamic cost of AC protocols. The motivation is that cooling a subset of qubits may produce a finite energy increase in their environment, so applying such protocols on temperature-sensitive platforms could negatively affect overall stability. Meanwhile, previous studies of AC have largely focused on subjects like cooling limits, without attention to their thermodynamics. Understanding the thermodynamic cost of AC is therefore of both theoretical and practical interest. These results are presented in Chapter 5. After reviewing the procedures, cooling limits, and target-state evolution of various AC protocols, we propose two efficiency measures, based on the amount of work required or the amount of heat released. We show how these measures are related to each other and how they can be computed for a given protocol. We then compare the previously studied protocols using both measures, providing suggestions on which to use when these protocols are carried out experimentally. We also propose improved protocols that are energetically more favorable than the original proposals.

    Finally, in Chapter 6, we present a study of a different family of methods for reducing the effective noise level in near-term hardware, called quantum error mitigation (QEM). The principle behind various QEM approaches is to mimic the outputs of the ideal circuit one wishes to implement using noisy hardware. These methods have recently become popular because many near-term hybrid quantum-classical algorithms involve only relatively shallow circuits and limited types of local measurements, implying a manageable cost for the data processing needed to alleviate the effect of noise. Using intuitions built upon classical and quantum communication scenarios, we clarify some fundamental distinctions between quantum error correction (QEC) and QEM. We then discuss the implications of noise invertibility for QEM and give an explicit construction, based on the Drazin inverse, for non-invertible noise; this construction is trace-preserving, whereas the commonly used Moore-Penrose pseudoinverse may not be. Finally, we study the consequences of imperfect knowledge of the noise and derive conditions under which noise can be reduced using QEM.
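    A minimal sketch of the trace-preservation point made above, assuming the standard identity A^D = A^k ((A^(2k+1))^+) A^k for any k at least the index of A. The reset-style noise channel below is a toy example chosen for illustration, not a construction taken from the thesis.

```python
# Sketch: inverting a non-invertible noise channel for QEM via the
# Drazin inverse, computed with the standard identity
#   A^D = A^k (A^(2k+1))^+ A^k   for k >= index(A).
import numpy as np

def drazin(A, k=None):
    # k = dim(A) always satisfies k >= index(A).
    k = A.shape[0] if k is None else k
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

# Toy single-qubit "reset" noise in the Pauli-transfer-matrix picture;
# a first row of (1,0,0,0) means the map is trace preserving.
# The matrix is rank deficient, so no ordinary inverse exists.
N = np.array([[1.0, 0, 0, 0],
              [0,   0, 0, 0],
              [0,   0, 0, 0],
              [0.2, 0, 0, 0]])

ND = drazin(N)            # first row stays (1, 0, 0, 0): trace preserving
MP = np.linalg.pinv(N)    # first row is not (1, 0, 0, 0): not trace preserving
print(ND[0], MP[0])
```

    For this toy channel the Drazin construction retains the trace-preservation structure of the Pauli-transfer matrix, while the Moore-Penrose pseudoinverse does not, matching the distinction drawn in the abstract.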

    Representing and Probing Errors in Quantum Information Processing Devices

    The quality of quantum information processing devices has been improving at an unprecedented speed, and how to faithfully represent the quality of these devices has become an increasingly pressing problem. In this thesis we focus on two aspects of representing and characterizing quantum devices. First, we discuss why most conventional quality metrics are, in principle, inappropriate for quantifying experimentally determined representations of gate-set elements, owing to a gauge degree of freedom in quantum experiments. We then propose an operational quality measure for a gate-set and discuss its usefulness in representing the degree of error and in improving experimental control. Second, we develop a protocol that separately and unambiguously characterizes state preparation and measurement errors, relying on high-quality quantum gates. By integrating a method called randomized compiling, we derive a favorable upper bound on the effect of gate errors on the estimated parameters, and we numerically demonstrate the protocol's performance in the presence of an adversarial gate error.
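    Randomized compiling works by tailoring arbitrary gate noise into stochastic Pauli noise. A minimal single-qubit sketch of the underlying Pauli-twirling average in the Pauli-transfer-matrix picture follows; the noisy channel shown is illustrative, not one from the thesis.

```python
# Minimal sketch of Pauli twirling, the mechanism behind randomized
# compiling: averaging a noise channel over the Pauli frames leaves
# only its stochastic-Pauli (diagonal) part.
import numpy as np

# Conjugation by I, X, Y, Z in the Pauli-transfer-matrix picture is
# diagonal, with signs fixed by the Pauli (anti)commutation relations.
PAULI_FRAMES = [np.diag(d) for d in
                ([1, 1, 1, 1], [1, 1, -1, -1], [1, -1, 1, -1], [1, -1, -1, 1])]

def twirl(N):
    return sum(P @ N @ P for P in PAULI_FRAMES) / len(PAULI_FRAMES)

# A noisy gate with a coherent (off-diagonal) error component:
theta = 0.05
N = np.array([[1, 0, 0, 0],
              [0, np.cos(theta), -np.sin(theta), 0],
              [0, np.sin(theta),  np.cos(theta), 0],
              [0, 0, 0, 0.98]])

print(np.round(twirl(N), 6))   # off-diagonal terms average away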

    Reliability evaluation of light emitting diode package

    Light-emitting diodes (LEDs) offer a high-efficiency, energy-saving alternative to current lighting solutions and are believed to have a longer lifespan and higher reliability than conventional incandescent and fluorescent lamps. In this project, a high-brightness white LED from Osram was subjected to accelerated life testing under temperature and humidity cycling, and the degradation caused by these two factors was investigated. Prior to testing, the preparation for the reliability test had to be carefully considered. One consideration was the junction temperature, which would vary if self-heating occurred in the LED during operation; the optical results are sensitive to the junction temperature. To avoid this phenomenon, a method for determining an optimized pulse setting was developed in this project. The total luminous flux was used to determine the time to degradation (TTD). Other parameters, such as the scotopic-to-photopic (S/P) flux ratio and the blue-to-yellow emission intensity ratio, were computed as well. With these data, a failure analysis could be conducted to determine the underlying failure mechanisms.
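    The abstract does not state the degradation model used, but a common approach to estimating TTD (in the spirit of TM-21-style lumen-maintenance extrapolation) fits an exponential decay to the normalized luminous flux and solves for the time to a threshold such as 70% of initial output (L70). A hedged sketch with synthetic data:

```python
# Hedged sketch: estimating time-to-degradation from luminous-flux
# measurements via an exponential-decay fit. Data are synthetic; the
# project's actual model is not specified in the abstract.
import numpy as np
from scipy.optimize import curve_fit

hours = np.array([0, 500, 1000, 2000, 3000, 4000, 5000], dtype=float)
flux  = np.array([1.00, 0.985, 0.97, 0.94, 0.915, 0.89, 0.865])  # normalized

def decay(t, B, alpha):
    return B * np.exp(-alpha * t)

(B, alpha), _ = curve_fit(decay, hours, flux, p0=(1.0, 1e-5))

# Time for the flux to fall to 70% of initial output (the L70 criterion):
ttd_L70 = np.log(B / 0.70) / alpha
print(f"estimated L70 ~ {ttd_L70:.0f} h")
```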

    Theory and Application of Weak Signal Detection Based on Stochastic Resonance Mechanism

    Stochastic resonance is a relatively new weak-signal detection method. In contrast to traditional noise-suppression techniques, stochastic resonance uses noise to enhance weak-signal information through a mechanism that transfers noise energy into signal energy. The purpose of this paper is to study the theory and application of weak-signal detection based on the stochastic resonance mechanism. The paper studies the stochastic resonance characteristics of a bistable circuit and simulates the circuit in the Multisim environment, verifying that the bistable circuit realizes the stochastic resonance function well and providing strong support for its physical implementation. It then studies the stochastic resonance phenomenon in the FHN neuron model and the bistable model, analyzes the response to periodic and nonperiodic signals, and verifies the effect of noise on stochastic resonance, laying the foundation for subsequent experiments. It proposes adding a feedback link and introduces a two-layer FHN neural network model to improve weak-signal detection performance under a variable-noise background. The paper also proposes a multi-fault detection method based on ensemble empirical mode decomposition of sensitive intrinsic mode components combined with variable-scale adaptive stochastic resonance. Using the weighted kurtosis index as the measurement index of the system output not only maintains the similarity between the output signal and the original signal but is also sensitive to impact characteristics, overcoming the missed and false detections of the traditional kurtosis index. Experimental research shows that the method has better noise-suppression ability and reproduces details clearly; in particular, for images contaminated by strong noise (D = 500), it outperforms traditional restoration methods in both subjective visual quality and signal-to-noise ratio.
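    The bistable system in such studies is typically the overdamped double-well dx/dt = ax - bx^3 + A sin(wt) plus noise. A minimal Euler-Maruyama sketch (parameters illustrative, not taken from the paper) showing how an intermediate noise level strengthens the periodic component of the output:

```python
# Minimal Euler-Maruyama sketch of stochastic resonance in the classic
# overdamped bistable system  dx/dt = a*x - b*x^3 + A*sin(w*t) + noise.
import numpy as np

def simulate(D, a=1.0, b=1.0, A=0.3, w=0.05, dt=0.01, steps=100_000, seed=1):
    rng = np.random.default_rng(seed)
    x = np.empty(steps)
    x[0] = 1.0
    for i in range(1, steps):
        t = (i - 1) * dt
        drift = a * x[i-1] - b * x[i-1]**3 + A * np.sin(w * t)
        x[i] = x[i-1] + drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
    return x

def signal_power(x, w, dt=0.01):
    # Power of the output at the driving frequency (Fourier component).
    t = np.arange(len(x)) * dt
    return abs(np.mean(x * np.exp(-1j * w * t))) ** 2

# Too little or too much noise gives a weak response; an intermediate
# noise intensity maximizes the periodic component -- the resonance.
for D in (0.05, 0.3, 2.0):
    print(D, signal_power(simulate(D), w=0.05))
```

    The driving amplitude A = 0.3 is below the deterministic switching threshold of this double well, so inter-well hopping, and hence the periodic response, is noise-activated.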

    Improving the Photostability of Red- and Green-Emissive Single-Molecule Fluorophores via Ni²⁺-Mediated Excited Triplet-State Quenching

    Methods to improve the photostability and photon output of fluorophores without compromising their signal stability are of paramount importance in single-molecule fluorescence (SMF) imaging applications. We show herein that Ni²⁺ is a suitable photostabilizing agent for three green-emissive (Cy3, ATTO532, Alexa532) and three red-emissive (Cy5, Alexa647, ATTO647N) fluorophores, four of which are regularly utilized in SMF studies. Ni²⁺ works via photophysical quenching of the triplet excited state, eliminating the potential for reactive intermediates to form. Measurements of survival time, average intensity, and mean number of photons collected for the six fluorophores show that Ni²⁺ increased their photostability 10- to 45-fold, comparable to photochemically based systems, without compromising signal intensity or stability. Comparative studies with existing photostabilizing strategies enabled us to score different photochemical and photophysical stabilizing systems based on their intended application. The realization that Ni²⁺ achieves a significant increase in photon output for both green- and red-emissive fluorophores positions it as a widely applicable tool for mitigating photobleaching, most suitable for multicolor single-molecule fluorescence studies.
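    The mechanism described, in which quenching the triplet state shortens the time a fluorophore spends in a bleaching-prone dark state, can be illustrated with a simple three-state (ground/singlet/triplet) rate model. All rate constants below are generic order-of-magnitude values for organic fluorophores, not measurements from this paper.

```python
# Toy three-state rate model illustrating why triplet-state quenching
# boosts photon output: bleaching is assumed to proceed from the
# long-lived triplet, so faster triplet decay means fewer chances to
# bleach. Rates are generic illustrative values, not data from the paper.

def expected_photons(k_f=3e8, k_isc=1e6, k_t=1e3, k_bleach=1e2):
    """Mean photons emitted before photobleaching.

    k_f      -- radiative decay rate of the singlet (1/s)
    k_isc    -- intersystem-crossing rate to the triplet (1/s)
    k_t      -- triplet decay rate back to the ground state (1/s)
    k_bleach -- photobleaching rate out of the triplet (1/s)
    """
    p_triplet = k_isc / (k_f + k_isc)          # per-excitation ISC yield
    p_bleach = k_bleach / (k_t + k_bleach)     # bleach before triplet decays
    photons_per_cycle = k_f / (k_f + k_isc)
    return photons_per_cycle / (p_triplet * p_bleach)

base = expected_photons()                      # no quencher
with_ni = expected_photons(k_t=1e3 + 4e4)      # quencher adds a decay channel
print(f"photostability gain: {with_ni / base:.1f}x")
```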

    Temporal Prediction of Coastal Water Quality Based on Environmental Factors with Machine Learning

    The accurate forecasting of algal blooms can provide helpful information for water resource management, but the complex relationship between environmental variables and blooms makes such forecasts challenging. In this study, we build a pipeline incorporating four commonly used machine learning models: Support Vector Regression (SVR), Random Forest Regression (RFR), Wavelet Analysis (WA) coupled with a Back-Propagation Neural Network (BPNN), and WA coupled with Long Short-Term Memory (LSTM), to predict chlorophyll-a in coastal waters. Two areas with distinct environmental features are selected: the Neuse River Estuary (NRE), NC, USA, where machine learning models are applied to short-term algal bloom forecasting at single stations for the first time, and the Scripps Pier, CA, USA. Applying the pipeline, we can easily switch from the NRE forecast to the Scripps Pier forecast with minimal model tuning. The pipeline successfully predicts the occurrence of algal blooms in both regions, with WA-LSTM and WA-BPNN proving more robust than SVR and RFR. The pipeline also allows us to find the best results by trying different numbers of neurons in the hidden layers, and it is easily adaptable to other coastal areas. Experience with the two study regions demonstrates that enriching the dataset to include dominant physical processes is necessary to improve chlorophyll prediction when applying the pipeline to other aquatic systems.
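    A minimal sketch of the classical-ML half of such a pipeline (SVR and RFR) using scikit-learn; the file path and feature names are placeholders, and the wavelet-coupled BPNN/LSTM models are omitted for brevity.

```python
# Sketch of an SVR/RFR chlorophyll-a forecasting pipeline with
# scikit-learn. Data handling and feature names are assumptions,
# not the paper's actual configuration.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import TimeSeriesSplit, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical station data: environmental drivers plus the target.
df = pd.read_csv("station_data.csv")                            # placeholder
features = ["temperature", "salinity", "nitrate", "discharge"]  # assumed names
X, y = df[features], df["chl_a"]

models = {
    "SVR": make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1)),
    "RFR": RandomForestRegressor(n_estimators=300, random_state=0),
}

# Time-ordered splits avoid leaking future observations into training.
cv = TimeSeriesSplit(n_splits=5)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
    print(name, scores.mean())
```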