
    Implementing and characterizing precise multi-qubit measurements

    There are two general requirements to harness the computational power of quantum mechanics: the ability to manipulate the evolution of an isolated system and the ability to faithfully extract information from it. Quantum error correction and simulation often make a more exacting demand: the ability to perform non-destructive measurements of specific correlations within that system. We realize such measurements by employing a protocol adapted from [S. Nigg and S. M. Girvin, Phys. Rev. Lett. 110, 243604 (2013)], enabling real-time selection of arbitrary register-wide Pauli operators. Our implementation consists of a simple circuit quantum electrodynamics (cQED) module of four highly coherent 3D transmon qubits, collectively coupled to a high-Q superconducting microwave cavity. As a demonstration, we enact all seven nontrivial subset-parity measurements on our three-qubit register. For each, we fully characterize the realized measurement by analyzing the detector (observable operators) via quantum detector tomography and by analyzing the quantum back-action via conditioned process tomography. No single quantity completely encapsulates the performance of a measurement, and standard figures of merit have not yet emerged. Accordingly, we consider several new fidelity measures for both the detector and the complete measurement process. We measure all of these quantities and report high fidelities, indicating that we are measuring the desired quantities precisely and that the measurements are highly non-demolition. We further show that both results are improved significantly by an additional error-heralding measurement. The analyses presented here form a useful basis for the future characterization and validation of quantum measurements, anticipating the demands of emerging quantum technologies.

    Comment: 10 pages, 5 figures, plus supplement
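    The subset-parity measurements characterized above can be illustrated in the ideal gate-level picture. Below is a minimal numpy sketch, not the paper's cQED protocol: it builds the projectors for an ideal projective subset-parity measurement on a three-qubit register and returns the outcome probabilities and conditioned (back-action) states. All function names and the GHZ example are illustrative assumptions.

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def subset_parity_operator(subset, n_qubits=3):
    """Tensor product with Pauli Z on the qubits in `subset`, identity elsewhere."""
    op = np.array([[1.0]])
    for q in range(n_qubits):
        op = np.kron(op, Z if q in subset else I2)
    return op

def parity_measurement(rho, subset, n_qubits=3):
    """Ideal projective subset-parity measurement: returns, for each outcome
    s = +/-1, its probability and the conditioned (back-action) state."""
    P = subset_parity_operator(subset, n_qubits)
    dim = 2 ** n_qubits
    out = {}
    for s in (+1, -1):
        proj = 0.5 * (np.eye(dim) + s * P)       # projector onto parity s
        p = float(np.real(np.trace(proj @ rho)))
        out[s] = (p, proj @ rho @ proj / p if p > 1e-12 else None)
    return out

# A GHZ state is a +1 eigenstate of Z.Z.I, so measuring the {0, 1} subset
# parity yields +1 with certainty and leaves the state undisturbed (QND).
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
result = parity_measurement(np.outer(ghz, ghz), {0, 1})
print({s: round(p, 6) for s, (p, _) in result.items()})  # {1: 1.0, -1: 0.0}
```

    In this idealized limit the measurement is perfectly non-demolition: a parity eigenstate is reported deterministically and left untouched, which is exactly the behavior the paper's detector and back-action fidelity measures quantify for the real device.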

    Protection Scheme of Power Transformer Based on Time–Frequency Analysis and KSIR-SSVM

    The aim of this paper is to present a hybrid protection scheme for Power Transformers (PTs) based on MRA-KSIR-SSVM. The scheme distinguishes internal faults from inrush currents by extracting significant characteristics of the differential currents under realistic PT operating conditions. Multi-Resolution Analysis (MRA) serves as the Time–Frequency Analysis (TFA) used to decompose Contingency Transient Signals (CTSs), feature reduction is performed by Kernel Sliced Inverse Regression (KSIR), and a Smooth Support Vector Machine (SSVM) is used for classification. The combination of KSIR and SSVM is an effective and fast technique for accurately discriminating faulted from unfaulted conditions, and Particle Swarm Optimization (PSO) is used to obtain optimal classifier parameters. The proposed Power Transformer Protection (PTP) structure achieves high operating accuracy for internal faults and inrush currents, even under noisy conditions. Its efficacy is tested on numerous inrush and internal fault currents, and the results verify its ability to distinguish inrush currents from internal faults. The assessment further shows that the proposed scheme improves on the comparison method, which lacks Dimension Reduction (DR).
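    As a rough illustration of the MRA -> KSIR -> SSVM pipeline, the sketch below uses off-the-shelf stand-ins: pywt's wavelet decomposition for the MRA step, scikit-learn's KernelPCA in place of KSIR, a standard RBF SVC in place of the SSVM, and a grid search in place of PSO parameter tuning. The data layout, window length, and all parameter values are assumptions, not the paper's settings.

```python
import numpy as np
import pywt                                    # wavelet multi-resolution analysis
from sklearn.decomposition import KernelPCA    # stand-in for KSIR
from sklearn.svm import SVC                    # stand-in for SSVM
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GridSearchCV

def wavelet_energy_features(signal, wavelet="db4", level=4):
    """MRA step: decompose one differential-current window and use the
    energy of each wavelet sub-band as a feature (a common choice; the
    paper's exact features may differ)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([float(np.sum(c ** 2)) for c in coeffs])

def build_classifier(X_raw, y):
    """X_raw: (n_windows, n_samples) differential-current windows;
    y: 1 = internal fault, 0 = inrush current (both hypothetical)."""
    X = np.vstack([wavelet_energy_features(s) for s in X_raw])
    pipe = make_pipeline(KernelPCA(n_components=3, kernel="rbf"),
                         SVC(kernel="rbf"))
    # Grid search stands in for the paper's PSO parameter optimization.
    grid = {"svc__C": [1, 10, 100], "svc__gamma": [0.01, 0.1, 1.0]}
    return GridSearchCV(pipe, grid, cv=5).fit(X, y)
```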

    Design of dynamic experiments for black-box model discrimination

    Diverse domains of science and engineering require and use mechanistic mathematical models, e.g. systems of differential-algebraic equations. Such models often contain uncertain parameters to be estimated from data. Consider a dynamic model discrimination setting where we wish to choose: (i) the best mechanistic, time-varying model and (ii) the best model parameter estimates. These tasks are often termed model discrimination/selection/validation/verification. Typically, several rival mechanistic models can explain the data, so we incorporate available data and also run new experiments to gather more. Design of dynamic experiments for model discrimination helps collect data optimally. For rival mechanistic models where we have access to gradient information, we extend existing methods to incorporate a wider range of problem uncertainty and show that our proposed approach is equivalent to historical approaches when limiting the types of uncertainty considered. We also consider rival mechanistic models as dynamic black boxes that we can evaluate, e.g. by running legacy code, but where gradient or other advanced information is unavailable. We replace these black-box models with Gaussian process surrogate models and thereby extend the model discrimination setting to additionally incorporate rival black-box models. We also explore the consequences of using Gaussian process surrogates to approximate gradient-based methods.
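    A minimal sketch of the black-box branch of this setting: each rival model is replaced by a Gaussian process surrogate fitted to its simulation runs, and the next experiment is chosen where the surrogates disagree most relative to their joint predictive uncertainty (a Hunter-Reiner-style divergence criterion; the paper's actual design criterion and implementation may differ). All names, kernels, and the toy models are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_surrogate(X, y):
    """Replace one black-box rival model with a GP surrogate."""
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
    return GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

def next_experiment(gp_a, gp_b, candidates):
    """Pick the candidate design where the two surrogates' predictions
    diverge most, weighted by their combined predictive variance."""
    mu_a, sd_a = gp_a.predict(candidates, return_std=True)
    mu_b, sd_b = gp_b.predict(candidates, return_std=True)
    score = (mu_a - mu_b) ** 2 / (sd_a ** 2 + sd_b ** 2)
    return candidates[np.argmax(score)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(20, 1))        # hypothetical past designs
    gp_a = fit_surrogate(X, np.sin(X).ravel())  # stand-ins for two rival
    gp_b = fit_surrogate(X, (X / 10).ravel())   # black-box models
    cand = np.linspace(0, 10, 200).reshape(-1, 1)
    print(next_experiment(gp_a, gp_b, cand))    # most discriminating design
```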

    Unreliable and resource-constrained decoding

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 185-213).

    Traditional information theory and communication theory assume that decoders are noiseless and operate without transient or permanent faults. Decoders are also traditionally assumed to be unconstrained in physical resources like material, memory, and energy. This thesis studies how constraining reliability and resources in the decoder limits the performance of communication systems. Five communication problems are investigated. Broadly speaking, these are communication using decoders that are wiring cost-limited, that are memory-limited, that are noisy, that fail catastrophically, and that simultaneously harvest information and energy. For each of these problems, fundamental trade-offs between communication system performance and reliability or resource consumption are established.

    For decoding repetition codes using consensus decoding circuits, the optimal trade-off between decoding speed and quadratic wiring cost is defined and established. Designing optimal circuits is shown to be NP-complete, but is carried out for small circuit sizes. The natural relaxation to the integer circuit design problem is shown to be a reverse convex program. Random circuit topologies are also investigated.

    Uncoded transmission is investigated when a population of heterogeneous sources must be categorized due to decoder memory constraints. Quantizers that are optimal for mean Bayes risk error, a novel fidelity criterion, are designed. Human decision making in segregated populations is also studied with this framework. The ratio between the costs of false alarms and missed detections is also shown to fundamentally affect the essential nature of discrimination.

    The effect of noise on iterative message-passing decoders for low-density parity-check (LDPC) codes is studied. Concentration of decoding performance around its average is shown to hold. Density evolution equations for noisy decoders are derived. Decoding thresholds degrade smoothly as decoder noise increases, and in certain cases, arbitrarily small final error probability is achievable despite decoder noisiness. Precise information storage capacity results for reliable memory systems constructed from unreliable components are also provided.

    Limits to communicating over systems that fail at random times are established. Communication with arbitrarily small probability of error is not possible, but schemes that optimize the transmission volume communicated at fixed maximum message error probabilities are determined. System state feedback is shown not to improve performance.

    For optimal communication with decoders that simultaneously harvest information and energy, a coding theorem is proven that establishes the fundamental trade-off between the rates at which energy and reliable information can be transmitted over a single line. The capacity-power function is computed for several channels; it is non-increasing and concave.

    by Lav R. Varshney. Ph.D.
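    Of the five problems, the noisy message-passing study is the easiest to sketch. The snippet below implements the standard (noiseless) density-evolution recursion for a regular LDPC ensemble on the binary erasure channel and bisects for the decoding threshold; the thesis's contribution is deriving the analogous equations when the decoder itself is noisy, which this baseline omits. Function names and parameters are illustrative.

```python
def bec_density_evolution(eps, dv=3, dc=6, iters=5000):
    """Erasure probability of a variable-to-check message after `iters`
    rounds of message passing for a regular (dv, dc) LDPC ensemble on the
    binary erasure channel:  x_{l+1} = eps * (1 - (1 - x_l)^(dc-1))^(dv-1).
    A noisy decoder, the thesis's subject, would perturb this recursion."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x

def threshold(dv=3, dc=6, tol=1e-4):
    """Bisect for the largest channel erasure rate that still drives the
    message erasure probability to (near) zero: the decoding threshold."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bec_density_evolution(mid, dv, dc) < 1e-6:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Close to the known (3, 6)-regular threshold of ~0.4294 (noiseless decoder).
print(round(threshold(), 4))
```

    The thesis's result that thresholds degrade smoothly with decoder noise can be read as a statement about how the fixed point of a perturbed version of this recursion moves as the internal noise level grows.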