121 research outputs found

    Quantum data gathering

    Measurement of a quantum system – the process by which an observer gathers information about it – provides a link between the quantum and classical worlds. The nature of this process is the central issue for attempts to reconcile quantum and classical descriptions of physical processes. Here, we show that the conventional paradigm of quantum measurement is directly responsible for a well-known disparity between the resources required to extract information from quantum and classical systems. We introduce a simple form of quantum data gathering, “coherent measurement”, that eliminates this disparity and restores a pleasing symmetry between classical and quantum statistical inference. To illustrate the power of quantum data gathering, we demonstrate that coherent measurements are optimal and strictly more powerful than conventional one-at-a-time measurements for the task of discriminating quantum states, including certain entangled many-body states (e.g., matrix product states).
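
    As a rough numerical illustration of the kind of gap the abstract refers to (the example states, overlap, and copy count below are assumptions of this note, not taken from the paper), the following sketch compares the collective Helstrom-optimal error probability for discriminating N copies of two pure states against one simple local strategy: an optimal single-copy measurement on each copy followed by a majority vote.

```python
import numpy as np
from math import comb

# Two pure qubit states with overlap c = <psi|phi>  (assumed example values).
c = np.cos(np.pi / 8)   # overlap of the two candidate states
N = 11                  # number of copies (odd, so a majority vote is unambiguous)

# Collective (Helstrom-optimal) measurement on all N copies at once:
# P_err = (1 - sqrt(1 - |<psi|phi>|^(2N))) / 2  for equal priors.
p_collective = 0.5 * (1.0 - np.sqrt(1.0 - c ** (2 * N)))

# One-at-a-time strategy: Helstrom measurement on each copy, then majority vote.
p1 = 0.5 * (1.0 - np.sqrt(1.0 - c ** 2))        # single-copy error probability
p_majority = sum(comb(N, k) * p1 ** k * (1 - p1) ** (N - k)
                 for k in range(N // 2 + 1, N + 1))

print(f"collective measurement error : {p_collective:.3e}")
print(f"local + majority vote error  : {p_majority:.3e}")
```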

    Minimax Quantum Tomography: Estimators and Relative Entropy Bounds

    © 2016 American Physical Society. A minimax estimator has the minimum possible error ("risk") in the worst case. We construct the first minimax estimators for quantum state tomography with relative entropy risk. The minimax risk of nonadaptive tomography scales as O(1/√N), in contrast to that of classical probability estimation, which is O(1/N), where N is the number of copies of the quantum state used. We trace this deficiency to sampling mismatch: future observations that determine risk may come from a different sample space than the past data that determine the estimate. This makes minimax estimators very biased, and we propose a computationally tractable alternative with similar behavior in the worst case, but superior accuracy on most states.
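
    For readers less used to relative entropy risk (loss measured by Kullback-Leibler divergence), the hedged sketch below evaluates the worst-case KL risk of a classical add-beta coin-flip estimator as a function of N, just to make the classical O(1/N) baseline concrete. The estimator, grid, and beta value are assumptions of this note, and the snippet does not reproduce the paper's quantum minimax construction.

```python
import numpy as np
from math import comb, log

def kl(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q), in nats."""
    total = 0.0
    if p > 0:
        total += p * log(p / q)
    if p < 1:
        total += (1 - p) * log((1 - p) / (1 - q))
    return total

def worst_case_risk(N, beta=0.5, grid=1001):
    """Worst-case expected KL risk of the add-beta ('hedged') frequency estimator."""
    worst = 0.0
    for p in np.linspace(0.0, 1.0, grid):
        risk = 0.0
        for k in range(N + 1):
            q_hat = (k + beta) / (N + 2 * beta)   # estimate is never exactly 0 or 1
            risk += comb(N, k) * p**k * (1 - p)**(N - k) * kl(p, q_hat)
        worst = max(worst, risk)
    return worst

for N in (25, 50, 100, 200):
    r = worst_case_risk(N)
    print(f"N = {N:4d}   worst-case KL risk = {r:.5f}   N * risk = {N * r:.3f}")
```

    The roughly constant value of N * risk in the output is the O(1/N) classical behaviour the abstract contrasts with the quantum case.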

    Compatibility of quantum states

    We introduce a measure of the compatibility between quantum states--the likelihood that two density matrices describe the same object. Our measure is motivated by two elementary requirements, which lead to a natural definition. We list some properties of this measure, and discuss its relation to the problem of combining two observers' states of knowledge.
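
    The compatibility measure itself is defined in the paper; as a loosely related, standard point of comparison (an assumption of this note, not the paper's definition), the sketch below computes the Uhlmann fidelity between two qubit density matrices, one common way to quantify how consistent two observers' state assignments are.

```python
import numpy as np
from scipy.linalg import sqrtm

def uhlmann_fidelity(rho, sigma):
    """F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = sqrtm(rho)
    return float(np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2)

# Two observers assign slightly different states to the same qubit (made-up example).
rho   = np.array([[0.9, 0.1], [0.1, 0.1]], dtype=complex)
sigma = np.array([[0.7, 0.0], [0.0, 0.3]], dtype=complex)

print("Uhlmann fidelity:", round(uhlmann_fidelity(rho, sigma), 4))
```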

    Practical learning method for multi-scale entangled states

    We describe a method for reconstructing multi-scale entangled states from a small number of efficiently-implementable measurements and fast post-processing. The method only requires single-particle measurements, and the total number of measurements is polynomial in the number of particles. Data post-processing for state reconstruction uses standard tools, namely matrix diagonalisation and the conjugate gradient method, and scales polynomially with the number of particles. Our method prevents the build-up of errors from both numerical and experimental imperfections.
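
    The abstract names the conjugate gradient method as one of the two standard post-processing tools. As a reminder of what that routine does (a generic textbook version, not the paper's reconstruction code), here is a minimal conjugate gradient solver for a symmetric positive-definite linear system.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive-definite A by conjugate gradients."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small self-check on a random symmetric positive-definite system.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)
b = rng.standard_normal(6)
x = conjugate_gradient(A, b)
print("max residual:", np.max(np.abs(A @ x - b)))
```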

    Effect of nonnegativity on estimation errors in one-qubit state tomography with finite data

    We analyze the behavior of estimation errors evaluated by two loss functions, the Hilbert-Schmidt distance and infidelity, in one-qubit state tomography with finite data. We show numerically that there can be a large gap between the estimation errors and those predicted by an asymptotic analysis. The origin of this discrepancy is the existence of the boundary in the state space imposed by the requirement that density matrices be nonnegative (positive semidefinite). We derive an explicit form of a function reproducing the behavior of the estimation errors with high accuracy by introducing two approximations: a Gaussian approximation of the multinomial distributions of outcomes, and a linearization of the boundary. This function gives us an intuition for the behavior of the expected losses for finite data sets. We show that this function can be used to determine the amount of data necessary for the estimation to be treated reliably with the asymptotic theory. We give an explicit expression for this amount, which exhibits strong sensitivity to the true quantum state as well as the choice of measurement.
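
    To get a feel for the boundary effect described above, one can simulate finite-data one-qubit tomography and compare the raw linear-inversion estimate with the same estimate pulled back into the physical (positive semidefinite) set. The sketch below does this for an assumed measurement scheme (Pauli X, Y, Z, each measured the same number of times) and a hand-picked nearly pure true state; it is an illustrative simulation, not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_errors(r_true, n_per_basis, n_trials=20000):
    """Mean squared Hilbert-Schmidt error of linear-inversion vs. projected estimates."""
    hs_linear, hs_projected = [], []
    for _ in range(n_trials):
        r_hat = np.empty(3)
        for i in range(3):                                    # measure sigma_x, sigma_y, sigma_z
            p_plus = (1 + r_true[i]) / 2                      # Born rule for the +1 outcome
            k = rng.binomial(n_per_basis, p_plus)
            r_hat[i] = (2 * k - n_per_basis) / n_per_basis    # linear-inversion Bloch component
        # Squared Hilbert-Schmidt distance between qubit states = |delta r|^2 / 2.
        hs_linear.append(np.sum((r_hat - r_true) ** 2) / 2)
        # One simple way to impose positivity: pull the estimate back onto the Bloch ball.
        norm = np.linalg.norm(r_hat)
        r_proj = r_hat / norm if norm > 1 else r_hat
        hs_projected.append(np.sum((r_proj - r_true) ** 2) / 2)
    return np.mean(hs_linear), np.mean(hs_projected)

r_pure  = np.array([0.0, 0.0, 0.999])   # nearly pure state, close to the boundary
r_mixed = np.array([0.0, 0.0, 0.5])     # state well inside the Bloch ball

for label, r in (("near boundary", r_pure), ("interior", r_mixed)):
    lin, proj = mean_errors(r, n_per_basis=30)
    print(f"{label:14s}  linear inversion: {lin:.4f}   projected: {proj:.4f}")
```

    Near the boundary the two averages separate noticeably, while in the interior they essentially coincide, which is the qualitative gap between finite-data and asymptotic behaviour the abstract describes.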

    Optimal, reliable estimation of quantum states

    Accurately inferring the state of a quantum device from the results of measurements is a crucial task in building quantum information processing hardware. The predominant state estimation procedure, maximum likelihood estimation (MLE), generally reports an estimate with zero eigenvalues. These cannot be justified. Furthermore, the MLE estimate is incompatible with error bars, so conclusions drawn from it are suspect. I propose an alternative procedure, Bayesian mean estimation (BME). BME never yields zero eigenvalues, its eigenvalues provide a bound on their own uncertainties, and it is the most accurate procedure possible. I show how to implement BME numerically, and how to obtain natural error bars that are compatible with the estimate. Finally, I briefly discuss the differences between Bayesian and frequentist estimation techniques.
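
    As a toy illustration of Bayesian mean estimation (a minimal Monte Carlo sketch under an assumed uniform-in-the-Bloch-ball prior and made-up counts, not the construction from the paper), the snippet below computes the posterior-mean state of a single qubit from simulated Pauli measurement data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data: counts of the +1 outcome for sigma_x, sigma_y, sigma_z, n shots each.
n = 30
counts = np.array([22, 14, 28])      # hypothetical measurement record

# Prior: Bloch vectors drawn uniformly from the unit ball (an assumed prior).
n_samples = 200_000
v = rng.standard_normal((n_samples, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
r = v * rng.random(n_samples)[:, None] ** (1 / 3)

# Log-likelihood of the data for every sampled Bloch vector (Born rule p(+1) = (1 + r_i)/2).
p_plus = (1 + r) / 2
log_like = np.sum(counts * np.log(p_plus) + (n - counts) * np.log(1 - p_plus), axis=1)
w = np.exp(log_like - log_like.max())
w /= w.sum()

# Bayesian mean estimate: posterior-weighted average state; it is never rank deficient.
r_bme = w @ r
print("BME Bloch vector:", np.round(r_bme, 3),
      " length:", round(float(np.linalg.norm(r_bme)), 3))
```

    The posterior-weighted average always lands strictly inside the Bloch ball, which is the "no zero eigenvalues" property the abstract emphasises; the spread of the same weighted samples can serve as a rough error bar.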

    Spectral thresholding quantum tomography for low rank states

    The estimation of high dimensional quantum states is an important statistical problem arising in current quantum technology applications. A key example is the tomography of multiple ions states, employed in the validation of state preparation in ion trap experiments (Häffner et al 2005 Nature 438 643). Since full tomography becomes unfeasible even for a small number of ions, there is a need to investigate lower dimensional statistical models which capture prior information about the state, and to devise estimation methods tailored to such models. In this paper we propose several new methods aimed at the efficient estimation of low rank states and analyse their performance for multiple ions tomography. All methods consist in first computing the least squares estimator, followed by its truncation to an appropriately chosen smaller rank. The latter is done by setting eigenvalues below a certain 'noise level' to zero, while keeping the rest unchanged, or normalizing them appropriately. We show that (up to logarithmic factors in the space dimension) the mean square error of the resulting estimators scales as rd/N, where r is the rank, d is the dimension of the Hilbert space, and N is the number of quantum samples. Furthermore, we establish a lower bound for the asymptotic minimax risk which shows that the above scaling is optimal. The performance of the estimators is analysed in an extensive simulation study, with emphasis on the dependence on the state rank and the number of measurement repetitions. We find that all estimators perform significantly better than the least squares, with the 'physical estimator' (which is a bona fide density matrix) slightly outperforming the other estimators.
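
    The truncation step is easy to state in code. The sketch below applies it to a mock least squares estimate: eigenvalues below a noise level are set to zero, negative ones are clipped (the 'physical estimator' variant), and the rest are renormalised to unit trace. The noise level, dimensions, and noise model here are hand-picked placeholders, not the data-driven choices analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def spectral_threshold(rho_ls, noise_level):
    """Zero eigenvalues below the noise level, clip negatives, renormalise the trace."""
    vals, vecs = np.linalg.eigh(rho_ls)
    vals = np.where(vals >= noise_level, vals, 0.0)
    vals = np.clip(vals, 0.0, None)
    if vals.sum() > 0:
        vals /= vals.sum()
    return (vecs * vals) @ vecs.conj().T

# True state: rank-2 density matrix on a 4-dimensional space, eigenvalues 0.7 and 0.3.
d = 4
Q, _ = np.linalg.qr(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))
rho_true = 0.7 * np.outer(Q[:, 0], Q[:, 0].conj()) + 0.3 * np.outer(Q[:, 1], Q[:, 1].conj())

# Mock least squares estimate: the true state plus a small Hermitian noise matrix.
noise = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho_ls = rho_true + 0.01 * (noise + noise.conj().T)

rho_hat = spectral_threshold(rho_ls, noise_level=0.1)
print("rank of estimate:", np.linalg.matrix_rank(rho_hat))
print("Frobenius error :", round(float(np.linalg.norm(rho_hat - rho_true)), 4))
```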

    Exponential speed-up with a single bit of quantum information: Testing the quantum butterfly effect

    We present an efficient quantum algorithm to measure the average fidelity decay of a quantum map under perturbation using a single bit of quantum information. Our algorithm scales only as the complexity of the map under investigation, so for those maps admitting an efficient gate decomposition, it provides an exponential speed-up over known classical procedures. Fidelity decay is important in the study of complex dynamical systems, where it is conjectured to be a signature of quantum chaos. Our result also illustrates the role of chaos in the process of decoherence.
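
    For small systems the quantity in question can be checked by brute force on a classical computer. The sketch below builds a random unitary U0 and a weakly perturbed version, and tracks the state-averaged fidelity of the echo operator (U0†)^n (U_eps)^n over n steps via a standard Haar-average identity. It is a direct classical simulation for illustration, not the single-clean-qubit quantum algorithm of the paper, and the perturbation model is an assumption of this note.

```python
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(4)

d = 2 ** 4                                     # 4 qubits: small enough to simulate directly
U0 = unitary_group.rvs(d, random_state=4)      # "unperturbed" dynamics (random unitary stand-in)

# Weak perturbation: U_eps = exp(-i * eps * H) @ U0 with a random Hermitian H.
H = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
H = (H + H.conj().T) / 2
eps = 0.05
vals, vecs = np.linalg.eigh(H)
Ueps = (vecs * np.exp(-1j * eps * vals)) @ vecs.conj().T @ U0

A = np.eye(d, dtype=complex)                   # will hold U0^n
B = np.eye(d, dtype=complex)                   # will hold Ueps^n
for n in range(1, 21):
    A = U0 @ A
    B = Ueps @ B
    t = np.trace(A.conj().T @ B)               # trace of the echo operator (U0^dag)^n Ueps^n
    # Haar identity: E_psi |<psi|W|psi>|^2 = (|Tr W|^2 + d) / (d^2 + d) for unitary W.
    F = (abs(t) ** 2 + d) / (d ** 2 + d)
    print(f"n = {n:2d}   state-averaged fidelity = {F:.4f}")
```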

    Scalable Noise Estimation with Random Unitary Operators

    We describe a scalable stochastic method for the experimental measurement of generalized fidelities characterizing the accuracy of the implementation of a coherent quantum transformation. The method is based on the motion reversal of random unitary operators. In the simplest case our method enables direct estimation of the average gate fidelity. The more general fidelities are characterized by a universal exponential rate of fidelity loss. In all cases the measurable fidelity decrease is directly related to the strength of the noise affecting the implementation, quantified by the trace of the superoperator describing the non-unitary dynamics. While the scalability of our stochastic protocol makes it most relevant in large Hilbert spaces (when quantum process tomography is infeasible), our method should be immediately useful for evaluating the degree of control that is achievable in any prototype quantum processing device. By varying over different experimental arrangements and error-correction strategies, additional information about the noise can be determined.
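
    A stripped-down classical simulation conveys the flavour of the protocol; the single-qubit setting, the error channel, and all parameters below are toy assumptions of this note, not the experiment analysed in the paper. Each round applies a Haar-random unitary and then its inverse, both followed by the same imperfect gate error, and the survival probability of the initial state is averaged over random sequences and fitted to an exponential decay.

```python
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(5)

# A fixed, imperfect single-qubit gate error: small coherent over-rotation plus dephasing.
theta, p_dephase = 0.05, 0.02
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
E = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X

def apply_error(rho):
    rho = E @ rho @ E.conj().T                                  # coherent error
    return (1 - p_dephase) * rho + p_dephase * (Z @ rho @ Z)    # dephasing error

def survival(m, n_sequences=300):
    """Average probability of returning to |0> after m noisy motion-reversal rounds."""
    probs = []
    for _ in range(n_sequences):
        rho = np.array([[1, 0], [0, 0]], dtype=complex)
        for _ in range(m):
            U = unitary_group.rvs(2, random_state=int(rng.integers(2 ** 31)))
            rho = apply_error(U @ rho @ U.conj().T)             # noisy random unitary
            rho = apply_error(U.conj().T @ rho @ U)             # noisy motion reversal
        probs.append(rho[0, 0].real)
    return float(np.mean(probs))

lengths = np.array([1, 2, 4, 8, 16])
f = np.array([survival(m) for m in lengths])
# Fit f(m) ~ A * exp(-gamma * m) + 1/2 to read off an average error rate per round.
gamma = -np.polyfit(lengths, np.log(np.clip(f - 0.5, 1e-9, None)), 1)[0]
print("survival probabilities:", np.round(f, 3))
print("decay rate per round  :", round(float(gamma), 4))
```

    The fitted decay rate is set by the strength of the error channel, which is the basic idea behind extracting an average noise strength from randomized motion-reversal experiments.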