
    Wavelet Analysis and Denoising: New Tools for Economists

    This paper surveys the techniques of wavelet analysis and the associated methods of denoising. The Discrete Wavelet Transform and its undecimated version, the Maximal Overlap Discrete Wavelet Transform, are described. Wavelet analysis can be used to show how the frequency content of the data varies with time, which allows us to pinpoint in time such events as major structural breaks. The sparse nature of the wavelet representation also facilitates noise reduction by nonlinear wavelet shrinkage, which can be used to reveal the underlying trends in economic data. An application of these techniques to UK real GDP (1873-2001) is described. The purpose of the analysis is to reveal the true structure of the data - including its local irregularities and abrupt changes - and the results are surprising.
    Keywords: Wavelets, Denoising, Structural breaks, Trend estimation
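    The nonlinear wavelet shrinkage described in this abstract can be illustrated with a minimal sketch: a one-level Haar transform followed by soft thresholding of the detail coefficients. The Haar filter, the known noise level, and the universal threshold are illustrative choices here, not details taken from the paper.

```python
import numpy as np

def haar_dwt(x):
    # One level of the Haar discrete wavelet transform (x has even length).
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-pass)
    return a, d

def haar_idwt(a, d):
    # Exact inverse of haar_dwt.
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft_threshold(c, t):
    # Shrink coefficients toward zero by t; small (noise) coefficients vanish.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

# Denoise a noisy step signal (a stand-in for a series with a structural break).
rng = np.random.default_rng(0)
signal = np.repeat([0.0, 1.0], 64)
sigma = 0.1
noisy = signal + sigma * rng.standard_normal(signal.size)

a, d = haar_dwt(noisy)
t = sigma * np.sqrt(2 * np.log(noisy.size))   # universal threshold (sigma known)
denoised = haar_idwt(a, soft_threshold(d, t))
```

    Because the clean step signal is sparse in the Haar basis, thresholding the detail coefficients removes noise while preserving the abrupt change, which is the mechanism the abstract relies on.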

    Semantic Security for Quantum Wiretap Channels

    We determine the semantic security capacity for quantum wiretap channels. We extend methods for classical channels to quantum channels to demonstrate that a strongly secure code guarantees a semantically secure code with the same secrecy rate. Furthermore, we show how to transform a non-secure code into a semantically secure one by means of biregular irreducible functions (BRI functions). We analyze semantic security for classical-quantum channels and for quantum channels.
    Comment: v1: 38 pages, 2 figures

    Geometry and response of Lindbladians

    Markovian reservoir engineering, in which the time evolution of a quantum system is governed by a Lindblad master equation, is a powerful technique in studies of quantum phases of matter and quantum information. It can be used to drive a quantum system to a desired (unique) steady state, which can be an exotic phase of matter that is difficult to stabilize in nature. It can also be used to drive a system to a unitarily-evolving subspace, which can be used to store, protect, and process quantum information. In this paper, we derive a formula for the map corresponding to asymptotic (infinite-time) Lindbladian evolution and use it to study several important features of the unique-state and subspace cases. We quantify how subspaces retain information about initial states and show how to use Lindbladians to simulate arbitrary quantum channels. We show that the quantum information in all subspaces can be successfully manipulated by small Hamiltonian perturbations, jump operator perturbations, or adiabatic deformations. We provide a Lindblad-induced notion of distance between adiabatically connected subspaces. We derive a Kubo formula governing linear response of subspaces to time-dependent Hamiltonian perturbations and determine cases in which this formula reduces to a Hamiltonian-based Kubo formula. As an application, we show that (for gapped systems) the zero-frequency Hall conductivity is unaffected by many types of Markovian dissipation. Finally, we show that the energy scale governing leakage out of the subspaces, resulting from either Hamiltonian/jump-operator perturbations or corrections to adiabatic evolution, is different from the conventional Lindbladian dissipative gap and, in certain cases, is equivalent to the excitation gap of a related Hamiltonian.
    Comment: Published version. See related talk at https://sites.google.com/site/victorvalbert/physics/diss_powerpoint.pd
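    The driving of a system to a unique steady state under a Lindblad master equation can be sketched in a minimal example: single-qubit amplitude damping, whose unique steady state is the ground state. The Hamiltonian, jump operator, and forward-Euler integrator below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Lindblad master equation d(rho)/dt = -i[H, rho] + L rho L† - {L†L, rho}/2
# for a single qubit with amplitude damping at rate gamma.
H = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)            # qubit energies
gamma = 1.0
L = np.sqrt(gamma) * np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator

def lindblad_rhs(rho):
    comm = -1j * (H @ rho - rho @ H)
    Ld = L.conj().T
    diss = L @ rho @ Ld - 0.5 * (Ld @ L @ rho + rho @ Ld @ L)
    return comm + diss

rho = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)  # start in excited state
dt = 0.001
for _ in range(20000):        # integrate to t = 20, many decay times 1/gamma
    rho = rho + dt * lindblad_rhs(rho)

# Asymptotically, rho approaches the unique steady state |0><0|.
```

    The dissipator is trace-preserving (the trace of the right-hand side vanishes identically), so the Euler steps conserve the trace up to floating-point error while the excited-state population decays as exp(-gamma t).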

    Comparing Experiments to the Fault-Tolerance Threshold

    Achieving error rates that meet or exceed the fault-tolerance threshold is a central goal for quantum computing experiments, and measuring these error rates using randomized benchmarking is now routine. However, direct comparison between measured error rates and thresholds is complicated by the fact that benchmarking estimates average error rates, while thresholds reflect worst-case behavior when a gate is used as part of a large computation. These two measures of error can differ by orders of magnitude in the regime of interest. Here we facilitate comparison between the experimentally accessible average error rates and the worst-case quantities that arise in current threshold theorems by deriving relations between the two for a variety of physical noise sources. Our results indicate that it is coherent errors that lead to an enormous mismatch between average and worst case, and we quantify how well these errors must be controlled to ensure fair comparison between average error probabilities and fault-tolerance thresholds.
    Comment: 5 pages, 2 figures, 13-page appendix
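    The average-versus-worst-case mismatch for coherent errors can be seen in a minimal sketch: a small Z rotation has average gate infidelity scaling quadratically in the rotation angle, while its worst-case error over input states scales linearly. The specific angle and states below are illustrative choices, not the paper's calculations.

```python
import numpy as np

# A coherent error: a small Z rotation by angle theta.
theta = 1e-3
U = np.diag([1.0, np.exp(1j * theta)])

# Average gate infidelity (the quantity randomized benchmarking estimates):
# r = 1 - (|tr U|^2 + d) / (d(d+1)) for a unitary error on a qubit, d = 2.
d = 2
r_avg = 1 - (abs(np.trace(U)) ** 2 + d) / (d * (d + 1))

# Worst-case error over pure inputs: trace distance between U|psi> and |psi>,
# maximized here by |+> = (|0> + |1>)/sqrt(2).
psi = np.array([1.0, 1.0]) / np.sqrt(2)
overlap = abs(psi.conj() @ (U @ psi))
worst = np.sqrt(1 - overlap ** 2)   # equals sin(theta/2)

# For small theta: r_avg ~ theta^2 / 6 while worst ~ theta / 2, so the
# worst-case error exceeds the average by orders of magnitude.
```

    For theta = 1e-3 the ratio worst / r_avg is on the order of 10^3, which is the kind of mismatch the abstract refers to when comparing benchmarking numbers against threshold theorems.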