Quantum system characterization with limited resources
The construction and operation of large-scale quantum information devices
present a grand challenge. A major issue is the effective control of coherent
evolution, which requires accurate knowledge of the system dynamics that may
vary from device to device. We review strategies for obtaining such knowledge
from minimal initial resources and in an efficient manner, and apply these to
the problem of characterizing a qubit embedded in a larger state manifold,
made tractable by exploiting prior structural knowledge. We also investigate
adaptive sampling for the estimation of multiple parameters.
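A minimal sketch of the kind of approach described here, assuming a hypothetical single-parameter example (sequential Bayesian updating of a qubit transition frequency with an adaptive choice of probe time); none of the names or values below come from the paper:

```python
import numpy as np

# A minimal sketch (not the protocol from the paper): sequential Bayesian
# estimation of a single qubit transition frequency omega from simulated
# single-shot measurements with outcome probability p(1|t) = sin^2(omega*t/2),
# using a flat prior on a frequency grid and an adaptive choice of probe time.

rng = np.random.default_rng(0)
omega_true = 2.3                                   # hypothetical device value
omega_grid = np.linspace(0.0, 5.0, 2001)
posterior = np.full(omega_grid.shape, 1.0 / omega_grid.size)  # flat prior

for step in range(150):
    mean = np.sum(omega_grid * posterior)
    width = np.sqrt(np.sum((omega_grid - mean) ** 2 * posterior))
    # Adaptive probe time: inversely proportional to the current posterior
    # width, so later measurements resolve ever finer frequency differences.
    t = 1.0 / max(width, 0.005)

    p1 = np.sin(omega_true * t / 2.0) ** 2
    outcome = rng.random() < p1                    # simulated projective readout

    likelihood = np.sin(omega_grid * t / 2.0) ** 2
    if not outcome:
        likelihood = 1.0 - likelihood
    posterior *= likelihood
    posterior /= posterior.sum()

mean = np.sum(omega_grid * posterior)
width = np.sqrt(np.sum((omega_grid - mean) ** 2 * posterior))
print(f"estimated omega = {mean:.4f} +/- {width:.4f}  (true value {omega_true})")
```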
Alfven wave scattering and the secondary to primary ratio
The cosmic ray abundances have traditionally been used to determine the elemental and isotopic nature of galactic cosmic-ray sources and average measures of propagation conditions. Detailed studies of the physics of propagation are usually paired with relatively straightforward estimates of the secondary-to-primary (S/P) ratios. In the work reported here, calculations of elemental abundances are paired with a more careful treatment of the propagation process. It is shown that the physics of propagation does indeed leave specific traces of Galactic structure in cosmic ray abundances.
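For orientation only (this is the simplest leaky-box estimate, not the more careful propagation treatment the abstract describes), the secondary-to-primary ratio is often related to the mean escape grammage as:

```latex
% Simple leaky-box relation (background context, not the paper's model):
% for a secondary species S produced by spallation of a primary P,
\[
  \frac{N_S}{N_P} \;\approx\; \frac{\lambda_{\mathrm{esc}}}{\lambda_{P\to S}},
  \qquad \lambda_{\mathrm{esc}} \ll \lambda_{P\to S},
\]
% where \lambda_esc is the mean escape grammage and \lambda_{P->S} the mean
% grammage for spallation of the primary into the secondary.
```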
Use and Abuse of the Fisher Information Matrix in the Assessment of Gravitational-Wave Parameter-Estimation Prospects
The Fisher-matrix formalism is used routinely in the literature on
gravitational-wave detection to characterize the parameter-estimation
performance of gravitational-wave measurements, given parametrized models of
the waveforms, and assuming detector noise of known colored Gaussian
distribution. Unfortunately, the Fisher matrix can be a poor predictor of the
amount of information obtained from typical observations, especially for
waveforms with several parameters and relatively low expected signal-to-noise
ratios (SNR), or for waveforms depending weakly on one or more parameters, when
their priors are not taken into proper consideration. In this paper I discuss
these pitfalls; show how they occur, even for relatively strong signals, with a
commonly used template family for binary-inspiral waveforms; and describe
practical recipes to recognize them and cope with them.
Specifically, I answer the following questions: (i) What is the significance
of (quasi-)singular Fisher matrices, and how must we deal with them? (ii) When
is it necessary to take into account prior probability distributions for the
source parameters? (iii) When is the signal-to-noise ratio high enough to
believe the Fisher-matrix result? In addition, I provide general expressions
for the higher-order, beyond-Fisher-matrix terms in the 1/SNR expansions for
the expected parameter accuracies.
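For reference, the standard definitions behind this discussion (conventional notation for a waveform family h(θ) in colored Gaussian noise with one-sided spectral density S_n(f); not necessarily the paper's own conventions) are:

```latex
% Noise-weighted inner product for detector noise with one-sided PSD S_n(f):
\[
  (a \mid b) \;=\; 4\,\mathrm{Re}\!\int_0^{\infty}
     \frac{\tilde a(f)\,\tilde b^{*}(f)}{S_n(f)}\,df .
\]
% Fisher matrix for a parametrized waveform h(\theta), and the high-SNR
% (Cramer-Rao) estimate of the parameter-error covariance:
\[
  \Gamma_{ij} \;=\; \bigl(\partial_i h \mid \partial_j h\bigr),
  \qquad
  \bigl\langle \Delta\theta^i \Delta\theta^j \bigr\rangle
  \;\simeq\; \bigl(\Gamma^{-1}\bigr)^{ij}
  \quad \text{as } \mathrm{SNR} \to \infty .
\]
```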
Quantum System Identification by Bayesian Analysis of Noisy Data: Beyond Hamiltonian Tomography
We consider how to characterize the dynamics of a quantum system from a
restricted set of initial states and measurements using Bayesian analysis.
Previous work has shown that Hamiltonian systems can be well estimated from
analysis of noisy data. Here we show how to generalize this approach to systems
with moderate dephasing in the eigenbasis of the Hamiltonian. We illustrate the
process for a range of three-level quantum systems. The results suggest that
the Bayesian estimation of the frequencies and dephasing rates is generally
highly accurate, and that the dominant source of error is inaccuracy in the
reconstructed Hamiltonian basis.
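A minimal sketch of this type of estimation, assuming an illustrative damped-oscillation signal model with a frequency and a dephasing rate (not the paper's three-level systems or its actual likelihood):

```python
import numpy as np

# Illustrative sketch (not the paper's model): Bayesian estimation of a
# transition frequency omega and dephasing rate gamma from a noisy trace
#   s(t) = 0.5 + 0.5 * exp(-gamma t) * cos(omega t) + Gaussian noise,
# using a flat prior on an (omega, gamma) grid and a Gaussian likelihood.

rng = np.random.default_rng(1)
omega_true, gamma_true, sigma = 1.7, 0.05, 0.05
t = np.linspace(0.0, 30.0, 200)
data = 0.5 + 0.5 * np.exp(-gamma_true * t) * np.cos(omega_true * t)
data += sigma * rng.normal(size=t.size)

omegas = np.linspace(1.0, 2.5, 151)
gammas = np.linspace(0.0, 0.2, 101)
W, G = np.meshgrid(omegas, gammas, indexing="ij")
model = 0.5 + 0.5 * np.exp(-G[..., None] * t) * np.cos(W[..., None] * t)

loglike = -0.5 * np.sum((data - model) ** 2, axis=-1) / sigma**2
post = np.exp(loglike - loglike.max())
post /= post.sum()

# Marginal posterior means and standard deviations of the two parameters.
p_omega, p_gamma = post.sum(axis=1), post.sum(axis=0)
omega_est = np.sum(omegas * p_omega)
gamma_est = np.sum(gammas * p_gamma)
print("omega =", omega_est, "+/-", np.sqrt(np.sum((omegas - omega_est) ** 2 * p_omega)))
print("gamma =", gamma_est, "+/-", np.sqrt(np.sum((gammas - gamma_est) ** 2 * p_gamma)))
```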
Consistent Application of Maximum Entropy to Quantum-Monte-Carlo Data
Bayesian statistics within the maximum-entropy framework has been widely used
for inference problems, in particular to infer dynamic properties of strongly
correlated fermion systems from Quantum Monte Carlo (QMC) imaginary-time data.
In current applications, however, a consistent treatment of the error
covariance of the QMC data is missing. Here we present a closed Bayesian
approach that accounts consistently for the QMC data.
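A schematic of the standard maximum-entropy setup this abstract refers to, in one common convention (the authors' exact formulation may differ), with the data covariance entering through the misfit term:

```latex
% Imaginary-time data \bar{G}_i with covariance C, kernel K, spectrum A_l:
\[
  \chi^2[A] \;=\; \sum_{ij}\Bigl(\bar G_i - \sum_l K_{il} A_l\Bigr)
                  \bigl(C^{-1}\bigr)_{ij}
                  \Bigl(\bar G_j - \sum_l K_{jl} A_l\Bigr),
\]
\[
  S[A] \;=\; \sum_l \Bigl( A_l - m_l - A_l \ln\frac{A_l}{m_l} \Bigr),
  \qquad
  P(A \mid \bar G, \alpha) \;\propto\;
  \exp\!\Bigl( \alpha S[A] - \tfrac{1}{2}\chi^2[A] \Bigr),
\]
% where m is the default model and alpha weights entropy against misfit.
```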
The Evolution of Distorted Rotating Black Holes II: Dynamics and Analysis
We have developed a numerical code to study the evolution of distorted,
rotating black holes. This code is used to evolve a new family of black hole
initial data sets corresponding to distorted "Kerr" holes with a wide range
of rotation parameters, and distorted Schwarzschild black holes with odd-parity
radiation. Rotating black holes with rotation parameters as high as
are evolved and analyzed in this paper. The evolutions are generally carried
out to about , where M is the ADM mass. We have extracted both the
even- and odd-parity gravitational waveforms, and find the quasinormal modes of
the holes to be excited in all cases. We also track the apparent horizons of
the black holes, and find them to be a useful tool for interpreting the
numerical results. We are able to compute the masses of the black holes from
the measurements of their apparent horizons, as well as the total energy
radiated and find their sum to be in excellent agreement with the ADM mass.
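For background, a standard way to assign a mass to a black hole from its horizon is via the irreducible mass and the Christodoulou formula, sketched below; this is generic textbook material, not necessarily the exact diagnostic used in the paper.

```latex
% Irreducible mass from the horizon area A, and the Christodoulou mass for a
% hole with angular momentum J:
\[
  M_{\mathrm{ir}} \;=\; \sqrt{\frac{A}{16\pi}},
  \qquad
  M^2 \;=\; M_{\mathrm{ir}}^2 \;+\; \frac{J^2}{4\,M_{\mathrm{ir}}^2}.
\]
```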
Maximum Entropy and Bayesian Data Analysis: Entropic Priors
The problem of assigning probability distributions which objectively reflect
the prior information available about experiments is one of the major stumbling
blocks in the use of Bayesian methods of data analysis. In this paper the
method of Maximum (relative) Entropy (ME) is used to translate the information
contained in the known form of the likelihood into a prior distribution for
Bayesian inference. The argument is inspired and guided by intuition gained
from the successful use of ME methods in statistical mechanics. For experiments
that cannot be repeated the resulting "entropic prior" is formally identical
with the Einstein fluctuation formula. For repeatable experiments, however, the
expected value of the entropy of the likelihood turns out to be relevant
information that must be included in the analysis. The important case of a
Gaussian likelihood is treated in detail.
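As a rough guide, the relative entropy maximized by the ME method and the schematic form of an entropic prior are shown below; the notation is generic, and the paper's detailed construction, especially for repeatable experiments, goes beyond this sketch.

```latex
% Generic relative entropy and schematic entropic-prior form (illustrative):
\[
  S[p \,\|\, q] \;=\; -\int dx\; p(x)\,\ln\frac{p(x)}{q(x)},
  \qquad
  \pi(\theta) \;\propto\; \exp\bigl(\alpha\, S(\theta)\bigr),
\]
% where S(theta) is the entropy of the likelihood p(x|theta) relative to a
% reference measure, and alpha sets the strength of the prior.
```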
Bayesian Inference in Processing Experimental Data: Principles and Basic Applications
This report introduces general ideas and some basic methods of the Bayesian
probability theory applied to physics measurements. Our aim is to make the
reader familiar, through examples rather than rigorous formalism, with concepts
such as: model comparison (including the automatic Ockham's Razor filter
provided by the Bayesian approach); parametric inference; quantification of the
uncertainty about the value of physical quantities, also taking into account
systematic effects; role of marginalization; posterior characterization;
predictive distributions; hierarchical modelling and hyperparameters; Gaussian
approximation of the posterior and recovery of conventional methods, especially
maximum likelihood and chi-square fits under well defined conditions; conjugate
priors, transformation invariance and maximum entropy motivated priors; Monte
Carlo estimates of expectation, including a short introduction to Markov Chain
Monte Carlo methods.
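A small worked example of the conjugate-prior updating the report covers, assuming a Gaussian measurement with known noise level and a Gaussian prior (illustrative values, not taken from the report):

```python
import numpy as np

# Conjugate Bayesian updating: a quantity mu is measured n times with known
# Gaussian noise sigma; with a Gaussian prior N(mu0, tau0^2) the posterior is
# again Gaussian, with precision-weighted mean and combined precision.

rng = np.random.default_rng(42)
mu_true, sigma = 10.0, 2.0
data = rng.normal(mu_true, sigma, size=20)

mu0, tau0 = 0.0, 50.0                      # deliberately broad prior
prior_prec = 1.0 / tau0**2
data_prec = data.size / sigma**2

post_prec = prior_prec + data_prec
post_mean = (prior_prec * mu0 + data_prec * data.mean()) / post_prec
post_std = 1.0 / np.sqrt(post_prec)

print(f"posterior: mu = {post_mean:.3f} +/- {post_std:.3f}")
print(f"maximum likelihood (sample mean): {data.mean():.3f}")
```

With such a broad prior the posterior mean approaches the sample mean, illustrating the recovery of the maximum-likelihood result under well-defined conditions that the abstract mentions.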
Ubiquitous problem of learning system parameters for dissipative two-level quantum systems: Fourier analysis versus Bayesian estimation
We compare the accuracy, precision and reliability of different methods for
estimating key system parameters for two-level systems subject to Hamiltonian
evolution and decoherence. It is demonstrated that the use of Bayesian
modelling and maximum likelihood estimation is superior to common techniques
based on Fourier analysis. Even for simple two-parameter estimation problems,
the Bayesian approach yields higher accuracy and precision for the parameter
estimates obtained. It requires less data, is more flexible in dealing with
different model systems, can deal better with uncertainty in initial conditions
and measurements, and enables adaptive refinement of the estimates. The
comparison shows that these advantages hold for measurements of large ensembles
of spins and atoms limited by Gaussian noise, as well as for
projection-noise-limited data from repeated single-shot measurements of a
single quantum device.
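An illustrative sketch of the comparison described here, assuming a hypothetical noisy, decaying oscillation: a Fourier (FFT-peak) frequency estimate versus a maximum-likelihood fit of the signal model on a parameter grid (all values and the model are assumptions, not the paper's):

```python
import numpy as np

# Estimate the oscillation frequency of a noisy, decaying signal by
# (a) picking the FFT peak and (b) maximizing a Gaussian likelihood for
#     s(t) = 0.5 + 0.5 * exp(-gamma t) * cos(2*pi*f t).

rng = np.random.default_rng(7)
f_true, gamma_true, sigma = 0.23, 0.04, 0.1
t = np.arange(0.0, 60.0, 0.25)
signal = 0.5 + 0.5 * np.exp(-gamma_true * t) * np.cos(2 * np.pi * f_true * t)
data = signal + sigma * rng.normal(size=t.size)

# (a) Fourier estimate: frequency of the largest non-DC spectral peak.
spec = np.abs(np.fft.rfft(data - data.mean()))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
f_fft = freqs[np.argmax(spec[1:]) + 1]

# (b) Maximum-likelihood (flat-prior Bayesian) estimate on an (f, gamma) grid.
fs = np.linspace(0.1, 0.4, 201)
gs = np.linspace(0.0, 0.1, 81)
F, G = np.meshgrid(fs, gs, indexing="ij")
model = 0.5 + 0.5 * np.exp(-G[..., None] * t) * np.cos(2 * np.pi * F[..., None] * t)
loglike = -0.5 * np.sum((data - model) ** 2, axis=-1) / sigma**2
i, j = np.unravel_index(np.argmax(loglike), loglike.shape)

print("true f      :", f_true)
print("FFT estimate:", f_fft)
print("ML estimate :", fs[i], " (gamma =", gs[j], ")")
```

On a short, noisy record the grid fit typically localizes the frequency more finely than the FFT bin spacing, which illustrates qualitatively the kind of advantage the abstract reports.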