Average quantum dynamics of closed systems over stochastic Hamiltonians
We develop a master equation formalism to describe the evolution of the
average density matrix of a closed quantum system driven by a stochastic
Hamiltonian. The average over random processes generally results in decoherence
effects in closed system dynamics, in addition to the usual unitary evolution.
We then show that, for an important class of problems in which the Hamiltonian
is proportional to a Gaussian random process, the second-order master equation
yields exact dynamics. The general formalism is applied to study the examples
of a two-level system, two atoms in a stochastic magnetic field, and the
heating of a trapped ion.
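As a standard illustration of the decoherence term such averaging produces (a
textbook white-noise sketch, not necessarily the authors' general derivation),
take $H(t) = H_0 + \xi(t) A$ with Gaussian white noise
$\langle \xi(t)\xi(t') \rangle = \gamma\,\delta(t - t')$; averaging the
stochastic Liouville equation gives

$$\frac{d\bar{\rho}}{dt} = -\frac{i}{\hbar}\,[H_0, \bar{\rho}]
  - \frac{\gamma}{2\hbar^2}\,[A, [A, \bar{\rho}]],$$

i.e., the usual unitary evolution plus a double-commutator dephasing term; for
Gaussian noise the higher cumulants vanish, which is why the second-order
master equation can be exact.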
Do all states undergo sudden death of entanglement at finite temperature?
In this paper we consider the decay of quantum entanglement, quantified by
the concurrence, of a pair of two-level systems, each of which interacts with
a reservoir at finite temperature $T$. For a broad class of initially
entangled states, we demonstrate that the system always becomes disentangled in
a finite time, i.e., "entanglement sudden death" (ESD) occurs. This class includes
all states which previously had been found to have long-lived entanglement in
zero temperature reservoirs. Our general result is illustrated by an example.
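For reference, the concurrence of a two-qubit state $\rho$ is Wootters'
standard measure (quoted here for completeness):

$$C(\rho) = \max\{0,\ \sqrt{\mu_1} - \sqrt{\mu_2} - \sqrt{\mu_3} - \sqrt{\mu_4}\},
\qquad \tilde{\rho} = (\sigma_y \otimes \sigma_y)\,\rho^{*}\,(\sigma_y \otimes \sigma_y),$$

where $\mu_1 \ge \mu_2 \ge \mu_3 \ge \mu_4$ are the eigenvalues of
$\rho\tilde{\rho}$. ESD means $C(\rho(t))$ hits exactly zero at a finite time,
rather than decaying only asymptotically.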
Nonexistence of Entanglement Sudden Death in High NOON States
We study the dynamics of entanglement in continuous variable quantum systems
(CVQS). Specifically, we study the phenomenon of Entanglement Sudden Death (ESD)
in general two-mode $N$-photon states undergoing pure dephasing. We show that for
these states, ESD never occurs. These states are generalizations of the
so-called High NOON states, which have been shown to reduce the Rayleigh limit
from $\lambda$ to $\lambda/N$, promising a great improvement in the resolution
of interference patterns if states with large $N$ are physically realized.
However, we show that
in dephasing NOON states, the time to reach the critical visibility
$V_{\text{crit}}$ scales inversely with $N^2$. On the practical level, this
shows that as $N$ increases, the
visibility degrades much faster, which is likely to be a considerable drawback
for any practical application of these states.
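The $1/N^2$ scaling can be motivated with a simple sketch (an illustration
assuming Gaussian phase noise, not the paper's general argument): a NOON state
$(|N,0\rangle + |0,N\rangle)/\sqrt{2}$ acquires a relative phase $N\phi$ from a
random phase $\phi$ on one mode, so for $\langle \phi^2 \rangle = 2\gamma t$
the coherence decays as

$$\big|\langle e^{iN\phi} \rangle\big| = e^{-N^2 \gamma t},$$

reaching any fixed threshold $V_{\text{crit}}$ in a time $\propto 1/N^2$ while
never vanishing at finite time, consistent with fast visibility loss but no
ESD.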
Mean and Variance of Photon Counting with Deadtime
The statistics of photon counting by systems affected by deadtime are potentially important for statistical image reconstruction methods. We present a new way of analyzing the moments of the counting process for a counter system affected by various models of deadtime related to PET and SPECT imaging. We derive simple and exact expressions for the first and second moments of the number of recorded events under various models. From our mean expression for a SPECT deadtime model, we derive a simple estimator for the actual intensity of the underlying Poisson process; simulations show that our estimator is unbiased even for extremely high count rates.
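As a concrete illustration of the deadtime-correction idea (a minimal sketch
assuming the classic nonparalyzable model, where the recorded rate is
$m = \lambda/(1 + \lambda\tau)$ and hence $\hat{\lambda} = m/(1 - m\tau)$; the
paper's SPECT model and estimator may differ in detail):

```python
import numpy as np

rng = np.random.default_rng(0)

def count_nonparalyzable(rate, tau, t_total):
    """Count recorded events from a Poisson process of intensity `rate`
    when the counter is dead for `tau` seconds after each recorded event."""
    t, last, recorded = 0.0, -np.inf, 0
    while True:
        t += rng.exponential(1.0 / rate)   # next true arrival time
        if t > t_total:
            return recorded
        if t - last >= tau:                # counter is live again
            recorded += 1
            last = t

rate, tau, t_total = 2e5, 2e-6, 5.0        # 200 kcps true rate, 2 us deadtime
m = count_nonparalyzable(rate, tau, t_total) / t_total
rate_hat = m / (1.0 - m * tau)             # invert m = rate / (1 + rate*tau)
print(f"true {rate:.3g}/s, observed {m:.3g}/s, corrected {rate_hat:.3g}/s")
```

Even at $\lambda\tau = 0.4$, where roughly 30% of events are lost, the
corrected estimate recovers the true intensity, mirroring the unbiasedness
reported above.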
Theory of the low- and high-field superconducting phases of UTe$_2$
Recent nuclear magnetic resonance (NMR) and calorimetric experiments have
observed that UTe$_2$ exhibits a transition between two distinct
superconducting phases as a function of magnetic field strength for a field
applied along the crystalline $b$-axis. To determine the nature of these
phases, we employ a microscopic two-band minimal Hamiltonian with the essential
crystal symmetries and structural details, supplemented by anisotropic
ferromagnetic exchange terms. We study the resulting pairing symmetries and
properties of these low- and high-field phases in mean-field theory.
Sparse Horseshoe Estimation via Expectation-Maximisation
The horseshoe prior is known to possess many desirable properties for
Bayesian estimation of sparse parameter vectors, yet its density function lacks
an analytic form. As such, it is challenging to find a closed-form solution for
the posterior mode. Conventional horseshoe estimators use the posterior mean to
estimate the parameters, but these estimates are not sparse. We propose a novel
expectation-maximisation (EM) procedure for computing the maximum a posteriori
(MAP) estimates of the
parameters in the case of the standard linear model. A particular strength of
our approach is that the M-step depends only on the form of the prior and is
independent of the form of the likelihood. We introduce several simple
modifications of this EM procedure that allow for straightforward extension to
generalised linear models. In experiments performed on simulated and real data,
our approach performs comparably to, or better than, state-of-the-art sparse
estimation methods in terms of both statistical performance and computational cost.
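The likelihood-independence of the M-step can be seen from the generic
EM-for-MAP structure with a conditionally Gaussian (scale-mixture) prior (a
schematic sketch, not the paper's exact updates): with
$\beta_j \mid \eta_j \sim \mathcal{N}(0, \eta_j)$ (for the horseshoe,
$\eta_j = \lambda_j^2 \tau^2$), the E-step needs only the weights
$\omega_j = \mathbb{E}[1/\eta_j \mid \beta_j^{(t)}]$, and for the standard
linear model the M-step reduces to a weighted ridge solve:

$$\beta^{(t+1)} = \arg\max_{\beta}\ \log p(y \mid \beta)
  + \mathbb{E}\big[\log p(\beta \mid \eta) \,\big|\, \beta^{(t)}\big]
  = \big(X^\top X + \sigma^2 \Omega_t\big)^{-1} X^\top y,
\qquad \Omega_t = \operatorname{diag}(\omega_1, \dots, \omega_p).$$

Replacing the Gaussian likelihood with a GLM changes only the first term of
the objective while leaving the E-step untouched, which is exactly the
extension described above.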
Bayes beats Cross Validation: Efficient and Accurate Ridge Regression via Expectation Maximization
We present a novel method for tuning the regularization hyper-parameter,
$\lambda$, of a ridge regression that is faster to compute than leave-one-out
cross-validation (LOOCV) while yielding estimates of the regression parameters
of equal, or, particularly in the setting of sparse covariates, superior
quality to those obtained by minimising the LOOCV risk. The LOOCV risk can
suffer from multiple and bad local minima for finite $n$ and thus requires the
specification of a set of candidate $\lambda$ values, which can fail to provide good
solutions. In contrast, we show that the proposed method is guaranteed to find
a unique optimal solution for large enough $n$, under relatively mild
conditions, without requiring the specification of any difficult-to-determine
hyper-parameters. This is based on a Bayesian formulation of ridge regression
that we prove to have a unimodal posterior for large enough $n$, allowing for
both the optimal $\lambda$ and the regression coefficients to be jointly
learned within an iterative expectation maximization (EM) procedure.
Importantly, we show that by utilizing an appropriate preprocessing step, a
single iteration of the main EM loop can be implemented in $O(\min(n, p))$
operations, for input data with $n$ rows and $p$ columns. In contrast,
evaluating a single value of $\lambda$ using fast LOOCV costs $O(n \min(n, p))$
operations when using the same preprocessing. This advantage amounts to an
asymptotic improvement of a factor of $l$ for $l$ candidate values for
$\lambda$ (in the regime $q, l \ll \min(n, p)$, where $q$ is the number of
regression targets).
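To make the structure concrete, here is a minimal sketch of a Bayesian-ridge
EM with SVD preprocessing (my generic illustration, assuming the prior
$\beta \sim \mathcal{N}(0, (\sigma^2/\alpha) I)$ with known noise variance
$\sigma^2$; the paper's exact model and update equations may differ):

```python
import numpy as np

def ridge_em(X, y, sigma2=1.0, n_iter=100, tol=1e-10):
    """EM for Bayesian ridge regression; after a one-time SVD, each
    iteration costs O(min(n, p))."""
    n, p = X.shape
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # r = min(n, p)
    z = U.T @ y                      # projected targets, length r
    s2 = s ** 2
    alpha = 1.0                      # prior precision (the ridge penalty)
    for _ in range(n_iter):
        # E-step: posterior moments of beta in the eigenbasis, O(r).
        # (Null-space components of beta are ignored; exact when p <= n.)
        m = s * z / (s2 + alpha)     # posterior mean coordinates
        e_bb = m @ m + sigma2 * np.sum(1.0 / (s2 + alpha))  # E[beta'beta]
        # M-step: maximize the expected log-prior over alpha.
        alpha_new = p * sigma2 / e_bb
        if abs(alpha_new - alpha) <= tol * alpha:
            alpha = alpha_new
            break
        alpha = alpha_new
    beta = Vt.T @ (s * z / (s2 + alpha))  # map back to coefficient space
    return beta, alpha
```

After the one-time SVD, each loop iteration touches only the $r = \min(n, p)$
singular values, which is the per-iteration scaling claimed above.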
Edge-Preserving Tomographic Reconstruction with Nonlocal Regularization
Tomographic image reconstruction using statistical methods can provide more accurate system modeling, statistical models, and physical constraints than the conventional filtered backprojection (FBP) method. Because of the ill-posedness of the reconstruction problem, a roughness penalty is often imposed on the solution to control noise. To avoid smoothing of edges, which are important image attributes, various edge-preserving regularization methods have been proposed. Most of these schemes rely on information from local neighborhoods to determine the presence of edges. In this paper, we propose a cost function that incorporates nonlocal boundary information into the regularization method. We use an alternating minimization algorithm with deterministic annealing to minimize the proposed cost function, jointly estimating region boundaries and object pixel values. We apply variational techniques implemented via level-set methods to update the boundary estimates; then, using the most recent boundary estimate, we minimize a space-variant quadratic cost function to update the image estimate. For the positron emission tomography transmission reconstruction application, we compare the bias-variance tradeoff of this method with that of a "conventional" penalized-likelihood algorithm with a local Huber roughness penalty.
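For context, the "conventional" baseline in the last sentence is standard
penalized likelihood with a local roughness penalty (standard definitions,
quoted for completeness; this is not the paper's new nonlocal cost function):

$$\hat{x} = \arg\max_{x}\ L(y; x)
  - \beta \sum_{j} \sum_{k \in \mathcal{N}_j} \psi_{\delta}(x_j - x_k),
\qquad
\psi_{\delta}(t) =
\begin{cases}
t^2/2, & |t| \le \delta,\\
\delta |t| - \delta^2/2, & |t| > \delta,
\end{cases}$$

where $\mathcal{N}_j$ is a local neighborhood of pixel $j$. The Huber function
is quadratic for small differences (smoothing noise) but only linear for large
ones (preserving edges); the proposal above replaces the purely local
$\mathcal{N}_j$ with jointly estimated nonlocal boundary information.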
Maximum-Likelihood Transmission Image Reconstruction for Overlapping Transmission Beams
In many transmission imaging geometries, the transmitted "beams" of photons overlap on the detector, such that a detector element may record photons that originated in different sources or source locations and thus traversed different paths through the object. Examples include systems based on scanning line sources or on multiple parallel rod sources. The overlap of these beams has been disregarded both by conventional analytical reconstruction methods and by previous statistical reconstruction methods. The authors propose a new algorithm for statistical image reconstruction of attenuation maps that explicitly accounts for overlapping beams in transmission scans. The algorithm is guaranteed to monotonically increase the objective function at each iteration. The availability of this algorithm enables the possibility of deliberately increasing the beam overlap so as to increase count rates. Simulated single photon emission tomography transmission scans based on a multiple-line-source array demonstrate that the proposed method yields improved resolution/noise tradeoffs relative to "conventional" reconstruction algorithms, both statistical and nonstatistical.
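One plausible way to write the overlapping-beam measurement model (a sketch
consistent with the description above; the notation is mine, not necessarily
the authors'): detector element $i$ records

$$y_i \sim \mathrm{Poisson}\Big(\sum_{m} b_{im}\, e^{-\ell_{im}(\mu)} + r_i\Big),
\qquad \ell_{im}(\mu) = \sum_{j} a_{imj}\, \mu_j,$$

where $m$ indexes the sources whose beams overlap on element $i$, $b_{im}$ are
blank-scan counts, $\ell_{im}(\mu)$ is the line integral of the attenuation
map $\mu$ along the path from source $m$ to element $i$, and $r_i$ models
background events. Disregarding the overlap amounts to keeping a single term
of the sum over $m$.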
Nematic Bogoliubov Fermi surfaces from magnetic toroidal order in FeSe$_{1-x}$S$_x$
Recently it has been argued that the superconducting state of
FeSe$_{1-x}$S$_x$ exhibits Bogoliubov Fermi surfaces for sufficiently large
sulfur content $x$. These
Bogoliubov Fermi surfaces appear together with broken time-reversal symmetry
and surprisingly demonstrate nematic behavior in a structurally tetragonal
phase. Through a symmetry-based analysis of Bogoliubov Fermi surfaces that can
arise from broken time-reversal symmetry, we argue that the likely origin of
the time-reversal symmetry breaking is magnetic toroidal order. We show that
this magnetic toroidal order naturally appears as a consequence of either
static Néel antiferromagnetic order or the formation of a
spontaneous pair density wave superconducting order. Finally, we reveal that
independent of the presence of Bogoliubov Fermi surfaces, supercurrents will
induce Néel magnetic order in many Fe-based superconductors.
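For orientation, magnetic toroidal order carries a polar, time-reversal-odd
order parameter, often visualized through the toroidal moment of a vortex-like
spin arrangement (a standard definition, quoted for readers unfamiliar with
the term):

$$\mathbf{T} \propto \sum_{i} \mathbf{r}_i \times \mathbf{S}_i,$$

a vector odd under both time reversal and spatial inversion yet compatible
with zero net magnetization, which is the symmetry ingredient the analysis
above invokes.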