Multiple-copy state discrimination: Thinking globally, acting locally
We theoretically investigate schemes to discriminate between two
nonorthogonal quantum states given multiple copies. We consider a number of
state discrimination schemes as applied to nonorthogonal, mixed states of a
qubit. In particular, we examine the difference that local and global
optimization of local measurements makes to the probability of obtaining an
erroneous result, in the regime of a finite number of copies N, and in the
asymptotic limit as N → ∞. Five schemes are considered:
optimal collective measurements over all copies, locally optimal local
measurements in a fixed single-qubit measurement basis, globally optimal fixed
local measurements, locally optimal adaptive local measurements, and globally
optimal adaptive local measurements. Here, adaptive measurements are those for
which the measurement basis can depend on prior measurement results. For each
of these measurement schemes we determine the probability of error (for finite
N) and the scaling of this error in the asymptotic limit. In the asymptotic
limit, adaptive schemes have no advantage over the optimal fixed local scheme,
and except for states with less than 2% mixture, the most naive scheme (locally
optimal fixed local measurements) is as good as any noncollective scheme. For
finite N, however, the most sophisticated local scheme (globally optimal
adaptive local measurements) is better than any other noncollective scheme, for
any degree of mixture.
Comment: 11 pages, 14 figures
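The collective-measurement benchmark that the local schemes are compared against is the Helstrom bound. The sketch below illustrates it for the simplified case of two equiprobable pure states (the abstract treats mixed states, which is an assumption dropped here); for N identical copies, the joint-state overlap is the single-copy overlap raised to the Nth power.

```python
import numpy as np

def helstrom_error(overlap, n_copies):
    """Minimum (collective) error probability for discriminating two
    equiprobable pure states with single-copy overlap `overlap`, given
    n_copies identical copies: the N-copy overlap is overlap**n_copies."""
    ov = overlap ** n_copies
    return 0.5 * (1.0 - np.sqrt(1.0 - ov ** 2))

# The error probability shrinks exponentially as copies accumulate:
for n in (1, 2, 5, 10):
    print(n, helstrom_error(0.9, n))
```

This is only the collective lower bound; the paper's point is how closely the various local and adaptive schemes approach it.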
A Metaheuristic Adaptive Cubature Based Algorithm to Find Bayesian Optimal Designs for Nonlinear Models
Finding Bayesian optimal designs for nonlinear models is a difficult task because the optimality criterion typically requires us to evaluate complex integrals before we perform a constrained optimization. We propose a hybridized method where we combine an adaptive multidimensional integration algorithm and a metaheuristic algorithm called the imperialist competitive algorithm to find Bayesian optimal designs. We apply our numerical method to a few challenging design problems to demonstrate its efficiency. They include finding D-optimal designs for an item response model commonly used in education, Bayesian optimal designs for survival models, and Bayesian optimal designs for a four-parameter sigmoid Emax dose-response model. Supplementary materials for this article are available online and they contain an R package for implementing the proposed algorithm and codes for reproducing all the results in this paper.
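To make the kind of criterion being optimized concrete, the sketch below evaluates a prior-averaged log-determinant (a Bayesian D-criterion) for a two-parameter logistic model, a common item-response form. A plain grid average stands in for the paper's adaptive cubature, and no design optimization is performed; the design points and prior ranges are invented purely for illustration.

```python
import numpy as np

def fisher_info(xs, ws, a, b):
    """Fisher information matrix of the two-parameter logistic model
    p(x) = 1 / (1 + exp(-(a + b*x))) at design points xs with weights ws."""
    M = np.zeros((2, 2))
    for x, w in zip(xs, ws):
        p = 1.0 / (1.0 + np.exp(-(a + b * x)))
        f = np.array([1.0, x])
        M += w * p * (1.0 - p) * np.outer(f, f)
    return M

def bayes_d_criterion(xs, ws, prior_a, prior_b):
    """Prior-averaged log det of the information matrix; a crude grid
    average replaces the adaptive cubature used in the paper."""
    vals = [np.log(np.linalg.det(fisher_info(xs, ws, a, b)))
            for a in prior_a for b in prior_b]
    return float(np.mean(vals))

prior_a = np.linspace(-1.0, 1.0, 5)
prior_b = np.linspace(0.5, 1.5, 5)
# Score two candidate three-point designs with equal weights:
print(bayes_d_criterion([-3, 0, 3], [1 / 3] * 3, prior_a, prior_b))
print(bayes_d_criterion([-1, 0, 1], [1 / 3] * 3, prior_a, prior_b))
```

A metaheuristic such as the imperialist competitive algorithm would then search over the points and weights to maximize this criterion.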
Optimization Methods for Inverse Problems
Optimization plays an important role in solving many inverse problems.
Indeed, the task of inversion often either involves or is fully cast as a
solution of an optimization problem. In this light, the mere non-linear,
non-convex, and large-scale nature of many of these inversions gives rise to
some very challenging optimization problems. The inverse problem community has
long been developing various techniques for solving such optimization tasks.
However, other, seemingly disjoint communities, such as that of machine
learning, have developed, almost in parallel, interesting alternative methods
which might have stayed under the radar of the inverse problem community. In
this survey, we aim to change that. In doing so, we first discuss current
state-of-the-art optimization methods widely used in inverse problems. We then
survey recent related advances in addressing similar challenges in problems
faced by the machine learning community, and discuss their potential advantages
for solving inverse problems. By highlighting the similarities among the
optimization challenges faced by the inverse problem and the machine learning
communities, we hope that this survey can serve as a bridge in bringing
together these two communities and encourage cross-fertilization of ideas.
Comment: 13 pages
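As a minimal instance of casting inversion as an optimization problem, the sketch below runs plain gradient descent on a Tikhonov-regularized linear least-squares objective, one of the standard baselines in this area; the operator and data are synthetic, and the step size and iteration count are illustrative choices.

```python
import numpy as np

def solve_inverse_gd(A, y, lam=0.1, lr=0.01, steps=2000):
    """Gradient descent on the Tikhonov-regularized objective
    0.5*||A x - y||^2 + 0.5*lam*||x||^2, a common baseline for
    linear inverse problems."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - y) + lam * x  # gradient of the objective
        x -= lr * grad
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))       # synthetic forward operator
x_true = np.arange(1.0, 6.0)       # synthetic ground truth
y = A @ x_true                     # noiseless observations
x_hat = solve_inverse_gd(A, y, lam=1e-3)
print(np.round(x_hat, 2))  # close to x_true for small lam, noiseless data
```

Much of the survey's subject matter concerns what replaces this simple loop when the problem is nonlinear, nonconvex, or too large for full gradients, e.g. the stochastic and accelerated methods developed in machine learning.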