The power of symmetric extensions for entanglement detection
In this paper, we present new progress on the study of the symmetric
extension criterion for separability. First, we show that a perturbation of
order O(1/N) is sufficient and, in general, necessary to destroy the
entanglement of any state admitting an N Bose symmetric extension. On the other
hand, the minimum amount of local noise necessary to induce separability on
states arising from N Bose symmetric extensions with Positive Partial Transpose
(PPT) decreases at least as fast as O(1/N^2). From these results, we derive
upper bounds on the time and space complexity of the weak membership problem of
separability when attacked via algorithms that search for PPT symmetric
extensions. Finally, we show how to estimate the error we incur when we
approximate the set of separable states by the set of (PPT) N-extendable
quantum states in order to compute the maximum average fidelity in pure state
estimation problems, the maximal output purity of quantum channels, and the
geometric measure of entanglement.
Comment: see Video Abstract at http://www.quantiki.org/video_abstracts/0906273
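The PPT criterion referred to in this abstract is simple to state operationally: transpose one subsystem of the density matrix and check for negative eigenvalues. A minimal NumPy sketch, using the standard two-qubit isotropic-state family as a worked example (the family and its 1/3 threshold are textbook illustrations, not drawn from this abstract):

```python
import numpy as np

def partial_transpose(rho, dA=2, dB=2):
    """Partial transpose on subsystem B of a (dA*dB) x (dA*dB) density matrix."""
    r = rho.reshape(dA, dB, dA, dB)      # indices [iA, iB, jA, jB]
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)  # swap iB <-> jB

def is_ppt(rho):
    """True if the partial transpose has no negative eigenvalue (PPT)."""
    return bool(np.min(np.linalg.eigvalsh(partial_transpose(rho))) >= -1e-12)

# Isotropic two-qubit state: rho(p) = p |Phi+><Phi+| + (1-p) I/4
phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)
def iso(p):
    return p * np.outer(phi, phi) + (1 - p) * np.eye(4) / 4

print(is_ppt(iso(0.2)))  # True: PPT, hence separable for two qubits
print(is_ppt(iso(0.9)))  # False: NPT, hence entangled
```

For two qubits, PPT is necessary and sufficient for separability; in higher dimensions it is only necessary, which is why hierarchies of PPT symmetric extensions, as studied above, are needed.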
A study of the classification of low-dimensional data with supervised manifold learning
Supervised manifold learning methods learn data representations by preserving
the geometric structure of data while enhancing the separation between data
samples from different classes. In this work, we propose a theoretical study of
supervised manifold learning for classification. We consider nonlinear
dimensionality reduction algorithms that yield linearly separable embeddings of
training data and present generalization bounds for this class of algorithms. A
necessary condition for satisfactory generalization performance is that the
embedding allow the construction of a sufficiently regular interpolation
function in relation to the separation margin of the embedding. We show that
for supervised embeddings satisfying this condition, the classification error
decays at an exponential rate with the number of training samples. Finally, we
examine the separability of supervised nonlinear embeddings that aim to
preserve the low-dimensional geometric structure of data based on graph
representations. The proposed analysis is supported by experiments on several
real data sets.
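The core notion above, a nonlinear embedding that makes training data linearly separable with a margin, can be seen on a toy example. The XOR feature map below is a hypothetical illustration chosen for brevity, not the paper's algorithm:

```python
import numpy as np

# XOR-labeled data: not linearly separable in the original coordinates.
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
y = np.array([1, -1, -1, 1])  # label = sign(x1 * x2)

# Nonlinear embedding into R^3: append the product feature x1*x2.
def embed(X):
    return np.column_stack([X, X[:, 0] * X[:, 1]])

Z = embed(X)

# In the embedded space the hyperplane with normal w = (0, 0, 1) separates
# the two classes.
w = np.array([0.0, 0.0, 1.0])
margins = y * (Z @ w)   # signed distances of training points to the hyperplane
print(margins.min())    # 1.0: every point correctly classified with margin 1
```

The generalization bounds discussed above then tie such a separation margin to the regularity of an interpolation function that extends the classifier off the training samples.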
Complete hierarchies of efficient approximations to problems in entanglement theory
We investigate several problems in entanglement theory from the perspective
of convex optimization. This list of problems comprises (A) the decision
whether a state is multi-party entangled, (B) the minimization of expectation
values of entanglement witnesses with respect to pure product states, (C) the
closely related evaluation of the geometric measure of entanglement to quantify
pure multi-party entanglement, (D) the test whether states are multi-party
entangled on the basis of witnesses based on second moments and on the basis of
linear entropic criteria, and (E) the evaluation of instances of maximal output
purities of quantum channels. We show that these problems can be formulated as
certain optimization problems: as polynomially constrained problems employing
polynomials of degree three or less. We then apply recently established
methods from the theory of semi-definite relaxations to the formulated
optimization problems. By this construction we arrive at a hierarchy of
efficiently solvable approximations to the solution, approximating the exact
solution as closely as desired, in a way that is asymptotically complete. For
example, this results in a hierarchy of novel, efficiently decidable sufficient
criteria for multi-particle entanglement, such that every entangled state will
necessarily be detected in some step of the hierarchy. Finally, we present
numerical examples to demonstrate the practical accessibility of this approach.
Comment: 14 pages, 3 figures, tiny modifications, version to be published in
Physical Review
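Problem (C) above has a closed form in the simplest setting: for a pure bipartite state, the maximal overlap with product states equals the largest Schmidt coefficient, obtainable by an SVD. This standard linear-algebra reduction (not the paper's relaxation method) gives a concrete feel for the quantity the hierarchy approximates in the multi-party case:

```python
import numpy as np

def geometric_measure_pure_bipartite(psi, dA, dB):
    """1 - max over product states |<a,b|psi>|^2 for a pure bipartite state.

    Writing psi = sum_ij M[i,j] |i>|j>, the maximal overlap with a product
    state is the largest singular value of M (the largest Schmidt
    coefficient), so the measure is 1 - sigma_max^2.
    """
    M = np.asarray(psi, dtype=complex).reshape(dA, dB)
    smax = np.linalg.svd(M, compute_uv=False)[0]
    return 1 - smax**2

# Bell state |Phi+> = (|00> + |11>)/sqrt(2): maximal overlap 1/2, measure ~ 0.5
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(geometric_measure_pure_bipartite(bell, 2, 2))

# Product state |00>: measure ~ 0 (no entanglement)
print(geometric_measure_pure_bipartite(np.array([1, 0, 0, 0]), 2, 2))
```

For three or more parties no such spectral shortcut exists, which is what motivates the hierarchy of efficiently solvable relaxations described above.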
Largest separable balls around the maximally mixed bipartite quantum state
For finite-dimensional bipartite quantum systems, we find the exact size of
the largest balls, in spectral $l_p$ norms for $1 \le p \le \infty$, of
separable (unentangled) matrices around the identity matrix. This implies a
simple and intuitively meaningful geometrical sufficient condition for
separability of bipartite density matrices: that their purity $\mathrm{Tr}\,\rho^2$ not
be too large. Theoretical and experimental applications of these results
include algorithmic problems such as computing whether or not a state is
entangled, and practical ones such as obtaining information about the existence
or nature of entanglement in states reached by NMR quantum computation
implementations or other experimental situations.Comment: 7 pages, LaTeX. Motivation and verbal description of results and
their implications expanded and improved; one more proof included. This
version differs from the PRA version by the omission of some erroneous
sentences outside the theorems and proofs, which will be noted in an erratum
notice in PRA (and by minor notational differences
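The purity condition can be checked numerically in a few lines. The threshold $\mathrm{Tr}\,\rho^2 \le 1/(D-1)$, with $D$ the total Hilbert-space dimension, is our reading of the Gurvits-Barnum bound and is used here as an assumption; the isotropic-state example is illustrative:

```python
import numpy as np

def purity(rho):
    return float(np.trace(rho @ rho).real)

def inside_separable_ball(rho):
    """Sufficient (not necessary) separability test: purity <= 1/(D-1),
    where D is the total dimension. Threshold as we read the bound."""
    D = rho.shape[0]
    return purity(rho) <= 1 / (D - 1) + 1e-12

# Two-qubit isotropic state rho(p) = p |Phi+><Phi+| + (1-p) I/4, so D = 4.
phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)
def iso(p):
    return p * np.outer(phi, phi) + (1 - p) * np.eye(4) / 4

print(inside_separable_ball(iso(0.3)))  # True: purity below 1/3, separable
print(inside_separable_ball(iso(0.5)))  # False: test is inconclusive here
```

Note the asymmetry: a purity above the threshold says nothing by itself; the state at p = 0.5 happens to be entangled, but establishing that requires a criterion such as PPT.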
Learning Co-Sparse Analysis Operators with Separable Structures
In the co-sparse analysis model a set of filters is applied to a signal out
of the signal class of interest yielding sparse filter responses. As such, it
may serve as a prior in inverse problems, or for structural analysis of signals
that are known to belong to the signal class. The more the model is adapted to
the class, the more reliable it is for these purposes. The task of learning
such operators for a given class is therefore a crucial problem. In many
applications, it is also required that the filter responses are obtained in a
timely manner, which can be achieved by filters with a separable structure. Not
only can operators of this sort be efficiently used for computing the filter
responses, but they also have the advantage that fewer training samples are
required to obtain a reliable estimate of the operator. The first contribution
of this work is to give theoretical evidence for this claim by providing an
upper bound for the sample complexity of the learning process. The second is a
stochastic gradient descent (SGD) method designed to learn an analysis operator
with separable structures, which includes a novel and efficient step size
selection rule. Numerical experiments are provided that link the sample
complexity to the convergence speed of the SGD algorithm.
Comment: 11 pages double column, 4 figures, 3 tables
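The efficiency claim for separable operators rests on the Kronecker identity $(A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(B X A^{\top})$: a separable analysis operator is applied through two small matrix products instead of one large matrix-vector product. A minimal NumPy sketch of this identity (illustrative only, not the paper's learning algorithm; sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m = n = 8                        # signal (e.g. image patch) is m x n
A = rng.standard_normal((5, m))  # small filter factor for one dimension
B = rng.standard_normal((6, n))  # small filter factor for the other
X = rng.standard_normal((m, n))  # the signal

# Full (non-separable) operator: a (5*6) x (m*n) matrix applied to vec(X).
Omega = np.kron(A, B)
full = Omega @ X.reshape(-1, order="F")   # column-major vectorization

# Separable application: two small products yield the same filter responses.
sep = (B @ X @ A.T).reshape(-1, order="F")

print(np.allclose(full, sep))  # True
```

The separable route costs two products with 5x8 and 6x8 factors rather than one product with a 30x64 matrix; the gap grows quickly with the signal size, which is what makes separable structures attractive for timely filter responses.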