Quantum Detection with Unknown States
We address the problem of distinguishing among a finite collection of quantum
states, when the states are not entirely known. For completely specified
states, necessary and sufficient conditions on a quantum measurement minimizing
the probability of a detection error have been derived. In this work, we assume
that each of the states in our collection is a mixture of a known state and an
unknown state. We investigate two criteria for optimality. The first is
minimization of the worst-case probability of a detection error. For the second
we assume a probability distribution on the unknown states, and minimize the
expected probability of a detection error.
We find that under both criteria, the optimal detectors are equivalent to the
optimal detectors of an ``effective ensemble''. In the worst case, the
effective ensemble is composed of the known states with altered prior
probabilities, and in the average case it is made up of altered states with the
original prior probabilities.
Comment: Refereed version. Improved numerical examples and figures. A few typos fixed.
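For context, a minimal sketch of the standard minimum-error discrimination problem this abstract builds on (standard formulation; the notation ρ_i, p_i, Π_i is assumed here and not taken from the abstract): given states ρ_i with prior probabilities p_i, one chooses measurement operators Π_i to
\[
\min_{\{\Pi_i\}} \; 1 - \sum_{i=1}^{m} p_i \,\mathrm{Tr}(\rho_i \Pi_i)
\quad \text{subject to} \quad \Pi_i \succeq 0, \;\; \sum_{i=1}^{m} \Pi_i = I .
\]
The result stated above is that, when each state is only partially known, both the worst-case and the average-case criteria reduce to a problem of exactly this form for a suitably modified ("effective") ensemble.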
On the existence of 0/1 polytopes with high semidefinite extension complexity
In a work of Rothvoß it was shown that there exists a 0/1 polytope (a polytope
whose vertices are in {0,1}^n) such that any higher-dimensional polytope
projecting to it must have 2^{\Omega(n)} facets, i.e., its linear extension
complexity is exponential. The question whether there exists a 0/1 polytope
with high PSD extension complexity was left open. We answer this question in
the affirmative by showing that there is a 0/1 polytope such that any
spectrahedron projecting to it must be the intersection of a semidefinite cone
of dimension 2^{\Omega(n)} and an affine space. Our proof relies on a new
technique to rescale semidefinite factorizations.
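As a reminder of the notion involved (a standard definition, not quoted from the abstract): the PSD (semidefinite) extension complexity of a polytope P is the smallest d for which P is the image under a linear map π of an affine slice of the cone S^d_+ of d × d positive semidefinite matrices,
\[
\mathrm{xc}_{\mathrm{PSD}}(P) \;=\; \min \left\{ d \;:\; P = \pi\!\left( \mathcal{S}^{d}_{+} \cap L \right) \text{ for some affine subspace } L \text{ and linear map } \pi \right\}.
\]
The result above exhibits a 0/1 polytope for which this quantity is 2^{\Omega(n)}, complementing the known exponential lower bound for linear extension complexity.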
DDGun: An untrained method for the prediction of protein stability changes upon single and multiple point variations
Background: Predicting the effect of single point variations on protein stability constitutes a crucial step toward understanding the relationship between protein structure and function. To this end, several methods have been developed to predict changes in the Gibbs free energy of unfolding (ΔΔG) between wild type and variant proteins, using sequence and structure information. Most of the available methods, however, do not exhibit the anti-symmetric prediction property, which guarantees that the predicted ΔΔG value for a variation is the exact opposite of that predicted for the reverse variation, i.e., ΔΔG(A → B) = -ΔΔG(B → A), where A and B are amino acids.
Results: Here we introduce simple anti-symmetric features, based on evolutionary information, which are combined to define an untrained method, DDGun (DDG untrained). DDGun is a simple approach based on evolutionary information that predicts the ΔΔG for single and multiple variations from sequence and structure information (DDGun3D). Our method achieves remarkable performance without any training on the experimental datasets, reaching Pearson correlation coefficients between predicted and measured ΔΔG values of ~0.5 and ~0.4 for single and multiple site variations, respectively. Surprisingly, the performance of DDGun is comparable with that of state-of-the-art methods. DDGun also naturally predicts multiple site variations, thereby defining a benchmark method for both single site and multiple site predictors. DDGun is anti-symmetric by construction, predicting the ΔΔG of a reciprocal variation as almost exactly (depending on the sequence profile) the negative of the ΔΔG of the direct variation. This is a valuable property that is missing in the majority of the methods.
Conclusions: Evolutionary information alone, combined in an untrained method, can achieve remarkably high performance in the prediction of ΔΔG upon protein mutation. Non-trained approaches like DDGun represent a valid benchmark both for scoring the predictive power of individual features and for assessing the learning capability of supervised methods.
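To illustrate how anti-symmetry can hold by construction (a hedged sketch; the feature shown is illustrative, not a formula quoted from the paper): a score built as a difference of profile-based substitution scores flips sign when the direction of the variation is reversed,
\[
f(A \to B) \;=\; s(B \mid \mathrm{profile}) \;-\; s(A \mid \mathrm{profile})
\qquad \Longrightarrow \qquad
f(B \to A) \;=\; -\,f(A \to B),
\]
so any predictor formed as a weighted combination of such difference features satisfies ΔΔG(A → B) = -ΔΔG(B → A), up to the dependence of the profile on the sequence context (the caveat mentioned above).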
Spectral Sparsification and Regret Minimization Beyond Matrix Multiplicative Updates
In this paper, we provide a novel construction of the linear-sized spectral
sparsifiers of Batson, Spielman and Srivastava [BSS14]. While previous
constructions required substantially higher running time [BSS14, Zou12], our
sparsification routine can be implemented in almost-quadratic running time.
The fundamental conceptual novelty of our work is the leveraging of a strong
connection between sparsification and a regret minimization problem over
density matrices. This connection was known to provide an interpretation of the
randomized sparsifiers of Spielman and Srivastava [SS11] via the application of
matrix multiplicative weight updates (MWU) [CHS11, Vis14]. In this paper, we
explain how matrix MWU naturally arises as an instance of the
Follow-the-Regularized-Leader framework and generalize this approach to yield a
larger class of updates. This new class allows us to accelerate the
construction of linear-sized spectral sparsifiers, and give novel insights on
the motivation behind Batson, Spielman and Srivastava [BSS14].
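As a rough sketch of the connection described above (standard derivation; the notation is not taken from the abstract): Follow-the-Regularized-Leader over density matrices with the negative von Neumann entropy Tr(X log X) as regularizer recovers the matrix multiplicative weight update,
\[
X_{t+1} \;=\; \arg\min_{X \succeq 0,\ \mathrm{Tr}(X)=1} \Big\{ \eta \sum_{s \le t} \langle F_s, X \rangle + \mathrm{Tr}(X \log X) \Big\}
\;=\; \frac{\exp\!\big(-\eta \sum_{s \le t} F_s\big)}{\mathrm{Tr}\,\exp\!\big(-\eta \sum_{s \le t} F_s\big)},
\]
where the F_s are the loss matrices. Replacing the entropy by other regularizers yields the larger class of updates that the paper uses to accelerate sparsification.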
On representations of the feasible set in convex optimization
We consider the convex optimization problem min { f(x) : g_j(x) ≤ 0, j = 1, ..., m },
where f is convex, the feasible set K is convex and Slater's
condition holds, but the functions g_j are not necessarily convex. We show
that for any representation of K that satisfies a mild nondegeneracy
assumption, every minimizer is a Karush-Kuhn-Tucker (KKT) point and conversely
every KKT point is a minimizer. That is, the KKT optimality conditions are
necessary and sufficient, as in convex programming where one assumes that the
g_j are convex. So in convex optimization, and as far as one is concerned
with KKT points, what really matters is the geometry of K and not so much its
representation.
Comment: to appear in Optimization Letters.
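For reference, the KKT conditions in question, stated for the problem above (standard statement, with f and g_j as in the formulation given): a feasible point x* is a KKT point if there exist multipliers λ_j ≥ 0 such that
\[
\nabla f(x^{*}) + \sum_{j=1}^{m} \lambda_j \nabla g_j(x^{*}) = 0,
\qquad \lambda_j \, g_j(x^{*}) = 0, \quad j = 1, \dots, m .
\]
The result above says that, under a mild nondegeneracy assumption on the representation of K, these conditions are both necessary and sufficient for optimality even though the g_j need not be convex.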
Separable Multipartite Mixed States - Operational Asymptotically Necessary and Sufficient Conditions
We introduce an operational procedure to determine, with arbitrary
probability and accuracy, an optimal entanglement witness for every multipartite
entangled state. This method provides an operational criterion for separability
which is asymptotically necessary and sufficient. Our results are also
generalized to detect all different types of multipartite entanglement.
Comment: 4 pages, 2 figures, submitted to Physical Review Letters. Revised version with new calculation
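For context, the defining property of an entanglement witness W for a state ρ (textbook definition, not quoted from the abstract):
\[
\mathrm{Tr}(W\sigma) \ge 0 \ \text{ for every separable state } \sigma,
\qquad \mathrm{Tr}(W\rho) < 0 ,
\]
so a measured negative expectation value of W certifies that ρ is entangled. The procedure above searches for such a witness operationally, which is what makes the resulting separability criterion asymptotically necessary and sufficient.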
Optimal Design of Robust Combinatorial Mechanisms for Substitutable Goods
In this paper we consider a multidimensional mechanism design problem for
selling discrete substitutable items to a group of buyers. Previous work on
this problem mostly focuses on a stochastic description of the valuations used by the
seller. However, in certain applications, no prior information regarding
buyers' preferences is known. To address this issue, we consider uncertain
valuations and formulate the problem in a robust optimization framework: the
objective is to minimize the maximum regret. For a special case of the
revenue-maximizing pricing problem we present a solution method based on a
mixed-integer linear programming formulation.
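A hedged sketch of what a minimum-max-regret objective looks like in this setting (generic form; the symbols p, v, U and Rev are illustrative and not taken from the abstract): with pricing decision p, uncertain valuation profile v ranging over an uncertainty set U, and seller revenue Rev(p, v), the seller solves
\[
\min_{p} \; \max_{v \in U} \Big( \max_{p'} \mathrm{Rev}(p', v) \;-\; \mathrm{Rev}(p, v) \Big),
\]
i.e., it minimizes the worst-case gap to the revenue achievable had the valuations been known. For the revenue-maximizing pricing special case, the abstract reports a solution method that casts this problem as a mixed-integer linear program.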
Algorithm Engineering in Robust Optimization
Robust optimization is a young and emerging field of research that has received
a considerable increase of interest over the last decade. In this paper, we
argue that the algorithm engineering methodology fits very well to the
field of robust optimization and yields a rewarding new perspective on both the
current state of research and open research directions.
To this end we go through the algorithm engineering cycle of design and
analysis of concepts, development and implementation of algorithms, and
theoretical and experimental evaluation. We show that many ideas of algorithm
engineering have already been applied in publications on robust optimization.
Most work on robust optimization is devoted to the analysis of concepts and the
development of algorithms, some papers deal with the evaluation of a particular
concept in case studies, and work on the comparison of concepts is just starting. What
is still a drawback in many papers on robustness is the missing link that would feed
the results of the experiments back into the design.
Optimal quantum detectors for unambiguous detection of mixed states
We consider the problem of designing an optimal quantum detector that
distinguishes unambiguously between a collection of mixed quantum states. Using
arguments of duality in vector space optimization, we derive necessary and
sufficient conditions for an optimal measurement that maximizes the probability
of correct detection. We show that the previous optimal measurements that were
derived for certain special cases satisfy these optimality conditions. We then
consider state sets with strong symmetry properties, and show that the optimal
measurement operators for distinguishing between these states share the same
symmetries, and can be computed very efficiently by solving a reduced-size
semidefinite program.
Comment: Submitted to Phys. Rev.
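A hedged sketch of the unambiguous discrimination problem being optimized (standard formulation; the notation ρ_i, p_i, Π_i is assumed here, not taken from the abstract): given mixed states ρ_i with priors p_i, one designs measurement operators Π_1, ..., Π_m plus an inconclusive outcome Π_0 = I - Σ_i Π_i and solves
\[
\max_{\{\Pi_i\}} \; \sum_{i=1}^{m} p_i \,\mathrm{Tr}(\rho_i \Pi_i)
\quad \text{subject to} \quad
\mathrm{Tr}(\rho_j \Pi_i) = 0 \ (i \neq j),\qquad
\Pi_i \succeq 0, \qquad I - \sum_{i=1}^{m} \Pi_i \succeq 0 ,
\]
which is a semidefinite program; the duality-based optimality conditions and the symmetry reductions described above apply to programs of this form.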