
    Degree of separability of bipartite quantum states

    We investigate the problem of finding the optimal convex decomposition of a bipartite quantum state into a separable part and a positive remainder, in which the weight of the separable part is maximal. This weight is naturally identified with the degree of separability of the state. In a recent work, the problem was solved for two-qubit states using semidefinite programming. In this paper, we describe a procedure to obtain the optimal decomposition of a bipartite state of any finite dimension via a sequence of semidefinite relaxations. The sequence of decompositions thus obtained is shown to converge to the optimal one. This provides, for the first time, a systematic method to determine the so-called optimal Lewenstein-Sanpera decomposition of any bipartite state. Numerical results are provided to illustrate this procedure, and the special case of rank-2 states is also discussed.
    Comment: 11 pages, 7 figures, submitted to PR
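    The weakest level of such a semidefinite hierarchy replaces the separable cone by the larger cone of PPT states (the Peres-Horodecki criterion), which for two qubits is exact. As a minimal illustration of the kind of constraint involved, not the authors' procedure itself, the following NumPy sketch checks the PPT condition for a two-qubit Werner state (function names are our own):

```python
import numpy as np

def partial_transpose(rho, dims=(2, 2)):
    """Partial transpose on the second subsystem of a bipartite state."""
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)      # indices (i, j, k, l)
    # rho^{T_B}[(i,j),(k,l)] = rho[(i,l),(k,j)]: swap axes j and l.
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def werner_state(p):
    """Werner state: p |psi-><psi-| + (1 - p) I/4."""
    psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
    return p * np.outer(psi_minus, psi_minus) + (1 - p) * np.eye(4) / 4

def is_ppt(rho):
    """PPT test: the first (weakest) relaxation of separability."""
    return np.linalg.eigvalsh(partial_transpose(rho)).min() >= -1e-12

# The Werner state is entangled exactly when p > 1/3; for two qubits
# PPT is necessary *and* sufficient for separability.
print(is_ppt(werner_state(0.2)))  # True  (separable)
print(is_ppt(werner_state(0.9)))  # False (entangled)
```

    For higher dimensions PPT is only necessary, which is why a converging sequence of relaxations, as in the paper, is needed.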

    Covariance Estimation: The GLM and Regularization Perspectives

    Finding an unconstrained and statistically interpretable reparameterization of a covariance matrix is still an open problem in statistics. Its solution is of central importance in covariance estimation, particularly in the recent high-dimensional data environment where enforcing the positive-definiteness constraint could be computationally expensive. We provide a survey of the progress made in modeling covariance matrices from two relatively complementary perspectives: (1) generalized linear models (GLM) or parsimony and use of covariates in low dimensions, and (2) regularization or sparsity for high-dimensional data. An emerging, unifying and powerful trend in both perspectives is that of reducing a covariance estimation problem to that of solving a sequence of regression problems. We point out several instances of this regression-based formulation. A notable case is the sparse estimation of a precision matrix, or Gaussian graphical model, leading to the fast graphical LASSO algorithm. Some advantages and limitations of the regression-based Cholesky decomposition relative to the classical spectral (eigenvalue) and variance-correlation decompositions are highlighted. The former provides an unconstrained and statistically interpretable reparameterization and guarantees the positive-definiteness of the estimated covariance matrix; it reduces the unintuitive task of covariance estimation to that of modeling a sequence of regressions, at the cost of imposing an a priori order among the variables. Elementwise regularization of the sample covariance matrix, such as banding, tapering and thresholding, has desirable asymptotic properties, and the resulting sparse covariance estimate is positive definite with probability tending to one for large samples and dimensions.
    Comment: Published in Statistical Science (http://www.imstat.org/sts/) at http://dx.doi.org/10.1214/11-STS358 by the Institute of Mathematical Statistics (http://www.imstat.org)
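    The regression-based (modified) Cholesky decomposition described above can be sketched in a few lines: regressing each variable on its predecessors yields a unit lower-triangular T and a diagonal D with T S T' = D for the sample covariance S, so any positive D gives back a positive-definite estimate. A minimal NumPy sketch on synthetic data (the function name and data are our own illustration):

```python
import numpy as np

def cholesky_regressions(X):
    """Modified Cholesky factors via sequential regressions.

    Regress each column of X on all preceding columns.  The negated
    coefficients fill a unit lower-triangular T and the residual
    variances fill a diagonal D, so that T S T' = D for the sample
    covariance S (equivalently S = T^{-1} D T^{-T}).
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    T = np.eye(p)
    d = np.empty(p)
    d[0] = Xc[:, 0].var()
    for j in range(1, p):
        beta, *_ = np.linalg.lstsq(Xc[:, :j], Xc[:, j], rcond=None)
        resid = Xc[:, j] - Xc[:, :j] @ beta
        T[j, :j] = -beta          # regression coefficients, negated
        d[j] = resid.var()        # innovation (residual) variance
    return T, d

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4)) @ rng.standard_normal((4, 4))
T, d = cholesky_regressions(X)
S = np.cov(X, rowvar=False, ddof=0)
# OLS residuals are mutually orthogonal, so T S T' is diagonal.
print(np.allclose(T @ S @ T.T, np.diag(d), atol=1e-8))  # True
```

    The entries of T and log d are unconstrained, which is exactly the reparameterization property the survey emphasizes; the price is the dependence on the chosen variable ordering.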

    Covariant gauge fixing and Kuchar decomposition

    The symplectic geometry of a broad class of generally covariant models is studied. The class is restricted so that the gauge group of the models coincides with the Bergmann-Komar group and the analysis can focus on the general covariance. A geometrical definition of gauge fixing at the constraint manifold is given; it is equivalent to a definition of a background (spacetime) manifold for each topological sector of a model. Every gauge fixing defines a decomposition of the constraint manifold into the physical phase space and the space of embeddings of the Cauchy manifold into the background manifold (Kuchar decomposition). Extensions of every gauge fixing and the associated Kuchar decomposition to a neighbourhood of the constraint manifold are shown to exist.
    Comment: Revtex, 35 pages, no figure

    Deriving the Normalized Min-Sum Algorithm from Cooperative Optimization

    The normalized min-sum algorithm achieves near-optimal performance at decoding LDPC codes, yet understanding the mathematical principle underlying it remains a critical question. Traditionally, the algorithm has been viewed as a good approximation to the sum-product algorithm, the best known algorithm for decoding LDPC and Turbo codes. This paper offers an alternative way to understand the normalized min-sum algorithm: it is derived directly from cooperative optimization, a recently discovered general method for global/combinatorial optimization. This approach provides an alternative theoretical basis for the algorithm and offers new insights into its power and limitations. It also gives a general framework for designing new decoding algorithms.
    Comment: Accepted by IEEE Information Theory Workshop, Chengdu, China, 200
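    As background, the normalized min-sum decoder itself scales each min-sum check-node message by a factor alpha < 1. The sketch below is a generic normalized min-sum decoder on a toy parity-check matrix (the (7,4) Hamming code), not the paper's cooperative-optimization derivation; the matrix and the choice alpha = 0.8 are purely illustrative:

```python
import numpy as np

def normalized_min_sum(H, llr, alpha=0.8, max_iter=20):
    """Normalized min-sum decoding of a binary linear code.

    H     : (m, n) parity-check matrix over GF(2)
    llr   : channel log-likelihood ratios (positive favours bit 0)
    alpha : normalization factor applied to check-node messages
    """
    m, n = H.shape
    msg_vc = H * llr                       # variable-to-check, init to LLRs
    for _ in range(max_iter):
        # Check-node update: sign product times the alpha-scaled minimum
        # of the *other* incoming magnitudes.
        msg_cv = np.zeros_like(msg_vc, dtype=float)
        for i in range(m):
            idx = np.flatnonzero(H[i])
            for j in idx:
                others = [msg_vc[i, k] for k in idx if k != j]
                sign = np.prod(np.sign(others))
                msg_cv[i, j] = alpha * sign * min(abs(v) for v in others)
        # Variable-node update and tentative hard decision.
        total = llr + msg_cv.sum(axis=0)
        hard = (total < 0).astype(int)
        if not (H @ hard % 2).any():       # all parity checks satisfied
            return hard, total
        msg_vc = H * (total - msg_cv)      # exclude own incoming message
    return hard, total

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
llr = np.array([-2.0, 2, 2, 2, 2, 2, 2])   # all-zero codeword, bit 0 flipped
decoded, _ = normalized_min_sum(H, llr)
print(decoded)  # [0 0 0 0 0 0 0]: the single error is corrected
```

    The only difference from plain min-sum is the factor alpha, which compensates for the overestimation of check-node message magnitudes.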

    Maximum block improvement and polynomial optimization


    Relative Entropy Relaxations for Signomial Optimization

    Signomial programs (SPs) are optimization problems specified in terms of signomials, which are weighted sums of exponentials composed with linear functionals of a decision variable. SPs are non-convex optimization problems in general, and families of NP-hard problems can be reduced to SPs. In this paper we describe a hierarchy of convex relaxations to obtain successively tighter lower bounds of the optimal value of SPs. This sequence of lower bounds is computed by solving increasingly larger-sized relative entropy optimization problems, which are convex programs specified in terms of linear and relative entropy functions. Our approach relies crucially on the observation that the relative entropy function -- by virtue of its joint convexity with respect to both arguments -- provides a convex parametrization of certain sets of globally nonnegative signomials with efficiently computable nonnegativity certificates via the arithmetic-geometric-mean inequality. By appealing to representation theorems from real algebraic geometry, we show that our sequences of lower bounds converge to the global optima for broad classes of SPs. Finally, we also demonstrate the effectiveness of our methods via numerical experiments.
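    The arithmetic-geometric-mean certificates underlying these relaxations can be illustrated on a toy one-variable signomial; the example below only checks an AM-GM nonnegativity certificate numerically and does not implement the relative entropy hierarchy itself:

```python
import numpy as np

# Toy signomial: g(x) = exp(2x) + exp(-2x).  By the AM-GM inequality,
#   (exp(2x) + exp(-2x)) / 2 >= sqrt(exp(2x) * exp(-2x)) = 1,
# so 2 is a *certified global* lower bound on g, attained at x = 0,
# obtained without any local search over x.
def g(x):
    return np.exp(2 * x) + np.exp(-2 * x)

agm_bound = 2.0

# Numerical sanity check of the certificate on a coarse grid.
xs = np.linspace(-3, 3, 10001)
print(g(xs).min())  # about 2.0, matching the AM-GM certificate
```

    Certificates of this form for sums of exponentials are what the relative entropy programs in the paper parametrize and optimize over.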