    An elementary approach to polynomial optimization on polynomial meshes

    A polynomial mesh on a multivariate compact set or manifold is a sequence of finite norming sets for polynomials whose norming constant is independent of the degree. We apply the recently developed theory of polynomial meshes to an elementary discrete approach for polynomial optimization on nonstandard domains, providing a rigorous (over)estimate of the convergence rate. Examples include surface/solid subregions of the sphere or torus, such as caps, lenses, lunes, and slices.
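    Purely as an illustrative sketch (not the paper's mesh construction, and without its rigorous rate estimate), the elementary idea is to evaluate the polynomial on a finite norming set and take the discrete extremum as a proxy for the true one. The grid below is only a hypothetical stand-in for a genuine polynomial mesh on a spherical cap; the polynomial p and the resolution m are likewise our own choices.

```python
# Illustrative only: estimate the minimum of a polynomial on a spherical cap
# by evaluating it on a finite point set and taking the smallest value.
# A real polynomial mesh would come with a norming constant that turns this
# discrete minimum into a rigorous (over)estimate of the true minimum.
import numpy as np

def p(x, y, z):
    # example trivariate polynomial (our choice, not from the paper)
    return x**4 - 2.0 * x * y * z + z**2

theta0 = np.pi / 3                       # cap: polar angle in [0, theta0]
m = 60                                   # grid resolution (our choice)
theta = np.linspace(0.0, theta0, m)
phi = np.linspace(0.0, 2.0 * np.pi, 2 * m, endpoint=False)
T, P = np.meshgrid(theta, phi)
X, Y, Z = np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)

discrete_min = p(X, Y, Z).min()          # elementary discrete estimate
print(f"discrete estimate of the minimum on the cap: {discrete_min:.4f}")
```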

    Worst-case examples for Lasserre's measure-based hierarchy for polynomial optimization on the hypercube

    We study the convergence rate of a hierarchy of upper bounds for polynomial optimization problems proposed by Lasserre [SIAM J. Optim. 21(3) (2011), pp. 864-885], and of a related hierarchy by De Klerk, Hess and Laurent [SIAM J. Optim. 27(1) (2017), pp. 347-367]. For polynomial optimization over the hypercube, we give a refined convergence analysis for the first hierarchy. We also show lower bounds on the convergence rate for both hierarchies on a class of examples. These lower bounds match the upper bounds and thus establish the true rate of convergence on these examples. Interestingly, these convergence rates are determined by the distribution of the extremal zeros of certain families of orthogonal polynomials.
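    For context, the measure-based upper bounds in question take the following general form; the notation below (K a compact set, μ a fixed reference measure on K, Σ[x]_r the sums of squares of degree at most 2r) is ours and need not match either paper exactly.

```latex
% Sketch of the measure-based upper bound hierarchy (standard setup assumed).
\[
  f^{(r)} \;=\; \min_{\sigma \in \Sigma[x]_r}
    \left\{ \int_K f(x)\,\sigma(x)\,d\mu(x) \;:\;
            \int_K \sigma(x)\,d\mu(x) = 1 \right\},
\]
\[
  f^{(r)} \;\ge\; f_{\min} := \min_{x \in K} f(x),
  \qquad f^{(r)} \downarrow f_{\min} \ \text{as } r \to \infty .
\]
```

    The convergence rates discussed in the abstract quantify how fast f^{(r)} approaches f_min on the hypercube.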

    Computational Hardness of Certifying Bounds on Constrained PCA Problems

    Given a random n×n symmetric matrix W drawn from the Gaussian orthogonal ensemble (GOE), we consider the problem of certifying an upper bound on the maximum value of the quadratic form x⊤Wx over all vectors x in a constraint set S ⊂ ℝ^n. For a certain class of normalized constraint sets S we show that, conditional on certain complexity-theoretic assumptions, there is no polynomial-time algorithm certifying a better upper bound than the largest eigenvalue of W. A notable special case included in our results is the hypercube S = {±1/√n}^n, which corresponds to the problem of certifying bounds on the Hamiltonian of the Sherrington-Kirkpatrick spin glass model from statistical physics. Our proof proceeds in two steps. First, we give a reduction from the detection problem in the negatively-spiked Wishart model to the above certification problem. We then give evidence that this Wishart detection problem is computationally hard below the classical spectral threshold, by showing that no low-degree polynomial can (in expectation) distinguish the spiked and unspiked models. This method for identifying computational thresholds was proposed in a sequence of recent works on the sum-of-squares hierarchy, and is believed to be correct for a large class of problems. Our proof can be seen as constructing a distribution over symmetric matrices that appears computationally indistinguishable from the GOE, yet is supported on matrices whose maximum quadratic form over x ∈ S is much larger than that of a GOE matrix.
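    As a small, purely illustrative experiment (not from the paper), one can compare the polynomial-time spectral certificate λ_max(W) with the true maximum of the quadratic form over the scaled hypercube, found by brute force for small n. The GOE normalization used below is one common convention and is an assumption on our part.

```python
# Toy comparison: spectral certificate vs. exhaustive maximization over
# S = {+-1/sqrt(n)}^n for a small GOE-like matrix (normalization assumed).
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 12                                        # small enough for brute force (2^12 points)
G = rng.standard_normal((n, n))
W = (G + G.T) / np.sqrt(2.0 * n)              # symmetric matrix with GOE-style scaling

spectral_certificate = np.linalg.eigvalsh(W)[-1]   # largest eigenvalue: polynomial time

best = -np.inf
for signs in itertools.product((-1.0, 1.0), repeat=n):
    x = np.asarray(signs) / np.sqrt(n)
    best = max(best, x @ W @ x)               # exhaustive search over the constraint set

print(f"lambda_max(W) = {spectral_certificate:.3f},  max over S = {best:.3f}")
```

    The gap between the two values is exactly what the hardness result says no polynomial-time certificate can close, under the stated complexity assumptions.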

    A Constrained L1 Minimization Approach to Sparse Precision Matrix Estimation

    A constrained L1 minimization method is proposed for estimating a sparse inverse covariance matrix based on a sample of n iid p-variate random variables. The resulting estimator is shown to enjoy a number of desirable properties. In particular, it is shown that the rate of convergence between the estimator and the true s-sparse precision matrix under the spectral norm is s√(log p/n) when the population distribution has either exponential-type tails or polynomial-type tails. Convergence rates under the elementwise L∞ norm and the Frobenius norm are also presented. In addition, graphical model selection is considered. The procedure is easily implementable by linear programming. Numerical performance of the estimator is investigated using both simulated and real data. In particular, the procedure is applied to analyze a breast cancer dataset and performs favorably in comparison to existing methods.
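    A minimal sketch of the kind of column-wise L1 program the abstract describes, cast as a linear program; the function name, the inputs Sigma_hat, j and lam, and the omission of the symmetrization step used by the full estimator are simplifications of ours, not the authors' implementation.

```python
# Sketch: estimate one column of a sparse precision matrix by solving
#   min ||beta||_1   s.t.   ||Sigma_hat @ beta - e_j||_inf <= lam
# as a linear program, splitting beta = u - v with u, v >= 0.
import numpy as np
from scipy.optimize import linprog

def l1_precision_column(Sigma_hat, j, lam):
    p = Sigma_hat.shape[0]
    e_j = np.zeros(p)
    e_j[j] = 1.0
    c = np.ones(2 * p)                               # objective: sum(u) + sum(v)
    A_ub = np.block([[Sigma_hat, -Sigma_hat],        #  Sigma_hat @ (u - v) <= e_j + lam
                     [-Sigma_hat, Sigma_hat]])       # -Sigma_hat @ (u - v) <= lam - e_j
    b_ub = np.concatenate([e_j + lam, lam - e_j])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    return res.x[:p] - res.x[p:]

# Example on simulated data (sizes and tuning value chosen arbitrarily)
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
S = np.cov(X, rowvar=False)
beta0 = l1_precision_column(S, j=0, lam=0.2)         # estimate of the first column
```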

    A new equilibrated residual method improving accuracy and efficiency of flux-free error estimates

    This paper presents a new methodology to compute guaranteed upper bounds for the energy norm of the error in the context of linear finite element approximations of the reaction–diffusion equation. The new approach revisits the ideas in Parés et al. (2009) [6, 4], with the goal of substantially reducing the computational cost of the flux-free method while retaining the good quality of the bounds. The new methodology also provides a technique to compute equilibrated boundary tractions, improving on standard equilibration strategies. The zeroth-order equilibration conditions are imposed using an alternative, less restrictive form of the first-order equilibration conditions, along with a new, efficient minimization criterion. This new equilibration strategy provides much more accurate upper bounds for the energy and requires only doubling the dimension of the local linear systems of equations to be solved.
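    Schematically, the guaranteed-bound property discussed above reads as follows; the notation (u the exact solution, u_h its linear finite element approximation, ‖·‖_E the energy norm, η the computable flux-free estimate) is assumed by us rather than quoted from the paper.

```latex
% Guaranteed upper bound on the energy norm of the error (schematic).
\[
  \| u - u_h \|_E \;\le\; \eta ,
\]
% where \eta is assembled from local contributions computed on patches,
% each requiring the solution of a linear system whose dimension is
% (per the abstract) only twice that of the standard local problems.
```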