
    Norms on complex matrices induced by random vectors II: extension of weakly unitarily invariant norms

    We improve and expand in two directions the theory of norms on complex matrices induced by random vectors. We first provide a simple proof of the classification of weakly unitarily invariant norms on the Hermitian matrices. We use this to extend the main theorem in [7] from exponent $d \geq 2$ to $d \geq 1$. Our proofs are much simpler than the originals: they do not require Lewis' framework for group invariance in convex matrix analysis. This clarification puts the entire theory on simpler foundations while extending its range of applicability.
    Comment: 10 pages
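    For orientation (this is the standard definition, not a statement taken from the paper): a norm on the Hermitian matrices is weakly unitarily invariant when it is constant on unitary orbits, i.e.

    \[ \|U A U^{*}\| = \|A\| \quad \text{for every Hermitian } A \text{ and every unitary } U. \]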

    Linear Convergence of Comparison-based Step-size Adaptive Randomized Search via Stability of Markov Chains

    In this paper, we consider comparison-based adaptive stochastic algorithms for solving numerical optimisation problems. We consider a specific subclass of algorithms that we call comparison-based step-size adaptive randomized search (CB-SARS), where the state variables at a given iteration are a vector of the search space and a positive parameter, the step-size, typically controlling the overall standard deviation of the underlying search distribution. We investigate the linear convergence of CB-SARS on scaling-invariant objective functions. Scaling-invariant functions preserve the ordering of points with respect to their function value when the points are scaled with the same positive parameter (the scaling is done w.r.t. a fixed reference point). This class of functions includes norms composed with strictly increasing functions as well as many non-quasi-convex and non-continuous functions. On scaling-invariant functions, we show the existence of a homogeneous Markov chain, as a consequence of natural invariance properties of CB-SARS (essentially scale-invariance and invariance to strictly increasing transformations of the objective function). We then derive sufficient conditions for global linear convergence of CB-SARS, expressed in terms of different stability conditions of the normalised homogeneous Markov chain (irreducibility, positivity, Harris recurrence, geometric ergodicity), and thus define a general methodology for proving global linear convergence of CB-SARS algorithms on scaling-invariant functions. As a by-product we provide a connection between comparison-based adaptive stochastic algorithms and Markov chain Monte Carlo algorithms.
    Comment: SIAM Journal on Optimization, Society for Industrial and Applied Mathematics, 201
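    As an illustration of the CB-SARS template (a toy sketch under simplifying assumptions, not the authors' algorithm or code), the (1+1)-ES with a 1/5-success-rule style step-size adaptation keeps exactly the state (x, sigma) described above and relies only on comparisons of objective values:

        import numpy as np

        def one_plus_one_es(f, x0, sigma0, iterations=1000, seed=0):
            # Toy (1+1)-ES with a 1/5-success-rule style step-size update.
            # State is (x, sigma); the update uses only the comparison
            # f(y) <= f(x), never the objective values themselves.
            rng = np.random.default_rng(seed)
            x, sigma = np.asarray(x0, dtype=float), float(sigma0)
            for _ in range(iterations):
                y = x + sigma * rng.standard_normal(x.shape)  # candidate sample
                if f(y) <= f(x):            # comparison-based acceptance
                    x = y
                    sigma *= np.exp(0.2)    # success: enlarge the step-size
                else:
                    sigma *= np.exp(-0.05)  # failure: shrink the step-size
            return x, sigma

        # Example on a scaling-invariant objective (a norm):
        x_best, sigma_final = one_plus_one_es(np.linalg.norm, x0=np.ones(10), sigma0=1.0)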

    Sparse phase retrieval via group-sparse optimization

    This paper deals with sparse phase retrieval, i.e., the problem of estimating a vector from quadratic measurements under the assumption that few components are nonzero. In particular, we consider the problem of finding the sparsest vector consistent with the measurements and reformulate it as a group-sparse optimization problem with linear constraints. Then, we analyze the convex relaxation of the latter based on the minimization of a block $\ell_1$-norm and show various exact recovery and stability results in the real and complex cases. Invariance to circular shifts and reflections is also discussed for real vectors measured via complex matrices.
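    Schematically (the grouping and constraints below are generic placeholders, not the specific reformulation used in the paper), a block $\ell_1$-norm relaxation with linear constraints takes the form

    \[ \min_{x} \ \sum_{g=1}^{G} \|x_{g}\|_{2} \quad \text{subject to} \quad A x = b, \]

    where the vector $x$ is partitioned into blocks $x_{1},\dots,x_{G}$ and the block $\ell_1$-norm promotes solutions in which few blocks are nonzero.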

    Diagonality Measures of Hermitian Positive-Definite Matrices with Application to the Approximate Joint Diagonalization Problem

    In this paper, we introduce properly-invariant diagonality measures of Hermitian positive-definite matrices. These diagonality measures are defined as distances or divergences between a given positive-definite matrix and its diagonal part. We then give closed-form expressions of these diagonality measures and discuss their invariance properties. The diagonality measure based on the log-determinant $\alpha$-divergence is general enough to include, as a special case, a diagonality criterion used by the signal processing community. These diagonality measures are then used to formulate minimization problems for finding the approximate joint diagonalizer of a given set of Hermitian positive-definite matrices. Numerical computations based on a modified Newton method are presented and discussed.
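    One well-known diagonality criterion in the signal-processing literature (given here for orientation; the abstract does not specify which criterion it subsumes) compares a Hermitian positive-definite matrix $C$ with its diagonal part $\operatorname{Diag}(C)$:

    \[ D(C) = \log\det\bigl(\operatorname{Diag}(C)\bigr) - \log\det(C) \;\ge\; 0, \]

    which vanishes if and only if $C$ is diagonal, by Hadamard's inequality.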

    Invariances in variance estimates

    We provide variants and improvements of the Brascamp-Lieb variance inequality which take into account the invariance properties of the underlying measure. This is applied to spectral gap estimates for log-concave measures with many symmetries and to non-interacting conservative spin systems.
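    For reference, the Brascamp-Lieb variance inequality states that for a probability measure $d\mu = e^{-V(x)}\,dx$ on $\mathbb{R}^{n}$ with $V$ smooth and strictly convex, every smooth $f$ satisfies

    \[ \operatorname{Var}_{\mu}(f) \;\le\; \int_{\mathbb{R}^{n}} \nabla f \cdot \bigl(\nabla^{2} V\bigr)^{-1} \nabla f \, d\mu . \]

    The variants announced in the abstract sharpen such bounds by exploiting the symmetries of $\mu$.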
