
    Toward a unified theory of sparse dimensionality reduction in Euclidean space

    Let $\Phi\in\mathbb{R}^{m\times n}$ be a sparse Johnson-Lindenstrauss transform [KN14] with $s$ non-zeroes per column. For a subset $T$ of the unit sphere and a given $\varepsilon\in(0,1/2)$, we study the settings of $m,s$ required to ensure $\mathop{\mathbb{E}}_\Phi \sup_{x\in T} \left|\|\Phi x\|_2^2 - 1 \right| < \varepsilon$, i.e., so that $\Phi$ preserves the norm of every $x\in T$ simultaneously and multiplicatively up to $1+\varepsilon$. We introduce a new complexity parameter, which depends on the geometry of $T$, and show that it suffices to choose $s$ and $m$ such that this parameter is small. Our result is a sparse analog of Gordon's theorem, which was concerned with a dense $\Phi$ having i.i.d. Gaussian entries. We qualitatively unify several results related to the Johnson-Lindenstrauss lemma, subspace embeddings, and Fourier-based restricted isometries. Our work also implies new results on using the sparse Johnson-Lindenstrauss transform in numerical linear algebra, classical and model-based compressed sensing, manifold learning, and constrained least squares problems such as the Lasso.
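    A minimal sketch, under assumed parameters, of the object the abstract studies: a sparse matrix $\Phi$ with $s$ non-zero entries of size $\pm 1/\sqrt{s}$ per column, applied to a finite set $T$ of unit vectors to estimate the supremum of $|\|\Phi x\|_2^2 - 1|$. The values of $m$, $s$, the random test set $T$, and this particular sparse construction are illustrative assumptions rather than the paper's prescriptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 1000, 200, 8      # ambient dimension, target dimension, non-zeros per column (assumed)

# Sparse JL-style matrix: each column has s entries of value +/- 1/sqrt(s) in random rows.
Phi = np.zeros((m, n))
for j in range(n):
    rows = rng.choice(m, size=s, replace=False)
    Phi[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)

# T: a finite set of random unit vectors standing in for a structured subset of the sphere.
T = rng.standard_normal((500, n))
T /= np.linalg.norm(T, axis=1, keepdims=True)

# Empirical analog of sup_{x in T} | ||Phi x||_2^2 - 1 | for this single draw of Phi.
distortion = np.abs(np.sum((T @ Phi.T) ** 2, axis=1) - 1.0)
print("max distortion over T:", distortion.max())
```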

    lp-Recovery of the Most Significant Subspace among Multiple Subspaces with Outliers

    We assume data sampled from a mixture of d-dimensional linear subspaces with spherically symmetric distributions within each subspace and an additional outlier component with a spherically symmetric distribution within the ambient space (for simplicity we may assume that all distributions are uniform on their corresponding unit spheres). We also assume mixture weights for the different components. We say that one of the underlying subspaces of the model is most significant if its mixture weight is higher than the sum of the mixture weights of all other subspaces. We study the recovery of the most significant subspace by minimizing the lp-averaged distances of data points from d-dimensional subspaces, where p>0. Unlike other lp minimization problems, this minimization is non-convex for all p>0 and thus requires different methods for its analysis. We show that if 0<p<=1, then for any fraction of outliers the most significant subspace can be recovered by lp minimization with overwhelming probability (which depends on the generating distribution and its parameters). We show that when adding small noise around the underlying subspaces the most significant subspace can be nearly recovered by lp minimization for any 0<p<=1 with an error proportional to the noise level. On the other hand, if p>1 and there is more than one underlying subspace, then with overwhelming probability the most significant subspace cannot be recovered or nearly recovered. This last result does not require spherically symmetric outliers. Comment: This is a revised version of the part of 1002.1994 that deals with single subspace recovery. V3: Improved estimates (in particular for Lemma 3.1 and for estimates relying on it), asymptotic dependence of probabilities and constants on D and d, and further clarifications; for simplicity it assumes uniform distributions on spheres. V4: minor revision for the published version.
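    The following sketch only evaluates the lp-averaged distance objective described above for candidate subspaces given by orthonormal bases; it does not attempt the minimization over all d-dimensional subspaces that the paper analyzes. The synthetic mixture model, the dimensions, and the choice p = 1 are illustrative assumptions.

```python
import numpy as np

def lp_subspace_objective(X, B, p=1.0):
    """Sum of dist(x_i, L)^p, where L is the subspace spanned by the orthonormal columns of B."""
    residual = X - (X @ B) @ B.T          # component of each point orthogonal to L
    return np.sum(np.linalg.norm(residual, axis=1) ** p)

rng = np.random.default_rng(1)
D, d, N = 10, 2, 300
B_true, _ = np.linalg.qr(rng.standard_normal((D, d)))    # the "most significant" subspace
inliers = rng.standard_normal((N, d)) @ B_true.T          # spherically symmetric within the subspace
outliers = rng.standard_normal((N // 2, D))               # spherically symmetric in the ambient space
X = np.vstack([inliers, outliers])

B_rand, _ = np.linalg.qr(rng.standard_normal((D, d)))     # a competing random subspace
print(lp_subspace_objective(X, B_true), lp_subspace_objective(X, B_rand))
```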

    Precision Tests of the Standard Model

    30 pages, 11 figures, 11 tables. Contribution presented at the 25th Winter Meeting on Fundamental Physics, held 3-8 March 1997 in Formigal (Spain). Precision measurements of electroweak observables provide stringent tests of the Standard Model structure and an accurate determination of its parameters. An overview of the present experimental status is presented. This work has been supported in part by CICYT (Spain) under grant No. AEN-96-1718. Peer reviewed

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low-complexity. These priors encompass as popular examples sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of $\ell^2$-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problems.
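    Below is a minimal sketch of the forward-backward proximal splitting scheme mentioned in item (iii), written against a generic proximal operator so that the same loop serves several of the low-complexity priors listed above (soft-thresholding for the l1 norm, singular-value thresholding for the nuclear norm). All problem sizes and the regularization weight are illustrative assumptions.

```python
import numpy as np

def forward_backward(grad_f, prox_g, L, x0, n_iter=300):
    """Minimize f(x) + g(x): gradient step on the smooth part f, proximal step on g."""
    x = x0.copy()
    for _ in range(n_iter):
        x = prox_g(x - grad_f(x) / L, 1.0 / L)
    return x

# Example proximal operators for two low-complexity priors from the abstract.
def prox_l1(v, t):                       # sparsity: soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_nuclear(V, t):                  # low rank: singular-value soft-thresholding
    U, s, Vt = np.linalg.svd(V, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

# Illustrative l1-regularized least squares: min_x 0.5*||Ax - b||^2 + lam*||x||_1.
rng = np.random.default_rng(2)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200); x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(50)
lam, L = 0.1, np.linalg.norm(A, 2) ** 2
x_hat = forward_backward(lambda x: A.T @ (A @ x - b),
                         lambda v, t: prox_l1(v, lam * t), L, np.zeros(200))
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-3))
```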

    Necessary and sufficient conditions of solution uniqueness in $\ell_1$ minimization

    This paper shows that the solutions to various convex $\ell_1$ minimization problems are \emph{unique} if and only if a common set of conditions is satisfied. This result applies broadly to the basis pursuit model, basis pursuit denoising model, Lasso model, as well as other $\ell_1$ models that either minimize $f(Ax-b)$ or impose the constraint $f(Ax-b)\leq\sigma$, where $f$ is a strictly convex function. For these models, this paper proves that, given a solution $x^*$ and defining $I=\mathrm{supp}(x^*)$ and $s=\mathrm{sign}(x^*_I)$, $x^*$ is the unique solution if and only if $A_I$ has full column rank and there exists $y$ such that $A_I^T y = s$ and $|a_i^T y| < 1$ for $i\not\in I$. This condition was previously known to be sufficient for the basis pursuit model to have a unique solution supported on $I$. Indeed, it is also necessary, and it applies to a variety of other $\ell_1$ models. The paper also discusses ways to recognize unique solutions and verify the uniqueness conditions numerically. Comment: 6 pages; revised version; submitted.
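    A small numerical sketch of checking the condition stated above for a candidate solution $x^*$: verify that $A_I$ has full column rank and look for a vector $y$ with $A_I^T y = \mathrm{sign}(x^*_I)$ and $|a_i^T y| < 1$ off the support. Using the minimum-norm solution of $A_I^T y = s$ is just one heuristic choice of $y$ (an assumption here), so failure of this particular $y$ does not by itself refute uniqueness.

```python
import numpy as np

def check_uniqueness_certificate(A, x_star, tol=1e-10):
    """Heuristic check of the uniqueness condition for a candidate l1 solution x_star."""
    I = np.flatnonzero(np.abs(x_star) > tol)
    A_I = A[:, I]
    s = np.sign(x_star[I])
    if np.linalg.matrix_rank(A_I) < len(I):
        return False                                   # A_I does not have full column rank
    y = np.linalg.lstsq(A_I.T, s, rcond=None)[0]       # minimum-norm y with A_I^T y = s
    off_support = np.setdiff1d(np.arange(A.shape[1]), I)
    return np.max(np.abs(A[:, off_support].T @ y)) < 1.0

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 100))
x_star = np.zeros(100); x_star[[4, 17, 60]] = [1.5, -0.7, 2.0]
print(check_uniqueness_certificate(A, x_star))
```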

    Estimation in high dimensions: a geometric perspective

    This tutorial provides an exposition of a flexible geometric framework for high-dimensional estimation problems with constraints. The tutorial develops geometric intuition about high-dimensional sets, justifies it with some results of asymptotic convex geometry, and demonstrates connections between geometric results and estimation problems. The theory is illustrated with applications to sparse recovery, matrix completion, quantization, linear and logistic regression, and generalized linear models. Comment: 56 pages, 9 figures. Multiple minor changes.

    Nonmonotone Barzilai-Borwein Gradient Algorithm for $\ell_1$-Regularized Nonsmooth Minimization in Compressive Sensing

    This paper is devoted to minimizing the sum of a smooth function and a nonsmooth $\ell_1$-regularized term. This problem includes as a special case the $\ell_1$-regularized convex minimization problem arising in signal processing, compressive sensing, machine learning, data mining, etc. However, the non-differentiability of the $\ell_1$-norm makes the problem more challenging, especially for the large instances encountered in many practical applications. This paper proposes, analyzes, and tests a Barzilai-Borwein gradient algorithm. At each iteration, the generated search direction enjoys the descent property and can be easily derived by minimizing a local approximate quadratic model while simultaneously exploiting the favorable structure of the $\ell_1$-norm. Moreover, a nonmonotone line search technique is incorporated to find a suitable stepsize along this direction. The algorithm is easy to implement, requiring only the values of the objective function and the gradient of the smooth term at each iteration. Under some conditions, the proposed algorithm is shown to be globally convergent. Limited experiments on some nonconvex unconstrained problems from the CUTEr library with additive $\ell_1$-regularization illustrate that the proposed algorithm performs quite well. Extensive experiments on $\ell_1$-regularized least squares problems in compressive sensing verify that our algorithm compares favorably with several state-of-the-art algorithms specifically designed in recent years. Comment: 20 pages.
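    A minimal sketch in the spirit of the algorithm described above: a Barzilai-Borwein stepsize drives a proximal-gradient (soft-thresholding) direction for min f(x) + lam*||x||_1, safeguarded by a simple nonmonotone backtracking line search. This is an illustrative simplification rather than the paper's exact method; the test problem, constants, and acceptance rule are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bb_prox_gradient(grad_f, F, lam, x0, n_iter=300, memory=5, sigma=1e-4):
    """Minimize F(x) = f(x) + lam*||x||_1; grad_f is the gradient of the smooth part f."""
    x, g, alpha = x0.copy(), grad_f(x0), 1.0
    history = [F(x)]
    for _ in range(n_iter):
        d = soft_threshold(x - alpha * g, alpha * lam) - x     # proximal-gradient search direction
        t, f_ref = 1.0, max(history[-memory:])                 # nonmonotone reference value
        while F(x + t * d) > f_ref - sigma * t * (d @ d) and t > 1e-10:
            t *= 0.5                                           # backtracking line search
        x_new = x + t * d
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-12:
            alpha = np.clip((s @ s) / (s @ y), 1e-4, 1e4)      # safeguarded Barzilai-Borwein stepsize
        x, g = x_new, g_new
        history.append(F(x))
    return x

# Illustrative l1-regularized least squares instance (dimensions and lam are assumed).
rng = np.random.default_rng(4)
A, lam = rng.standard_normal((60, 256)), 0.05
x_true = np.zeros(256); x_true[:6] = rng.standard_normal(6)
b = A @ x_true + 0.01 * rng.standard_normal(60)
F = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.abs(x).sum()
x_hat = bb_prox_gradient(lambda x: A.T @ (A @ x - b), F, lam, np.zeros(256))
print("non-zeros recovered:", np.flatnonzero(np.abs(x_hat) > 1e-3))
```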

    Optimizing over the Growing Spectrahedron


    Efficient and feasible state tomography of quantum many-body systems

    We present a novel method to perform quantum state tomography for many-particle systems which is particularly suitable for estimating states in lattice systems such as ultra-cold atoms in optical lattices. We show that the need for measuring a tomographically complete set of observables can be overcome by letting the state evolve under some suitably chosen random circuits followed by the measurement of a single observable. We generalize known results about the approximation of unitary 2-designs, i.e., certain classes of random unitary matrices, by random quantum circuits and connect our findings to the theory of quantum compressed sensing. We show that for ultra-cold atoms in optical lattices established techniques like optical super-lattices, laser speckles, and time-of-flight measurements are sufficient to perform fully certified, assumption-free tomography. Combining our approach with tensor network methods - in particular the theory of matrix-product states - we identify situations where the effort of reconstruction is even constant in the number of lattice sites, allowing tomography to be performed, in principle, on large-scale systems readily available in present experiments. Comment: 10 pages, 3 figures, minor corrections, discussion added, emphasizing that no single-site addressing is needed at any stage of the scheme when implemented in optical lattice systems.
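    A toy numerical sketch of the measurement idea described above: the unknown state evolves under random unitaries, a single fixed observable is measured after each evolution, and the state is reconstructed from these linear functionals. This version uses exact expectation values, global Haar-random unitaries in place of random circuits, and plain least-squares inversion instead of the certified compressed-sensing reconstruction discussed in the paper; all dimensions and choices are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
d = 4                                            # e.g. a two-qubit system (assumed size)

def haar_unitary(dim):
    Z = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))  # fix column phases to sample the Haar measure

# Unknown (here: random pure) state rho and a single fixed observable O.
psi = rng.standard_normal(d) + 1j * rng.standard_normal(d)
rho_true = np.outer(psi, psi.conj()); rho_true /= np.trace(rho_true).real
O = np.zeros((d, d), dtype=complex); O[0, 0] = 1.0   # projector onto one basis state

# Data: expectation values tr(O U rho U^dagger) = tr((U^dagger O U) rho) for random U.
unitaries = [haar_unitary(d) for _ in range(60)]
effective_obs = [U.conj().T @ O @ U for U in unitaries]
data = np.array([np.trace(A @ rho_true).real for A in effective_obs])

# Linear inversion: tr(A rho) = vec(A^T) . vec(rho), solved in the least-squares sense.
M = np.array([A.T.ravel() for A in effective_obs])
rho_vec = np.linalg.lstsq(M, data.astype(complex), rcond=None)[0]
rho_hat = rho_vec.reshape(d, d)
rho_hat = 0.5 * (rho_hat + rho_hat.conj().T)      # enforce Hermiticity
rho_hat /= np.trace(rho_hat).real                 # enforce unit trace
print("reconstruction error:", np.linalg.norm(rho_hat - rho_true))
```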