
    Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm

    This paper studies the long-standing idea of adding a nice smooth function to "smooth" a non-differentiable objective function, in the context of sparse optimization: in particular, the minimization of $\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$, where $x$ is a vector, as well as the minimization of $\|X\|_* + \frac{1}{2\alpha}\|X\|_F^2$, where $X$ is a matrix and $\|X\|_*$ and $\|X\|_F$ are the nuclear and Frobenius norms of $X$, respectively. We show that they can efficiently recover sparse vectors and low-rank matrices. In particular, they enjoy exact and stable recovery guarantees similar to those known for minimizing $\|x\|_1$ and $\|X\|_*$ under conditions on the sensing operator such as its null-space property, restricted isometry property, spherical section property, or RIPless property. To recover a (nearly) sparse vector $x^0$, minimizing $\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$ returns (nearly) the same solution as minimizing $\|x\|_1$ almost whenever $\alpha \ge 10\|x^0\|_\infty$. The same relation also holds between minimizing $\|X\|_* + \frac{1}{2\alpha}\|X\|_F^2$ and minimizing $\|X\|_*$ for recovering a (nearly) low-rank matrix $X^0$, if $\alpha \ge 10\|X^0\|_2$. Furthermore, we show that the linearized Bregman algorithm for minimizing $\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$ subject to $Ax=b$ enjoys global linear convergence as long as a nonzero solution exists, and we give an explicit rate of convergence. The convergence property does not require a sparse solution or any properties of $A$. To our knowledge, this is the best known global convergence result for first-order sparse optimization algorithms.
    Comment: arXiv admin note: text overlap with arXiv:1207.5326 by other authors
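
    The linearized Bregman iteration mentioned above has a compact dual-gradient form: maintain a dual variable y, read the primal iterate off as x = alpha * shrink(A^T y, 1), and take a gradient step on the residual. Below is a minimal NumPy sketch under that interpretation; the step size and iteration count are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding: sign(v) * max(|v| - t, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_bregman(A, b, alpha, iters=5000):
    """Sketch of linearized Bregman for
    min ||x||_1 + 1/(2*alpha)*||x||_2^2  s.t.  Ax = b,
    viewed as gradient ascent on the smooth dual."""
    m, n = A.shape
    y = np.zeros(m)                                # dual variable
    h = 1.0 / (alpha * np.linalg.norm(A, 2) ** 2)  # conservative step size
    x = np.zeros(n)
    for _ in range(iters):
        x = alpha * shrink(A.T @ y, 1.0)           # primal iterate from dual
        y = y + h * (b - A @ x)                    # dual gradient step
    return x
```

    Following the abstract's rule of thumb, one would set alpha to roughly ten times the largest magnitude expected in the sparse solution, e.g. alpha = 10 * np.max(np.abs(x0)) when a ground-truth x0 is available for testing.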

    Almost-Euclidean subspaces of $\ell_1^N$ via tensor products: a simple approach to randomness reduction

    It has been known since the 1970's that the $N$-dimensional $\ell_1$-space contains nearly Euclidean subspaces whose dimension is $\Omega(N)$. However, proofs of the existence of such subspaces were probabilistic, hence non-constructive, which made the results not quite suitable for subsequently discovered applications to high-dimensional nearest neighbor search, error-correcting codes over the reals, compressive sensing, and other computational problems. In this paper we present a "low-tech" scheme which, for any $a > 0$, allows one to exhibit nearly Euclidean $\Omega(N)$-dimensional subspaces of $\ell_1^N$ while using only $N^a$ random bits. Our results extend and complement (particularly) recent work by Guruswami-Lee-Wigderson. Characteristic features of our approach include (1) simplicity (we use only tensor products) and (2) yielding "almost Euclidean" subspaces with arbitrarily small distortions.
    Comment: 11 pages; title change, abstract and references added, other minor changes
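
    The randomness accounting behind the scheme is easy to see in code: tensoring a single small random factor with itself spans a much larger subspace while consuming only the small factor's random bits. The sketch below uses a Gaussian factor purely for illustration (the paper's actual seed distribution and distortion analysis are not reproduced here) and empirically probes how close to Euclidean the resulting subspace of $\ell_1^N$ is.

```python
import numpy as np

def tensored_subspace(n, d, rng):
    """Span a d^2-dimensional subspace of R^(n^2), i.e. of l_1^N with N = n^2,
    using randomness only for one small n x d factor."""
    G = rng.standard_normal((n, d))   # n*d random reals, not N*d^2
    return np.kron(G, G)              # columns g_i (x) g_j span the subspace

def l1_l2_ratio_band(B, trials=1000, rng=None):
    """For a nearly Euclidean subspace, ||v||_1 / (sqrt(N)*||v||_2) stays
    within a narrow band over all directions v in the subspace."""
    rng = rng or np.random.default_rng()
    N = B.shape[0]
    ratios = []
    for _ in range(trials):
        v = B @ rng.standard_normal(B.shape[1])
        ratios.append(np.linalg.norm(v, 1) / (np.sqrt(N) * np.linalg.norm(v, 2)))
    return min(ratios), max(ratios)
```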

    On Deterministic Sketching and Streaming for Sparse Recovery and Norm Estimation

    We study classic streaming and sparse recovery problems using deterministic linear sketches, including l1/l1 and linf/l1 sparse recovery problems (the latter also being known as l1-heavy hitters), norm estimation, and approximate inner product. We focus on devising a fixed matrix A in R^{m x n} and a deterministic recovery/estimation procedure which work for all possible input vectors simultaneously. Our results improve upon existing work, the following being our main contributions:

    * A proof that linf/l1 sparse recovery and inner product estimation are equivalent, and that incoherent matrices can be used to solve both problems. Our upper bound for the number of measurements is m = O(eps^{-2} * min{log n, (log n / log(1/eps))^2}). We can also obtain fast sketching and recovery algorithms by making use of the Fast Johnson-Lindenstrauss transform. Both our running times and number of measurements improve upon previous work. We can also obtain better error guarantees than previous work in terms of a smaller tail of the input vector.

    * A new lower bound for the number of linear measurements required to solve l1/l1 sparse recovery. We show Omega(k/eps^2 + k*log(n/k)/eps) measurements are required to recover an x' with |x - x'|_1 <= (1+eps)|x_{tail(k)}|_1, where x_{tail(k)} is x projected onto all but its largest k coordinates in magnitude.

    * A tight bound of m = Theta(eps^{-2} * log(eps^2 n)) on the number of measurements required to solve deterministic norm estimation, i.e., to recover |x|_2 +/- eps|x|_1.

    For all the problems we study, tight bounds are already known for the randomized complexity from previous work, except in the case of l1/l1 sparse recovery, where a nearly tight bound is known. Our work thus aims to study the deterministic complexities of these problems.
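
    The equivalence between linf/l1 recovery and inner product estimation runs through a simple identity: with unit-norm, pairwise nearly-orthogonal columns, the i-th entry of A^T(Ax) equals x_i plus cross terms, each bounded by the coherence times ||x||_1. The sketch below uses random normalized columns as a stand-in for the deterministic incoherent matrices the paper devises, just to show the mechanics of the guarantee.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 2000, 400
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)      # unit-norm columns; their pairwise
                                    # coherence plays the role of eps

x = np.zeros(n)
x[[3, 77, 512]] = [5.0, -2.0, 1.0]  # sparse input vector
y = A @ x                           # one fixed linear sketch for all inputs

x_hat = A.T @ y                     # entry i is x[i] plus cross terms, each
                                    # at most coherence * ||x||_1 in magnitude
err = np.max(np.abs(x_hat - x))     # linf/l1 guarantee: err <= eps * ||x||_1
```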

    Precision Tests of the Standard Model

    30 pages, 11 figures, 11 tables. Contribution presented at the 25th Winter Meeting on Fundamental Physics, held 3-8 March 1997 in Formigal (Spain).
    Precision measurements of electroweak observables provide stringent tests of the Standard Model structure and an accurate determination of its parameters. An overview of the present experimental status is presented.
    This work has been supported in part by CICYT (Spain) under grant No. AEN-96-1718.
    Peer reviewed

    Strategic Learning for Active, Adaptive, and Autonomous Cyber Defense

    The increasing instances of advanced attacks call for a new defense paradigm that is active, autonomous, and adaptive, named the '3A' defense paradigm. This chapter introduces three defense schemes that actively interact with attackers to increase the attack cost and gather threat information: defensive deception for detection and counter-deception, feedback-driven Moving Target Defense (MTD), and adaptive honeypot engagement. Due to cyber deception, external noise, and the players' lack of knowledge about each other's behaviors and goals, these schemes operate under three progressive levels of information restriction: parameter uncertainty, payoff uncertainty, and environmental uncertainty. To estimate the unknowns and reduce uncertainty, we adopt three different strategic learning schemes that fit the associated information restrictions. All three learning schemes share the same feedback structure of sensation, estimation, and action, so that the most rewarding policies are reinforced and converge to the optimal ones in an autonomous and adaptive fashion. This work aims to shed light on proactive defense strategies, lay a solid foundation for strategic learning under incomplete information, and quantify the tradeoff between security and costs.
    Comment: arXiv admin note: text overlap with arXiv:1906.1218
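
    The shared sensation-estimation-action loop the abstract describes is, in essence, a reinforcement learning loop. As a generic illustration only, the tabular Q-learning sketch below makes that feedback structure concrete; the env object, its reset()/step() interface, and all hyperparameters are hypothetical placeholders rather than the chapter's actual game models.

```python
import numpy as np

def q_learning_defense(env, n_states, n_actions, episodes=500,
                       lr=0.1, gamma=0.95, explore=0.1, rng=None):
    """Tabular Q-learning as a sensation-estimation-action feedback loop."""
    rng = rng or np.random.default_rng()
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # act: explore occasionally, else exploit the current estimate
            if rng.random() < explore:
                a = int(rng.integers(n_actions))
            else:
                a = int(Q[s].argmax())
            s2, r, done = env.step(a)  # sense: noisy feedback from the game
            # estimate: reinforce the more rewarding defense actions
            Q[s, a] += lr * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q
```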