
    Structured Sparsity: Discrete and Convex approaches

    Compressive sensing (CS) exploits sparsity to recover sparse or compressible signals from dimensionality-reducing, non-adaptive sensing mechanisms. Sparsity is also used to enhance interpretability in machine learning and statistics applications: while the ambient dimension is vast in modern data analysis problems, the relevant information therein typically resides in a much lower-dimensional space. However, many solutions proposed to date do not leverage the true underlying structure. Recent results in CS extend the simple sparsity idea to more sophisticated structured sparsity models, which describe the interdependency between the nonzero components of a signal, increasing the interpretability of the results and leading to better recovery performance. To better understand the impact of structured sparsity, in this chapter we analyze the connections between the discrete models and their convex relaxations, highlighting their relative advantages. We start with the general group-sparse model and then elaborate on two important special cases: the dispersive and the hierarchical models. For each, we present the models in their discrete nature, discuss how to solve the ensuing discrete problems, and then describe convex relaxations. We also consider more general structures as defined by set functions and present their convex proxies. Further, we discuss efficient optimization solutions for structured sparsity problems and illustrate structured sparsity in action via three applications. Comment: 30 pages, 18 figures
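    To make the convex-relaxation side concrete, here is a minimal NumPy sketch of block soft-thresholding, the proximal operator of the group (l2,1) norm that is the standard convex proxy for group sparsity. The group partition, the test vector, and the threshold lam are invented for illustration; the chapter's own models and solvers may differ.

```python
import numpy as np

def prox_group_l1(x, groups, lam):
    """Proximal operator of the group (l2,1) norm: block soft-thresholding.

    Shrinks each group of coefficients toward zero jointly, so a group is
    either kept (with reduced magnitude) or zeroed out entirely -- the
    convex relaxation of "select a few groups of variables".
    """
    out = np.zeros_like(x, dtype=float)
    for idx in groups:                     # idx: indices forming one group
        block = x[idx]
        norm = np.linalg.norm(block)
        if norm > lam:                     # keep group, shrink its norm by lam
            out[idx] = (1.0 - lam / norm) * block
    return out

# Toy usage: three non-overlapping groups of a length-9 vector.
x = np.array([3.0, 2.5, 0.1, 0.2, 0.1, 0.05, -1.5, -2.0, 0.3])
groups = [np.arange(0, 3), np.arange(3, 6), np.arange(6, 9)]
print(prox_group_l1(x, groups, lam=0.5))   # the small middle group is zeroed out
```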

    Practical High-Throughput, Non-Adaptive and Noise-Robust SARS-CoV-2 Testing

    We propose a compressed sensing-based testing approach with a practical measurement design and a tuning-free, noise-robust algorithm for detecting infected persons. Compressed sensing results can be used to provably detect a small number of infected persons among a possibly large number of people. This method has several advantages over classical group testing. First, it is non-adaptive and thus potentially faster to perform than adaptive methods, which is crucial in exponentially growing pandemic phases. Second, owing to the nonnegativity of the measurements and an appropriate noise model, the compressed sensing problem can be solved with the non-negative least absolute deviation regression (NNLAD) algorithm. This convex, tuning-free program requires the same number of tests as current state-of-the-art group testing methods. Empirically, it performs significantly better than its theoretical guarantees suggest, enabling high throughput and reducing the number of tests to a fraction of that required by other methods. Further, numerical evidence suggests that our method can correct sparsely occurring errors. Comment: 8 pages, 1 figure
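    One reason NNLAD is tuning-free is that min_x ||Ax - y||_1 subject to x >= 0 can be cast as a linear program with no free parameters. Below is a minimal SciPy sketch under that standard reformulation; the Bernoulli pooling matrix and the problem sizes are made up for the toy example and are not the paper's measurement design.

```python
import numpy as np
from scipy.optimize import linprog

def nnlad(A, y):
    """Non-negative least absolute deviation: min_x ||Ax - y||_1 s.t. x >= 0.

    Cast as a linear program with slack variables t >= |Ax - y| elementwise:
        minimize 1^T t  subject to  Ax - t <= y,  -Ax - t <= -y,  x, t >= 0.
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])       # objective: sum of slacks
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([y, -y])
    bounds = [(0, None)] * (n + m)                      # x >= 0, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]

# Toy pooled-testing instance: 20 tests, 50 persons, 2 infected.
rng = np.random.default_rng(0)
A = rng.binomial(1, 0.2, size=(20, 50)).astype(float)   # random pooling matrix
x_true = np.zeros(50)
x_true[[7, 23]] = 1.0                                   # infected persons
y = A @ x_true
# Detected indices; exact recovery depends on the random pooling matrix.
print(np.flatnonzero(nnlad(A, y) > 0.5))
```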

    Singular Value Approximation and Sparsifying Random Walks on Directed Graphs

    In this paper, we introduce a new, spectral notion of approximation between directed graphs, which we call singular value (SV) approximation. SV-approximation is stronger than previous notions of spectral approximation considered in the literature, including spectral approximation of Laplacians for undirected graphs (Spielman and Teng, STOC 2004), standard approximation for directed graphs (Cohen et al., STOC 2017), and unit-circle (UC) approximation for directed graphs (Ahmadinejad et al., FOCS 2020). Further, SV-approximation enjoys several useful properties not possessed by previous notions of approximation; e.g., it is preserved under products of random-walk matrices and bounded matrices. We provide a nearly linear-time algorithm for SV-sparsifying (and hence UC-sparsifying) Eulerian directed graphs, as well as ℓ-step random walks on such graphs, for any ℓ ≤ poly(n). Combined with the Eulerian scaling algorithms of Cohen et al. (FOCS 2018), given an arbitrary (not necessarily Eulerian) directed graph and a set S of vertices, we can approximate the stationary probability mass of the (S, S^c) cut in an ℓ-step random walk to within a multiplicative error of 1/polylog(n) and an additive error of 1/poly(n) in nearly linear time. As a starting point for these results, we provide a simple black-box reduction from SV-sparsifying Eulerian directed graphs to SV-sparsifying undirected graphs; such a directed-to-undirected reduction was not known for previous notions of spectral approximation. Comment: FOCS 202
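    For intuition about the quantity being approximated, the sketch below computes the stationary probability mass of the (S, S^c) cut in an ℓ-step walk by brute force, reading it as Pr[X_0 ∈ S, X_ℓ ∈ S^c] when X_0 is drawn from the stationary distribution (our interpretation of the abstract). This dense baseline, assuming a strongly connected directed graph so the stationary distribution exists, is exactly the computation the paper's nearly linear-time sparsification-based algorithm avoids.

```python
import numpy as np

def walk_cut_mass(A, S, ell):
    """Probability mass crossing the (S, S^c) cut in an ell-step random walk
    started from the stationary distribution of a directed graph.

    Dense brute-force baseline; assumes every vertex has an outgoing edge.
    """
    n = A.shape[0]
    W = A / A.sum(axis=1)[:, None]          # row-stochastic walk matrix
    # Stationary distribution: left eigenvector of W for eigenvalue 1.
    vals, vecs = np.linalg.eig(W.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi = np.abs(pi) / np.abs(pi).sum()
    P = np.linalg.matrix_power(W, ell)      # ell-step transition probabilities
    Sc = np.setdiff1d(np.arange(n), S)
    return pi[S] @ P[np.ix_(S, Sc)].sum(axis=1)

# Directed 4-cycle (edges i -> i+1 mod 4); mass leaving S = {0, 1} in one step.
A = np.roll(np.eye(4), 1, axis=1)
print(walk_cut_mass(A, np.array([0, 1]), ell=1))   # -> 0.25
```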

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Rigorous optimization recipes for sparse and low rank inverse problems with applications in data sciences

    Many natural and man-made signals can be described as having a few degrees of freedom relative to their size due to natural parameterizations or constraints; examples include bandlimited signals, collections of signals observed from multiple viewpoints in a network of sensors, and per-flow traffic measurements of the Internet. Low-dimensional models (LDMs) mathematically capture the inherent structure of such signals via combinatorial and geometric data models, such as sparsity, unions of subspaces, low-rankness, manifolds, and mixtures of factor analyzers, and are emerging to revolutionize the way we treat inverse problems (e.g., signal recovery, parameter estimation, or structure learning) from dimensionality-reduced or incomplete data. Assuming our problem resides in an LDM space, in this thesis we investigate how to integrate such models into convex and non-convex optimization algorithms for significant gains in computational complexity. We mostly focus on two LDMs: (i) sparsity and (ii) low-rankness. We study trade-offs and their implications to develop efficient and provable optimization algorithms, and, more importantly, to exploit convex and combinatorial optimization in ways that enable cross-pollination of decades of research in both fields.
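    As a concrete instance of the combinatorial side of such recipes, here is a minimal NumPy sketch of iterative hard thresholding, a projected-gradient method for the sparsity LDM: gradient steps on the least-squares loss followed by projection onto the set of k-sparse vectors. The dimensions, step size, and iteration count are illustrative, not the thesis's tuned algorithms.

```python
import numpy as np

def iht(A, y, k, iters=200, step=None):
    """Iterative hard thresholding for min ||y - Ax||_2 s.t. ||x||_0 <= k.

    A gradient step on the least-squares loss, then projection onto the
    combinatorial sparsity model by keeping the k largest-magnitude entries.
    """
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / ||A||^2 for stability
    x = np.zeros(n)
    for _ in range(iters):
        g = x + step * A.T @ (y - A @ x)             # gradient step
        keep = np.argsort(np.abs(g))[-k:]            # projection: top-k support
        x = np.zeros(n)
        x[keep] = g[keep]
    return x

# Toy sparse recovery: 40 Gaussian measurements of a 5-sparse length-100 signal.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
x_hat = iht(A, A @ x_true, k=5)
print(np.linalg.norm(x_hat - x_true))                # small if A is well-conditioned on sparse vectors
```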