Sequential Sensing with Model Mismatch
We characterize the performance of sequential information guided sensing,
Info-Greedy Sensing, when there is a mismatch between the true signal model and
the assumed model, which may be a sample estimate. In particular, we consider a
setup where the signal is low-rank Gaussian and the measurements are taken in
the directions of eigenvectors of the covariance matrix in a decreasing order
of eigenvalues. We establish a set of performance bounds when a mismatched
covariance matrix is used, in terms of the gap of signal posterior entropy, as
well as the additional amount of power required to achieve the same signal
recovery precision. Based on this, we further study how to choose an
initialization for Info-Greedy Sensing using the sample covariance matrix, or
using an efficient covariance sketching scheme.
Comment: Submitted to IEEE for publication
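For a low-rank Gaussian signal, the measurement scheme described above has a simple form: each measurement is taken along the top eigenvector of the current posterior covariance, and the posterior is then updated by standard Gaussian conditioning. The sketch below illustrates this under those assumptions; the function name and parameters are our own, not notation from the paper.

```python
import numpy as np

def info_greedy_gaussian(Sigma, x, noise_var=0.01, num_measurements=3, rng=None):
    """Illustrative sketch of Info-Greedy Sensing for x ~ N(0, Sigma):
    measure along the eigenvector of the largest eigenvalue of the current
    posterior covariance, then apply the rank-one Gaussian update."""
    rng = np.random.default_rng(rng)
    mu = np.zeros_like(x, dtype=float)
    S = Sigma.astype(float).copy()
    for _ in range(num_measurements):
        # Measurement direction: eigenvector of the largest eigenvalue
        # (np.linalg.eigh returns eigenvalues in ascending order).
        _, vecs = np.linalg.eigh(S)
        a = vecs[:, -1]
        # Noisy scalar measurement y = a^T x + w, w ~ N(0, noise_var).
        y = a @ x + np.sqrt(noise_var) * rng.standard_normal()
        # Gaussian posterior update (scalar Kalman step).
        g = S @ a
        denom = a @ g + noise_var
        mu = mu + g * (y - a @ mu) / denom
        S = S - np.outer(g, g) / denom
    return mu, S
```

Each such measurement shrinks the measured eigenvalue from lambda to lambda * noise_var / (lambda + noise_var), which is why the posterior entropy gap in the mismatched case can be tracked eigenvalue by eigenvalue.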
Info-Greedy sequential adaptive compressed sensing
We present an information-theoretic framework for sequential adaptive
compressed sensing, Info-Greedy Sensing, where measurements are chosen to
maximize the extracted information conditioned on the previous measurements. We
show that the widely used bisection approach is Info-Greedy for a family of
k-sparse signals by connecting compressed sensing and the black-box complexity of
sequential query algorithms, and present Info-Greedy algorithms for Gaussian
and Gaussian Mixture Model (GMM) signals, as well as ways to design sparse
Info-Greedy measurements. Numerical examples demonstrate the good performance
of the proposed algorithms using simulated and real data: Info-Greedy Sensing
shows significant improvement over random projection for signals with sparse
and low-rank covariance matrices, and adaptivity brings robustness when there
is a mismatch between the assumed and the true distributions.
Comment: Preliminary results presented at Allerton Conference 2014. To appear
in the IEEE Journal of Selected Topics in Signal Processing
Adaptive Compressed Sensing for Support Recovery of Structured Sparse Sets
This paper investigates the problem of recovering the support of structured
signals via adaptive compressive sensing. We examine several classes of
structured support sets, and characterize the fundamental limits of accurately
recovering such sets through compressive measurements, while simultaneously
providing adaptive support recovery protocols that perform near optimally for
these classes. We show that by adaptively designing the sensing matrix we can
attain significant performance gains over non-adaptive protocols. These gains
arise from the fact that adaptive sensing can: (i) better mitigate the effects
of noise, and (ii) better capitalize on the structure of the support sets.
Comment: To appear in IEEE Transactions on Information Theory
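The gains from adaptivity can be seen in even a very simple two-stage procedure in the spirit of distilled sensing (a minimal sketch, not one of the paper's exact protocols, and using coordinate-wise rather than compressive measurements): spend half the precision budget uniformly, discard coordinates whose first observation is implausibly small, and concentrate the remaining budget on the survivors, so the second-stage noise is much lower exactly where it matters.

```python
import numpy as np

def two_stage_adaptive_support(x, total_budget, noise_std=1.0, rng=None):
    """Minimal two-stage adaptive support-recovery sketch (illustrative only).
    total_budget is a measurement-precision budget split across coordinates."""
    rng = np.random.default_rng(rng)
    n = x.size
    # Stage 1: equal precision tau1 on every coordinate.
    tau1 = (total_budget / 2) / n
    y1 = x + noise_std / np.sqrt(tau1) * rng.standard_normal(n)
    # Crude screen: for nonnegative signals, drop coordinates observed below 0.
    survivors = np.flatnonzero(y1 > 0)
    # Stage 2: concentrate the remaining budget on the survivors.
    tau2 = (total_budget / 2) / max(len(survivors), 1)
    y2 = x[survivors] + noise_std / np.sqrt(tau2) * rng.standard_normal(len(survivors))
    # Universal-style threshold at the (now much smaller) stage-2 noise level.
    thresh = noise_std / np.sqrt(tau2) * np.sqrt(2 * np.log(n))
    return survivors[y2 > thresh]
```

Because stage 1 discards roughly half of the null coordinates, stage 2 enjoys nearly double the per-coordinate precision of a non-adaptive scheme with the same total budget; this is the noise-mitigation effect (i) above in its simplest form.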
Structured Learning with Parsimony in Measurements and Computations: Theory, Algorithms, and Applications
University of Minnesota Ph.D. dissertation. July 2018. Major: Electrical Engineering. Advisor: Jarvis Haupt. 1 computer file (PDF); xvi, 289 pages.

In modern "Big Data" applications, structured learning is the most widely employed methodology. Within this paradigm, the fundamental challenge lies in developing practical, effective algorithmic inference methods. Often (e.g., in deep learning), successful heuristic-based approaches exist, but theoretical studies lag far behind, limiting both understanding and potential improvements. In other settings (e.g., recommender systems), provably effective algorithmic methods exist, but the sheer size of the datasets can limit their applicability. This twofold challenge motivates this work on developing new analytical and algorithmic methods for structured learning, with a particular focus on parsimony in measurements and computation, i.e., methods requiring low storage and computational costs.

Toward this end, we investigate the theoretical properties of models and algorithms that offer significant improvements in measurement and computation requirements. In particular, we first develop randomized approaches for dimensionality reduction on matrix and tensor data, which allow accurate estimation and inference using significantly smaller data sizes that depend only on the intrinsic dimension (e.g., the rank of the matrix/tensor) rather than the ambient ones. We next study iterative algorithms for solving high-dimensional learning problems, covering both convex and nonconvex optimization. Using contemporary analysis techniques, we establish guarantees on iteration complexity analogous to those in the low-dimensional case. In addition, we explore the landscape of nonconvex optimization problems that exhibit computational advantages over their convex counterparts and characterize their properties from a general theoretical point of view.
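The randomized dimensionality-reduction idea in the abstract (cost scaling with the intrinsic rank rather than the ambient dimensions) can be illustrated with a standard randomized range finder in the style of Halko, Martinsson, and Tropp. This is a generic sketch of that well-known technique, not the dissertation's specific algorithms:

```python
import numpy as np

def randomized_low_rank(A, rank, oversample=5, rng=None):
    """Rank-(rank+oversample) approximation of A via a random Gaussian sketch.
    The expensive steps touch only an (m x (rank+oversample)) sketch, so the
    cost depends on the target rank rather than the ambient dimensions."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    # Random test matrix compresses A to a thin sketch of its column space.
    Omega = rng.standard_normal((n, rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)   # orthonormal basis for the sketched range
    B = Q.T @ A                      # small projected matrix
    return Q @ B                     # low-rank approximation Q Q^T A
```

When A is exactly low rank, the sketched range captures the full column space almost surely, so the approximation is exact up to floating-point error; for approximately low-rank data, the oversampling parameter trades a little extra computation for sharper error guarantees.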