Recovery from Linear Measurements with Complexity-Matching Universal Signal Estimation
We study the compressed sensing (CS) signal estimation problem where an input
signal is measured via a linear matrix multiplication under additive noise.
While this setup usually assumes sparsity or compressibility in the input
signal during recovery, the signal structure that can be leveraged is often not
known a priori. In this paper, we consider universal CS recovery, where the
statistics of a stationary ergodic signal source are estimated simultaneously
with the signal itself. Inspired by Kolmogorov complexity and minimum
description length, we focus on a maximum a posteriori (MAP) estimation
framework that leverages universal priors to match the complexity of the
source. Our framework can also be applied to general linear inverse problems
where more measurements than in CS might be needed. We provide theoretical
results that support the algorithmic feasibility of universal MAP estimation
using a Markov chain Monte Carlo implementation, which is computationally
challenging. We incorporate some techniques to accelerate the algorithm while
providing comparable and in many cases better reconstruction quality than
existing algorithms. Experimental results show the promise of universality in
CS, particularly for low-complexity sources that do not exhibit standard
sparsity or compressibility. Comment: 29 pages, 8 figures.
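As a minimal sketch of the flavor of estimator this abstract describes (not the authors' implementation: the binary alphabet, the plug-in entropy "complexity" prior, and the single-flip Metropolis sampler are all simplifying assumptions made here), a MAP objective combining data misfit with a description-length penalty can be minimized by MCMC:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all sizes and names are illustrative, not from the paper):
# a length-n signal over a small discrete alphabet, measured as y = A x + noise.
n, m = 64, 32
alphabet = np.array([0.0, 1.0])
x_true = rng.choice(alphabet, size=n, p=[0.9, 0.1])  # low-complexity source
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)

def empirical_entropy(x):
    """Plug-in (zeroth-order) coding length in bits; a crude stand-in
    for a universal complexity prior."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / x.size
    return -x.size * np.sum(p * np.log2(p))

def energy(x, sigma2=1e-2):
    # MAP objective: Gaussian data misfit plus description length of the candidate.
    return np.sum((y - A @ x) ** 2) / (2 * sigma2) + empirical_entropy(x)

# Metropolis sampler over single-coordinate alphabet flips.
x = rng.choice(alphabet, size=n)
e = energy(x)
for _ in range(20000):
    i = rng.integers(n)
    proposal = x.copy()
    proposal[i] = rng.choice(alphabet)
    e_prop = energy(proposal)
    if e_prop < e or rng.random() < np.exp(e - e_prop):
        x, e = proposal, e_prop

err = np.linalg.norm(x - x_true) / np.sqrt(n)
print("per-sample recovery error:", err)
```

The complexity penalty is what makes the scheme "universal" in spirit: the sampler is rewarded for candidates that are both consistent with the measurements and cheap to describe, without a hand-chosen sparsity model.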
Task-Driven Adaptive Statistical Compressive Sensing of Gaussian Mixture Models
A framework for adaptive and non-adaptive statistical compressive sensing is
developed, where a statistical model replaces the standard sparsity model of
classical compressive sensing. We propose within this framework optimal
task-specific sensing protocols specifically and jointly designed for
classification and reconstruction. A two-step adaptive sensing paradigm is
developed, where online sensing is applied to detect the signal class in the
first step, followed by a reconstruction step adapted to the detected class and
the observed samples. The approach is based on information theory, here
tailored for Gaussian mixture models (GMMs), where an information-theoretic
objective relationship between the sensed signals and a representation of the
specific task of interest is maximized. Experimental results using synthetic
signals, Landsat satellite attributes, and natural images of different sizes
and with different noise levels show the improvements achieved using the
proposed framework when compared to more standard sensing protocols. The
underlying formulation can be applied beyond GMMs, at the price of higher
mathematical and computational complexity.
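A toy version of the two-step adaptive paradigm can be sketched as follows (the GMM parameters, the random step-1 projections, and the eigenvector-based step-2 design below are invented stand-ins for the paper's information-theoretic protocol):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-class GMM in R^d; all parameters below are invented.
d, m1, m2 = 8, 3, 4            # signal dim, step-1 / step-2 measurement counts
noise = 0.05
means = [np.zeros(d), np.ones(d)]
covs = [np.diag(np.linspace(1.0, 0.1, d)), np.diag(np.linspace(0.1, 1.0, d))]

# Draw one signal from a random class.
k_true = int(rng.integers(2))
x = rng.multivariate_normal(means[k_true], covs[k_true])

# Step 1: a few fixed random projections, then maximum-likelihood class detection.
Phi1 = rng.standard_normal((m1, d))
y1 = Phi1 @ x + noise * rng.standard_normal(m1)

def loglik(y, Phi, mu, S):
    # Gaussian log-likelihood of compressed measurements under one class model.
    C = Phi @ S @ Phi.T + noise**2 * np.eye(len(y))
    r = y - Phi @ mu
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (r @ np.linalg.solve(C, r) + logdet)

k_hat = int(np.argmax([loglik(y1, Phi1, means[k], covs[k]) for k in range(2)]))

# Step 2: sense along top eigenvectors of the detected class covariance
# (a simple stand-in for the information-theoretic design in the paper),
# then class-conditional MMSE (Wiener) reconstruction.
w, V = np.linalg.eigh(covs[k_hat])
Phi2 = V[:, np.argsort(w)[::-1][:m2]].T
y2 = Phi2 @ x + noise * rng.standard_normal(m2)

mu, S = means[k_hat], covs[k_hat]
G = S @ Phi2.T @ np.linalg.inv(Phi2 @ S @ Phi2.T + noise**2 * np.eye(m2))
x_hat = mu + G @ (y2 - Phi2 @ mu)
rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print("true class:", k_true, "detected:", k_hat, "relative error:", rel_err)
```

The key structural point survives the simplification: the second sensing step is chosen only after the first step has revealed which mixture component generated the signal.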
Adaptivity Complexity for Causal Graph Discovery
Causal discovery from interventional data is an important problem, where the
task is to design an interventional strategy that learns the hidden ground
truth causal graph on n nodes while minimizing the number of
performed interventions. Most prior interventional strategies broadly fall into
two categories: non-adaptive and adaptive. Non-adaptive strategies decide on a
single fixed set of interventions to be performed while adaptive strategies can
decide on which nodes to intervene on sequentially based on past interventions.
While adaptive algorithms may use exponentially fewer interventions than their
non-adaptive counterparts, there are practical concerns that constrain the
amount of adaptivity allowed. Motivated by this trade-off, we study the problem
of r-adaptivity, where the algorithm designer recovers the causal graph under
a total of r sequential rounds whilst trying to minimize the total number of
interventions. For this problem, we provide an r-adaptive algorithm that
achieves an O(min{r, log n} * n^(1/min{r, log n})) approximation with
respect to the verification number, a well-known lower bound for adaptive
algorithms. Furthermore, for every r, we show that our approximation is
tight. Our definition of r-adaptivity interpolates nicely between the
non-adaptive (r = 1) and fully adaptive (r = n) settings, where our
approximation simplifies to O(n) and O(log n) respectively, matching the
best-known approximation guarantees for both extremes. Our results also extend
naturally to the bounded-size interventions setting. Comment: Accepted into UAI 2023.
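As a rough numerical illustration of how such an interpolation behaves, the following evaluates an approximation factor of the form min(r, log2 n) * n^(1/min(r, log2 n)); treat this exact expression as an assumption for the sketch, but note it reduces to n at r = 1 and to O(log n) in the fully adaptive regime:

```python
import math

def approx_factor(r, n):
    # Assumed form: min(r, log2 n) * n ** (1 / min(r, log2 n)).
    k = min(r, math.log2(n))
    return k * n ** (1.0 / k)

n = 1024
for r in (1, 2, 5, 10, n):        # r = 1: non-adaptive; r = n: fully adaptive
    print(f"r = {r:4d}: factor ~ {approx_factor(r, n):8.1f}")
# r = 1 recovers the factor n itself; once r >= log2 n, the factor
# flattens to 2 * log2 n, i.e., O(log n), since n ** (1 / log2 n) = 2.
```

Most of the gain from adaptivity arrives in the first few rounds: by r = log2 n the factor is already at its fully adaptive value.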
A System Centric View of Modern Structured and Sparse Inference Tasks
University of Minnesota Ph.D. dissertation. June 2017. Major: Electrical/Computer Engineering. Advisor: Jarvis Haupt. 1 computer file (PDF); xii, 140 pages.
We are living in an era of data deluge, in which we collect unprecedented amounts of data from a variety of sources. Modern inference tasks center on exploiting structure and sparsity in the data to extract relevant information. This thesis takes an end-to-end, system-centric view of these inference tasks, which mainly consist of two sub-parts: (i) data acquisition and (ii) data processing. In the context of the data acquisition part of the system, we address issues pertaining to noise, clutter (the unwanted extraneous signals that accompany the desired signal), quantization, and missing observations. In the data processing part of the system, we investigate the problems that arise in resource-constrained scenarios, such as limited computational power and limited battery life. The first part of this thesis centers on computationally efficient approximations of a given linear dimensionality reduction (LDR) operator. In particular, we explore approximations based on partial circulant matrices (matrices whose rows are related by circular shifts), as they allow for computationally efficient implementations. We present several theoretical results that provide insight into the existence of such approximations. We also propose a data-driven approach to numerically obtain such approximations and demonstrate its utility on real-life data. The second part of this thesis focuses on the issues of noise, missing observations, and quantization arising in matrix and tensor data. In particular, we propose a sparsity-regularized maximum likelihood approach to the completion of matrices following sparse factor models (matrices that can be expressed as a product of two matrices, one of which is sparse).
We provide general theoretical error bounds for the proposed approach, which can be instantiated for a variety of noise distributions. We also consider the problem of tensor completion and extend the matrix completion results to the tensor setting. The problem of matrix completion from quantized and noisy observations is also investigated in as general terms as possible. We propose a constrained maximum likelihood approach to quantized matrix completion, provide probabilistic error bounds for this approach, and develop numerical algorithms that are used to provide numerical evidence for the proposed error bounds. The final part of this thesis focuses on issues related to clutter and limited battery life in signal acquisition. Specifically, we investigate the problem of compressive measurement design under a given sensing energy budget for estimating structured signals in structured clutter. We propose a novel approach that leverages prior information about the signal and clutter to judiciously allocate sensing energy to the compressive measurements. We also investigate the problem of processing electrodermal activity (EDA) signals, recorded as the conductance over a user's skin. EDA signals contain information about the user's neuron firing and psychological state. These signals contain the desired information-carrying signal superimposed with unwanted components, which may be considered clutter. We propose a novel compressed sensing based approach with provable error guarantees for processing EDA signals to extract relevant information, and demonstrate its efficacy, as compared to existing techniques, via numerical experiments.
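The circulant-approximation theme from the first part of the thesis can be illustrated with a minimal full-circulant sketch (the thesis studies partial circulant approximations; here we use the classical Frobenius-optimal circulant fit, obtained by averaging a matrix along its wrap-around diagonals, together with the FFT-based fast multiply that motivates such structure):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 128
A = rng.standard_normal((n, n))    # stand-in for a generic LDR operator

# Frobenius-optimal circulant approximation: average A along each wrap-around
# diagonal to obtain the first column of the circulant.
idx = np.arange(n)
c = np.array([A[(idx + k) % n, idx].mean() for k in range(n)])

def circ_matvec(c, x):
    # y = C x for the circulant with first column c, in O(n log n) via the FFT.
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real

# Dense version of the same circulant, built only to check the fast multiply.
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
x = rng.standard_normal(n)
assert np.allclose(C @ x, circ_matvec(c, x))
print("relative Frobenius gap:", np.linalg.norm(A - C) / np.linalg.norm(A))
```

The payoff is the matvec: a dense n x n multiply costs O(n^2), while the circulant structure brings it down to three length-n FFTs.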
Structured Learning with Parsimony in Measurements and Computations: Theory, Algorithms, and Applications
University of Minnesota Ph.D. dissertation. July 2018. Major: Electrical Engineering. Advisor: Jarvis Haupt. 1 computer file (PDF); xvi, 289 pages.
In modern "Big Data" applications, structured learning is the most widely employed methodology. Within this paradigm, the fundamental challenge lies in developing practical, effective algorithmic inference methods. Often (e.g., in deep learning), successful heuristic-based approaches exist, but theoretical studies lag far behind, limiting understanding and potential improvements. In other settings (e.g., recommender systems), provably effective algorithmic methods exist, but the sheer sizes of datasets can limit their applicability. This twofold challenge motivates this work on developing new analytical and algorithmic methods for structured learning, with a particular focus on parsimony in measurements and computation, i.e., methods requiring low storage and computational costs. Toward this end, we investigate the theoretical properties of models and algorithms that offer significant improvements in measurement and computation requirements. In particular, we first develop randomized approaches for dimensionality reduction on matrix and tensor data, which allow accurate estimation and inference procedures using significantly smaller data sizes that depend only on the intrinsic dimension (e.g., the rank of the matrix/tensor) rather than the ambient ones. Our next effort is to study iterative algorithms for solving high-dimensional learning problems, including both convex and nonconvex optimization. Using contemporary analysis techniques, we demonstrate guarantees on iteration complexity that are analogous to those in the low-dimensional case. In addition, we explore the landscape of nonconvex optimization problems that exhibit computational advantages over their convex counterparts, and theoretically characterize their properties from a general point of view.
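The randomized dimensionality-reduction idea (sketch sizes that scale with the intrinsic rank rather than the ambient dimensions) can be illustrated with a basic randomized range finder; the matrix sizes and oversampling amount below are arbitrary choices for this sketch, not values from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(3)

# A rank-r matrix with larger ambient sizes (all sizes are arbitrary here).
m, n, r = 200, 150, 5
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Randomized range finder: one pass over A with a thin Gaussian sketch whose
# width depends on the intrinsic rank r, not on the ambient sizes m and n.
k = r + 10                          # target rank plus a little oversampling
Y = A @ rng.standard_normal((n, k))
Q, _ = np.linalg.qr(Y)              # orthonormal basis for the sampled range
A_hat = Q @ (Q.T @ A)               # low-rank reconstruction from the basis

rel_err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
print("relative reconstruction error:", rel_err)
```

For an exactly rank-r matrix, any sketch width k > r captures the range almost surely, so the reconstruction is exact up to floating-point error; with noisy or approximately low-rank data, the oversampling term controls the accuracy.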