Sparse Multivariate Factor Regression
We consider the problem of multivariate regression in a setting where the
relevant predictors could be shared among different responses. We propose an
algorithm which decomposes the coefficient matrix into the product of a long
matrix and a wide matrix, with an elastic net penalty on the former and an
$\ell_1$ penalty on the latter. The first matrix linearly transforms the
predictors to a set of latent factors, and the second one regresses the
responses on these factors. Our algorithm simultaneously performs dimension
reduction and coefficient estimation and automatically estimates the number of
latent factors from the data. Our formulation results in a non-convex
optimization problem which, despite its flexibility in imposing effective
low-dimensional structure, is difficult, or even impossible, to solve exactly
in a reasonable time. We specify an optimization algorithm based on alternating
minimization with three different sets of updates to solve this non-convex
problem and provide theoretical results on its convergence and optimality.
Finally, we demonstrate the effectiveness of our algorithm via experiments on
simulated and real data.
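As a concrete illustration of the alternating-minimization idea sketched in this abstract, here is a minimal NumPy version that alternates proximal-gradient (soft-thresholding) steps on the two factors of the coefficient matrix. It is a sketch under assumed notation, not the authors' three-block update scheme: the function and parameter names (sparse_factor_regression, lam_A, alpha_A, lam_F) are illustrative, and the number of latent factors k is fixed by the caller rather than estimated from the data.

```python
import numpy as np

def soft_threshold(M, t):
    """Elementwise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def sparse_factor_regression(X, Y, k, lam_A=0.1, alpha_A=0.5, lam_F=0.1,
                             n_outer=50, n_inner=20, seed=0):
    """Alternating proximal-gradient sketch for
        min_{A, F}  0.5 * ||Y - X A F||_F^2
                    + lam_A * (alpha_A * ||A||_1 + 0.5 * (1 - alpha_A) * ||A||_F^2)
                    + lam_F * ||F||_1
    where A (p x k) maps the predictors to k latent factors and F (k x q)
    regresses the responses on those factors."""
    rng = np.random.default_rng(seed)
    p, q = X.shape[1], Y.shape[1]
    A = 0.01 * rng.standard_normal((p, k))   # "long" matrix: predictors -> factors
    F = 0.01 * rng.standard_normal((k, q))   # "wide" matrix: factors -> responses

    for _ in range(n_outer):
        # Update A with F fixed (ISTA steps; L_A bounds the Lipschitz constant).
        L_A = (np.linalg.norm(X, 2) * np.linalg.norm(F, 2)) ** 2 \
              + lam_A * (1.0 - alpha_A) + 1e-12
        for _ in range(n_inner):
            grad_A = X.T @ (X @ A @ F - Y) @ F.T + lam_A * (1.0 - alpha_A) * A
            A = soft_threshold(A - grad_A / L_A, lam_A * alpha_A / L_A)
        # Update F with A fixed, regressing Y on the current latent factors Z.
        Z = X @ A
        L_F = np.linalg.norm(Z, 2) ** 2 + 1e-12
        for _ in range(n_inner):
            grad_F = Z.T @ (Z @ F - Y)
            F = soft_threshold(F - grad_F / L_F, lam_F / L_F)
    return A, F
```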
Dictionary-based Tensor Canonical Polyadic Decomposition
To ensure interpretability of extracted sources in tensor decomposition, we
introduce in this paper a dictionary-based tensor canonical polyadic
decomposition which enforces one factor to belong exactly to a known
dictionary. A new formulation of sparse coding is proposed which enables
dictionary-based canonical polyadic decomposition of high-dimensional tensors. The
benefits of using a dictionary in tensor decomposition models are explored both
in terms of parameter identifiability and estimation accuracy. The performance
of the proposed algorithms is evaluated on the decomposition of simulated data
and the unmixing of hyperspectral images.
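A rough NumPy sketch of the constrained decomposition for a third-order tensor: the two free factors follow standard CPD alternating least squares, while the dictionary-constrained factor is snapped to its best-matching atoms after each least-squares update. The nearest-atom projection is a simplification standing in for the paper's sparse-coding formulation, and the names (dictionary_cpd, khatri_rao, D) are assumptions for illustration.

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Kronecker product: column r is kron(X[:, r], Y[:, r])."""
    return (X[:, None, :] * Y[None, :, :]).reshape(-1, X.shape[1])

def dictionary_cpd(T, D, rank, n_iter=100, seed=0):
    """Toy rank-`rank` CPD of a 3-way tensor T (I x J x K) whose third factor is
    constrained to (scaled) columns of the known dictionary D (K x n_atoms).
    Plain ALS for the two free factors; the constrained factor is updated by a
    nearest-atom projection of its least-squares update (a heuristic, not the
    paper's sparse-coding step)."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = D[:, rng.choice(D.shape[1], rank, replace=False)]
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)   # unit-norm atoms for matching

    T0 = T.transpose(0, 2, 1).reshape(I, K * J)         # mode-0 unfolding
    T1 = T.transpose(1, 2, 0).reshape(J, K * I)         # mode-1 unfolding
    T2 = T.transpose(2, 1, 0).reshape(K, J * I)         # mode-2 unfolding

    for _ in range(n_iter):
        A = T0 @ khatri_rao(C, B) @ np.linalg.pinv((C.T @ C) * (B.T @ B))
        B = T1 @ khatri_rao(C, A) @ np.linalg.pinv((C.T @ C) * (A.T @ A))
        C_ls = T2 @ khatri_rao(B, A) @ np.linalg.pinv((B.T @ B) * (A.T @ A))
        # Project each column of the least-squares update onto its best atom.
        idx = np.argmax(np.abs(Dn.T @ C_ls), axis=0)
        scale = np.sum(Dn[:, idx] * C_ls, axis=0)        # keep the fitted scale
        C = Dn[:, idx] * scale
    return A, B, C
```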
A Sparse Bayesian Estimation Framework for Conditioning Prior Geologic Models to Nonlinear Flow Measurements
We present a Bayesian framework for reconstruction of subsurface hydraulic
properties from nonlinear dynamic flow data by imposing sparsity on the
distribution of the solution coefficients in a compression transform domain.
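One standard way to formalize such a sparsity constraint, sketched here under assumed notation rather than the paper's exact objective, is a Laplace prior on the transform-domain coefficients, whose MAP estimate is an $\ell_1$-regularized nonlinear inversion:

```latex
% Assumed notation (not necessarily the paper's): u is the hydraulic-property
% field, Phi the sparsifying/compression transform with coefficients v
% (u = Phi v), g(.) the nonlinear flow forward model, d the dynamic flow data,
% sigma^2 the measurement-noise variance, and p(v) \propto exp(-lambda ||v||_1)
% a Laplace (sparsity-promoting) prior.  The MAP estimate then solves
\hat{v} \;=\; \arg\min_{v}\;
  \frac{1}{2\sigma^{2}}\,\bigl\lVert d - g(\Phi v)\bigr\rVert_{2}^{2}
  \;+\; \lambda\,\lVert v \rVert_{1},
\qquad
\hat{u} \;=\; \Phi\,\hat{v}.
```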
Lossless Linear Analog Compression
We establish the fundamental limits of lossless linear analog compression by
considering the recovery of random vectors $x \in \mathbb{R}^m$ from the
noiseless linear measurements $y = Ax$ with measurement matrix
$A \in \mathbb{R}^{n \times m}$. Specifically, for a random vector $x$ of
arbitrary distribution we show that $x$ can be recovered with zero error
probability from $n > \inf \underline{\dim}_{MB}(U)$ linear measurements,
where $\underline{\dim}_{MB}(\cdot)$ denotes the lower modified Minkowski
dimension and the infimum is over all sets $U \subseteq \mathbb{R}^m$ with
$P[x \in U] = 1$. This achievability statement holds for Lebesgue almost all
measurement matrices $A$. We then show that $s$-rectifiable random
vectors---a stochastic generalization of $s$-sparse vectors---can be
recovered with zero error probability from $s$ linear measurements. From
classical compressed sensing theory we would expect $s$ measurements to be
necessary for successful recovery of $x$. Surprisingly, certain classes of
$s$-rectifiable random vectors can be recovered from fewer than $s$
measurements. Imposing an additional regularity condition on the distribution
of $s$-rectifiable random vectors $x$, we do get the expected converse result
of $s$ measurements being necessary. The resulting class of random vectors
appears to be new and will be referred to as $s$-analytic random vectors.
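For reference, the lower modified Minkowski dimension appearing in the achievability bound is the standard fractal-geometry quantity below; the covering-number notation $N_S(\rho)$ is ours.

```latex
% Lower Minkowski (box-counting) dimension of a nonempty bounded set S in R^m,
% with N_S(rho) the minimum number of rho-balls needed to cover S:
\underline{\dim}_{B}(S) \;=\;
  \liminf_{\rho \to 0} \frac{\log N_S(\rho)}{\log (1/\rho)}.
% Lower modified Minkowski dimension of U in R^m: optimize over countable
% covers, which removes the box-counting dimension's sensitivity to dense
% countable sets:
\underline{\dim}_{MB}(U) \;=\;
  \inf\Bigl\{ \sup_{i \in \mathbb{N}} \underline{\dim}_{B}(S_i) \;:\;
  U \subseteq \textstyle\bigcup_{i \in \mathbb{N}} S_i,\
  S_i \text{ nonempty and bounded} \Bigr\}.
```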
Blind Compressed Sensing Over a Structured Union of Subspaces
This paper addresses the problem of simultaneous signal recovery and
dictionary learning based on compressive measurements. Multiple signals are
analyzed jointly, with multiple sensing matrices, under the assumption that the
unknown signals come from a union of a small number of disjoint subspaces. This
problem is important, for instance, in image inpainting applications, in which
the multiple signals are constituted by (incomplete) image patches taken from
the overall image. This work extends standard dictionary learning and
block-sparse dictionary optimization by considering compressive measurements
(e.g., incomplete data). Previous work on blind compressed sensing is also
generalized by using multiple sensing matrices and relaxing some of the
restrictions on the learned dictionary. Drawing on results developed in the
context of matrix completion, it is proven that both the dictionary and signals
can be recovered with high probability from compressed measurements. The
solution is unique up to block permutations and invertible linear
transformations of the dictionary atoms. The recovery is contingent on the
number of measurements per signal and the number of signals being sufficiently
large; bounds are derived for these quantities. In addition, this paper
presents a computationally practical algorithm that performs dictionary
learning and signal recovery, and establishes conditions for its convergence to
a local optimum. Experimental results for image inpainting demonstrate the
capabilities of the method.
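The following toy NumPy sketch illustrates the overall setting (multiple signals, per-signal sensing matrices, a block-structured dictionary) by alternating per-signal block assignment with least-squares coding and a gradient step on the dictionary. It is a simplified stand-in rather than the paper's provably convergent algorithm, and the names (block_bcs, Ms, n_blocks, block_size) are assumptions.

```python
import numpy as np

def block_bcs(Y, Ms, n, n_blocks, block_size, n_iter=30, lr=0.05, seed=0):
    """Toy alternating scheme: each signal x_i lies in one of `n_blocks`
    disjoint subspaces, spanned by one block of a dictionary D (n x
    n_blocks*block_size), and is observed as y_i = M_i x_i through its own
    sensing matrix M_i.  Alternates (a) block assignment plus least-squares
    coding per signal and (b) a gradient step on the dictionary.
    Y  : list of measurement vectors y_i (length m_i)
    Ms : list of sensing matrices M_i (m_i x n)"""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((n, n_blocks * block_size))
    D /= np.linalg.norm(D, axis=0, keepdims=True)

    for _ in range(n_iter):
        grad = np.zeros_like(D)
        X_hat, labels = [], []
        for y, M in zip(Y, Ms):
            # (a) pick the block whose (compressed) span best explains y
            best = None
            for b in range(n_blocks):
                Db = D[:, b * block_size:(b + 1) * block_size]
                a, *_ = np.linalg.lstsq(M @ Db, y, rcond=None)
                r = y - M @ Db @ a
                err = r @ r
                if best is None or err < best[0]:
                    best = (err, b, a, r, Db)
            err, b, a, r, Db = best
            labels.append(b)
            X_hat.append(Db @ a)
            # (b) accumulate the gradient of ||y - M D_b a||^2 w.r.t. block b
            grad[:, b * block_size:(b + 1) * block_size] -= \
                2.0 * (M.T @ r)[:, None] @ a[None, :]
        D -= lr * grad / len(Y)
        D /= np.linalg.norm(D, axis=0, keepdims=True)
    return D, np.array(X_hat), np.array(labels)
```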
- …