Online Active Linear Regression via Thresholding
We consider the problem of online active learning to collect data for
regression modeling. Specifically, we consider a decision maker with a limited
experimentation budget who must efficiently learn an underlying linear
population model. Our main contribution is a novel threshold-based algorithm
for selection of most informative observations; we characterize its performance
and fundamental lower bounds. We extend the algorithm and its guarantees to
sparse linear regression in high-dimensional settings. Simulations suggest the
algorithm is remarkably robust: it provides significant benefits over passive
random sampling in real-world datasets that exhibit high nonlinearity and high
dimensionality, significantly reducing both the mean and variance of the
squared error.
Comment: Published in AAAI 201
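As a concrete illustration of the thresholding idea described above, here is a
minimal Python sketch: spend one unit of the labeling budget on an incoming
observation only when its covariate norm clears a threshold, then fit ordinary
least squares on the selected points. The function name, the norm-based
selection statistic, and the fixed threshold are illustrative assumptions; the
paper's exact selection rule and threshold calibration may differ.

import numpy as np

def active_threshold_regression(stream_X, stream_y, budget, threshold):
    """Online active selection sketch: query the label y_t only when
    ||x_t|| exceeds the threshold, until the budget is spent, then fit
    OLS on the selected observations."""
    selected_X, selected_y = [], []
    for x, label in zip(stream_X, stream_y):
        if len(selected_X) >= budget:
            break
        if np.linalg.norm(x) >= threshold:  # treat large-norm points as informative
            selected_X.append(x)            # in reality, the label is only
            selected_y.append(label)        # revealed once we choose to query
    X = np.asarray(selected_X)
    y = np.asarray(selected_y)
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta_hat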
Computational Methods for Sparse Solution of Linear Inverse Problems
The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications.
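One canonical member of the greedy-pursuit family that such surveys cover is
orthogonal matching pursuit (OMP). The sketch below is a generic textbook OMP,
assuming a dictionary Phi with unit-norm columns; it is an illustration of the
technique, not code from the paper.

import numpy as np

def omp(Phi, target, n_atoms):
    """Orthogonal matching pursuit: greedily pick the dictionary atom
    most correlated with the current residual, then re-fit the target
    on all atoms chosen so far."""
    residual = target.copy()
    support = []
    for _ in range(n_atoms):
        correlations = Phi.T @ residual
        support.append(int(np.argmax(np.abs(correlations))))
        subdict = Phi[:, support]
        coeffs, *_ = np.linalg.lstsq(subdict, target, rcond=None)
        residual = target - subdict @ coeffs
    x = np.zeros(Phi.shape[1])
    x[support] = coeffs
    return x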
Optimization with Sparsity-Inducing Penalties
Sparse estimation methods are aimed at using or obtaining parsimonious
representations of data or models. They were first dedicated to linear variable
selection but numerous extensions have now emerged such as structured sparsity
or kernel selection. It turns out that many of the related estimation problems
can be cast as convex optimization problems by regularizing the empirical risk
with appropriate non-smooth norms. The goal of this paper is to present from a
general perspective optimization tools and techniques dedicated to such
sparsity-inducing penalties. We cover proximal methods, block-coordinate
descent, reweighted $\ell_1$-penalized techniques, working-set and homotopy
methods, as well as non-convex formulations and extensions, and provide an
extensive set of experiments to compare various algorithms from a computational
point of view.
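To make the proximal-method entry in this catalogue concrete: for the
$\ell_1$ norm the proximal operator is soft-thresholding, which yields the
classic ISTA iteration for $\ell_1$-regularized least squares. This is a
generic sketch of the technique, not the authors' implementation.

import numpy as np

def ista(A, b, lam, n_iter=200):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1: a gradient step
    on the smooth term followed by the proximal operator of the l1
    norm (soft-thresholding)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz const of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x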
A Parallelizable Acceleration Framework for Packing Linear Programs
This paper presents an acceleration framework for packing linear programming
problems where the amount of data available is limited, i.e., where the number
of constraints m is small compared to the variable dimension n. The framework
can be used as a black box to speed up linear programming solvers dramatically,
by two orders of magnitude in our experiments. We present worst-case guarantees
on the quality of the solution and the speedup provided by the algorithm,
showing that the framework provides an approximately optimal solution while
running the original solver on a much smaller problem. The framework can be
used to accelerate exact solvers, approximate solvers, and parallel/distributed
solvers. Further, it can be used for both linear programs and integer linear
programs.
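A minimal sketch of the "run the original solver on a much smaller problem"
idea, with scipy's linprog standing in for the black-box solver: restrict the
packing LP to a random subset of the n variables, solve the reduced problem,
and embed the solution back into the full dimension. The uniform column
sampling below is an illustrative assumption; the paper's framework uses a
more careful reduction that yields its worst-case guarantees.

import numpy as np
from scipy.optimize import linprog

def accelerate_packing_lp(c, A, b, n_sample, seed=None):
    """Approximately solve the packing LP max c^T x s.t. A x <= b, x >= 0
    by solving it over a random subset of the variables (sketch only)."""
    rng = np.random.default_rng(seed)
    n = len(c)
    cols = rng.choice(n, size=n_sample, replace=False)
    # linprog minimizes, so negate the packing objective.
    res = linprog(-c[cols], A_ub=A[:, cols], b_ub=b,
                  bounds=(0, None), method="highs")
    x = np.zeros(n)
    x[cols] = res.x
    return x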