Computational Methods for Sparse Solution of Linear Inverse Problems
The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications.
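The greedy pursuit methods such surveys cover can be illustrated with the simplest member of the family, matching pursuit: repeatedly pick the dictionary atom most correlated with the current residual and subtract its contribution. The sketch below is a minimal pure-Python version, assuming unit-norm atoms; the dictionary and signal are toy values, not from the paper.

```python
# Minimal matching-pursuit sketch (toy data; atoms assumed unit-norm).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, atoms, n_iters=10):
    """Greedily approximate `signal` as a sparse combination of `atoms`."""
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(n_iters):
        # Pick the atom most correlated with the current residual.
        scores = [dot(residual, a) for a in atoms]
        k = max(range(len(atoms)), key=lambda i: abs(scores[i]))
        c = scores[k]
        coeffs[k] += c
        # Remove that atom's contribution from the residual.
        residual = [r - c * a for r, a in zip(residual, atoms[k])]
    return coeffs, residual

# Orthonormal toy dictionary: the standard basis of R^3.
atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
signal = [3.0, 0.0, -2.0]
coeffs, residual = matching_pursuit(signal, atoms, n_iters=2)
```

With an orthonormal dictionary two iterations recover the signal exactly; for overcomplete dictionaries the residual shrinks but the selection is only greedy, which is exactly where the theoretical guarantees discussed in the survey come in.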
Recommended from our members
A review of portfolio planning: Models and systems
In this chapter, we first provide an overview of a number of portfolio planning models
which have been proposed and investigated over the last forty years. We revisit the
mean-variance (M-V) model of Markowitz and the construction of the risk-return
efficient frontier. A piecewise linear approximation of the problem through a
reformulation involving diagonalisation of the quadratic form into a variable
separable function is also considered. A few other models, such as the Mean
Absolute Deviation (MAD), the Weighted Goal Programming (WGP) and the
Minimax (MM) models, which use alternative metrics for risk, are also introduced,
compared and contrasted. Recently, asymmetric measures of risk have gained in
importance; we consider a generic representation and a number of alternative
symmetric and asymmetric measures of risk which find use in the evaluation of
portfolios. There are a number of modelling and computational considerations which
have been introduced into practical portfolio planning problems. These include: (a)
buy-in thresholds for assets, (b) restriction on the number of assets (cardinality
constraints), (c) transaction roundlot restrictions. Practical portfolio models may also
include (d) dedication of cashflow streams, and, (e) immunization which involves
duration matching and convexity constraints. The modelling issues in respect of these
features are discussed. Many of these features lead to discrete restrictions involving
zero-one and general integer variables which make the resulting model a quadratic
mixed-integer programming model (QMIP). The QMIP is an NP-hard problem; the
algorithms and solution methods for this class of problems are also discussed. The
issues of preparing the analytic data (financial datamarts) for this family of portfolio
planning problems are examined. We finally present computational results which
provide some indication of the state-of-the-art in the solution of portfolio optimisation
problems.
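The risk-return trade-off at the heart of the M-V model can be shown in a few lines: for a fixed set of weights, expected return is linear in the weights while variance is quadratic. The sketch below traces a two-asset frontier on a weight grid; the return, volatility, and correlation figures are made-up illustrative numbers, not calibrated data, and none of the discrete features (cardinality, roundlots) from the chapter are modelled.

```python
# Two-asset Markowitz mean-variance sketch (illustrative numbers only).

mu = [0.08, 0.12]      # expected returns of the two assets
sigma = [0.10, 0.20]   # return standard deviations
rho = 0.3              # correlation between the assets
cov = rho * sigma[0] * sigma[1]

def portfolio(w):
    """Return (expected return, variance) for weight w in asset 0."""
    r = w * mu[0] + (1 - w) * mu[1]
    v = (w * sigma[0]) ** 2 + ((1 - w) * sigma[1]) ** 2 \
        + 2 * w * (1 - w) * cov
    return r, v

# Trace the frontier on a weight grid and locate the minimum-variance point.
grid = [i / 100 for i in range(101)]
w_min = min(grid, key=lambda w: portfolio(w)[1])
r_min, v_min = portfolio(w_min)
```

Adding the practical features listed above (buy-in thresholds, cardinality constraints, roundlots) turns this smooth quadratic problem into the QMIP the chapter discusses, which is why specialised solvers are needed.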
Forward stagewise regression and the monotone lasso
We consider the least angle regression and forward stagewise algorithms for
solving penalized least squares regression problems. In Efron, Hastie,
Johnstone & Tibshirani (2004) it is proved that the least angle regression
algorithm, with a small modification, solves the lasso regression problem. Here
we give an analogous result for incremental forward stagewise regression,
showing that it solves a version of the lasso problem that enforces
monotonicity. One consequence of this is as follows: while lasso makes optimal
progress in terms of reducing the residual sum-of-squares per unit increase in
ℓ1-norm of the coefficient vector, forward stagewise is optimal per unit
arc-length traveled along the coefficient path. We also study a condition
under which the coefficient paths of the lasso are monotone, and hence the
different algorithms coincide. Finally, we compare the lasso and forward
stagewise procedures in a simulation study involving a large number of
correlated predictors.
Comment: Published at http://dx.doi.org/10.1214/07-EJS004 in the Electronic Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of Mathematical Statistics (http://www.imstat.org).
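The incremental forward stagewise procedure the abstract refers to is short enough to sketch directly: at each step, nudge the coefficient of the predictor most correlated with the residual by a small amount eps. The step size and toy data below are illustrative assumptions; predictors are taken to be centred.

```python
# Incremental forward stagewise regression sketch (toy data).

def stagewise(X, y, eps=0.01, n_steps=500):
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    resid = list(y)
    for _ in range(n_steps):
        # Inner product of each predictor column with the residual.
        cors = [sum(X[i][j] * resid[i] for i in range(n)) for j in range(p)]
        j = max(range(p), key=lambda k: abs(cors[k]))
        # Move beta_j a small step in the direction of its correlation.
        delta = eps if cors[j] > 0 else -eps
        beta[j] += delta
        resid = [resid[i] - delta * X[i][j] for i in range(n)]
    return beta

# Toy design: y depends only on the first predictor.
X = [[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]]
y = [2.0, -2.0, 0.0, 0.0]
beta = stagewise(X, y)
```

Because each coefficient only ever moves in small increments in the direction of its current correlation, the traced path is the slow, monotone-flavoured path the paper contrasts with the lasso path.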
A Unified Framework of Constrained Regression
Generalized additive models (GAMs) play an important role in modeling and
understanding complex relationships in modern applied statistics. They allow
for flexible, data-driven estimation of covariate effects. Yet researchers
often have a priori knowledge of certain effects, which might be monotonic or
periodic (cyclic) or should fulfill boundary conditions. We propose a unified
framework to incorporate these constraints for both univariate and bivariate
effect estimates and for varying coefficients. As the framework is based on
component-wise boosting methods, variables can be selected intrinsically, and
effects can be estimated for a wide range of different distributional
assumptions. Bootstrap confidence intervals for the effect estimates are
derived to assess the models. We present three case studies from environmental
sciences to illustrate the proposed seamless modeling framework. All discussed
constrained effect estimates are implemented in the comprehensive R package
mboost for model-based boosting.
Comment: This is a preliminary version of the manuscript. The final publication is available at http://link.springer.com/article/10.1007/s11222-014-9520-
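The monotonicity constraint mentioned in the abstract can be illustrated in isolation with the pool-adjacent-violators algorithm (PAVA), which computes the best non-decreasing least-squares fit to a sequence. This is only a stand-in for the constrained effect estimates of the framework, not the component-wise boosting machinery or the mboost package itself.

```python
# Pool-adjacent-violators sketch: best non-decreasing LS fit to y.

def pava(y):
    """Return the non-decreasing least-squares fit to the sequence y."""
    # Each block holds [sum, count]; violating blocks are pooled (averaged).
    blocks = []
    for v in y:
        blocks.append([v, 1])
        # Pool while the last block's mean drops below its predecessor's.
        while len(blocks) > 1 and \
                blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)
    return fit

print(pava([1.0, 3.0, 2.0, 4.0]))  # the violating pair (3, 2) is averaged
```

The out-of-order pair (3, 2) is pooled to its mean 2.5, giving the fit [1.0, 2.5, 2.5, 4.0]; cyclic and boundary constraints require different base-learners but follow the same "restrict the fit, keep the loss" idea.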
Best Subset Selection via a Modern Optimization Lens
In the last twenty-five years (1990-2014), algorithmic advances in integer
optimization combined with hardware improvements have resulted in an
astonishing 200 billion factor speedup in solving Mixed Integer Optimization
(MIO) problems. We present a MIO approach for solving the classical best subset
selection problem of choosing k out of p features in linear regression
given n observations. We develop a discrete extension of modern first order
continuous optimization methods to find high quality feasible solutions that we
use as warm starts to a MIO solver that finds provably optimal solutions. The
resulting algorithm (a) provides a solution with a guarantee on its
suboptimality even if we terminate the algorithm early, (b) can accommodate
side constraints on the coefficients of the linear regression and (c) extends
to finding best subset solutions for the least absolute deviation loss
function. Using a wide variety of synthetic and real datasets, we demonstrate
that our approach solves problems with n in the 1000s and p in the 100s in
minutes to provable optimality, and finds near optimal solutions for n in the
100s and p in the 1000s in minutes. We also establish via numerical
experiments that the MIO approach performs better than Lasso and
other popularly used sparse learning procedures, in terms of achieving sparse
solutions with good predictive power.
Comment: This is a revised version (May, 2015) of the first submission in June 201
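The best subset objective itself is easy to state by exhaustive search, which is feasible only for tiny p (the whole point of the paper is that MIO scales far beyond this). The sketch below enumerates all size-k column subsets and fits each by solving the normal equations; the data are toy values and the MIO machinery is not reproduced.

```python
# Brute-force best subset selection sketch (toy data; exhaustive search,
# feasible only for very small p, unlike the MIO approach of the paper).
from itertools import combinations

def solve(A, b):
    """Solve the small linear system A x = b by Gaussian elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def best_subset(X, y, k):
    """Return (best size-k subset of columns, its residual sum of squares)."""
    n, p = len(X), len(X[0])
    best = (None, float("inf"))
    for S in combinations(range(p), k):
        cols = [[X[i][j] for i in range(n)] for j in S]
        # Normal equations: (Xs^T Xs) beta = Xs^T y.
        G = [[sum(a * b for a, b in zip(ci, cj)) for cj in cols] for ci in cols]
        rhs = [sum(c[i] * y[i] for i in range(n)) for c in cols]
        beta = solve(G, rhs)
        fit = [sum(beta[t] * cols[t][i] for t in range(k)) for i in range(n)]
        rss = sum((y[i] - fit[i]) ** 2 for i in range(n))
        if rss < best[1]:
            best = (S, rss)
    return best

X = [[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 1]]
y = [1.0, 0.0, 1.0, 0.0]  # y equals column 0 exactly
S, rss = best_subset(X, y, 1)
```

Enumeration costs C(p, k) least-squares fits, which explodes combinatorially; the paper instead encodes the support with binary variables in an MIO model and lets warm-started branch-and-bound certify optimality.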
Learning weights in the generalized OWA operators
This paper discusses identification of parameters of generalized ordered weighted averaging (GOWA) operators from empirical data. Similarly to ordinary OWA operators, GOWA are characterized by a vector of weights, as well as the power to which the arguments are raised. We develop optimization techniques which allow one to fit such operators to the observed data. We also generalize these methods for functionally defined GOWA and generalized Choquet-integral-based aggregation operators.
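Evaluating a GOWA operator is straightforward once its parameters are known: sort the inputs in decreasing order, raise them to the power p, take the weighted sum, and undo the power. The weights and power below are illustrative assumptions; the paper's actual contribution, fitting these parameters to observed data, is not shown here.

```python
# Evaluating a generalized OWA (GOWA) operator (illustrative parameters).

def gowa(x, w, p):
    """GOWA: (sum_i w_i * x_(i)^p)^(1/p), with x_(i) sorted descending."""
    xs = sorted(x, reverse=True)
    return sum(wi * xi ** p for wi, xi in zip(w, xs)) ** (1.0 / p)

# With p = 1 this reduces to the ordinary OWA operator:
w = [0.5, 0.3, 0.2]
value = gowa([0.2, 0.9, 0.5], w, 1.0)  # 0.5*0.9 + 0.3*0.5 + 0.2*0.2
```

Note that the weights attach to the sorted positions, not to particular arguments; that reordering is what makes weight identification a non-trivial (piecewise) optimization problem.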