Forward stagewise regression and the monotone lasso
We consider the least angle regression and forward stagewise algorithms for
solving penalized least squares regression problems. In Efron, Hastie,
Johnstone & Tibshirani (2004) it is proved that the least angle regression
algorithm, with a small modification, solves the lasso regression problem. Here
we give an analogous result for incremental forward stagewise regression,
showing that it solves a version of the lasso problem that enforces
monotonicity. One consequence of this is as follows: while lasso makes optimal
progress in terms of reducing the residual sum-of-squares per unit increase in
the L1-norm of the coefficient vector, forward stagewise is optimal per unit L1
arc-length traveled along the coefficient path. We also study a condition
under which the coefficient paths of the lasso are monotone, and hence the
different algorithms coincide. Finally, we compare the lasso and forward
stagewise procedures in a simulation study involving a large number of
correlated predictors.
Comment: Published at http://dx.doi.org/10.1214/07-EJS004 in the Electronic Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of Mathematical Statistics (http://www.imstat.org)
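The incremental forward stagewise procedure referred to above admits a very short implementation. The sketch below is illustrative only: the step size, iteration count, and synthetic data are assumptions, not values from the paper. At each step it nudges the coefficient of the predictor most correlated with the current residual by a small amount, tracing an approximation to the monotone coefficient path the abstract discusses.

```python
import numpy as np

def forward_stagewise(X, y, eps=0.01, n_steps=5000, tol=1e-6):
    """Minimal sketch of incremental forward stagewise regression (FS_eps)."""
    n, p = X.shape
    beta = np.zeros(p)
    resid = y.copy()
    path = [beta.copy()]
    for _ in range(n_steps):
        corr = X.T @ resid                # correlations with the current residual
        j = np.argmax(np.abs(corr))       # most correlated predictor
        if np.abs(corr[j]) < tol:         # nothing left to explain
            break
        delta = eps * np.sign(corr[j])    # tiny step in the winning direction
        beta[j] += delta
        resid -= delta * X[:, j]          # update the residual incrementally
        path.append(beta.copy())
    return beta, np.array(path)

# Illustrative usage on synthetic, standardized, correlated predictors
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
X = (X - X.mean(0)) / X.std(0)
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=100)
beta_hat, path = forward_stagewise(X, y, eps=0.005)
```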
Distributed Decision Through Self-Synchronizing Sensor Networks in the Presence of Propagation Delays and Asymmetric Channels
In this paper we propose and analyze a distributed algorithm for achieving
globally optimal decisions, either estimation or detection, through a
self-synchronization mechanism among linearly coupled integrators initialized
with local measurements. We model the interaction among the nodes as a directed
graph with weights (possibly) dependent on the radio channels, and we pay
special attention to the effect of the propagation delay occurring in the
exchange of data among sensors, as a function of the network geometry. We
derive necessary and sufficient conditions for the proposed system to reach a
consensus on globally optimal decision statistics. One of the major results
proved in this work is that a consensus is reached with exponential convergence
speed for any bounded delay condition if and only if the directed graph is
quasi-strongly connected. We provide a closed form expression for the global
consensus, showing that the effect of delays is, in general, the introduction
of a bias in the final decision. Finally, we exploit our closed form expression
to devise a double-step consensus mechanism able to provide an unbiased
estimate with minimum extra complexity, without the need to know or estimate
the channel parameters.
Comment: To be published in IEEE Transactions on Signal Processing
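A minimal discrete-time sketch of the mechanism described above: each node integrates its disagreement with the delayed states of its in-neighbors, starting from its local measurement. The graph weights, delays, gain, and step size below are illustrative assumptions, not quantities from the paper; the simulation only shows the coupled integrators settling on a common value, which delays can bias away from the plain average.

```python
import numpy as np

def simulate_consensus(A, delays, x0, dt=0.01, gain=1.0, n_steps=5000):
    """Euler simulation of linearly coupled integrators with propagation delays.

    A[i, j]      : weight of the directed edge j -> i (0 if absent)
    delays[i, j] : propagation delay, in steps, on that edge
    x0           : local measurements used to initialize each integrator
    """
    n = len(x0)
    max_d = int(delays.max())
    hist = np.tile(np.asarray(x0, float), (max_d + 1, 1))  # delayed-state buffer
    x = hist[-1].copy()
    for _ in range(n_steps):
        dx = np.zeros(n)
        for i in range(n):
            for j in range(n):
                if A[i, j] != 0.0:
                    # neighbor state as seen through the delayed channel
                    xj_delayed = hist[-1 - int(delays[i, j]), j]
                    dx[i] += A[i, j] * (xj_delayed - x[i])
        x = x + dt * gain * dx
        hist = np.vstack([hist[1:], x])    # slide the history window
    return x

# Three nodes on a quasi-strongly connected digraph with heterogeneous delays
A = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
delays = np.array([[0, 3, 0],
                   [5, 0, 2],
                   [4, 0, 0]])
measurements = [1.0, 4.0, 7.0]
x_final = simulate_consensus(A, delays, measurements)
# With nonzero delays the common value generally differs from the unweighted
# average: this is the bias the paper characterizes in closed form.
```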
An Algorithmic Theory of Dependent Regularizers, Part 1: Submodular Structure
We present an exploration of the rich theoretical connections between several
classes of regularized models, network flows, and recent results in submodular
function theory. This work unifies key aspects of these problems under a common
theory, leading to novel methods for working with several important models of
interest in statistics, machine learning and computer vision.
In Part 1, we review the concepts of network flows and submodular function
optimization theory foundational to our results. We then examine the
connections between network flows and the minimum-norm algorithm from
submodular optimization, extending and improving several current results. This
leads to a concise representation of the structure of a large class of pairwise
regularized models important in machine learning, statistics and computer
vision.
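One concrete instance of the flow connection mentioned in Part 1 is the classical reduction of a binary pairwise submodular model to an s-t minimum cut. The sketch below (using networkx, with made-up unary and coupling costs) illustrates that link only; it is not the minimum-norm construction developed in the paper.

```python
import networkx as nx

def binary_pairwise_min(unary0, unary1, edges):
    """Minimize sum_i unary[x_i] + sum_(i,j) w_ij * [x_i != x_j], x in {0,1}^n,
    by reduction to an s-t minimum cut.

    unary0[i], unary1[i] : cost of labeling node i as 0 or 1 (non-negative)
    edges                : list of (i, j, w_ij) with w_ij >= 0
    """
    n = len(unary0)
    G = nx.DiGraph()
    for i in range(n):
        G.add_edge("s", i, capacity=unary0[i])   # cut (paid) when i takes label 0
        G.add_edge(i, "t", capacity=unary1[i])   # cut (paid) when i takes label 1
    for i, j, w in edges:
        G.add_edge(i, j, capacity=w)             # cut when the labels disagree
        G.add_edge(j, i, capacity=w)
    cut_value, (source_side, _) = nx.minimum_cut(G, "s", "t")
    labels = [1 if i in source_side else 0 for i in range(n)]
    return labels, cut_value

# A 4-node chain with a weak middle coupling: the optimal labeling splits there
labels, energy = binary_pairwise_min(
    unary0=[5.0, 4.0, 1.0, 0.5],
    unary1=[0.5, 1.0, 4.0, 5.0],
    edges=[(0, 1, 2.0), (1, 2, 0.3), (2, 3, 2.0)],
)
```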
In Part 2, we describe the full regularization path of a class of penalized
regression problems with dependent variables that includes the graph-guided
LASSO and total variation constrained models. This description also motivates a
practical algorithm that efficiently finds the regularization path of the
discretized version of TV-penalized models. Ultimately, our new
algorithms scale up to high-dimensional problems with millions of variables
- …
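For reference, the fused-lasso / 1D total variation problem that Part 2's path algorithm targets can be written down and solved for a single penalty value with a generic convex solver. The sketch below uses cvxpy and synthetic data purely to illustrate the objective; the paper's contribution is computing the entire path in the penalty parameter far more efficiently via the flow/submodular structure.

```python
import cvxpy as cp
import numpy as np

def tv_denoise_1d(y, lam):
    """Fused-lasso / 1D total-variation denoising:
        minimize 0.5 * ||x - y||_2^2 + lam * sum_i |x_{i+1} - x_i|
    solved here with a generic convex solver purely for illustration."""
    x = cp.Variable(len(y))
    objective = 0.5 * cp.sum_squares(x - y) + lam * cp.norm(x[1:] - x[:-1], 1)
    cp.Problem(cp.Minimize(objective)).solve()
    return x.value

# Noisy piecewise-constant signal; a larger lam gives a flatter, more merged fit
rng = np.random.default_rng(0)
y = np.concatenate([np.full(30, 0.0), np.full(30, 2.0), np.full(30, -1.0)])
y = y + 0.3 * rng.normal(size=y.size)
x_small = tv_denoise_1d(y, lam=0.5)
x_large = tv_denoise_1d(y, lam=5.0)
```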