
    Comment: Classifier Technology and the Illusion of Progress

    Comment on "Classifier Technology and the Illusion of Progress" [math.ST/0606441]. Published at http://dx.doi.org/10.1214/088342306000000024 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Sparse inverse covariance estimation with the lasso

    We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm that is remarkably fast: in the worst cases, it solves a 1000 node problem (~500,000 parameters) in about a minute, and is 50 to 2000 times faster than competing methods. It also provides a conceptual link between the exact problem and the approximation suggested by Meinshausen and Bühlmann (2006). We illustrate the method on some cell-signaling data from proteomics.
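    A minimal sketch of the kind of estimator described above, using scikit-learn's GraphicalLasso (an independent implementation of the lasso-penalized inverse-covariance problem, not the authors' code); the problem size, the penalty value alpha=0.05, and the simulated data are illustrative only.

```python
# Sketch: sparse inverse covariance estimation (graphical lasso) on simulated data.
# GraphicalLasso is scikit-learn's solver for the L1-penalized inverse-covariance
# problem; the penalty alpha and the problem size below are illustrative.
import numpy as np
from sklearn.datasets import make_sparse_spd_matrix
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
p = 20                                                    # number of nodes (variables)
prec_true = make_sparse_spd_matrix(p, alpha=0.95, random_state=0)  # sparse true precision matrix
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(prec_true), size=500)

model = GraphicalLasso(alpha=0.05).fit(X)                 # lasso penalty on the inverse covariance
prec_hat = model.precision_                               # estimated sparse inverse covariance
off_diag = prec_hat - np.diag(np.diag(prec_hat))
print("nonzero off-diagonal entries:", np.count_nonzero(off_diag))
```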

    Comment: Classifier Technology and the Illusion of Progress

    Comment on "Classifier Technology and the Illusion of Progress" [math.ST/0606441]. Published at http://dx.doi.org/10.1214/088342306000000042 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Pathwise coordinate optimization

    We consider ``one-at-a-time'' coordinate-wise descent algorithms for a class of convex optimization problems. An algorithm of this kind has been proposed for the $L_1$-penalized regression (lasso) in the literature, but it seems to have been largely ignored. Indeed, it seems that coordinate-wise algorithms are not often used in convex optimization. We show that this algorithm is very competitive with the well-known LARS (or homotopy) procedure in large lasso problems, and that it can be applied to related methods such as the garotte and elastic net. It turns out that coordinate-wise descent does not work in the ``fused lasso,'' however, so we derive a generalized algorithm that yields the solution in much less time than a standard convex optimizer. Finally, we generalize the procedure to the two-dimensional fused lasso, and demonstrate its performance on some image smoothing problems. Published at http://dx.doi.org/10.1214/07-AOAS131 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
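    As a rough illustration of the one-at-a-time scheme described above (not the authors' code), the sketch below applies cyclic coordinate descent with soft-thresholding to the lasso objective (1/(2n))||y - Xb||^2 + lambda*||b||_1; the simulated data, penalty value, and sweep count are illustrative.

```python
# Sketch: cyclic ("one-at-a-time") coordinate descent for the lasso.
# Each coordinate is updated by soft-thresholding its partial-residual
# correlation; the data and penalty below are illustrative only.
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding operator S(z, lam) = sign(z) * max(|z| - lam, 0)."""
    return np.sign(z) * max(abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_sweeps=100):
    """Minimize (1/(2n)) * ||y - X b||^2 + lam * ||b||_1 by cyclic coordinate descent."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]        # partial residual excluding coordinate j
            rho = X[:, j] @ r_j / n
            beta[j] = soft_threshold(rho, lam) / (X[:, j] @ X[:, j] / n)
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
beta_true = np.zeros(10)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.5 * rng.standard_normal(200)
print(np.round(lasso_cd(X, y, lam=0.1), 2))               # roughly recovers the sparse signal
```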

    Regularization Paths for Generalized Linear Models via Coordinate Descent

    We develop fast algorithms for estimation of generalized linear models with convex penalties. The models include linear regression, two-class logistic regression, and multinomial regression problems while the penalties include ℓ_1 (the lasso), ℓ_2 (ridge regression) and mixtures of the two (the elastic net). The algorithms use cyclical coordinate descent, computed along a regularization path. The methods can handle large problems and can also deal efficiently with sparse features. In comparative timings we find that the new algorithms are considerably faster than competing methods.
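    A sketch of the path-wise strategy described above, using scikit-learn's ElasticNet (itself a cyclical coordinate-descent solver) rather than the authors' glmnet package; the penalty grid, the mixing parameter l1_ratio, and the simulated data are illustrative. Warm starts carry the coefficients from one penalty value to the next, which is what makes computing the whole path cheap.

```python
# Sketch: an elastic-net regularization path via warm-started coordinate descent.
# Uses scikit-learn's ElasticNet, not the authors' glmnet; the grid of penalties,
# l1_ratio, and the simulated data are illustrative.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 50))
beta_true = np.zeros(50)
beta_true[:5] = [4.0, -3.0, 2.0, -1.5, 1.0]
y = X @ beta_true + rng.standard_normal(300)

alphas = np.logspace(0, -3, 30)                           # decreasing penalty strengths
model = ElasticNet(l1_ratio=0.8, warm_start=True, max_iter=10_000)

path = []
for alpha in alphas:                                      # warm start: reuse the previous solution
    model.set_params(alpha=alpha)
    model.fit(X, y)
    path.append(model.coef_.copy())

for alpha, coef in zip(alphas[::10], path[::10]):
    print(f"alpha = {alpha:8.4f}   nonzero coefficients: {np.count_nonzero(coef)}")
```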

    Regularized Discriminant Analysis
