
    Fast Candidate Points Selection in the LASSO Path

    The LASSO sparse regression method has recently received attention in a variety of applications, from image compression techniques to parameter estimation problems. This paper addresses the problem of regularization parameter selection in this method in the general case of complex-valued regressors and bases. Generally, this parameter controls the degree of sparsity or, equivalently, the estimated model order. However, for a given sparsity/model order, the smallest regularization parameter is desired. We relate such points to the nonsmooth points in the path of LASSO solutions and give an analytical expression for them. Then, we introduce a numerically fast method of approximating the desired points by a recursive algorithm. The procedure dramatically decreases the necessary number of LASSO solutions, which is important given the polynomial computational cost of convex optimization techniques. We illustrate our method in the context of direction-of-arrival (DOA) estimation.
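    The "candidate points" discussed above are the knots of the LASSO path, where the active set (and hence the model order) changes. The following sketch illustrates the underlying idea on synthetic real-valued data using scikit-learn's lasso_path: for each model order, keep the smallest regularization value on the grid that still yields that order. The data, grid size, and use of a finite grid are illustrative assumptions; the paper's setting (complex-valued regressors, analytic knot expressions, recursive approximation) is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import lasso_path

# Illustrative synthetic problem (assumed set-up, not from the paper)
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta_true = np.zeros(20)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(100)

# Compute the LASSO path on a grid of regularization values
alphas, coefs, _ = lasso_path(X, y, n_alphas=200)

# For each sparsity level (model order), record the smallest lambda seen.
# alphas are returned in decreasing order, so the last alpha encountered
# for a given order is the smallest one on this grid.
smallest_alpha_per_order = {}
for alpha, b in zip(alphas, coefs.T):
    order = int(np.count_nonzero(b))
    smallest_alpha_per_order[order] = alpha

for order in sorted(smallest_alpha_per_order):
    print(f"model order {order}: smallest lambda {smallest_alpha_per_order[order]:.4f}")
```

    On a grid this only approximates the knots; the paper's contribution is to locate them without densely re-solving the LASSO problem.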

    The group fused Lasso for multiple change-point detection

    We present the group fused Lasso for detection of multiple change-points shared by a set of co-occurring one-dimensional signals. Change-points are detected by approximating the original signals with a constraint on the multidimensional total variation, leading to piecewise-constant approximations. Fast algorithms are proposed to solve the resulting optimization problems, either exactly or approximately. Conditions are given for consistency of both algorithms as the number of signals increases, and empirical evidence is provided to support the results on simulated and array comparative genomic hybridization data.
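    The group fused Lasso objective can be stated compactly as a convex program: fit a piecewise-constant multivariate signal by penalizing the Euclidean norm of row-wise differences, so that all dimensions share the same jump locations. Below is a minimal sketch of that formulation using cvxpy (an assumed dependency) and synthetic data with two shared change-points; the paper proposes faster dedicated algorithms rather than a generic solver, and the penalty value and detection threshold here are arbitrary illustrations.

```python
import cvxpy as cp
import numpy as np

# Synthetic multivariate signal with two change-points shared by all columns
rng = np.random.default_rng(1)
n, p = 200, 5                       # n time points, p co-occurring signals
Y = 0.3 * rng.standard_normal((n, p))
Y[80:, :] += 1.0                    # shared change-point at t = 80
Y[150:, :] -= 2.0                   # shared change-point at t = 150

U = cp.Variable((n, p))
lam = 5.0                           # illustrative penalty level
# Multidimensional total variation: sum over t of the Euclidean norm
# of the row-wise differences U[t+1] - U[t]
tv = cp.sum(cp.norm(U[1:, :] - U[:-1, :], axis=1))
problem = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(Y - U) + lam * tv))
problem.solve()

# Change-points appear as rows where the fitted signal jumps
jumps = np.linalg.norm(np.diff(U.value, axis=0), axis=1)
print("detected change-points:", np.where(jumps > 0.1)[0] + 1)  # heuristic threshold
```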

    Forward stagewise regression and the monotone lasso

    We consider the least angle regression and forward stagewise algorithms for solving penalized least squares regression problems. In Efron, Hastie, Johnstone & Tibshirani (2004) it is proved that the least angle regression algorithm, with a small modification, solves the lasso regression problem. Here we give an analogous result for incremental forward stagewise regression, showing that it solves a version of the lasso problem that enforces monotonicity. One consequence of this is as follows: while lasso makes optimal progress in terms of reducing the residual sum-of-squares per unit increase in the L1-norm of the coefficient vector β, forward stagewise is optimal per unit L1 arc-length traveled along the coefficient path. We also study a condition under which the coefficient paths of the lasso are monotone, and hence the different algorithms coincide. Finally, we compare the lasso and forward stagewise procedures in a simulation study involving a large number of correlated predictors. Comment: Published at http://dx.doi.org/10.1214/07-EJS004 in the Electronic Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of Mathematical Statistics (http://www.imstat.org).
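    Incremental forward stagewise regression, the algorithm the abstract refers to, repeatedly takes a tiny step in the coefficient most correlated with the current residual. A minimal sketch follows; the step size, iteration count, and synthetic data are arbitrary assumptions, and the monotone-lasso characterization itself is of course in the paper, not in this code.

```python
import numpy as np

def forward_stagewise(X, y, eps=0.01, n_steps=5000):
    """Incremental (epsilon) forward stagewise regression: at each step,
    nudge the coefficient of the predictor most correlated with the
    current residual by a small amount eps in the sign of that correlation."""
    n, p = X.shape
    beta = np.zeros(p)
    residual = y.astype(float).copy()
    path = [beta.copy()]
    for _ in range(n_steps):
        corr = X.T @ residual                 # correlations with residual
        j = np.argmax(np.abs(corr))           # most correlated predictor
        delta = eps * np.sign(corr[j])        # tiny step in its direction
        beta[j] += delta
        residual -= delta * X[:, j]
        path.append(beta.copy())
    return np.array(path)

# Illustrative example on unit-norm predictors (assumed synthetic data)
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 10))
X /= np.linalg.norm(X, axis=0)
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.standard_normal(100)
path = forward_stagewise(X, y, eps=0.005, n_steps=2000)
print("final coefficients:", np.round(path[-1], 3))
```

    Plotting the rows of `path` against their L1 arc-length gives the coefficient profiles whose monotone behavior the paper contrasts with the lasso path.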

    Improved variable selection with Forward-Lasso adaptive shrinkage

    Recently, considerable interest has focused on variable selection methods in regression situations where the number of predictors, p, is large relative to the number of observations, n. Two commonly applied variable selection approaches are the Lasso, which computes highly shrunk regression coefficients, and Forward Selection, which uses no shrinkage. We propose a new approach, "Forward-Lasso Adaptive SHrinkage" (FLASH), which includes the Lasso and Forward Selection as special cases, and can be used in both the linear regression and the Generalized Linear Model domains. As with the Lasso and Forward Selection, FLASH iteratively adds one variable to the model in a hierarchical fashion but, unlike these methods, at each step adjusts the level of shrinkage so as to optimize the selection of the next variable. We first present FLASH in the linear regression setting and show that it can be fitted using a variant of the computationally efficient LARS algorithm. Then, we extend FLASH to the GLM domain and demonstrate, through numerous simulations and real world data sets, as well as some theoretical analysis, that FLASH generally outperforms many competing approaches. Comment: Published at http://dx.doi.org/10.1214/10-AOAS375 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
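    FLASH interpolates between two familiar endpoints, and those endpoints are easy to reproduce. The sketch below fits both special cases, the Lasso (full shrinkage) and Forward Selection (no shrinkage), with scikit-learn on synthetic data where p is moderately large relative to n; the data, the choice of LassoCV and SequentialFeatureSelector, and the number of selected features are all assumptions for illustration. FLASH itself (per-step shrinkage adjustment fitted via a LARS variant) is not implemented here.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

# Synthetic regression with a sparse true signal (assumed set-up)
rng = np.random.default_rng(3)
n, p = 80, 30
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:4] = [3.0, -2.0, 1.5, 1.0]
y = X @ beta + 0.5 * rng.standard_normal(n)

# Special case 1: the Lasso, coefficients shrunk toward zero
lasso = LassoCV(cv=5).fit(X, y)
print("Lasso support:", np.flatnonzero(lasso.coef_))

# Special case 2: Forward Selection, unshrunk least squares on selected variables
forward = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=4, direction="forward").fit(X, y)
print("Forward Selection support:", np.flatnonzero(forward.get_support()))
```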