
    One-step sparse estimates in nonconcave penalized likelihood models

    Fan and Li propose a family of variable selection methods via penalized likelihood using concave penalty functions. The nonconcave penalized likelihood estimators enjoy the oracle properties, but maximizing the penalized likelihood function is computationally challenging because the objective function is nondifferentiable and nonconcave. In this article, we propose a new unified algorithm based on the local linear approximation (LLA) for maximizing the penalized likelihood for a broad class of concave penalty functions. Convergence and other theoretical properties of the LLA algorithm are established. A distinguishing feature of the LLA algorithm is that at each LLA step, the LLA estimator naturally adopts a sparse representation. Thus, we suggest using the one-step LLA estimator from the LLA algorithm as the final estimate. Statistically, we show that if the regularization parameter is appropriately chosen, the one-step LLA estimates enjoy the oracle properties given good initial estimators. Computationally, the one-step LLA estimation methods dramatically reduce the computational cost of maximizing the nonconcave penalized likelihood. We conduct Monte Carlo simulations to assess the finite-sample performance of the one-step sparse estimation methods; the results are very encouraging.
    Comment: This paper is discussed in [arXiv:0808.1013], [arXiv:0808.1016], [arXiv:0808.1025]; rejoinder in [arXiv:0808.1030]. Published at http://dx.doi.org/10.1214/009053607000000802 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
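    For concreteness, below is a minimal sketch of the one-step LLA recipe in the least-squares case, assuming a SCAD penalty and an ordinary least-squares initial estimator. The reduction of the weighted ℓ₁ step to a standard lasso via column rescaling is a standard trick; the function names (scad_deriv, one_step_lla) are illustrative, not the paper's code.

        import numpy as np
        from sklearn.linear_model import Lasso

        def scad_deriv(t, lam, a=3.7):
            # derivative p'_lambda(|t|) of the SCAD penalty (Fan & Li)
            t = np.abs(t)
            return lam * (t <= lam) + np.maximum(a * lam - t, 0.0) / (a - 1.0) * (t > lam)

        def one_step_lla(X, y, lam):
            beta0 = np.linalg.lstsq(X, y, rcond=None)[0]   # initial estimator (here: OLS)
            w = np.maximum(scad_deriv(beta0, lam), 1e-8)   # LLA weights; floor avoids /0,
                                                           # w_j ~ 0 means "nearly unpenalized"
            # weighted lasso via column rescaling: beta_j = z_j / w_j
            z = Lasso(alpha=1.0, fit_intercept=False).fit(X / w, y).coef_
            return z / w                                   # undo the rescaling

    Because each column j is divided by w_j, the uniform ℓ₁ penalty on z is exactly the weighted penalty sum_j p'_lambda(|beta0_j|) |beta_j| on the original coefficients, which is the one-step LLA objective.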

    Data Filtering for Cluster Analysis by ℓ₀-Norm Regularization

    A data filtering method for cluster analysis is proposed, based on minimizing a least squares function with a weighted ℓ₀-norm penalty. To overcome the discontinuity of the objective function, smooth non-convex functions are employed to approximate the ℓ₀-norm. The convergence of the global minimum points of the approximating problems towards global minimum points of the original problem is established. The proposed method also exploits a suitable technique to choose the penalty parameter. Numerical results on synthetic and real data sets are finally provided, showing how some existing clustering methods can take advantage of the proposed filtering strategy.
    Comment: Optimization Letters (2017)
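    The surrogate idea can be stated in a few lines. The sketch below uses the exponential approximation 1 − exp(−α|x|), one common smooth concave surrogate for the ℓ₀ indicator; the paper's specific approximating family, weights, and least-squares term may differ, so this is only an assumed generic form.

        import numpy as np

        def l0_surrogate(x, alpha):
            # smooth concave approximation of the indicator 1{x != 0};
            # converges pointwise to it as alpha -> infinity
            return 1.0 - np.exp(-alpha * np.abs(x))

        def smoothed_objective(x, y, A, lam, alpha):
            # least-squares fit plus a weighted surrogate-l0 penalty
            # (generic stand-in for the paper's penalized objective)
            r = A @ x - y
            return 0.5 * r @ r + lam * np.sum(l0_surrogate(x, alpha))

    Increasing α tightens the approximation of the true ℓ₀ count, at the price of a less smooth landscape, which is why a sequence of approximating problems is typically solved.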

    Estimation and variable selection for generalized additive partial linear models

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of the nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration.
    Comment: Published at http://dx.doi.org/10.1214/11-AOS885 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
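    As a concrete special case, the sketch below fits the Gaussian/identity-link version with a single nonparametric component: the quasi-likelihood step then reduces to least squares on a design matrix augmented with a polynomial (truncated-power) spline basis. Function names, knot placement, and the basis choice are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def truncated_power_basis(u, knots, degree=3):
            # polynomial spline basis: 1, u, ..., u^d and (u - kappa)_+^d per knot
            cols = [u ** j for j in range(degree + 1)]
            cols += [np.maximum(u - kappa, 0.0) ** degree for kappa in knots]
            return np.column_stack(cols)

        def fit_gaplm_identity(X_lin, u, y, n_knots=5, degree=3):
            # Gaussian case: quasi-likelihood estimation reduces to least
            # squares on the augmented design [X_lin, spline basis of u]
            knots = np.quantile(u, np.linspace(0, 1, n_knots + 2)[1:-1])
            B = truncated_power_basis(u, knots, degree)
            D = np.column_stack([X_lin, B])
            coef = np.linalg.lstsq(D, y, rcond=None)[0]
            beta = coef[: X_lin.shape[1]]        # parametric (linear) part
            g_hat = B @ coef[X_lin.shape[1]:]    # nonparametric fit at u
            return beta, g_hat

    For a non-identity link, the same augmented design would be fed to an iteratively reweighted quasi-likelihood fit instead of plain least squares.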

    Inference and Mixture Modeling with the Elliptical Gamma Distribution

    We study modeling and inference with the Elliptical Gamma Distribution (EGD). We consider maximum likelihood (ML) estimation for EGD scatter matrices, a task for which we develop new fixed-point algorithms. Our algorithms are efficient and converge to global optima despite nonconvexity. Moreover, they turn out to be much faster than both a well-known iterative algorithm of Kent & Tyler (1991) and sophisticated manifold optimization algorithms. Subsequently, we invoke our ML algorithms as subroutines for estimating parameters of a mixture of EGDs. We illustrate our methods by applying them to model natural image statistics; the proposed EGD mixture model yields the most parsimonious model among several competing approaches.
    Comment: 23 pages, 11 figures
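    The general shape of such fixed-point scatter updates, in the spirit of Kent & Tyler (1991), is easy to sketch; the EGD-specific weight function and the global-convergence analysis are the paper's contribution and are not reproduced here, so weight_fn below is a placeholder.

        import numpy as np

        def fixed_point_scatter(X, weight_fn, n_iter=100, tol=1e-8):
            # generic fixed-point iteration for an elliptical scatter matrix:
            #   Sigma <- (1/n) * sum_i w(q_i) x_i x_i^T,  q_i = x_i^T Sigma^{-1} x_i
            n, d = X.shape
            Sigma = np.cov(X, rowvar=False) + 1e-6 * np.eye(d)  # regularized start
            for _ in range(n_iter):
                q = np.einsum('ij,jk,ik->i', X, np.linalg.inv(Sigma), X)
                Sigma_new = (X * weight_fn(q)[:, None]).T @ X / n
                if np.linalg.norm(Sigma_new - Sigma) < tol * np.linalg.norm(Sigma):
                    return Sigma_new
                Sigma = Sigma_new
            return Sigma

    For example, weight_fn = lambda q: d / q recovers Tyler's M-estimator update; the EGD likelihood induces a different weight derived from its density generator.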

    Projection-Free Methods for Solving Nonconvex-Concave Saddle Point Problems

    In this paper, we investigate a class of constrained saddle point (SP) problems where the objective function is nonconvex-concave and smooth. This class of problems has wide applicability in machine learning, including robust multi-class classification and dictionary learning. Several projection-based primal-dual methods have been developed for tackling this problem; however, the availability of methods with projection-free oracles remains limited. To address this gap, we propose efficient single-loop projection-free methods reliant on first-order information. In particular, using regularization and nested approximation techniques, we propose a primal-dual conditional gradient method that solely employs linear minimization oracles to handle constraints. Assuming that the constraint set in the maximization is strongly convex, our method achieves an ε-stationary solution within O(ε⁻⁶) iterations. When the projection onto the constraint set of the maximization is easy to compute, we propose a one-sided projection-free method that achieves an ε-stationary solution within O(ε⁻⁴) iterations. Moreover, we present improved iteration complexities of our methods under a strong concavity assumption. To the best of our knowledge, our proposed algorithms are among the first projection-free methods with convergence guarantees for solving nonconvex-concave SP problems.
    Comment: Additional experiments have been added
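    To make the projection-free primitive concrete, here is a generic linear minimization oracle (LMO) over an ℓ₂ ball and one conditional-gradient (Frank-Wolfe) update built on it. The paper's single-loop primal-dual scheme with nested approximation is considerably more involved; this sketch only illustrates the building block, and both function names are hypothetical.

        import numpy as np

        def lmo_l2_ball(grad, radius=1.0):
            # linear minimization oracle: argmin_{||s|| <= radius} <grad, s>
            # (assumes grad != 0; an l2 ball is one example of a strongly convex set)
            return -radius * grad / np.linalg.norm(grad)

        def frank_wolfe_step(x, grad, t, radius=1.0):
            # one conditional-gradient update: move toward the LMO output;
            # no projection onto the constraint set is ever required
            s = lmo_l2_ball(grad, radius)
            gamma = 2.0 / (t + 2.0)          # standard open-loop step size
            return x + gamma * (s - x)

    The appeal of such methods is that an LMO is often far cheaper than a projection (e.g., over nuclear-norm balls or polytopes), which is exactly what the proposed primal-dual conditional gradient method exploits.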