
    Optimal computational and statistical rates of convergence for sparse nonconvex learning problems

    We provide a theoretical analysis of the statistical and computational properties of penalized M-estimators that can be formulated as the solution to a possibly nonconvex optimization problem. Many important estimators fall into this category, including least squares regression with nonconvex regularization, generalized linear models with nonconvex regularization, and sparse elliptical random design regression. For these problems, computing the global solution is intractable due to the nonconvex formulation. In this paper, we propose an approximate regularization path-following method for solving a variety of learning problems with nonconvex objective functions. Under a unified analytic framework, we simultaneously provide explicit statistical and computational rates of convergence for any local solution attained by the algorithm. Computationally, our algorithm attains a global geometric rate of convergence for calculating the full regularization path, which is optimal among all first-order algorithms. Unlike most existing methods, which only attain geometric rates of convergence for a single regularization parameter, our algorithm calculates the full regularization path with the same iteration complexity. In particular, we provide a refined iteration complexity bound to sharply characterize the performance of each stage along the regularization path. Statistically, we provide a sharp sample complexity analysis for all the approximate local solutions along the regularization path. In particular, our analysis improves upon existing results by providing a more refined sample complexity bound as well as an exact support recovery result for the final estimator. These results show that the final estimator attains an oracle statistical property due to the use of the nonconvex penalty. Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics; DOI: http://dx.doi.org/10.1214/14-AOS1238.
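    To make the path-following idea concrete, here is a minimal sketch assuming a least-squares loss, with soft thresholding standing in for the proximal step of a nonconvex penalty such as SCAD or MCP. The function names, the geometric decay factor, and the fixed inner iteration budget are illustrative assumptions, not the paper's exact algorithm or stopping rules:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1, used here as a stand-in
    for the proximal step of a nonconvex penalty (e.g. SCAD/MCP)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def path_follow(X, y, lam_max, lam_min, eta=0.9, inner_iters=100):
    """Solve a sequence of penalized least-squares problems with a
    geometrically decreasing regularization parameter, warm-starting
    each stage at the previous stage's solution."""
    n, d = X.shape
    step = n / np.linalg.norm(X, 2) ** 2    # 1/L for the loss (1/2n)||y - X b||^2
    beta = np.zeros(d)                      # the first stage starts at zero
    lam, path = lam_max, []
    while lam >= lam_min:
        for _ in range(inner_iters):        # proximal gradient steps at this lambda
            grad = X.T @ (X @ beta - y) / n
            beta = soft_threshold(beta - step * grad, step * lam)
        path.append((lam, beta.copy()))     # warm start carries to the next stage
        lam *= eta                          # geometric decrease along the path
    return path
```

    Each stage warm-starts at the previous stage's solution, which is what lets the full path be computed at essentially the iteration complexity of solving for a single parameter.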

    Pathwise Coordinate Optimization for Sparse Learning: Algorithm and Theory

    Pathwise coordinate optimization is one of the most important computational frameworks for high dimensional convex and nonconvex sparse learning problems. It differs from classical coordinate optimization algorithms in three salient features: warm start initialization, active set updating, and a strong rule for coordinate preselection. This complex algorithmic structure grants superior empirical performance but also poses significant challenges for theoretical analysis. To tackle this long-standing problem, we develop a new theory showing that these three features play pivotal roles in guaranteeing the outstanding statistical and computational performance of the pathwise coordinate optimization framework. In particular, we analyze the existing pathwise coordinate optimization algorithms and provide new theoretical insights into them. The obtained insights further motivate the development of several modifications to improve the pathwise coordinate optimization framework, which guarantee linear convergence to a unique sparse local optimum with optimal statistical properties in parameter estimation and support recovery. This is the first result on the computational and statistical guarantees of the pathwise coordinate optimization framework in high dimensions. Thorough numerical experiments are provided to support our theory. Comment: Accepted by the Annals of Statistics, 2016.
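    For orientation, here is a minimal lasso sketch showing the three features the abstract highlights: warm starts across the lambda path, an active-set style inner sweep, and a strong-rule screen for coordinate preselection. The screening threshold follows the standard sequential strong rule of Tibshirani et al., and the KKT recheck for screened-out coordinates is omitted for brevity; none of this is necessarily the paper's exact modification:

```python
import numpy as np

def lasso_cd_path(X, y, lambdas, sweeps=50, tol=1e-6):
    """Pathwise coordinate descent for (1/2n)||y - X b||^2 + lam * ||b||_1."""
    n, d = X.shape
    col_sq = (X ** 2).sum(axis=0) / n       # per-coordinate curvature x_j'x_j / n
    beta = np.zeros(d)                      # warm start carried along the path
    lam_prev = np.abs(X.T @ y).max() / n    # lambda_max, where beta = 0 is optimal
    path = []
    for lam in sorted(lambdas, reverse=True):
        grad = np.abs(X.T @ (y - X @ beta)) / n
        strong = (grad >= 2 * lam - lam_prev) | (beta != 0)  # strong-rule screen
        for _ in range(sweeps):
            max_change = 0.0
            for j in np.where(strong)[0]:   # sweep only preselected coordinates
                r_j = y - X @ beta + X[:, j] * beta[j]       # partial residual
                z = X[:, j] @ r_j / n
                new = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
                max_change = max(max_change, abs(new - beta[j]))
                beta[j] = new
            if max_change < tol:            # inner loop converged at this lambda
                break
        path.append((lam, beta.copy()))
        lam_prev = lam
    return path
```

    Running the sweeps only over the screened set is what keeps each lambda cheap; the warm start is what makes the inner loops converge in a few sweeps once the path is underway.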

    CHINA'S RURAL HOUSEHOLD DEMAND FOR FRUIT AND VEGETABLES

    A two-stage budgeting LES-LA/AIDS system is used to estimate rural household demand in China, with special emphasis on changes in demand for fruit and vegetable commodities across different income groups. The own-price elasticity for food was found to be more elastic than those for clothing, housing, durable goods, and other items. Within the food group, price elasticities range from -1.042 to -0.019. Grain, with an expenditure elasticity of almost unity, is an important staple food for the average rural household. Vegetables are important nonstaple foods relative to fruits. Lower-value vegetables are the most price elastic in the vegetable group. Fruits are more price elastic than vegetables, with grapes being the most price elastic. Different income groups share a common demand function. Keywords: AIDS model, Chinese rural households, elasticity, household demand, LES model, two-stage budgeting, demand and price analysis.
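    For readers unfamiliar with the model, the second-stage budget-share equation of an LA/AIDS system takes the standard Deaton-Muellbauer form sketched below; the notation is the textbook one, not necessarily the paper's:

```latex
% LA/AIDS budget-share equation (standard textbook form):
% w_i is the budget share of good i, p_j the price of good j,
% x is total (group) expenditure, and P* is Stone's price index.
w_i = \alpha_i + \sum_j \gamma_{ij} \ln p_j
      + \beta_i \ln\!\left(\frac{x}{P^*}\right),
\qquad \ln P^* = \sum_k w_k \ln p_k
```

    The price and expenditure elasticities quoted in the abstract are typically derived from the estimated \gamma_{ij} and \beta_i coefficients together with the fitted budget shares.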

    Diffusion Approximations for Online Principal Component Estimation and Global Convergence

    In this paper, we adopt diffusion approximation tools to study the dynamics of Oja's iteration, an online stochastic gradient descent method for principal component analysis. Oja's iteration maintains a running estimate of the true principal component from streaming data and enjoys low time and space complexity. We show that Oja's iteration for the top eigenvector generates a continuous-state, discrete-time Markov chain over the unit sphere. We characterize Oja's iteration in three phases using diffusion approximation and weak convergence tools. Our three-phase analysis further provides a finite-sample error bound for the running estimate, which matches the minimax information lower bound for principal component analysis under the additional assumption of bounded samples. Comment: Appeared in NIPS 2017.
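    Oja's rule for the top eigenvector is short enough to sketch in full. The constant step size, seed handling, and function name below are illustrative assumptions; the paper's analysis concerns the dynamics of this update, typically under a carefully chosen step size:

```python
import numpy as np

def oja_top_eigenvector(stream, dim, eta=0.01, seed=0):
    """Track the top principal component of streaming samples."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(dim)
    w /= np.linalg.norm(w)              # initialize on the unit sphere
    for x in stream:                    # one pass over streaming samples
        w = w + eta * x * (x @ w)       # stochastic update: w + eta * x x^T w
        w /= np.linalg.norm(w)          # renormalize; the iterate stays on the sphere
    return w
```

    Because each iterate is renormalized, the sequence of estimates lives on the unit sphere, which is exactly the continuous-state, discrete-time Markov chain the abstract analyzes; feeding in samples from a spiked covariance and tracking the alignment with the true top eigenvector is a quick way to watch the dynamics unfold.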