1,175 research outputs found
Discussion: One-step sparse estimates in nonconcave penalized likelihood models
Discussion of ``One-step sparse estimates in nonconcave penalized likelihood models'' [arXiv:0808.1012]
Comment: Published at http://dx.doi.org/10.1214/07-AOS0316C in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
Nonconcave penalized likelihood with a diverging number of parameters
A class of variable selection procedures for parametric models via nonconcave
penalized likelihood was proposed by Fan and Li to simultaneously estimate
parameters and select important variables. They demonstrated that this class of
procedures has an oracle property when the number of parameters is finite.
However, in most model selection problems the number of parameters should be
large and grow with the sample size. In this paper, some asymptotic properties
of the nonconcave penalized likelihood are established for the situation in
which the number of parameters tends to infinity as the sample size increases.
Under regularity conditions, we establish an oracle property and the asymptotic
normality of the penalized likelihood estimators. Furthermore, the consistency
of the sandwich formula for the covariance matrix is demonstrated. Nonconcave
penalized likelihood ratio statistics are discussed, and their asymptotic
distributions under the null hypothesis are obtained by imposing some mild
conditions on the penalty functions.
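The nonconcave penalties this abstract refers to can be made concrete with Fan and Li's SCAD penalty, the canonical member of the class. The following is a minimal NumPy sketch, not the authors' code; the function name is mine, and a = 3.7 is the default value Fan and Li suggest. The penalty is linear near zero (like the lasso), bends quadratically, and becomes constant beyond a*lam, which is what removes the bias on large coefficients:

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty of Fan and Li (illustrative sketch).

    Piecewise: lam*|t| for |t| <= lam; a quadratic bridge for
    lam < |t| <= a*lam; constant (a+1)*lam^2/2 beyond a*lam.
    """
    t = np.abs(theta)
    linear = lam * t
    middle = -(t**2 - 2 * a * lam * t + lam**2) / (2 * (a - 1))
    flat = (a + 1) * lam**2 / 2 * np.ones_like(t)
    return np.where(t <= lam, linear, np.where(t <= a * lam, middle, flat))
```

The three pieces match at the knots (both branches equal lam**2 at |t| = lam and (a+1)*lam**2/2 at |t| = a*lam), so the penalty is continuous, which the oracle-property arguments rely on.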
Calibrating nonconvex penalized regression in ultra-high dimension
We investigate high-dimensional nonconvex penalized regression, where the
number of covariates may grow at an exponential rate. Although recent
asymptotic theory established that there exists a local minimum possessing the
oracle property under general conditions, it is still largely an open problem
how to identify the oracle estimator among potentially multiple local minima.
There are two main obstacles: (1) due to the presence of multiple minima, the
solution path is nonunique and is not guaranteed to contain the oracle
estimator; (2) even if a solution path is known to contain the oracle
estimator, the optimal tuning parameter depends on many unknown factors and is
hard to estimate. To address these two challenging issues, we first prove that
an easy-to-calculate calibrated CCCP algorithm produces a consistent solution
path which contains the oracle estimator with probability approaching one.
Furthermore, we propose a high-dimensional BIC criterion and show that it can
be applied to the solution path to select the optimal tuning parameter which
asymptotically identifies the oracle estimator. The theory for a general class
of nonconvex penalties in the ultra-high-dimensional setup is established when
the random errors follow a sub-Gaussian distribution. Monte Carlo studies
confirm that the calibrated CCCP algorithm combined with the proposed
high-dimensional BIC has desirable performance in identifying the underlying
sparsity pattern for high-dimensional data analysis.
Comment: Published at http://dx.doi.org/10.1214/13-AOS1159 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
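The tuning-parameter selection step described in this abstract can be sketched as follows. This is a hedged illustration of a BIC-type criterion of the general kind the paper proposes (log residual variance plus a model-size penalty inflated by log p), not the authors' exact formula; the function name and the constant c_n = log(log n) are my assumptions:

```python
import numpy as np

def hbic_select(X, y, path, c_n=None):
    """Pick the index of the best model along a solution path using a
    high-dimensional-BIC-style criterion (illustrative sketch):
        log(RSS/n) + |support| * c_n * log(p) / n.
    `path` is a list of coefficient vectors, one per tuning parameter.
    """
    n, p = X.shape
    if c_n is None:
        c_n = np.log(np.log(n))  # common choice; an assumption here
    scores = []
    for beta in path:
        rss = np.sum((y - X @ beta) ** 2)
        size = np.count_nonzero(beta)
        scores.append(np.log(rss / n) + size * c_n * np.log(p) / n)
    return int(np.argmin(scores))
```

The log(p)/n factor is what distinguishes such criteria from the classical BIC: it keeps the penalty meaningful when p grows exponentially in n, so overfitted models along the path are rejected with probability tending to one.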
Discussion: The Dantzig selector: Statistical estimation when p is much larger than n
Discussion of ``The Dantzig selector: Statistical estimation when p is much
larger than n'' [math/0506081]
Comment: Published at http://dx.doi.org/10.1214/009053607000000442 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
- …