Nonconcave penalized likelihood with a diverging number of parameters
A class of variable selection procedures for parametric models via nonconcave
penalized likelihood was proposed by Fan and Li to simultaneously estimate
parameters and select important variables. They demonstrated that this class of
procedures has an oracle property when the number of parameters is finite.
However, in most model selection problems the number of parameters should be
large and grow with the sample size. In this paper some asymptotic properties
of the nonconcave penalized likelihood are established for situations in which
the number of parameters tends to infinity as the sample size increases.
Under regularity conditions we have established an oracle property and the
asymptotic normality of the penalized likelihood estimators. Furthermore, the
consistency of the sandwich formula of the covariance matrix is demonstrated.
Nonconcave penalized likelihood ratio statistics are discussed, and their
asymptotic distributions under the null hypothesis are obtained by imposing
some mild conditions on the penalty functions.
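The penalty at the heart of this line of work is the SCAD function of Fan and Li. As a concrete illustration (a sketch using the standard closed form, with the conventional default a = 3.7, not code from the paper itself):

```python
import numpy as np

def scad_penalty(t, lam, a=3.7):
    """SCAD penalty of Fan and Li, evaluated elementwise on |t|.

    A quadratic spline: linear (lasso-like) near zero, a concave
    transition on (lam, a*lam], and constant beyond a*lam, so large
    coefficients are not shrunk (the source of the oracle property).
    """
    t = np.abs(np.asarray(t, dtype=float))
    linear = lam * t                                            # |t| <= lam
    quad = (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))    # lam < |t| <= a*lam
    const = (a + 1) * lam**2 / 2                                # |t| > a*lam
    return np.where(t <= lam, linear,
                    np.where(t <= a * lam, quad, const))
```

The three pieces agree at the knots |t| = lam and |t| = a*lam, so the penalty is continuous; its derivative vanishes for |t| > a*lam, which is what distinguishes SCAD from the lasso's constant shrinkage.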
APPLE: Approximate Path for Penalized Likelihood Estimators
In high-dimensional data analysis, penalized likelihood estimators are shown
to provide superior results in both variable selection and parameter
estimation. A new algorithm, APPLE, is proposed for calculating the Approximate
Path for Penalized Likelihood Estimators. Both the convex penalty (such as
LASSO) and the nonconvex penalty (such as SCAD and MCP) cases are considered.
The APPLE efficiently computes the solution path for the penalized likelihood
estimator using a hybrid of the modified predictor-corrector method and the
coordinate-descent algorithm. APPLE is compared with several well-known
packages via simulation and analysis of two gene expression data sets. Comment: 24 pages, 9 figures.
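One of the two ingredients APPLE hybridizes is coordinate descent. A minimal sketch of that building block for the lasso case (generic soft-thresholding updates, not the APPLE algorithm itself; the function names are illustrative):

```python
import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator, the closed-form lasso coordinate update."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for the lasso objective
    (1/(2n)) * ||y - X b||^2 + lam * ||b||_1.
    Assumes the columns of X are on a common scale."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X**2).sum(axis=0) / n     # per-coordinate curvature x_j'x_j / n
    r = y - X @ b                       # residual (equals y since b = 0)
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]                     # add coordinate j back
            z = X[:, j] @ r / n                     # partial correlation
            b[j] = soft_threshold(z, lam) / col_sq[j]
            r -= X[:, j] * b[j]                     # restore the residual
    return b
```

For a nonconvex penalty such as SCAD or MCP, the same loop applies with the soft-threshold step replaced by the penalty's own one-dimensional update, which is the setting the paper's path algorithm targets.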
Recovering Velocity Distributions via Penalized Likelihood
Line-of-sight velocity distributions are crucial for unravelling the dynamics
of hot stellar systems. We present a new formalism based on penalized
likelihood for deriving such distributions from kinematical data, and evaluate
the performance of two algorithms that extract N(V) from absorption-line
spectra and from sets of individual velocities. Both algorithms are superior to
existing ones in that the solutions are nearly unbiased even when the data are
so poor that a great deal of smoothing is required. In addition, the
discrete-velocity algorithm is able to remove a known distribution of
measurement errors from the estimate of N(V). The formalism is used to recover
the velocity distribution of stars in five fields near the center of the
globular cluster Omega Centauri. Comment: 18 LaTeX pages, 10 PostScript figures, uses AASTeX, epsf.sty.
Submitted to The Astronomical Journal, May 199
Nonparametric maximum likelihood estimation of probability densities by penalty function methods
When it is not known a priori exactly to which finite dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation which should avoid many of these difficulties is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.
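In the notation now standard for this approach (the paper's own symbols may differ), the maximum penalized likelihood estimate solves

```latex
\hat{f} \;=\; \arg\max_{\, f \in H,\; f \ge 0,\; \int f = 1}
  \left[ \sum_{i=1}^{n} \log f(x_i) \;-\; \lambda \, \|f\|_{H}^{2} \right],
```

where $H$ is the reproducing kernel Hilbert space, $\|f\|_{H}^{2}$ is the roughness penalty induced by its norm, and $\lambda > 0$ trades fidelity to the sample against smoothness. The spline result cited in the abstract corresponds to choosing $H$ so that the squared norm penalizes derivatives of $f$.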
Constructing irregular histograms by penalized likelihood
We propose a fully automatic procedure for the construction of irregular histograms. For a given number of bins, the maximum likelihood histogram is known to be the result of a dynamic programming algorithm. To choose the number of bins, we propose two different penalties motivated by recent work in model selection by Castellan [6] and Massart [26]. We give a complete description of the algorithm and a proper tuning of the penalties. Finally, we compare our procedure to other existing proposals for a wide range of different densities and sample sizes. Keywords: irregular histogram, density estimation, penalized likelihood, dynamic programming.
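The dynamic programming step referred to above can be sketched as follows: for a fixed number of bins D and a fixed grid of candidate breakpoints, the maximum likelihood histogram is found by an exact DP over grid cells. This is a generic illustration of that idea (the grid, function names, and O(D m^2) recursion are mine, not the paper's implementation):

```python
import numpy as np

def best_histogram(x, grid, D):
    """Maximum-likelihood irregular histogram with D bins, by dynamic
    programming over a fixed increasing grid of candidate bin edges.
    Returns the D + 1 chosen edges."""
    n = len(x)
    counts = np.histogram(x, bins=grid)[0]   # sample counts per grid cell
    m = len(grid) - 1                        # number of grid cells

    def score(i, j):
        # log-likelihood contribution of one bin covering cells i..j-1
        N = counts[i:j].sum()
        w = grid[j] - grid[i]
        return -np.inf if N == 0 else N * np.log(N / (n * w))

    # L[d, j]: best score splitting cells 0..j-1 into d bins
    L = np.full((D + 1, m + 1), -np.inf)
    back = np.zeros((D + 1, m + 1), dtype=int)
    L[0, 0] = 0.0
    for d in range(1, D + 1):
        for j in range(d, m + 1):
            for i in range(d - 1, j):        # last bin covers cells i..j-1
                s = L[d - 1, i] + score(i, j)
                if s > L[d, j]:
                    L[d, j] = s
                    back[d, j] = i
    # recover the bin edges by backtracking from the full cell range
    edges = [m]
    for d in range(D, 0, -1):
        edges.append(back[d, edges[-1]])
    return grid[np.array(edges[::-1])]
```

Choosing D itself is then the model selection problem the abstract addresses: the DP is rerun (or tabulated) for each candidate D, and a penalty on D picks the final histogram.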