L0 Sparse Inverse Covariance Estimation
Recently, there has been focus on penalized log-likelihood covariance
estimation for sparse inverse covariance (precision) matrices. The penalty is
responsible for inducing sparsity, and a very common choice is the convex L1
norm. However, the best estimator performance is not always achieved with this
penalty. The most natural sparsity-promoting "norm" is the non-convex L0
penalty, but its lack of convexity has deterred its use in sparse maximum
likelihood estimation. In this paper we consider L0 penalized log-likelihood
inverse covariance estimation and present a novel cyclic descent algorithm for
its optimization. Convergence to a local minimizer is proved, which is highly
non-trivial, and we demonstrate via simulations the reduced bias and superior
quality of the L0 penalty as compared to the L1 penalty.
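For reference, the estimator described here targets a problem of the following
standard form (a sketch based on the abstract's description; the paper's exact
formulation, e.g. whether diagonal entries are penalized, may differ):

\[
\hat{\Theta} \;=\; \arg\min_{\Theta \succ 0} \; -\log\det\Theta \;+\; \operatorname{tr}(S\Theta) \;+\; \lambda \,\|\Theta\|_0,
\]

where S is the sample covariance matrix, \lambda > 0 controls the sparsity
level, and \|\Theta\|_0 counts the nonzero entries of \Theta.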
Fixed effects selection in the linear mixed-effects model using adaptive ridge procedure for L0 penalty performance
This paper is concerned with the selection of fixed effects along with the
estimation of fixed effects, random effects and variance components in the
linear mixed-effects model. We introduce a selection procedure based on an
adaptive ridge (AR) penalty of the profiled likelihood, where the covariance
matrix of the random effects is Cholesky factorized. This selection procedure
is intended for both low- and high-dimensional settings, where the number of
fixed effects is allowed to grow exponentially with the total sample size,
yielding
technical difficulties due to the non-convex optimization problem induced by L0
penalties. Through extensive simulation studies, the procedure is compared to
LASSO selection and appears to enjoy model selection consistency as well as
estimation consistency.
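To make the adaptive ridge idea concrete, the following is a minimal sketch in
a deliberately simplified setting (ordinary linear regression rather than the
profiled mixed-model likelihood treated in the paper); the function name, the
weight update, and the selection heuristic are illustrative assumptions, not
the authors' code:

```python
import numpy as np

def adaptive_ridge(X, y, lam=1.0, delta=1e-5, n_iter=50):
    """Iteratively reweighted ridge approximating an L0 penalty (illustrative).

    With weights w_j = (beta_j^2 + delta^2)^{-1}, the effective penalty
    lam * sum_j w_j * beta_j^2 approaches lam * ||beta||_0 as delta -> 0.
    """
    p = X.shape[1]
    w = np.ones(p)
    XtX, Xty = X.T @ X, X.T @ y
    for _ in range(n_iter):
        # Weighted ridge solve: (X'X + lam * diag(w)) beta = X'y
        beta = np.linalg.solve(XtX + lam * np.diag(w), Xty)
        w = 1.0 / (beta ** 2 + delta ** 2)
    # Heuristic support recovery: w_j * beta_j^2 is near 1 for retained
    # coefficients and near 0 for those driven to zero
    selected = w * beta ** 2 > 0.5
    return beta, selected
```

In the paper's setting, the same reweighting scheme is applied to the profiled
likelihood of the mixed model, with the random-effects covariance matrix
handled through its Cholesky factor.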
Feature Augmentation via Nonparametrics and Selection (FANS) in High Dimensional Classification
We propose a high dimensional classification method that involves
nonparametric feature augmentation. Knowing that marginal density ratios are
the most powerful univariate classifiers, we use the ratio estimates to
transform the original feature measurements. Subsequently, penalized logistic
regression is invoked, taking as input the newly transformed or augmented
features. This procedure trains models equipped with local complexity and
global simplicity, thereby avoiding the curse of dimensionality while creating
a flexible nonlinear decision boundary. The resulting method is called Feature
Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by
generalizing the Naive Bayes model, writing the log ratio of joint densities as
a linear combination of those of marginal densities. It is related to
generalized additive models, but has better interpretability and computability.
Risk bounds are developed for FANS. In numerical studies, FANS is compared
with competing methods to provide guidance on its best application
domain. Real data analysis demonstrates that FANS performs very competitively
on benchmark email spam and gene expression data sets. Moreover, FANS is
implemented with an extremely fast algorithm that exploits parallel computing.
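As a rough illustration of the FANS construction (not the authors'
implementation: the choice of kernel density estimator, the omission of their
sample-splitting step, and all names below are assumptions made here), one
could map each feature to its estimated marginal log density ratio and feed
the result to a penalized logistic regression:

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import LogisticRegression

def fans_transform(X_train, y_train, X):
    """Map each feature to an estimated marginal log density ratio.

    For feature j, class-conditional densities g_j (class 1) and f_j
    (class 0) are estimated by kernel density estimation, and x_j is
    transformed to log(g_j(x_j) / f_j(x_j)).
    """
    eps = 1e-10  # guard against vanishing density estimates
    Z = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        g = gaussian_kde(X_train[y_train == 1, j])
        f = gaussian_kde(X_train[y_train == 0, j])
        Z[:, j] = np.log(g(X[:, j]) + eps) - np.log(f(X[:, j]) + eps)
    return Z

# Penalized logistic regression on the augmented features:
# Z_train = fans_transform(X_train, y_train, X_train)
# clf = LogisticRegression(penalty="l1", solver="liblinear").fit(Z_train, y_train)
```

The abstract's observation that marginal density ratios are the most powerful
univariate classifiers motivates this transform: each augmented feature is
itself an estimated optimal one-dimensional classifier, and the penalized
regression then selects and combines them.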