Strong rules for nonconvex penalties and their implications for efficient algorithms in high-dimensional regression
We consider approaches for improving the efficiency of algorithms for fitting
nonconvex penalized regression models such as SCAD and MCP in high dimensions.
In particular, we develop rules for discarding variables during cyclic
coordinate descent. This dimension reduction leads to a substantial improvement
in the speed of these algorithms for high-dimensional problems. The rules we
propose here eliminate a substantial fraction of the variables from the
coordinate descent algorithm. Violations are quite rare, especially in the
locally convex region of the solution path, and furthermore, may be easily
detected and corrected by checking the Karush-Kuhn-Tucker conditions. We extend
these rules to generalized linear models, as well as to other nonconvex
penalties such as the ℓ2-stabilized Mnet penalty, group MCP, and group
SCAD. We explore three variants of the coordinate descent algorithm that
incorporate these rules and study the efficiency of these algorithms in fitting
models to both simulated data and real data from a genome-wide association
study.
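The screening idea above can be illustrated with a minimal sketch. The lasso-form sequential strong rule shown below is the simplest instance (the paper's rules generalize it to MCP and SCAD), and all function names here are illustrative, not from the authors' software:

```python
import numpy as np

def strong_rule_active_set(X, y, beta_prev, lam, lam_prev):
    """Sequential strong rule (lasso form): keep feature j only if
    |x_j' r| / n >= 2*lam - lam_prev, where r is the residual at the
    previous lambda on the path. Discarded features are rechecked
    afterwards via the KKT conditions."""
    r = y - X @ beta_prev
    scores = np.abs(X.T @ r) / len(y)
    return scores >= 2 * lam - lam_prev

def kkt_violations(X, y, beta, lam, discarded):
    """Check the stationarity condition |x_j' r| / n <= lam for the
    discarded features; any violator is added back and the model refit."""
    r = y - X @ beta
    scores = np.abs(X.T @ r) / len(y)
    return discarded & (scores > lam)
```

In a path algorithm, the solver runs coordinate descent only on the surviving features at each lambda, then calls the KKT check and refits if any discarded feature violates the conditions, mirroring the "discard, then verify" structure described in the abstract.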
Group descent algorithms for nonconvex penalized linear and logistic regression models with grouped predictors
Penalized regression is an attractive framework for variable selection
problems. Often, variables possess a grouping structure, and the relevant
selection problem is that of selecting groups, not individual variables. The
group lasso has been proposed as a way of extending the ideas of the lasso to
the problem of group selection. Nonconvex penalties such as SCAD and MCP have
been proposed and shown to have several advantages over the lasso; these
penalties may also be extended to the group selection problem, giving rise to
group SCAD and group MCP methods. Here, we describe algorithms for fitting
these models stably and efficiently. In addition, we present simulation results
and real data examples comparing and contrasting the statistical properties of
these methods.
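The heart of a group descent algorithm is a multivariate thresholding update applied to one coefficient group at a time. A minimal sketch, assuming groups have been orthonormalized so the updates take the closed forms below (function names are illustrative):

```python
import numpy as np

def group_soft_threshold(z, lam):
    """Group lasso update: shrink the whole coefficient group toward
    zero by lam in norm, setting it exactly to zero when ||z|| <= lam."""
    norm = np.linalg.norm(z)
    if norm <= lam:
        return np.zeros_like(z)
    return (1 - lam / norm) * z

def group_firm_threshold(z, lam, gamma):
    """Group MCP ('firm') update: rescaled group soft-thresholding near
    zero, but no shrinkage at all once ||z|| exceeds gamma*lam, which is
    the bias-reduction property of MCP relative to the lasso."""
    norm = np.linalg.norm(z)
    if norm <= gamma * lam:
        return (gamma / (gamma - 1)) * group_soft_threshold(z, lam)
    return z
```

Because the update acts on the group norm, an entire group enters or leaves the model together, which is exactly the selection behavior described in the abstract.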
The Bernstein Function: A Unifying Framework of Nonconvex Penalization in Sparse Estimation
In this paper we study nonconvex penalization using Bernstein functions.
Since the Bernstein function is concave and nonsmooth at the origin, it can
induce a class of nonconvex functions for high-dimensional sparse estimation
problems. We derive a threshold function based on the Bernstein penalty and
give its mathematical properties in sparsity modeling. We show that a
coordinate descent algorithm is especially appropriate for penalized regression
problems with the Bernstein penalty. Additionally, we prove that the Bernstein
function can be defined as the concave conjugate of a φ-divergence and
develop a conjugate maximization algorithm for finding the sparse solution.
Finally, we particularly exemplify a family of Bernstein nonconvex penalties
based on a generalized Gamma measure and conduct empirical analysis for this
family.
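The paper derives its threshold function in closed form; as a generic illustration of what such a threshold operator computes, the proximal problem for a concave penalty can be evaluated numerically. The `log_penalty` example and grid search below are illustrative stand-ins, not the paper's construction:

```python
import numpy as np

def log_penalty(t, lam=1.0, gamma=1.0):
    """Example concave penalty, nonsmooth at the origin, of the kind a
    Bernstein function can induce (illustrative choice)."""
    return lam * np.log1p(t / gamma)

def threshold(z, penalty, grid=None):
    """Numerically evaluate the threshold (proximal) operator
    argmin_b 0.5*(b - z)**2 + penalty(|b|), exploiting sign symmetry by
    searching over b >= 0 and restoring the sign of z."""
    if grid is None:
        grid = np.linspace(0.0, abs(z) + 1.0, 20001)
    obj = 0.5 * (grid - abs(z)) ** 2 + penalty(grid)
    return np.sign(z) * grid[np.argmin(obj)]
```

The operator maps small inputs exactly to zero (sparsity from the nonsmooth origin) while shrinking large inputs only mildly (the concavity), which is why a coordinate descent algorithm, whose inner step is exactly this scalar problem, fits such penalties naturally.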
Penalized Estimation of Directed Acyclic Graphs From Discrete Data
Bayesian networks, with structure given by a directed acyclic graph (DAG),
are a popular class of graphical models. However, learning Bayesian networks
from discrete or categorical data is particularly challenging, due to the large
parameter space and the difficulty in searching for a sparse structure. In this
article, we develop a maximum penalized likelihood method to tackle this
problem. Instead of the commonly used multinomial distribution, we model the
conditional distribution of a node given its parents by multi-logit regression,
in which an edge is parameterized by a set of coefficient vectors with dummy
variables encoding the levels of a node. To obtain a sparse DAG, a group norm
penalty is employed, and a blockwise coordinate descent algorithm is developed
to maximize the penalized likelihood subject to the acyclicity constraint of a
DAG. When interventional data are available, our method constructs a causal
network, in which a directed edge represents a causal relation. We apply our
method to various simulated and real data sets. The results show that our
method is very competitive, compared to many existing methods, in DAG
estimation from both interventional and high-dimensional observational data.
Comment: To appear in Statistics and Computing
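Enforcing the acyclicity constraint during blockwise coordinate descent requires, at minimum, a reachability check before an edge's coefficient block is allowed to become nonzero. A minimal sketch of that check (illustrative, not the article's actual implementation):

```python
def creates_cycle(adj, u, v):
    """Return True if adding the directed edge u -> v would close a
    cycle, i.e. if v already reaches u. adj maps each node to the set
    of its children; a depth-first search from v suffices."""
    stack, seen = [v], set()
    while stack:
        node = stack.pop()
        if node == u:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(adj.get(node, ()))
    return False
```

In a blockwise algorithm, this check runs before each candidate edge update: blocks whose activation would create a cycle are kept at zero, so every iterate remains a valid DAG.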
A Selective Review of Group Selection in High-Dimensional Models
Grouping structures arise naturally in many statistical modeling problems.
Several methods have been proposed for variable selection that respect grouping
structure in variables. Examples include the group LASSO and several concave
group selection methods. In this article, we give a selective review of group
selection concerning methodological developments, theoretical properties and
computational algorithms. We pay particular attention to group selection
methods involving concave penalties. We address both group selection and
bi-level selection methods. We describe several applications of these methods
in nonparametric additive models, semiparametric regression, seemingly
unrelated regressions, genomic data analysis and genome-wide association
studies. We also highlight some issues that require further study.
Comment: Published at http://dx.doi.org/10.1214/12-STS392 in Statistical
Science (http://www.imstat.org/sts/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
Learning Large-Scale Bayesian Networks with the sparsebn Package
Learning graphical models from data is an important problem with wide
applications, ranging from genomics to the social sciences. Nowadays datasets
often have upwards of thousands---sometimes tens or hundreds of thousands---of
variables and far fewer samples. To meet this challenge, we have developed a
new R package called sparsebn for learning the structure of large, sparse
graphical models with a focus on Bayesian networks. While there are many
existing software packages for this task, this package focuses on the unique
setting of learning large networks from high-dimensional data, possibly with
interventions. As such, the methods provided place a premium on scalability and
consistency in a high-dimensional setting. Furthermore, in the presence of
interventions, the methods implemented here achieve the goal of learning a
causal network from data. Additionally, the sparsebn package is fully
compatible with existing software packages for network analysis.
Comment: To appear in the Journal of Statistical Software, 39 pages, 7 figures