A Cluster Elastic Net for Multivariate Regression
We propose a method for estimating coefficients in multivariate regression
when there is a clustering structure to the response variables. The proposed
method includes a fusion penalty, to shrink the difference in fitted values
from responses in the same cluster, and an L1 penalty for simultaneous variable
selection and estimation. The method can be used when the grouping structure of
the response variables is known or unknown. When the clustering structure is
unknown the method will simultaneously estimate the clusters of the response
and the regression coefficients. Theoretical results are presented for the
penalized least squares case, including asymptotic results allowing for p >> n.
We extend our method to the setting where the responses are binomial variables.
We propose a coordinate descent algorithm for both the normal and binomial
likelihood, which can easily be extended to other generalized linear model
(GLM) settings. Simulations and data examples from business operations and
genomics are presented to show the merits of both the least squares and
binomial methods.
Comment: 37 pages, 11 figures
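A minimal sketch of the penalized least squares objective described above, in notation of our own choosing (the authors' exact formulation may differ): with responses $y_1, \dots, y_q$, design matrix $X$, coefficient vectors $b_k$, and clusters $C_1, \dots, C_G$,

```latex
\min_{B}\; \frac{1}{2}\sum_{k=1}^{q} \lVert y_k - X b_k \rVert_2^2
  + \lambda_1 \sum_{j=1}^{p}\sum_{k=1}^{q} \lvert b_{jk} \rvert
  + \lambda_2 \sum_{g=1}^{G} \sum_{\substack{k,l \in C_g \\ k < l}}
      \lVert X b_k - X b_l \rVert_2^2
```

The $\lambda_2$ term is the fusion penalty, shrinking the fitted values $X b_k$ toward each other for responses in the same cluster; the $\lambda_1$ term gives simultaneous variable selection and estimation.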
Simultaneous Variable and Covariance Selection with the Multivariate Spike-and-Slab Lasso
We propose a Bayesian procedure for simultaneous variable and covariance
selection using continuous spike-and-slab priors in multivariate linear
regression models where q possibly correlated responses are regressed onto p
predictors. Rather than relying on a stochastic search through the
high-dimensional model space, we develop an ECM algorithm similar to the EMVS
procedure of Rockova & George (2014) targeting modal estimates of the matrix of
regression coefficients and residual precision matrix. Varying the scale of the
continuous spike densities facilitates dynamic posterior exploration and allows
us to filter out negligible regression coefficients and partial covariances
gradually. Our method is seen to substantially outperform regularization
competitors on simulated data. We demonstrate our method with a re-examination
of data from a recent observational study of the effect of playing high school
football on several later-life cognition, psychological, and socio-economic
outcomes.
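One standard form of the continuous spike-and-slab lasso prior underlying such procedures (following Rockova & George's spike-and-slab lasso; the notation here is illustrative) places on each regression coefficient a two-component mixture of Laplace densities,

```latex
\pi(\beta_{jk} \mid \gamma_{jk})
  = \gamma_{jk}\,\psi(\beta_{jk} \mid \lambda_1)
  + (1 - \gamma_{jk})\,\psi(\beta_{jk} \mid \lambda_0),
\qquad
\psi(\beta \mid \lambda) = \frac{\lambda}{2}\, e^{-\lambda \lvert \beta \rvert},
```

with a diffuse slab ($\lambda_1$ small) and a sharp spike ($\lambda_0 \gg \lambda_1$), and an analogous prior on the off-diagonal entries of the residual precision matrix. Increasing $\lambda_0$ along a ladder of values is what allows negligible coefficients and partial covariances to be filtered out gradually.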
Regularized Multivariate Regression Models with Skew-t Error Distributions
We consider regularization of the parameters in multivariate linear regression models whose errors follow a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient matrix and the inverse scale matrix simultaneously. Sparsity is introduced by penalizing the negative log-likelihood with L1 penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t distributions, and using the expectation conditional maximization (ECM) algorithm, we reduce the problem to a penalized normal likelihood and develop a procedure to minimize the ensuing objective function. The performance of the method is assessed in a simulation study, and the methodology is illustrated on a real data set with a 24-dimensional response vector.
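The penalized objective described in this abstract can be sketched as follows (our notation; $\ell$ denotes the multivariate skew-$t$ log-likelihood, $B$ the coefficient matrix, and $\Omega$ the inverse scale matrix):

```latex
\min_{B,\; \Omega \succ 0}\;
  -\ell(B, \Omega)
  + \lambda_1 \sum_{j,k} \lvert b_{jk} \rvert
  + \lambda_2 \sum_{j \neq k} \lvert \omega_{jk} \rvert
```

Via the hierarchical representation of the skew-$t$, the ECM algorithm replaces $-\ell$ at each iteration with a penalized normal likelihood surrogate, so each conditional maximization step is a more tractable L1-penalized Gaussian problem.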
Functional Regression
Functional data analysis (FDA) involves the analysis of data whose ideal
units of observation are functions defined on some continuous domain, and the
observed data consist of a sample of functions taken from some population,
sampled on a discrete grid. Ramsay and Silverman's 1997 textbook sparked the
development of this field, which has accelerated in the past 10 years to become
one of the fastest growing areas of statistics, fueled by the growing number of
applications yielding this type of data. One unique characteristic of FDA is
the need to combine information both across and within functions, which Ramsay
and Silverman called replication and regularization, respectively. This article
will focus on functional regression, the area of FDA that has received the most
attention in applications and methodological development. First will be an
introduction to basis functions, key building blocks for regularization in
functional regression methods, followed by an overview of functional regression
methods, split into three types: [1] functional predictor regression
(scalar-on-function), [2] functional response regression (function-on-scalar)
and [3] function-on-function regression. For each, the role of replication and
regularization will be discussed and the methodological development described
in a roughly chronological manner, at times deviating from the historical
timeline to group together similar methods. The primary focus is on modeling
and methodology, highlighting the modeling structures that have been developed
and the various regularization approaches employed. At the end is a brief
discussion describing potential areas of future development in this field.
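To make the scalar-on-function case concrete, here is a minimal sketch of functional predictor regression via a basis expansion, using simulated data and a cosine basis. All names, the basis choice, and the ridge tuning are illustrative assumptions, not taken from any specific method surveyed in this article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n curves observed on a grid of m points, K basis functions.
n, m, K = 200, 101, 7
t = np.linspace(0.0, 1.0, m)

# Cosine basis phi_k(t) = cos(pi * k * t), stacked as an (m, K) matrix.
Phi = np.array([np.cos(np.pi * k * t) for k in range(K)]).T

# Simulated functional predictors X_i(t) as random combinations of the basis.
X = rng.normal(size=(n, K)) @ Phi.T                    # (n, m) sampled curves

# True coefficient function beta(t), chosen to lie in the basis span,
# and scalar responses y_i = integral of X_i(t) * beta(t) dt + noise.
beta_true = np.cos(2 * np.pi * t) - 0.5 * np.cos(np.pi * t)
dt = t[1] - t[0]
y = X @ beta_true * dt + 0.05 * rng.normal(size=n)

# Basis expansion beta(t) = sum_k c_k phi_k(t) turns the functional model
# into an ordinary regression on the (n, K) matrix of quadrature scores.
Z = X @ Phi * dt

# Ridge-regularized least squares for the basis coefficients c.
alpha = 1e-4
c = np.linalg.solve(Z.T @ Z + alpha * np.eye(K), Z.T @ y)
beta_hat = Phi @ c

# Mean squared error of the recovered coefficient function on the grid.
print(float(np.mean((beta_hat - beta_true) ** 2)))
```

The key point is the dimension reduction: regularization happens through the small number of smooth basis functions (plus the ridge penalty), while replication across the n curves supplies the information to estimate the shared coefficient function.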