Convex Optimization Methods for Dimension Reduction and Coefficient Estimation in Multivariate Linear Regression
In this paper, we study convex optimization methods for computing the trace
norm regularized least squares estimate in multivariate linear regression. The
so-called factor estimation and selection (FES) method, recently proposed by
Yuan et al. [22], conducts parameter estimation and factor selection
simultaneously and has been shown to enjoy nice properties in both large and
finite samples. Computing the estimates, however, can be very challenging in
practice because of the high dimensionality and the trace norm constraint. In
this paper, we explore a variant of Nesterov's smooth method [20] and interior
point methods for computing the penalized least squares estimate. The
performance of these methods is then compared using a set of randomly generated
instances. We show that the variant of Nesterov's smooth method [20] generally
and substantially outperforms the interior point method implemented in SDPT3
version 4.0 (beta) [19]. Moreover, the former method is much more memory
efficient.
Sparse Multivariate Factor Regression
We consider the problem of multivariate regression in a setting where the
relevant predictors could be shared among different responses. We propose an
algorithm which decomposes the coefficient matrix into the product of a long
matrix and a wide matrix, with an elastic net penalty on the former and an
$\ell_1$ penalty on the latter. The first matrix linearly transforms the
predictors to a set of latent factors, and the second one regresses the
responses on these factors. Our algorithm simultaneously performs dimension
reduction and coefficient estimation and automatically estimates the number of
latent factors from the data. Our formulation results in a non-convex
optimization problem which, despite its flexibility to impose effective
low-dimensional structure, is difficult, or even impossible, to solve exactly
in a reasonable time. We specify an optimization algorithm based on alternating
minimization with three different sets of updates to solve this non-convex
problem and provide theoretical results on its convergence and optimality.
Finally, we demonstrate the effectiveness of our algorithm via experiments on
simulated and real data.
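A rough sketch of the alternating minimization idea follows (under assumed notation; the paper's actual updates and tuning procedure may differ): write the coefficient matrix as the product A W of a long matrix A (predictors to k latent factors) and a wide matrix W (factors to responses), and alternate proximal gradient steps with an elastic net penalty on A and an $\ell_1$ penalty on W. The rank k, penalty weights, and warm start below are illustrative assumptions.

```python
import numpy as np

def soft(M, tau):
    """Elementwise soft-thresholding: the prox operator of tau * ||.||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def sparse_factor_regression(X, Y, k, lam1=0.1, lam2=0.1, alpha=0.5, n_iter=200):
    """Alternating prox-gradient for 0.5*||Y - X A W||_F^2
    + lam1*(alpha*||A||_1 + 0.5*(1-alpha)*||A||_F^2) + lam2*||W||_1."""
    # Warm start from a rank-k truncated SVD of the least squares fit
    # (an assumption; any reasonable initialization could be substituted).
    U, s, Vt = np.linalg.svd(np.linalg.pinv(X) @ Y, full_matrices=False)
    A = U[:, :k] * np.sqrt(s[:k])           # long matrix: predictors -> factors
    W = np.sqrt(s[:k])[:, None] * Vt[:k]    # wide matrix: factors -> responses
    LX = np.linalg.norm(X, 2) ** 2
    for _ in range(n_iter):
        # A-step: gradient of the smooth part (fit + ridge), then l1 prox.
        GA = X.T @ (X @ A @ W - Y) @ W.T + lam1 * (1 - alpha) * A
        LA = LX * max(np.linalg.norm(W, 2) ** 2, 1e-8) + lam1 * (1 - alpha)
        A = soft(A - GA / LA, lam1 * alpha / LA)
        # W-step: gradient step on the fit term, then l1 prox.
        GW = A.T @ X.T @ (X @ A @ W - Y)
        LW = max(np.linalg.norm(X @ A, 2) ** 2, 1e-8)
        W = soft(W - GW / LW, lam2 / LW)
    return A, W
```

In this sketch the number of latent factors actually used can be read off as the number of columns of A (equivalently, rows of W) that remain nonzero after the penalties take effect.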
Penalized Orthogonal Iteration for Sparse Estimation of Generalized Eigenvalue Problem
We propose a new algorithm for sparse estimation of eigenvectors in
generalized eigenvalue problems (GEP). The GEP arises in a number of modern
data-analytic situations and statistical methods, including principal component
analysis (PCA), multiclass linear discriminant analysis (LDA), canonical
correlation analysis (CCA), sufficient dimension reduction (SDR) and invariant
coordinate selection. We propose to modify the standard generalized orthogonal
iteration with a sparsity-inducing penalty for the eigenvectors. To achieve
this goal, we generalize the equation-solving step of orthogonal iteration to a
penalized convex optimization problem. The resulting algorithm, called
penalized orthogonal iteration, provides accurate estimation of the true
eigenspace, when it is sparse. Also proposed is a computationally more
efficient alternative, which works well for PCA and LDA problems. Numerical
studies reveal that the proposed algorithms are competitive, and that our
tuning procedure works well. We demonstrate applications of the proposed
algorithm to obtain sparse estimates for PCA, multiclass LDA, CCA and SDR.
Supplementary materials are available online.
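To make the main idea concrete, here is a hedged sketch (assuming B is symmetric positive definite, as in the PCA/LDA/CCA settings above; the penalty weight and iteration counts are illustrative, not the paper's tuned defaults) of replacing the equation-solving step B V_new = A V of generalized orthogonal iteration with an $\ell_1$-penalized convex program, solved here by proximal gradient:

```python
import numpy as np

def soft(M, tau):
    """Elementwise soft-thresholding: the prox operator of tau * ||.||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def penalized_orthogonal_iteration(A, B, k, lam=0.1, n_outer=50, n_inner=100):
    """Sparse top-k eigenspace estimate for A v = lambda * B v (a sketch)."""
    p = A.shape[0]
    rng = np.random.default_rng(0)
    V, _ = np.linalg.qr(rng.standard_normal((p, k)))
    LB = np.linalg.norm(B, 2)                 # Lipschitz constant of V -> B V
    for _ in range(n_outer):
        T = A @ V                             # right-hand side of B V_new = A V
        Vn = V.copy()
        # Penalized replacement for the equation-solving step: prox-gradient
        # on the convex surrogate 0.5*tr(V'BV) - tr(V'T) + lam*||V||_1.
        for _ in range(n_inner):
            G = B @ Vn - T                    # gradient of the smooth part
            Vn = soft(Vn - G / LB, lam / LB)  # l1 prox gives sparse loadings
        V, _ = np.linalg.qr(Vn)               # re-orthonormalize, as in
                                              # standard orthogonal iteration
    return Vn                                 # sparse basis for the eigenspace
```

The final QR step exists only to keep the iterates well conditioned between outer steps; the sparse estimate itself is the last penalized solve Vn.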