Calibrated Multivariate Regression with Application to Neural Semantic Basis Discovery
We propose a calibrated multivariate regression method named CMR for fitting
high dimensional multivariate regression models. Compared with existing
methods, CMR calibrates the regularization for each regression task with respect to its noise level, so that it simultaneously attains improved finite-sample performance and insensitivity to tuning. Theoretically, we provide sufficient
conditions under which CMR achieves the optimal rate of convergence in
parameter estimation. Computationally, we propose an efficient smoothed
proximal gradient algorithm with a worst-case numerical rate of convergence of $\mathcal{O}(1/\epsilon)$, where $\epsilon$ is a pre-specified accuracy of the objective function value. We conduct thorough numerical simulations to
illustrate that CMR consistently outperforms other high dimensional
multivariate regression methods. We also apply CMR to solve a brain activity
prediction problem and find that it is competitive with a handcrafted model created by human experts. The R package \texttt{camel} implementing the proposed method is available on the Comprehensive R Archive Network at \url{http://cran.r-project.org/web/packages/camel/}.
Comment: Journal of Machine Learning Research, 201
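To make the method above concrete, here is a minimal Python/NumPy sketch of a smoothed proximal gradient step for a CMR-style objective (a sum of per-task residual norms plus a row-wise group penalty). The objective form, the Nesterov-style smoothing, the step-size rule, and all dimensions in the example are assumptions for illustration; the authoritative implementation is the \texttt{camel} R package referenced above.
```python
# A minimal sketch of the smoothed proximal gradient idea behind CMR, assuming
# the calibrated objective  sum_k ||Y[:,k] - X B[:,k]||_2 + lam * sum_j ||B[j,:]||_2.
# Illustrative only; the reference implementation is the R package `camel`.
import numpy as np

def cmr_spg(X, Y, lam, mu=1e-2, n_iter=500):
    """Smoothed proximal gradient for a CMR-style objective (sketch)."""
    n, d = X.shape
    m = Y.shape[1]
    B = np.zeros((d, m))
    # Assumed step-size rule: the smoothed loss gradient is ||X||_2^2 / mu Lipschitz.
    step = mu / (np.linalg.norm(X, 2) ** 2)
    for _ in range(n_iter):
        R = Y - X @ B                                    # residuals, one column per task
        # Gradient of the smoothed L2 loss: each task is weighted by its own
        # residual norm, which is what calibrates regularization per task.
        scale = np.maximum(np.linalg.norm(R, axis=0), mu)
        G = -X.T @ (R / scale)
        B_half = B - step * G
        # Proximal step: row-wise (group) soft-thresholding for the L1,2 penalty.
        row_norms = np.linalg.norm(B_half, axis=1, keepdims=True)
        shrink = np.maximum(0.0, 1.0 - step * lam / np.maximum(row_norms, 1e-12))
        B = shrink * B_half
    return B

# Example usage on synthetic data (hypothetical dimensions).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 200))
B_true = np.zeros((200, 3)); B_true[:5] = rng.standard_normal((5, 3))
Y = X @ B_true + 0.5 * rng.standard_normal((100, 3))
B_hat = cmr_spg(X, Y, lam=2.0)
```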
Ultra-high Dimensional Multiple Output Learning With Simultaneous Orthogonal Matching Pursuit: A Sure Screening Approach
We propose a novel application of the Simultaneous Orthogonal Matching
Pursuit (S-OMP) procedure for sparsistent variable selection in ultra-high
dimensional multi-task regression problems. Screening of variables, as
introduced in \cite{fan08sis}, is an efficient and highly scalable way to
remove many irrelevant variables from the set of all variables, while retaining
all the relevant variables. S-OMP can be applied to problems with hundreds of thousands of variables, and once the number of variables is reduced to a
manageable size, a more computationally demanding procedure can be used to
identify the relevant variables for each of the regression outputs. To our
knowledge, this is the first attempt to utilize relatedness of multiple outputs
to perform fast screening of relevant variables. As our main theoretical
contribution, we prove that, asymptotically, S-OMP is guaranteed to reduce an
ultra-high number of variables to below the sample size without losing true
relevant variables. We also provide formal evidence that a modified Bayesian
information criterion (BIC) can be used to efficiently determine the number of
iterations in S-OMP. We further provide empirical evidence on the benefit of
variable selection using multiple regression outputs jointly, as opposed to
performing variable selection for each output separately. The finite sample
performance of S-OMP is demonstrated on extensive simulation studies, and on a genetic association mapping problem.
Keywords: Adaptive Lasso; Greedy forward regression; Orthogonal matching pursuit; Multi-output regression; Multi-task learning; Simultaneous orthogonal matching pursuit; Sure screening; Variable selection
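As a rough illustration of the greedy screening described above, the following Python sketch selects variables by their aggregate correlation with the residuals of all outputs and refits all outputs jointly at each step. The selection rule, the fixed number of iterations (used here in place of the modified BIC stopping rule discussed in the abstract), and the example sizes are assumptions.
```python
# A minimal sketch of Simultaneous Orthogonal Matching Pursuit for screening.
# The stopping rule is simplified to a fixed number of selections.
import numpy as np

def s_omp(X, Y, n_select):
    """Greedy S-OMP screening: returns indices of selected variables (sketch)."""
    n, d = X.shape
    residual = Y.copy()
    active = []
    for _ in range(n_select):
        # Aggregate correlation of every variable with the residuals of all outputs.
        scores = np.linalg.norm(X.T @ residual, axis=1)
        scores[active] = -np.inf                 # do not reselect active variables
        active.append(int(np.argmax(scores)))
        # Refit all outputs jointly on the active set by least squares.
        coef, *_ = np.linalg.lstsq(X[:, active], Y, rcond=None)
        residual = Y - X[:, active] @ coef
    return active

# Example usage (hypothetical sizes): screen 10 variables out of 5000.
rng = np.random.default_rng(1)
X = rng.standard_normal((60, 5000))
B = np.zeros((5000, 4)); B[[3, 17, 42]] = rng.standard_normal((3, 4))
Y = X @ B + 0.1 * rng.standard_normal((60, 4))
selected = s_omp(X, Y, n_select=10)
```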
Conditioning of Random Block Subdictionaries with Applications to Block-Sparse Recovery and Regression
The linear model, in which a set of observations is assumed to be given by a
linear combination of columns of a matrix, has long been the mainstay of the
statistics and signal processing literature. One particular challenge for
inference under linear models is understanding the conditions on the dictionary
under which reliable inference is possible. This challenge has attracted
renewed attention in recent years since many modern inference problems deal
with the "underdetermined" setting, in which the number of observations is much
smaller than the number of columns in the dictionary. This paper makes several
contributions for this setting when the set of observations is given by a
linear combination of a small number of groups of columns of the dictionary,
termed the "block-sparse" case. First, it specifies conditions on the
dictionary under which most block subdictionaries are well conditioned. This
result is fundamentally different from prior work on block-sparse inference
because (i) it provides conditions that can be explicitly computed in
polynomial time, (ii) the given conditions translate into near-optimal scaling
of the number of columns of the block subdictionaries as a function of the
number of observations for a large class of dictionaries, and (iii) it suggests
that the spectral norm and the quadratic-mean block coherence of the dictionary
(rather than the worst-case coherences) fundamentally limit the scaling of
dimensions of the well-conditioned block subdictionaries. Second, this paper
investigates the problems of block-sparse recovery and block-sparse regression
in underdetermined settings. Near-optimal block-sparse recovery and regression
are possible for certain dictionaries as long as the dictionary satisfies
easily computable conditions and the coefficients describing the linear
combination of groups of columns can be modeled through a mild statistical
prior.
Comment: 39 pages, 3 figures. A revised and expanded version of the paper published in IEEE Transactions on Information Theory (DOI: 10.1109/TIT.2015.2429632); this revision includes corrections in the proofs of some of the results.
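The conditions mentioned above are built from spectral and block-coherence quantities of the dictionary, which are computable in polynomial time. The sketch below computes the spectral norm together with worst-case and quadratic-mean summaries of the pairwise block cross-correlations for a dictionary with equal-size blocks; the exact definitions and normalizations used in the paper may differ, so treat this as illustrative only.
```python
# A sketch of dictionary quantities of the kind the paper's conditions use:
# spectral norm and block-coherence summaries for equal-size blocks. The exact
# definitions (worst-case vs. quadratic-mean coherence, normalization) are
# assumptions here; the point is that they are computable in polynomial time.
import numpy as np

def block_coherences(D, block_size):
    """Return (spectral norm, worst-case block coherence, quadratic-mean block coherence)."""
    n, p = D.shape
    assert p % block_size == 0
    blocks = [D[:, i:i + block_size] for i in range(0, p, block_size)]
    spec_norm = np.linalg.norm(D, 2)
    cross = []
    for i, Di in enumerate(blocks):
        for j, Dj in enumerate(blocks):
            if i != j:
                # Spectral norm of the cross-correlation between blocks i and j.
                cross.append(np.linalg.norm(Di.T @ Dj, 2))
    cross = np.array(cross)
    worst_case = cross.max() / block_size
    quad_mean = np.sqrt(np.mean(cross ** 2)) / block_size
    return spec_norm, worst_case, quad_mean

# Example usage on a random Gaussian dictionary (hypothetical sizes).
rng = np.random.default_rng(2)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)               # unit-norm columns
print(block_coherences(D, block_size=4))
```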
Group Lasso with Overlaps: the Latent Group Lasso approach
We study a norm for structured sparsity which leads to sparse linear
predictors whose supports are unions of predefined overlapping groups of
variables. We call the obtained formulation latent group Lasso, since it is
based on applying the usual group Lasso penalty on a set of latent variables. A
detailed analysis of the norm and its properties is presented, and we characterize conditions under which the set of groups associated with the latent variables is correctly identified. We motivate and discuss the delicate choice of the weights associated with each group, and illustrate this approach on simulated data and on the problem of breast cancer prognosis from gene expression data.
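A minimal sketch of the latent-variable construction described above: each (possibly overlapping) group receives its own copy of its variables, the ordinary group Lasso penalty is applied to the copies, and the latent copies are summed to form the final coefficients. The proximal-gradient solver, unit group weights, and the example data are simplifying assumptions; the delicate choice of group weights discussed in the abstract is not reproduced here.
```python
# A minimal sketch of the latent group Lasso: duplicate the columns of each
# (possibly overlapping) group, run a plain group Lasso on the duplicated
# design via proximal gradient, and sum the latent copies back.
import numpy as np

def latent_group_lasso(X, y, groups, lam, n_iter=1000):
    n, d = X.shape
    # Duplicated design: one column copy per (group, variable) pair.
    cols = [j for g in groups for j in g]
    X_dup = X[:, cols]
    # Slices of the duplicated coefficient vector belonging to each group.
    slices, start = [], 0
    for g in groups:
        slices.append(slice(start, start + len(g)))
        start += len(g)
    v = np.zeros(X_dup.shape[1])
    step = 1.0 / (np.linalg.norm(X_dup, 2) ** 2)   # assumed fixed step size
    for _ in range(n_iter):
        grad = X_dup.T @ (X_dup @ v - y)
        v = v - step * grad
        # Group soft-thresholding on each latent block (standard group Lasso prox).
        for s in slices:
            nrm = np.linalg.norm(v[s])
            v[s] = 0.0 if nrm == 0 else max(0.0, 1.0 - step * lam / nrm) * v[s]
    # Final coefficients: sum of the latent copies of each original variable,
    # so the support is a union of the selected groups.
    beta = np.zeros(d)
    for s, g in zip(slices, groups):
        beta[list(g)] += v[s]
    return beta

# Example usage with two overlapping groups (hypothetical data).
rng = np.random.default_rng(3)
X = rng.standard_normal((80, 10))
y = X[:, [0, 1, 2]] @ np.array([1.0, -2.0, 1.5]) + 0.1 * rng.standard_normal(80)
beta_hat = latent_group_lasso(X, y, groups=[[0, 1, 2], [2, 3, 4]], lam=5.0)
```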