Regularization-free estimation in trace regression with symmetric positive semidefinite matrices
Over the past few years, trace regression models have received considerable
attention in the context of matrix completion, quantum state tomography, and
compressed sensing. Regularization-based approaches to estimating the
underlying matrix that promote low-rankedness, notably nuclear norm
regularization, have enjoyed great popularity. In the present paper, we argue
that such regularization may no longer be necessary if the underlying matrix is
symmetric positive semidefinite (\textsf{spd}) and the design satisfies certain
conditions. In this situation, simple least squares estimation subject to an
\textsf{spd} constraint may perform as well as regularization-based approaches
with a proper choice of the regularization parameter, which entails knowledge
of the noise level and/or tuning. By contrast, constrained least squares
estimation comes without any tuning parameter and may hence be preferred due to
its simplicity.
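The tuning-free character of the constrained estimator can be illustrated with a short sketch: least squares over trace-regression measurements, subject to an spd constraint, solved by projected gradient descent with projection onto the psd cone. This is a minimal illustration under generic Gaussian measurements, not the authors' implementation; the function names are hypothetical.

```python
import numpy as np

def project_psd(B):
    """Project a symmetric matrix onto the positive semidefinite cone
    by clipping its negative eigenvalues at zero."""
    B = (B + B.T) / 2
    w, V = np.linalg.eigh(B)
    return (V * np.clip(w, 0, None)) @ V.T

def psd_least_squares(As, y, n_iter=3000):
    """Minimize sum_i (tr(A_i B) - y_i)^2 subject to B psd, via projected
    gradient descent. Note there is no regularization parameter to tune."""
    d = As[0].shape[0]
    # step size: inverse of a Lipschitz bound on the gradient
    lr = 1.0 / (2 * sum(np.linalg.norm(A) ** 2 for A in As))
    B = np.zeros((d, d))
    for _ in range(n_iter):
        resid = np.array([np.trace(A @ B) for A in As]) - y
        grad = 2 * sum(r * A.T for r, A in zip(resid, As))
        B = project_psd(B - lr * grad)
    return B
```

With enough noiseless measurements, this iteration recovers a low-rank spd target without any knowledge of a noise level.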
Sufficient Dimension Reduction and Modeling Responses Conditioned on Covariates: An Integrated Approach via Convex Optimization
Given observations of a collection of covariates $x$ and responses $y$, sufficient dimension reduction (SDR)
techniques aim to identify a mapping $f$ from $x$ to a lower-dimensional space
such that $y$ is independent of $x$ given $f(x)$. The image $f(x)$
summarizes the relevant information in a potentially large number of covariates
$x$ that influence the responses $y$. In many contemporary settings, the number
of responses is also quite large, in addition to a large number of
covariates. This leads to the challenge of fitting a succinctly parameterized
statistical model to $y$ given $f(x)$, which is a problem that is usually not addressed
in a traditional SDR framework. In this paper, we present a computationally
tractable convex relaxation based estimator for simultaneously (a) identifying
a linear dimension reduction of the covariates that is sufficient with
respect to the responses, and (b) fitting several types of structured
low-dimensional models -- factor models, graphical models, latent-variable
graphical models -- to the conditional distribution of $y$ given $f(x)$. We analyze the
consistency properties of our estimator in a high-dimensional scaling regime.
We also illustrate the performance of our approach on a newsgroup dataset and
on a dataset consisting of financial asset prices.
Comment: 34 pages, 1 figure
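The paper's convex relaxation estimator is not reproduced here; as a point of reference for goal (a), classical sliced inverse regression (SIR) is a standard, simple way to estimate a linear sufficient dimension reduction of the covariates. The sketch below illustrates only that SDR goal, not the paper's method or its structured modeling of the conditional distribution in goal (b); the function name is hypothetical.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Sliced inverse regression: estimate directions b such that y depends
    on X (approximately) only through the projections X @ b."""
    n, p = X.shape
    # whiten the covariates
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T  # cov^{-1/2}
    Z = (X - X.mean(axis=0)) @ W
    # slice observations by the order of y; average Z within each slice
    slices = np.array_split(np.argsort(y), n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # top eigenvectors of the slice-mean covariance, mapped back to X-space
    _, V = np.linalg.eigh(M)
    dirs = W @ V[:, ::-1][:, :n_dirs]
    return dirs / np.linalg.norm(dirs, axis=0)
```

For a single-index model such as $y = g(b^\top x) + \text{noise}$ with monotone $g$, the leading SIR direction aligns with $b$ as the sample size grows.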
Positive Semidefinite Metric Learning with Boosting
The learning of appropriate distance metrics is a critical problem in image
classification and retrieval. In this work, we propose a boosting-based
technique, termed \BoostMetric, for learning a Mahalanobis distance metric. One
of the primary difficulties in learning such a metric is to ensure that the
Mahalanobis matrix remains positive semidefinite. Semidefinite programming is
sometimes used to enforce this constraint, but does not scale well.
\BoostMetric is instead based on a key observation that any positive
semidefinite matrix can be decomposed into a positive linear combination of
trace-one rank-one matrices. \BoostMetric thus uses rank-one positive
semidefinite matrices as weak learners within an efficient and scalable
boosting-based learning process. The resulting method is easy to implement,
does not require tuning, and can accommodate various types of constraints.
Experiments on various datasets show that the proposed algorithm compares
favorably with state-of-the-art methods in terms of classification accuracy
and running time.
Comment: 11 pages, Twenty-Third Annual Conference on Neural Information
Processing Systems (NIPS 2009), Vancouver, Canada
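The key observation above (a positive semidefinite matrix decomposes into a positive combination of trace-one rank-one matrices) follows directly from an eigendecomposition: each unit eigenvector $u_i$ gives a rank-one matrix $u_i u_i^\top$ with unit trace, weighted by the nonnegative eigenvalue. The sketch below verifies only this decomposition, not the BoostMetric boosting procedure itself; the function name is hypothetical.

```python
import numpy as np

def rank_one_decomposition(M, tol=1e-12):
    """Write a psd matrix M as sum_i w_i * Z_i, where w_i > 0 and each
    Z_i = u_i u_i^T is rank-one with unit trace (u_i a unit eigenvector)."""
    w, U = np.linalg.eigh(M)
    return [(lam, np.outer(u, u)) for lam, u in zip(w, U.T) if lam > tol]
```

BoostMetric's weak learners are exactly such trace-one rank-one matrices, with the boosting procedure choosing the weights and vectors incrementally rather than by a full eigendecomposition.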