On Low-rank Trace Regression under General Sampling Distribution
A growing number of modern statistical learning problems involve estimating a large number of parameters from a (smaller) number of noisy observations. In a subset of these problems (matrix completion, matrix compressed sensing, and multi-task learning) the unknown parameters form a high-dimensional matrix B*, and two popular approaches to estimation are convex relaxation of rank-penalized regression and non-convex optimization. It is also known that these estimators satisfy near-optimal error bounds under assumptions on the rank, coherence, or spikiness of the unknown matrix.
In this paper, we introduce a unifying technique for analyzing all of these problems via both estimators, leading to short proofs of the existing results as well as new ones. Specifically, we first introduce a general notion of spikiness for B*, consider a general family of estimators, and prove non-asymptotic bounds on their estimation error. Our approach relies on a generic recipe for proving restricted strong convexity of the sampling operator of trace regression. Second, and most notably, we prove similar error bounds when the regularization parameter is chosen via K-fold cross-validation. This result is significant in that existing theory on cross-validated estimators does not apply to our setting, since our estimators are not known to satisfy the required notion of stability. Third, we study applications of our general results to four subproblems: (1) matrix completion, (2) multi-task learning, (3) compressed sensing with Gaussian ensembles, and (4) compressed sensing with factored measurements. For (1), (3), and (4) we recover error bounds matching those found in the literature, and for (2) we obtain (to the best of our knowledge) the first such error bound. We also demonstrate how our framework applies to the exact recovery problem in (3) and (4).
Comment: 32 pages, 1 figure
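As a concrete illustration of the cross-validated estimator analyzed above, here is a minimal sketch (not the paper's implementation) of choosing a nuclear-norm penalty level by K-fold cross-validation for the matrix completion subproblem (1). The single soft-thresholded SVD used as the base estimator, and all function names, are illustrative assumptions.

```python
# Minimal sketch: K-fold cross-validation over a nuclear-norm penalty for matrix
# completion. The base estimator (one soft-thresholded SVD of the zero-filled
# observations) and all names are illustrative, not the paper's code.
import numpy as np

def soft_impute_once(Y, mask, lam):
    """One soft-thresholded SVD of the zero-filled observed matrix (crude estimator)."""
    Z = np.where(mask, Y, 0.0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s_thr = np.maximum(s - lam, 0.0)          # nuclear-norm soft-thresholding
    return (U * s_thr) @ Vt

def kfold_cv_lambda(Y, mask, lambdas, k=5, seed=0):
    """Pick the penalty level with the smallest held-out squared error on observed entries."""
    rng = np.random.default_rng(seed)
    obs = np.argwhere(mask)
    rng.shuffle(obs)
    folds = np.array_split(obs, k)
    errs = np.zeros(len(lambdas))
    for fold in folds:
        train_mask = mask.copy()
        train_mask[fold[:, 0], fold[:, 1]] = False    # hold out this fold
        for j, lam in enumerate(lambdas):
            B_hat = soft_impute_once(Y, train_mask, lam)
            resid = Y[fold[:, 0], fold[:, 1]] - B_hat[fold[:, 0], fold[:, 1]]
            errs[j] += np.mean(resid ** 2)
    return lambdas[int(np.argmin(errs))]

# Toy usage: rank-2 ground truth observed on ~40% of entries with noise.
rng = np.random.default_rng(1)
B_star = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
mask = rng.random(B_star.shape) < 0.4
Y = B_star + 0.1 * rng.standard_normal(B_star.shape)
print("selected lambda:", kfold_cv_lambda(Y, mask, lambdas=np.linspace(0.5, 5.0, 10)))
```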
Concentration inequalities for leave-one-out cross validation
In this article we prove that estimator stability is enough to show that leave-one-out cross validation is a sound procedure, by providing concentration bounds in a general framework. In particular, we provide concentration bounds beyond Lipschitz-continuity assumptions on the loss or on the estimator. To obtain our results, we rely on random variables whose distributions satisfy the logarithmic Sobolev inequality, which gives us a relatively rich class of distributions. We illustrate our method with several interesting examples, including linear regression, kernel density estimation, and stabilized/truncated estimators such as stabilized kernel regression.
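To make the procedure concrete, here is a minimal sketch (purely illustrative, not taken from the article) of leave-one-out cross-validation for one of the examples mentioned above: bandwidth selection for a Gaussian kernel density estimator. The held-out log-likelihood score and all names are assumptions.

```python
# Minimal sketch: leave-one-out cross-validation of the bandwidth for a Gaussian
# kernel density estimator, scored by the average held-out log-likelihood.
import numpy as np

def loo_log_likelihood(x, h):
    """Average log of the leave-one-out Gaussian KDE evaluated at each left-out point."""
    n = x.shape[0]
    diffs = (x[:, None] - x[None, :]) / h                  # pairwise scaled differences
    K = np.exp(-0.5 * diffs ** 2) / (h * np.sqrt(2 * np.pi))
    np.fill_diagonal(K, 0.0)                               # drop the i-th point itself
    dens = K.sum(axis=1) / (n - 1)                         # LOO density at each x_i
    return np.mean(np.log(dens + 1e-300))

# Toy usage: pick the bandwidth with the best LOO score on a small sample.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
bandwidths = np.linspace(0.05, 1.0, 20)
scores = [loo_log_likelihood(x, h) for h in bandwidths]
print("selected bandwidth:", bandwidths[int(np.argmax(scores))])
```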
Stability revisited: new generalisation bounds for the Leave-one-Out
The present paper provides a new generic strategy leading to non-asymptotic theoretical guarantees on the Leave-one-Out procedure applied to a broad class of learning algorithms. This strategy relies on two main ingredients: a new notion of stability, and the strong use of moment inequalities. This notion of stability extends the existing notion of hypothesis stability while remaining weaker than uniform stability. It leads to new PAC exponential generalisation bounds for the Leave-one-Out under mild assumptions. In the literature, such bounds are available only for uniformly stable algorithms, under boundedness assumptions for instance. As a first step, our generic strategy is applied to the Ridge regression algorithm.
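Since Ridge regression is the first algorithm the strategy is applied to, the sketch below (illustrative only, not from the paper) shows the plain Leave-one-Out risk estimate being analyzed, obtained by refitting the ridge estimator n times with one observation removed. All names are assumptions.

```python
# Minimal sketch: the plain Leave-one-Out risk estimate for ridge regression,
# computed by n refits, each with one observation held out.
import numpy as np

def ridge_fit(X, y, lam):
    """Ridge coefficients for a quadratic penalty lam * ||beta||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def loo_risk(X, y, lam):
    """Average squared prediction error on each left-out observation."""
    n = X.shape[0]
    errs = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        beta = ridge_fit(X[keep], y[keep], lam)
        errs[i] = (y[i] - X[i] @ beta) ** 2
    return errs.mean()

# Toy usage
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X @ rng.standard_normal(5) + 0.3 * rng.standard_normal(100)
print("LOO risk estimate:", loo_risk(X, y, lam=1.0))
```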
New perspectives in cross-validation
Appealing due to its universality, cross-validation is a ubiquitous tool for model tuning and selection. At its core, cross-validation proposes to split the data (potentially several times), and alternately use some of the data for fitting a model and the rest for testing it. This produces a reliable estimate of the risk, although many questions remain concerning how best to compare such estimates across different models. Despite its widespread use, many theoretical problems remain unanswered for cross-validation, particularly in high-dimensional regimes where bias issues are non-negligible. We first provide an asymptotic analysis of the cross-validated risk in relation to the train-test split risk for a large class of estimators under stability conditions. This asymptotic analysis takes the form of a central limit theorem and allows us to characterize the speed-up of the cross-validation procedure for general parametric M-estimators. In particular, we show that when the loss used for fitting differs from that used for evaluation, k-fold cross-validation may offer a reduction in variance smaller (or greater) than k. We then turn our attention to the high-dimensional regime, where the number of parameters is comparable to the number of observations. In such a regime, k-fold cross-validation exhibits asymptotic bias, and hence increasing the number of folds is of interest. We study the extreme case of leave-one-out cross-validation and show that, for generalized linear models under smoothness conditions, it is a consistent estimate of the risk at the optimal rate. Given the large computational requirements of leave-one-out cross-validation, we finally consider the problem of obtaining a fast approximate leave-one-out (ALO) estimator. We propose a general strategy for deriving formulas for such ALO estimators for penalized generalized linear models, and apply it to many common estimators such as the LASSO, the SVM, and nuclear-norm minimization. The performance of these approximations is evaluated on simulated and real datasets.
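To illustrate the idea behind such fast approximations, here is a minimal sketch under strong simplifying assumptions: for ridge regression (a quadratic penalty, not one of the non-smooth cases treated above), the leave-one-out residuals have an exact closed form via the leverage scores of the hat matrix, so the n refits can be replaced by a single fit; the general ALO derivations for penalized GLMs follow the same spirit. All names below are illustrative, not the thesis's formulas.

```python
# Minimal sketch: closed-form leave-one-out risk for ridge regression, computed
# from a single fit using the standard leverage correction e_i / (1 - H_ii).
import numpy as np

def alo_ridge_risk(X, y, lam):
    """Exact LOO squared-error risk for ridge, without refitting."""
    n, d = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)   # hat (smoother) matrix
    resid = y - H @ y                                         # in-sample residuals
    loo_resid = resid / (1.0 - np.diag(H))                    # leverage correction
    return np.mean(loo_resid ** 2)

# Toy usage: matches a brute-force LOO loop up to numerical error.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X @ rng.standard_normal(5) + 0.3 * rng.standard_normal(100)
print("closed-form LOO risk estimate:", alo_ridge_risk(X, y, lam=1.0))
```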