Semi-supervised Learning based on Distributionally Robust Optimization
We propose a novel method for semi-supervised learning (SSL) based on
data-driven distributionally robust optimization (DRO) using optimal transport
metrics. Our method improves generalization by using the
unlabeled data to restrict the support of the worst-case distribution in our
DRO formulation. We render our DRO formulation tractable by proposing a
stochastic gradient descent algorithm that makes the training procedure easy
to implement. We demonstrate that our Semi-supervised DRO
method achieves lower generalization error than natural supervised
procedures and state-of-the-art SSL estimators. Finally, we include a
discussion on the large sample behavior of the optimal uncertainty region in
the DRO formulation. Our discussion highlights important aspects such as the
role of dimension reduction in SSL.
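The abstract includes no code; below is a minimal sketch (ours, not the
authors') of how such a min-max objective could be trained with alternating
stochastic gradient steps. Assumptions of ours: binary logistic loss, a
squared-Euclidean stand-in for the optimal transport cost added as a
Lagrangian penalty with weight 1/radius, each unlabeled point paired with
both candidate labels to form the restricted support, and
softmax-parametrized adversarial weights. The name ssdro_train and all
parameter choices are hypothetical.

    import numpy as np

    def logistic_loss(theta, X, y):
        # Per-example logistic loss, labels y in {-1, +1}; numerically stable.
        return np.logaddexp(0.0, -y * (X @ theta))

    def ssdro_train(X_lab, y_lab, X_unl, radius=0.1, lr=0.05, steps=2000):
        # Alternating SGD on the penalized min-max problem
        #   min_theta  max_q  E_q[loss] - (1/radius) * E_q[transport cost],
        # where q ranges over distributions on a restricted support built from
        # the labeled points plus each unlabeled point with both labels.
        Xs = np.vstack([X_lab, X_unl, X_unl])
        ys = np.concatenate([y_lab, np.ones(len(X_unl)), -np.ones(len(X_unl))])
        theta = np.zeros(X_lab.shape[1])
        logits_q = np.zeros(len(ys))  # softmax parametrization keeps q on the simplex
        # Squared-Euclidean cost of moving each support point to its nearest
        # labeled point -- a crude surrogate for the optimal transport cost.
        cost = ((Xs[:, None, :] - X_lab[None, :, :]) ** 2).sum(-1).min(axis=1)
        for _ in range(steps):
            q = np.exp(logits_q - logits_q.max())
            q /= q.sum()
            # Ascent on q: up-weight high-loss points, discounted by transport cost.
            logits_q += lr * (logistic_loss(theta, Xs, ys) - cost / radius)
            # Descent on theta under the current worst-case weights q;
            # sigmoid(-margin) is written via tanh for numerical stability.
            p = 0.5 * (1.0 - np.tanh(0.5 * ys * (Xs @ theta)))
            theta -= lr * (-(q * p * ys) @ Xs)
        return theta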
Finite-sample Analysis of M-estimators using Self-concordance
We demonstrate how self-concordance of the loss can be exploited to obtain
asymptotically optimal rates for M-estimators in finite-sample regimes. We
consider two classes of losses: (i) canonically self-concordant losses in the
sense of Nesterov and Nemirovski (1994), i.e., with the third derivative
bounded by the 3/2 power of the second; (ii) pseudo self-concordant losses,
for which the power is removed, as introduced by Bach (2010). These classes
contain some losses arising in generalized linear models, including logistic
regression; in addition, the second class includes some common pseudo-Huber
losses. Our main results establish the critical sample size sufficient
to reach the asymptotically optimal excess risk for both classes of losses.
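In symbols (our restatement of the standard definitions with the usual
normalizing constants, not text from this abstract): a convex loss $\ell$ is
canonically self-concordant if

    $|\ell'''(t)| \le 2\,\ell''(t)^{3/2}$ for all $t$,

and pseudo self-concordant, following Bach (2010), if

    $|\ell'''(t)| \le \ell''(t)$ for all $t$;

the logistic loss $\ell(t) = \log(1 + e^{-t})$ is the standard example of the
second class.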
Denoting by $d$ the parameter dimension and by $d_{\mathrm{eff}}$ the effective
dimension, which takes into account possible model misspecification, we find the
critical sample size to be $O(d_{\mathrm{eff}} \cdot d)$ for canonically
self-concordant losses, and $O(\rho \cdot d_{\mathrm{eff}} \cdot d)$ for pseudo
self-concordant losses, where $\rho$ is the problem-dependent local curvature
parameter. In contrast to the existing results, we only impose local
assumptions on the data distribution, assuming that the calibrated design,
i.e., the design scaled by the square root of the second derivative of the
loss, is subgaussian at the best predictor $\theta_\ast$. Moreover, we obtain the
improved bounds on the critical sample size, scaling near-linearly in
$\max(d_{\mathrm{eff}}, d)$, under the extra assumption that the calibrated design
is subgaussian in the Dikin ellipsoid of $\theta_\ast$. Motivated by these
findings, we construct canonically self-concordant analogues of the Huber and
logistic losses with improved statistical properties. Finally, we extend some
of these results to $\ell_1$-regularized M-estimators in high dimensions.
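As a small self-contained illustration (ours, not the paper's), one can check
numerically that the logistic loss satisfies the pseudo self-concordance bound
with constant 1, while no uniform constant works for the canonical bound:

    import numpy as np

    # Closed-form derivatives of the logistic loss l(t) = log(1 + exp(-t)),
    # in terms of s = sigmoid(-t):  l''(t) = s(1-s),  l'''(t) = s(1-s)(2s-1).
    t = np.linspace(-20.0, 20.0, 4001)
    s = 1.0 / (1.0 + np.exp(t))             # sigmoid(-t)
    d2 = s * (1.0 - s)                      # l''(t) > 0
    d3 = s * (1.0 - s) * (2.0 * s - 1.0)    # l'''(t)

    # Pseudo self-concordance |l'''| <= l'' holds with constant 1:
    print(np.max(np.abs(d3) / d2))          # approaches 1 as |t| grows

    # The canonical ratio |l'''| / l''^(3/2) blows up in the tails, so the
    # plain logistic loss is not canonically self-concordant with any fixed
    # constant -- consistent with the paper constructing modified analogues.
    print(np.max(np.abs(d3) / d2 ** 1.5))   # large; grows with the t-range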