Limiting failures of machine learning systems is vital for safety-critical
applications. Distributionally Robust Optimization (DRO) has been proposed as a
generalization of Empirical Risk Minimization (ERM) to address this need and
improve the robustness of machine learning systems. However, its use in deep
learning has been severely restricted because the optimizers available for DRO
are far less efficient than the widespread variants of Stochastic Gradient
Descent (SGD) used for ERM.
We propose SGD with hardness weighted sampling, a principled and efficient
optimization method for DRO in machine learning that is particularly suited to
deep learning. Similar in essence and in practice to a hard example mining
strategy, the proposed algorithm is straightforward to implement and as
computationally efficient as the SGD-based optimizers used for deep learning,
requiring only minimal additional computation. In contrast to typical ad hoc
hard mining approaches, and by exploiting recent theoretical results in deep
learning optimization, we prove the convergence of our DRO algorithm for
over-parameterized deep neural networks with ReLU activations and a finite
number of layers and parameters. Our experiments on brain tumor segmentation in
MRI demonstrate the feasibility and usefulness of our approach. Using our
hardness weighted sampling leads to a 2% decrease in the interquartile range
of the Dice scores for the enhancing tumor and tumor core regions. The code
for the proposed hardness weighted sampler will be made publicly available.
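
The abstract describes the sampler only at a high level. As an illustration of the idea, the following minimal sketch maintains a stale per-example loss vector and draws minibatches with probabilities given by a softmax over those losses, so that harder examples are sampled more often. This is a sketch under assumptions, not the authors' reference implementation; the names `loss_history` and `beta`, the stale-loss update rule, and the toy regression setup are introduced here purely for illustration.

```python
import torch

def hardness_weighted_indices(loss_history, batch_size, beta):
    # Sampling probabilities: softmax over the (stale) per-example losses,
    # scaled by beta, so high-loss ("hard") examples are drawn more often.
    probs = torch.softmax(beta * loss_history, dim=0)
    return torch.multinomial(probs, batch_size, replacement=True)

# Toy setup (assumed for this sketch): a small regression dataset and model.
n, beta, batch_size = 1000, 10.0, 32
X = torch.randn(n, 8)
y = torch.randn(n, 1)
model = torch.nn.Linear(8, 1)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_history = torch.zeros(n)  # stale per-example losses; uniform sampling at start

for step in range(100):
    idx = hardness_weighted_indices(loss_history, batch_size, beta)
    pred = model(X[idx])
    per_example = ((pred - y[idx]) ** 2).mean(dim=1)  # loss per sampled example
    per_example.mean().backward()
    opt.step()
    opt.zero_grad()
    # Refresh the stored losses only for the examples seen in this minibatch.
    loss_history[idx] = per_example.detach()
```

Compared to plain SGD, the only extra work per step is one softmax and one multinomial draw over the per-example loss vector, which is consistent with the minimal overhead claimed above.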