Deep learning models have shown promising predictive accuracy for time-series
healthcare applications. However, ensuring the robustness of these models is
vital for building trustworthy AI systems. Existing research predominantly
focuses on robustness to synthetic adversarial examples, crafted by adding
imperceptible perturbations to clean input data. Yet these synthetic
adversarial examples do not accurately reflect the most challenging real-world
scenarios, especially in the context of healthcare data. Consequently,
robustness to synthetic adversarial examples does not necessarily translate
into robustness against naturally occurring adversarial examples, which is highly
desirable for trustworthy AI. We propose a method to curate datasets composed
of natural adversarial examples to evaluate model robustness. The method relies
on probabilistic labels obtained from automated, weakly supervised labeling that
combines noisy and cheap-to-obtain labeling heuristics. Based on these labels,
our method adversarially orders the input data and uses this ordering to
construct a sequence of increasingly adversarial datasets. Our evaluation on
six medical case studies and three non-medical case studies demonstrates the
efficacy and statistical validity of our approach to generating naturally
adversarial datasets.
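
To make the construction concrete, the sketch below shows one plausible way to implement the adversarial ordering and the sequence of increasingly adversarial datasets. It is a minimal sketch under stated assumptions, not the paper's implementation: the function names, the choice of scoring each example by the probabilistic label assigned to its observed class, and the retained fractions are all illustrative assumptions.

```python
import numpy as np

def adversarial_ordering(prob_labels, observed_labels):
    """Order examples so the most 'naturally adversarial' come first.

    Assumption: an example is more adversarial when the weak-supervision
    model assigns low probability to its observed class.
    prob_labels: (n, n_classes) probabilistic labels from weak supervision.
    observed_labels: (n,) integer class labels.
    """
    confidence = prob_labels[np.arange(len(observed_labels)), observed_labels]
    return np.argsort(confidence)  # ascending: least confident first

def nested_adversarial_datasets(X, y, prob_labels,
                                fractions=(1.0, 0.5, 0.25, 0.1)):
    """Yield a sequence of increasingly adversarial datasets.

    Each dataset keeps only the hardest `fraction` of examples under the
    adversarial ordering, so later datasets are more adversarial on average.
    The fraction schedule here is illustrative, not taken from the paper.
    """
    order = adversarial_ordering(prob_labels, y)
    for fraction in fractions:
        k = max(1, int(fraction * len(order)))
        idx = order[:k]
        yield X[idx], y[idx]
```

A model's accuracy can then be measured on each subset in turn; a robust model should degrade gracefully as the subsets become more adversarial.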