
    Domain Generalisation via Risk Distribution Matching

    We propose a novel approach for domain generalisation (DG) that leverages risk distributions to characterise domains, thereby achieving domain invariance. In our findings, risk distributions effectively highlight differences between training domains and reveal their inherent complexities. At test time, we may observe similar, or even larger, divergences between risk distributions. Hence, we put forward the following proposition: minimising the divergences between risk distributions across training domains leads to robust invariance for DG. The key rationale behind this idea is that a model trained on domain-invariant or stable features should consistently produce similar risk distributions across domains. Building upon this idea, we propose Risk Distribution Matching (RDM). Using the maximum mean discrepancy (MMD) distance, RDM aims to minimise the variance of risk distributions across training domains. However, as the number of domains increases, direct optimisation of this variance requires a linearly growing number of MMD computations, which is inefficient. Instead, we propose an approximation that requires only one MMD computation, aligning just two distributions: that of the worst-case domain and the distribution aggregated over all domains. Notably, this method empirically outperforms optimising the distributional variance while being computationally more efficient. Unlike conventional DG matching algorithms, RDM stands out for its enhanced efficacy by operating on scalar risk distributions, sidestepping the high-dimensional challenges faced by feature or gradient matching. Our extensive experiments on standard benchmark datasets demonstrate that RDM shows superior generalisation capability over state-of-the-art DG methods. Comment: Accepted at the 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2024).
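    The core computation is easy to sketch: treat each training domain's per-sample losses as a scalar risk distribution and penalise the MMD between the worst-case domain's risks and the risks pooled over all domains. The PyTorch snippet below is a minimal illustration of that idea, not the authors' implementation; the Gaussian-kernel bandwidth, the cross-entropy risk, the choice of worst-case domain by mean risk, and the penalty weight `lam` are assumptions made for this example.

```python
# Illustrative sketch of an RDM-style penalty (not the authors' code).
import torch
import torch.nn.functional as F


def rbf_mmd2(x, y, bandwidth=1.0):
    """Squared MMD between two 1-D samples of scalar risks, Gaussian kernel."""
    x = x.view(-1, 1)
    y = y.view(-1, 1)

    def k(a, b):
        return torch.exp(-(a - b.t()) ** 2 / (2 * bandwidth ** 2))

    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()


def rdm_loss(model, domain_batches, lam=1.0):
    """ERM loss plus a single MMD term: worst-case domain risks vs. pooled risks."""
    per_domain_risks = []
    for xb, yb in domain_batches:  # one (inputs, labels) batch per training domain
        risks = F.cross_entropy(model(xb), yb, reduction="none")
        per_domain_risks.append(risks)

    pooled = torch.cat(per_domain_risks)                     # aggregated risk distribution
    worst = max(per_domain_risks, key=lambda r: r.mean())    # worst-case domain (by mean risk)
    penalty = rbf_mmd2(worst, pooled)
    return pooled.mean() + lam * penalty
```

    Because the matched quantities are scalars, the kernel matrices stay cheap regardless of feature dimensionality, which is the efficiency argument made in the abstract.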

    Learning Disentangled Representations in the Imaging Domain

    Disentangled representation learning has been proposed as an approach to learning general representations even in the absence of, or with limited, supervision. A good general representation can be fine-tuned for new target tasks using modest amounts of data, or used directly in unseen domains, achieving remarkable performance in the corresponding task. This alleviation of the data and annotation requirements offers tantalising prospects for applications in computer vision and healthcare. In this tutorial paper, we motivate the need for disentangled representations, present key theory, and detail practical building blocks and criteria for learning such representations. We discuss applications in medical imaging and computer vision, emphasising the choices made in exemplar key works. We conclude by presenting remaining challenges and opportunities. Comment: Submitted. This paper follows a tutorial style but also surveys a considerable number of works (more than 200 citations).

    Learning Invariant Representations with a Nonparametric Nadaraya-Watson Head

    Machine learning models will often fail when deployed in an environment whose data distribution is different from the training distribution. When multiple environments are available during training, many methods exist that learn representations which are invariant across the different distributions, with the hope that these representations will transfer to unseen domains. In this work, we present a nonparametric strategy for learning invariant representations based on the recently proposed Nadaraya-Watson (NW) head. The NW head makes a prediction by comparing the learned representation of the query to the elements of a support set consisting of labeled data. We demonstrate that by manipulating the support set, one can encode different causal assumptions. In particular, restricting the support set to a single environment encourages the model to learn invariant features that do not depend on the environment. We present a causally motivated setup for our modeling and training strategy and validate it on three challenging real-world domain generalization tasks in computer vision. Comment: Accepted to NeurIPS 202
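    As a rough sketch of the mechanism described above (and not the authors' code), a Nadaraya-Watson-style head can be written as a kernel-weighted vote over a labeled support set. The dot-product similarity, the temperature `tau`, and the single-environment support set in the usage comment are assumptions made for illustration.

```python
# Minimal sketch of a Nadaraya-Watson-style prediction head.
import torch
import torch.nn.functional as F


def nw_head(query_feats, support_feats, support_labels, num_classes, tau=0.1):
    """Predict class probabilities by kernel-weighted voting over a labeled support set."""
    # Similarity between each query and each support element.
    sims = query_feats @ support_feats.t() / tau            # (n_query, n_support)
    weights = F.softmax(sims, dim=1)                         # Nadaraya-Watson weights
    one_hot = F.one_hot(support_labels, num_classes).float()
    return weights @ one_hot                                 # (n_query, num_classes)


# Restricting the support set to a single training environment, as the abstract
# suggests, discourages the query representation from relying on
# environment-specific cues (hypothetical usage):
# probs = nw_head(f(x_query), f(x_support_env_k), y_support_env_k, num_classes)
```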

    Improving predictive behavior under distributional shift

    The fundamental assumption guiding practice in machine learning has been that test-time data is \emph{independent and identically distributed} with respect to the training distribution. In practical use, training sets are often small enough to encourage reliance on misleading biases. Additionally, when deployed in the real world, a model is likely to encounter novel or anomalous data, and when this happens we would like our models to communicate reduced predictive confidence. Such situations, arising from different forms of distributional shift, comprise what are currently termed \emph{out-of-distribution} (OOD) settings.

    In this thesis by article, we discuss aspects of OOD performance with regard to semantic and non-semantic distributional shift; these correspond to instances of OOD detection and OOD generalization problems, respectively. In the first article, we critically appraise the problem of OOD detection with regard to benchmarking and evaluation. Arguing that OOD detection is too broad to be meaningful, we suggest detecting semantic anomalies instead. We show that classifiers trained with auxiliary self-supervised objectives can improve semanticity in feature representations, as indicated by improved semantic anomaly detection as well as improved generalization. In the second article, we further develop our discussion of the twin goals of robustness to non-semantic distributional shift and sensitivity to semantic shift. Adopting a perspective of compositionality, we decompose non-semantic shift into systematic and non-systematic components, with in-distribution generalization and semantic anomaly detection forming the complementary tasks. We show by means of empirical evaluations on synthetic setups that it is possible to improve performance on all of these aspects of robustness and uncertainty simultaneously. We also propose a simple method that improves upon existing approaches on our synthetic benchmarks. In the third and final article, we consider an online, black-box scenario in which both the distribution of input data conditioned on labels and the marginal distribution of labels change from training to testing. We show that under such practical constraints, simple online probabilistic estimates of label shift can nevertheless be a promising approach. We close with a brief discussion of possible avenues forward.
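    As a generic illustration of the kind of online label-shift estimate mentioned in the third article (not the thesis's specific method), one can track the test-time label marginal from a black-box model's soft predictions and reweight its outputs by Bayes' rule. The exponential-moving-average update and the rate `alpha` below are assumptions made for this sketch.

```python
# Generic sketch of online label-shift adaptation with a black-box classifier.
import numpy as np


def adapt_prediction(p_model, pi_train, pi_test):
    """Reweight black-box class probabilities p(y|x) by the estimated test prior."""
    adjusted = p_model * (pi_test / pi_train)
    return adjusted / adjusted.sum()


def online_label_shift(stream, pi_train, alpha=0.01):
    """Track the test-time label marginal from the model's predictions alone."""
    pi_test = pi_train.copy()
    for p_model in stream:  # p_model: softmax output for one incoming example
        p_adapted = adapt_prediction(p_model, pi_train, pi_test)
        # Exponential moving average of the adapted predicted label distribution.
        pi_test = (1 - alpha) * pi_test + alpha * p_adapted
        yield p_adapted
```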