
    Drug Discovery under Covariate Shift with Domain-Informed Prior Distributions over Functions

    Accelerating the discovery of novel and more effective therapeutics is an important pharmaceutical problem in which deep learning is playing an increasingly significant role. However, real-world drug discovery tasks are often characterized by a scarcity of labeled data and significant covariate shift, a setting that poses a challenge to standard deep learning methods. In this paper, we present Q-SAVI, a probabilistic model able to address these challenges by encoding explicit prior knowledge of the data-generating process into a prior distribution over functions, presenting researchers with a transparent and probabilistically principled way to encode data-driven modeling preferences. Building on a novel, gold-standard bioactivity dataset that facilitates a meaningful comparison of models in an extrapolative regime, we explore different approaches to induce data shift and construct a challenging evaluation setup. We then demonstrate that using Q-SAVI to integrate contextualized prior knowledge of drug-like chemical space into the modeling process affords substantial gains in predictive accuracy and calibration, outperforming a broad range of state-of-the-art self-supervised pre-training and domain adaptation techniques. Comment: Published in the Proceedings of the 40th International Conference on Machine Learning (ICML 2023).
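    Viewed abstractly, the mechanism described above regularizes the network's behavior toward a prior over function values evaluated at context inputs drawn from the domain of interest (here, drug-like chemical space). The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the authors' Q-SAVI implementation; the context sampler, prior mean and scale, and network architecture are placeholder assumptions.

```python
# Minimal sketch of function-space regularization (illustrative only, not Q-SAVI itself).
# Assumption: `sample_context_points` stands in for featurized molecules drawn from
# drug-like chemical space, and the prior over function values is an isotropic Gaussian.
import torch
import torch.nn as nn
import torch.distributions as dist

net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

def sample_context_points(batch_size=32, dim=128):
    # Placeholder sampler for the domain of interest.
    return torch.randn(batch_size, dim)

def function_space_penalty(model, prior_mean=0.0, prior_scale=1.0):
    x_ctx = sample_context_points()
    f_ctx = model(x_ctx).squeeze(-1)
    prior = dist.Normal(prior_mean, prior_scale)
    # Penalize deviation of the model's function values from the prior at the
    # context points (a crude stand-in for a function-space KL term).
    return -prior.log_prob(f_ctx).mean()

def loss_fn(model, x, y, lam=0.1):
    pred = model(x).squeeze(-1)
    data_loss = nn.functional.mse_loss(pred, y)
    return data_loss + lam * function_space_penalty(model)
```

    The design choice this highlights is that the prior constrains what the model does on inputs from the target region, rather than constraining the weights directly.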

    Towards practical and provable domain adaptation

    One of the most central questions in statistical modeling is how well a model will generalize. Absent strong assumptions, we find that this question is difficult to answer in a meaningful way. In this work we seek to increase our understanding of the domain adaptation setting through two different lenses. First, we investigate whether tractably computable and tight generalization bounds on the performance of neural network classifiers exist in the current literature. The tightest bounds we find use a portion of the input data to tighten the gap between measured performance and the calculated bound. We present evaluations of four bounds using this tightening method on classifiers applied to image classification tasks: two bounds from the literature and two of our own construction. Further, we find that for situations lacking domain overlap, the existing literature lacks the tools to achieve tight, tractably computable bounds for the neural network models we use. We conclude that a new approach might be needed. In the second part we therefore consider a setting where we change our underlying assumptions to ones which might be more plausible. This setting, based on learning using privileged information, is shown to result in consistent learning. We also show empirical gains over comparable methods when our assumptions are likely to hold, both in terms of performance and sample efficiency. In summary, the work set out herein is a first step towards a better understanding of domain adaptation and how using data and new assumptions can help us further our knowledge of this topic.
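    As a hedged illustration of what a "tractably computable" bound that uses held-out data can look like, the sketch below computes a simple Hoeffding-style test-set bound on a classifier's true error rate from its empirical error on a held-out set. It is not one of the four bounds evaluated in the thesis; the data, confidence level, and error rate are invented for the example.

```python
# Sketch: a Hoeffding-style test-set bound on classification error.
# Assumption: `errors` is a 0/1 vector of per-example mistakes on a held-out set
# that was NOT used for training; `delta` is the allowed failure probability.
import numpy as np

def hoeffding_test_set_bound(errors: np.ndarray, delta: float = 0.05) -> float:
    """Upper bound on the true error rate, holding with probability >= 1 - delta."""
    n = len(errors)
    empirical_error = errors.mean()
    slack = np.sqrt(np.log(1.0 / delta) / (2.0 * n))
    return empirical_error + slack

# Usage: with 1,000 held-out examples and roughly a 6% empirical error rate,
# the bound certifies (with probability 0.95) a true error of roughly 10%.
rng = np.random.default_rng(0)
errors = (rng.random(1000) < 0.06).astype(float)
print(hoeffding_test_set_bound(errors, delta=0.05))
```

    The slack term shrinks only as the square root of the held-out sample size, which is one reason such bounds become loose or vacuous when little unbiased data is available or when the train and test domains do not overlap.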

    AI alignment and generalization in deep learning

    This thesis covers a number of works in deep learning aimed at understanding and improving the generalization abilities of deep neural networks (DNNs). DNNs achieve unrivaled performance in a growing range of tasks and domains, yet their behavior during learning and deployment remains poorly understood. They can also be surprisingly brittle: in-distribution generalization can be a poor predictor of behavior or performance under distributional shifts, which typically cannot be avoided in practice. While these limitations are not unique to DNNs -- and indeed are likely to be challenges facing any AI systems of sufficient complexity -- the prevalence and power of DNNs makes them particularly worthy of study. I frame these challenges within the broader context of "AI Alignment": a nascent field focused on ensuring that AI systems behave in accordance with their users' intentions. While making AI systems more intelligent or capable can help make them more aligned, it is neither necessary nor sufficient for alignment. However, being able to align state-of-the-art AI systems (e.g. DNNs) is of great social importance in order to avoid undesirable and unsafe behavior from advanced AI systems. Without progress in AI Alignment, advanced AI systems might pursue objectives at odds with human survival, posing an existential risk ("x-risk") to humanity. A core tenet of this thesis is that achieving high performance on machine learning benchmarks is often a good indicator of AI systems' capabilities, but not of their alignment. This is because AI systems often achieve high performance in unexpected ways that reveal the limitations of our performance metrics and, more generally, of our techniques for specifying our intentions. Learning about human intentions using DNNs shows some promise, but DNNs are still prone to learning to solve tasks using concepts of "features" very different from those which are salient to humans. Indeed, this is a major source of their poor generalization on out-of-distribution data. By better understanding the successes and failures of DNN generalization and current methods of specifying our intentions, we aim to make progress towards deep-learning-based AI systems that are able to understand users' intentions and act accordingly.

    Evaluating Robustness and Uncertainty of Graph Models Under Structural Distributional Shifts

    In reliable decision-making systems based on machine learning, models have to be robust to distributional shifts or provide the uncertainty of their predictions. In node-level problems of graph learning, distributional shifts can be especially complex since the samples are interdependent. To evaluate the performance of graph models, it is important to test them on diverse and meaningful distributional shifts. However, most graph benchmarks considering distributional shifts for node-level problems focus mainly on node features, while structural properties are also essential for graph problems. In this work, we propose a general approach for inducing diverse distributional shifts based on graph structure. We use this approach to create data splits according to several structural node properties: popularity, locality, and density. In our experiments, we thoroughly evaluate the proposed distributional shifts and show that they can be quite challenging for existing graph models. We also reveal that simple models often outperform more sophisticated methods on these challenging shifts. Finally, our experiments provide evidence that there is a trade-off between the quality of learned representations for the base classification task under structural distributional shift and the ability to separate the nodes from different distributions using these representations.
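    To make the split construction concrete, the sketch below shows one way a popularity-style shift could be induced: rank nodes by degree and reserve the least connected nodes for the out-of-distribution test split. This is a hedged illustration using networkx with degree as a proxy for popularity; the paper's exact definitions of popularity, locality, and density may differ.

```python
# Sketch: induce a structural distributional shift by splitting nodes on degree.
# Assumption: "popularity" is proxied by node degree; the benchmark's actual
# popularity/locality/density measures are not reproduced here.
import networkx as nx

def popularity_split(G: nx.Graph, ood_fraction: float = 0.3):
    """Return (in_distribution_nodes, ood_nodes); low-degree nodes form the OOD split."""
    nodes_by_degree = sorted(G.nodes(), key=lambda v: G.degree(v))
    n_ood = int(ood_fraction * G.number_of_nodes())
    ood_nodes = set(nodes_by_degree[:n_ood])       # least "popular" nodes
    in_dist_nodes = set(nodes_by_degree[n_ood:])   # remaining nodes for train/valid
    return in_dist_nodes, ood_nodes

# Usage on a toy graph:
G = nx.barabasi_albert_graph(1000, m=3, seed=0)
train_nodes, ood_test_nodes = popularity_split(G, ood_fraction=0.3)
```

    Because the split is defined by a structural property rather than by node features, the train and test nodes can differ systematically in their neighborhoods, which is exactly the kind of shift the benchmark is designed to probe.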