11 research outputs found

    Deep generative modelling of the imaged human brain

    Human-machine symbiosis is a promising opportunity for the field of neurology, given that the interpretation of the imaged human brain is a trivial feat for neither entity. However, before machine learning systems can be used in real-world clinical situations, many issues with automated analysis must first be solved. In this thesis I aim to address what I consider the three biggest hurdles to the adoption of automated machine learning interpretative systems. For each issue, I first explain its importance given the overarching narratives of both neurology and machine learning, and then showcase my proposed solutions through the use of deep generative models of the imaged human brain. First, I address an uncontroversial and universal sign of intelligence: the ability to extrapolate knowledge to unseen cases. Human neuroradiologists have studied the anatomy of the healthy brain and can therefore, with some success, identify most pathologies present on an imaged brain, even without ever having been previously exposed to them. Current discriminative machine learning systems require vast amounts of labelled data to identify diseases accurately. In this first part I provide a generative framework that permits machine learning models to leverage unlabelled data more efficiently for better diagnoses with few or no labels. Secondly, I address a major ethical concern in medicine: the equitable evaluation of all patients, regardless of demographics or other identifying characteristics. This is, unfortunately, something that even human practitioners fail at, making the matter all the more pressing: unaddressed biases in data will become biases in the models. To address this concern I propose a framework in which a generative model synthesises demographically counterfactual brain imaging to reduce the proliferation of demographic biases in discriminative models. Finally, I tackle the challenge of spatial anatomical inference, a task at the centre of the field of lesion-deficit mapping, which, given brain lesions and associated cognitive deficits, attempts to discover the true functional anatomy of the brain. I provide a new Bayesian generative framework and implementation that allows for greatly improved results on this challenge, hopefully paving part of the road towards a more complete understanding of the human brain.

    Deep Variational Lesion-Deficit Mapping

    Causal mapping of the functional organisation of the human brain requires evidence of necessity, available at adequate scale only from pathological lesions of natural origin. This demands inferential models with sufficient flexibility to capture both the observable distribution of pathological damage and the unobserved distribution of the neural substrate. Current model frameworks, both mass-univariate and multivariate, either ignore distributed lesion-deficit relations or do not model them explicitly, relying on featurization incidental to a predictive task. Here we initiate the application of deep generative neural network architectures to the task of lesion-deficit inference, formulating it as the estimation of an expressive hierarchical model of the joint lesion and deficit distributions conditioned on a latent neural substrate. We implement such deep lesion-deficit inference with variational convolutional volumetric auto-encoders. We introduce a comprehensive framework for lesion-deficit model comparison, incorporating diverse candidate substrates, forms of substrate interactions, sample sizes, noise corruption, and population heterogeneity. Drawing on 5500 volume images of ischaemic stroke, we show that our model outperforms established methods by a substantial margin across all simulation scenarios, including comparatively small-scale and noisy data regimes. Our analysis justifies the widespread adoption of this approach, for which we provide an open-source implementation: https://github.com/guilherme-pombo/vae_lesion_defici
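    A minimal sketch of the core construction (a convolutional volumetric VAE whose latent code plays the role of the neural substrate and jointly explains lesion and deficit) might look as follows in PyTorch; the layer sizes and the deficit head are illustrative assumptions, not the published architecture:

```python
import torch
import torch.nn as nn

class LesionDeficitVAE(nn.Module):
    """Toy volumetric VAE: a shared latent 'substrate' z jointly
    generates the binary lesion volume and a scalar deficit score."""

    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(  # expects a 1x32x32x32 lesion mask
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.mu = nn.Linear(32 * 8 * 8 * 8, latent_dim)
        self.logvar = nn.Linear(32 * 8 * 8 * 8, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 8 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (32, 8, 8, 8)),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),  # lesion logits
        )
        self.deficit_head = nn.Linear(latent_dim, 1)  # deficit from substrate

    def forward(self, lesion):
        h = self.encoder(lesion)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterise
        return self.decoder(z), self.deficit_head(z), mu, logvar

def elbo_loss(lesion, deficit, model):
    """Negative ELBO: lesion reconstruction + deficit fit + KL to N(0, I)."""
    recon, pred, mu, logvar = model(lesion)
    rec = nn.functional.binary_cross_entropy_with_logits(recon, lesion, reduction="sum")
    fit = nn.functional.mse_loss(pred.squeeze(1), deficit, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + fit + kld
```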

    Ambiguous Medical Image Segmentation using Diffusion Models

    Collective insights from a group of experts have consistently proven to outperform an individual's best diagnosis in clinical tasks. For the task of medical image segmentation, however, existing research on AI-based alternatives focuses more on developing models that can imitate the best individual than on harnessing the power of expert groups. In this paper, we introduce a single diffusion-model-based approach that produces multiple plausible outputs by learning a distribution over group insights. Our proposed model generates a distribution of segmentation masks by leveraging the inherent stochastic sampling process of diffusion, using only minimal additional learning. We demonstrate on three different medical imaging modalities (CT, ultrasound, and MRI) that our model is capable of producing several possible variants while capturing the frequencies of their occurrence. Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks in terms of accuracy while preserving naturally occurring variation. We also propose a new metric to evaluate both the diversity and the accuracy of segmentation predictions, aligned with the clinical practice of drawing on collective insights.
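    The key mechanism, drawing several plausible masks by rerunning the stochastic reverse diffusion from fresh noise, can be illustrated with a toy DDPM-style sampler; the eps_model signature, the noise schedule, and the mask shapes below are placeholder assumptions rather than the paper's implementation:

```python
import torch

@torch.no_grad()
def sample_masks(eps_model, image, n_samples=8, timesteps=1000):
    """Draw n_samples plausible segmentation masks for one image by
    running the stochastic DDPM reverse process n_samples times.
    eps_model(x_t, image, t) is assumed to predict the added noise."""
    betas = torch.linspace(1e-4, 0.02, timesteps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    masks = []
    for _ in range(n_samples):
        x = torch.randn_like(image)  # start each chain from fresh noise
        for t in reversed(range(timesteps)):
            eps = eps_model(x, image, torch.tensor([t]))
            coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
            mean = (x - coef * eps) / torch.sqrt(alphas[t])
            noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
            x = mean + torch.sqrt(betas[t]) * noise  # stochasticity -> diversity
        masks.append((x > 0).float())  # threshold (assumes masks scaled to [-1, 1])
    return torch.stack(masks)  # (n_samples, C, H, W): an ensemble of variants
```

    Pixel-wise averaging of the returned stack then estimates how frequently each region is segmented across the implied distribution of expert opinions.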

    On discovering and learning structure under limited supervision

    Shapes, surfaces, events, and objects (living and non-living) constitute the world. The intelligence of natural agents, such as humans, goes beyond pattern recognition. We excel at building representations and distilling knowledge to understand and infer the structure of the world. Critically, the development of such reasoning capabilities can occur even with limited supervision. On the other hand, despite its phenomenal development, the major successes of machine learning, and of deep learning models in particular, lie primarily in tasks that have access to large annotated datasets. In this dissertation, we propose novel solutions to help address this gap by enabling machine learning models to learn structure and reason effectively in weakly supervised settings. The recurring theme of the thesis revolves around the question "How can a perceptual system learn to organize sensory information into useful knowledge under limited supervision?" It discusses the themes of geometry, composition, and association in four separate articles with applications to computer vision (CV) and reinforcement learning (RL). Our first contribution, Pix2Shape, presents an analysis-by-synthesis approach (also referred to as inverse graphics) to perception. Pix2Shape leverages probabilistic generative models to learn 3D-aware representations from single 2D images. The resulting formalism allows us to perform novel view synthesis of a scene and produce powerful representations of images. We achieve this by augmenting unsupervised deep learning with physically based inductive biases to decompose a scene's structure into geometry, pose, reflectance, and lighting. Our second contribution, MILe, addresses the ambiguity issues in single-label datasets such as ImageNet. It is often inappropriate to describe an image with a single label when it is composed of more than one prominent object. We show that integrating ideas from the cognitive linguistics literature and imposing appropriate inductive biases helps distil multiple possible descriptions from such weakly labeled datasets. Next, moving into the RL setting, we consider an agent interacting with its environment without a reward signal. Our third contribution, HaC, is a curiosity-based unsupervised approach to learning associations between visual and tactile modalities. This aids the agent in exploring the environment in an autonomous, self-guided fashion and in further using this knowledge to adapt to downstream tasks. In the absence of reward supervision, intrinsic motivation is useful for generating meaningful behavior in a self-supervised manner. In our final contribution, we address the representation-learning bottleneck in unsupervised RL agents, which has a detrimental effect on performance with high-dimensional pixel-based inputs. Our model-based approach combines reward-free exploration and planning to efficiently fine-tune unsupervised pre-trained models, achieving results comparable to task-specific baselines. This is a step towards building agents that can generalize quickly on more than a single task using image inputs alone.
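    As a rough, hypothetical illustration of the curiosity mechanism behind the third contribution, an intrinsic reward can be derived from the error of a learned model that predicts one modality from another; the sketch below is a generic prediction-error bonus, not the HaC implementation:

```python
import torch
import torch.nn as nn

class CrossModalCuriosity(nn.Module):
    """Intrinsic reward = error of predicting the tactile observation
    from the visual one; surprising pairings yield high reward."""

    def __init__(self, visual_dim=128, tactile_dim=16):
        super().__init__()
        self.predictor = nn.Sequential(
            nn.Linear(visual_dim, 64), nn.ReLU(),
            nn.Linear(64, tactile_dim),
        )

    def forward(self, visual_feat, tactile_obs):
        pred = self.predictor(visual_feat)
        # Per-sample squared error doubles as the curiosity bonus.
        reward = ((pred - tactile_obs) ** 2).mean(dim=-1)
        loss = reward.mean()  # the same signal trains the predictor
        return reward.detach(), loss
```

    The agent maximises the returned reward while the same error trains the predictor, so the bonus decays as the cross-modal association is learned.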

    Reining in the Deep Generative Models

    This doctoral thesis investigates the controllability of generative models (in particular VAEs and GANs), applied mainly to images. We improve (1) the quality of the generated images by removing the arbitrary assumption about the prior, (2) classification performance by choosing a suitable distribution in the latent space, and (3) inference performance by jointly optimising a loss function for generation and inference. Variational autoencoders (VAEs) are a very useful tool, as they can serve as the basis for a wide variety of machine learning tasks, such as semi-supervised learning, representation learning, and unsupervised learning. However, the images generated by VAEs are usually heavily smoothed, which significantly limits their practical use. Two hypotheses serve as explanations for this: first, a poor likelihood model, and second, an overly simple prior. We investigate these hypotheses by constructing a deterministic autoencoder, which we call the regularised autoencoder (RAE), from which samples can be drawn. This addition allows us to impose arbitrary prior distributions in the latent space, letting us examine the first hypothesis. The investigation leads to the conclusion that the main reason for the blurry images of a VAE is a poorly chosen prior model. Furthermore, we show that combining generative (e.g. the VAE objective) and discriminative (e.g. classifier) loss functions increases the performance of both. To this end, we use a special variant of an RAE to build a classifier that is robust to adversarial attacks. Conditional generative models have the potential to revolutionise the animation industry, among others. To achieve this, two key requirements must be met: first, high quality of the generated data (i.e. the generation of high-resolution images), and second, the generated data must follow their conditioning (i.e. generated images must satisfy the properties specified by the conditioning). We use the pixel-localised correlation between the conditioning variable and the generated image to ensure a strong relationship between the two, which gives us precise control over the generated data. Furthermore, we show that closing the generation-inference loop (training both jointly) in latent variable models improves both the generation component and the inference component. This enables a generative model and an inference model to be trained jointly in a unified framework, in both supervised and semi-supervised learning. With the proposed approach, it is possible to train a robust classifier by using the marginal likelihood of a data point, removing the arbitrary assumption about the prior, mitigating the discrepancy between the prior and posterior distributions, and closing the generation-inference loop. In this work, we examine the implications of each of these topics across a wide range of image classification and image generation tasks.
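    A minimal sketch of such a regularised autoencoder objective, deterministic reconstruction plus an explicit penalty on the latent code, with sampling recovered afterwards by fitting a density to the training latents, could look like this (the coefficient and the GMM-based sampler are illustrative assumptions):

```python
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

def rae_loss(x, encoder, decoder, beta=1e-2):
    """Deterministic RAE objective: reconstruction + latent L2 penalty.
    (The stochastic KL term of a VAE is replaced by regularisation.)"""
    z = encoder(x)
    recon = decoder(z)
    rec = nn.functional.mse_loss(recon, x)
    reg = beta * z.pow(2).sum(dim=1).mean()  # keep codes near the origin
    return rec + reg

def fit_sampler(latents, n_components=10):
    """Ex-post density estimation: fit a GMM to the training latents
    ((N, D) NumPy array) so the deterministic model can still be
    sampled like a generative one."""
    gmm = GaussianMixture(n_components=n_components)
    gmm.fit(latents)
    return gmm

# To generate: z_new, _ = gmm.sample(16)
#              images = decoder(torch.as_tensor(z_new, dtype=torch.float32))
```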

    Machine learning applications for seismic processing and interpretation

    During the past few years, exploration seismology has increasingly made use of machine learning algorithms in several areas, including seismic data processing, attribute analysis, and computer-aided interpretation. Since machine learning is a data-driven method for problem solving, it is important to adopt data of good quality with minimal bias. Hidden variables and an appropriate objective function also need to be considered. In this dissertation, I focus my research on adapting machine learning algorithms that have been successfully applied to other scientific analysis problems to seismic interpretation and seismic data processing. Seismic data volumes can be extremely large, containing gigabytes to terabytes of information. Add to these volumes the rich choice of seismic attributes, each of which has its own strengths in expressing geologic patterns, and the problem grows larger still. Seismic interpretation involves picking faults and horizons and identifying geologic features by their geometry, morphology, and amplitude patterns seen on seismic data. For the seismic facies classification task, I tested multiple attributes as input and built an attribute subset that can best differentiate the salt, mass transport deposit (MTD), and conformal reflector seismic patterns using a suite of attribute selection algorithms. The resulting attribute subset differentiates the three classes with high accuracy and has the benefit of reducing the dimensionality of the data. To maximize the use of unlabeled data as well as labeled data, I provide a workflow for facies classification based on a semi-supervised learning approach. Compared to using only labeled data, I find that the addition of unlabeled data for learning results in higher classification performance. In seismic processing, I propose a deep learning approach for random and coherent noise attenuation in the frequency-space domain. I find that the deep ResNet architecture speeds up denoising and improves accuracy, efficiently separating the noise from the signal. Finally, I review geophysical inversion and machine learning approaches from the perspective of solving inverse problems and show the similarities and differences of these approaches in both mathematical formulation and numerical tests.
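    A minimal sketch of the semi-supervised idea, letting confident predictions on unlabeled attribute vectors progressively augment the training set, can be written with scikit-learn's self-training wrapper; the toy attribute features, class names, and threshold are placeholders, not the dissertation's workflow:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)

# Toy stand-in for per-sample seismic attribute vectors (e.g. coherence,
# texture, curvature); classes: 0=salt, 1=MTD, 2=conformal reflector.
X = rng.normal(size=(1000, 8))
y = rng.integers(0, 3, size=1000)
y[200:] = -1  # scikit-learn convention: -1 marks unlabeled samples

base = RandomForestClassifier(n_estimators=100, random_state=0)
model = SelfTrainingClassifier(base, threshold=0.9)  # pseudo-label if confident
model.fit(X, y)  # labeled rows seed the model; unlabeled rows join iteratively

print(model.predict(X[:5]))
```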

    Artificial Intelligence Tools for Facial Expression Analysis.

    Inner emotions show visibly upon the human face and are understood as a basic guide to an individual's inner world. It is, therefore, possible to determine a person's attitudes, and the effects of others' behaviour on their deeper feelings, by examining facial expressions. In real-world applications, machines that interact with people need strong facial expression recognition. This recognition is seen to hold advantages for varied applications in affective computing, advanced human-computer interaction, security, stress and depression analysis, robotic systems, and machine learning. This thesis starts by proposing a benchmark of dynamic versus static methods for facial Action Unit (AU) detection. AU activation is a set of local individual facial muscle movements that occur in unison, constituting a natural facial expression event. Detecting AUs automatically can provide explicit benefits since it considers both static and dynamic facial features. For this research, AU occurrence detection was conducted by extracting features (static and dynamic), both nominal hand-crafted and deep learning representations, from each static image of a video. This confirmed the superior ability of a pretrained model, which yields a leap in performance. Next, temporal modelling was investigated to detect the underlying temporal variation phases using supervised and unsupervised methods on dynamic sequences. During these processes, the importance of stacking dynamic features on top of static ones was discovered for encoding deep features that learn temporal information when combining the spatial and temporal schemes simultaneously. This study also found that fusing spatial and temporal features gives more long-term temporal pattern information. Moreover, we hypothesised that using an unsupervised method would enable the learning of invariant information from dynamic textures. Recently, fresh cutting-edge developments have been created by approaches based on Generative Adversarial Networks (GANs). In the second section of this thesis, we propose a model based on the adoption of an unsupervised DCGAN for facial feature extraction and classification to achieve the following: the creation of facial expression images under different arbitrary poses (frontal, multi-view, and in the wild), and the recognition of emotion categories and AUs, in an attempt to resolve the problem of recognising the seven static classes of emotion in the wild. Thorough experimentation with the proposed cross-database setting demonstrates that this approach can improve generalisation results. Additionally, we showed that the features learnt by the DCGAN are poorly suited to encoding facial expressions when observed under multiple views, or when trained from a limited number of positive examples. Finally, this research focuses on disentangling identity from expression for facial expression recognition. A novel technique was implemented for emotion recognition from a single monocular image. A large-scale dataset (Face vid) was created from facial image videos rich in variations and in the distribution of facial dynamics, appearance, identities, expressions, and 3D poses. This dataset was used to train a DCNN (ResNet) to regress the expression parameters of a 3D Morphable Model jointly with a back-end classifier.
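    As a sketch of the feature-extraction idea, the convolutional stack of a trained DCGAN discriminator can be frozen and reused as an unsupervised feature extractor for expression classification; the architecture sizes below are illustrative assumptions, not the thesis model:

```python
import torch
import torch.nn as nn

class DCGANDiscriminator(nn.Module):
    """Minimal DCGAN-style discriminator for 64x64 greyscale faces."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),
        )
        self.real_fake = nn.Sequential(nn.Flatten(), nn.Linear(256 * 8 * 8, 1))

    def forward(self, x):
        return self.real_fake(self.features(x))

# After adversarial training, freeze the feature stack and train a light
# classifier head on top for the seven emotion categories.
disc = DCGANDiscriminator()
for p in disc.features.parameters():
    p.requires_grad = False
emotion_head = nn.Sequential(nn.Flatten(), nn.Linear(256 * 8 * 8, 7))

faces = torch.randn(4, 1, 64, 64)            # stand-in batch of face crops
logits = emotion_head(disc.features(faces))  # (4, 7) emotion scores
```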

    Self-supervised learning for transferable representations

    Machine learning has undeniably achieved remarkable advances thanks to large labelled datasets and supervised learning. However, this progress is constrained by the labour-intensive annotation process. It is not feasible to generate extensive labelled datasets for every problem we aim to address. Consequently, there has been a notable shift in recent times toward approaches that solely leverage raw data. Among these, self-supervised learning has emerged as a particularly powerful approach, offering scalability to massive datasets and showcasing considerable potential for effective knowledge transfer. This thesis investigates self-supervised representation learning with a strong focus on computer vision applications. We provide a comprehensive survey of self-supervised methods across various modalities, introducing a taxonomy that categorises them into four distinct families while also highlighting practical considerations for real-world implementation. Our focus thereafter is on the computer vision modality, where we perform a comprehensive benchmark evaluation of state-of-the-art self-supervised models against many diverse downstream transfer tasks. Our findings reveal that self-supervised models often outperform supervised learning across a spectrum of tasks, albeit with correlations weakening as tasks transition beyond classification, particularly for datasets with distribution shifts. Digging deeper, we investigate the influence of data augmentation on the transferability of contrastive learners, uncovering a trade-off between spatial and appearance-based invariances that generalise to real-world transformations. This begins to explain the differing empirical performance achieved by self-supervised learners on different downstream tasks, and it showcases the advantages of specialised representations produced with tailored augmentation. Finally, we introduce a novel self-supervised pre-training algorithm for object detection, aligning pre-training with the downstream architecture and objectives, leading to reduced localisation errors and improved label efficiency. In conclusion, this thesis contributes a comprehensive understanding of self-supervised representation learning and its role in enabling effective transfer across computer vision tasks.
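    The augmentation trade-off is easiest to see in a contrastive objective, where the choice of transforms determines which invariances the representation acquires; below is a generic SimCLR-style sketch in which the augmentation pipeline is the experimental knob (the specific transforms and parameters are illustrative, not the thesis code):

```python
import torch
import torch.nn.functional as F
from torchvision import transforms

# The invariances a contrastive learner acquires are set here: heavier
# cropping favours spatial invariance, heavier jitter favours appearance
# invariance; this is the trade-off investigated above.
augment = transforms.Compose([
    transforms.RandomResizedCrop(32, scale=(0.2, 1.0)),  # spatial
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),          # appearance
    transforms.RandomHorizontalFlip(),
])

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss over two augmented views of the same batch."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                 # cosine similarities
    n = z1.shape[0]
    sim.fill_diagonal_(float("-inf"))             # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)          # positive = the other view

# In practice: z1, z2 = encoder(augment(images)), encoder(augment(images))
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)  # stand-in embeddings
loss = nt_xent(z1, z2)
```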

    Bayesian Volumetric Autoregressive generative models for better semisupervised learning

    Deep generative models are rapidly gaining traction in medical imaging. Nonetheless, most generative architectures struggle to capture the underlying probability distributions of volumetric data, exhibit convergence problems, and offer no robust indices of model uncertainty. By comparison, the autoregressive generative model PixelCNN can be extended to volumetric data with relative ease: it readily attempts to learn the true underlying probability distribution, and it admits a Bayesian reformulation that provides a principled framework for reasoning about model uncertainty. Our contributions in this paper are twofold: first, we extend PixelCNN to work with volumetric brain magnetic resonance imaging data. Second, we show that reformulating this model to approximate a deep Gaussian process yields a measure of uncertainty that improves the performance of semi-supervised learning, in particular classification performance in settings where the proportion of labelled data is low. We quantify this improvement across classification, regression, and semantic segmentation tasks, training and testing on clinical magnetic resonance brain imaging data comprising T1-weighted and diffusion-weighted sequences.
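    The uncertainty mechanism can be illustrated with Monte Carlo dropout, a standard way to realise a deep Gaussian process approximation at test time: keep dropout active, draw several stochastic forward passes, and read the spread as model uncertainty (the network and the confidence threshold below are illustrative assumptions):

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model, x, n_samples=20):
    """Approximate Bayesian predictive distribution via MC dropout:
    dropout stays active at test time, so each pass samples a model."""
    model.train()  # keep dropout stochastic (do not switch to eval())
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)  # prediction, uncertainty

model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(128, 2),
)
x = torch.randn(16, 64)  # stand-in batch of image-derived features
mean, std = mc_dropout_predict(model, x)

# In a semi-supervised loop, only confidently predicted volumes
# (low spread) would be pseudo-labelled and added to the labelled pool.
confident = std.max(dim=1).values < 0.1
```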