
    Artificial Intelligence with Light Supervision: Application to Neuroimaging

    Recent developments in artificial intelligence research have resulted in tremendous success in computer vision, natural language processing and medical imaging tasks, often reaching human or superhuman performance. In this thesis, I further developed artificial intelligence methods based on convolutional neural networks, with a special focus on the automated analysis of brain magnetic resonance imaging (MRI) scans. I showed that efficient artificial intelligence systems can be created using only minimal supervision, by reducing the quantity and quality of annotations used for training. I applied those methods to the automated assessment of the burden of enlarged perivascular spaces, brain structural changes that may be related to dementia, stroke, mult…

    An unsupervised domain adaptation brain CT segmentation method across image modalities and diseases

    Computed tomography (CT) is the primary diagnostic tool for brain diseases. To determine the appropriate treatment plan, it is necessary to ascertain the patient's bleeding volume. Automatic segmentation algorithms for hemorrhagic lesions can significantly improve efficiency and avoid treatment delays. However, deep supervised learning algorithms usually require a large amount of labeled training data, making them difficult to apply clinically. In this study, we propose AMD-DAS, an unsupervised domain adaptation segmentation model for brain CT hemorrhage segmentation that can be trained across image modalities and diseases. It circumvents the heavy data-labeling task by converting the source-domain data (MRI with glioma) into the data required by our task (CT with intraparenchymal hemorrhage, IPH). Our model implements a two-stage domain adaptation process. In the first stage, we train a pseudo-CT image synthesis network using the CycleGAN architecture through a matching mechanism and domain adaptation approach. In the second stage, we use the model trained in the first stage to synthesize pseudo-CT images, and we train a domain-adaptation segmentation model on the pseudo-CT with source-domain labels together with real CT images. Our method outperforms the basic one-stage domain adaptation segmentation baseline (+11.55 Dice) and achieves an 86.93 Dice score on the IPH unsupervised segmentation task. Because the model can be trained without ground-truth labels, its application potential is increased. Our implementation is publicly available at https://github.com/GuanghuiFU/AMD-DAS-Brain-CT-Segmentation
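    The abstract above describes a two-stage pipeline: a CycleGAN-style generator first maps labeled source MRI to pseudo-CT, and a segmentation network is then trained on the pseudo-CT with the source labels. The authors' actual code is at the linked repository; the sketch below is only a minimal, illustrative PyTorch skeleton of that second stage (plus the Dice metric used for evaluation), in which the toy networks, dummy data and class names are assumptions rather than the released implementation.

```python
# Minimal sketch of a two-stage cross-modality adaptation pipeline in the spirit
# of the abstract above. Class names, shapes and the dummy data are illustrative
# assumptions; the authors' actual implementation is linked in the abstract.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Stand-in for the stage-1 CycleGAN generator (MRI -> pseudo-CT)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class SegNet(nn.Module):
    """Stand-in for the stage-2 segmentation network trained on pseudo-CT."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 1),  # background / lesion logits
        )
    def forward(self, x):
        return self.net(x)

def dice_score(pred, target, eps=1e-6):
    """Dice overlap between a binary prediction and a binary target."""
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

# Dummy source-domain data: MRI slices with lesion masks.
mri = torch.rand(8, 1, 64, 64)
labels = (torch.rand(8, 1, 64, 64) > 0.9).long().squeeze(1)

# Stage 1 (sketched): the generator would be trained adversarially against real,
# unlabeled CT; here it is only instantiated and used frozen.
gen = Generator().eval()

# Stage 2: synthesize pseudo-CT and train the segmenter with the source labels.
seg = SegNet()
opt = torch.optim.Adam(seg.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
with torch.no_grad():
    pseudo_ct = gen(mri)
for _ in range(5):
    opt.zero_grad()
    logits = seg(pseudo_ct)
    loss = loss_fn(logits, labels)
    loss.backward()
    opt.step()

pred = logits.argmax(1)
print("Dice on the (dummy) training batch:", dice_score(pred.float(), labels.float()).item())
```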

    DEVELOPING INTEGRATED MACHINE LEARNING MODELS FOR AUTOMATIC COMPUTER-AIDED DIAGNOSIS IN ISCHEMIC ACUTE STROKE MRI

    Fast detection and quantification of lesion cores in diffusion-weighted images (DWIs) has long been anticipated in the clinical and research communities for planning treatment of acute stroke. The recent emergence of successful machine learning (ML) methods, especially deep learning (DL), enables automatic computer-aided diagnosis (CAD) of stroke in DWIs. However, the lack of publicly available large-scale data and ML models for clinical acute stroke DWI remains a bottleneck. In this work, we established the first large annotated open-source database of 2,888 clinical acute stroke MRIs (Chapter 2) to train and develop ML models for automatic stroke lesion detection and segmentation in clinical acute stroke MRI (Chapter 3). For automatic measurement of infarcted arterial territories, the first digital 3D deformable brain arterial territory atlas was created (Chapter 4). In addition, a fully automatic ML system was created to generate automatic radiological reports (Chapters 5 and 6) for calculation of ASPECTS, prediction and quantification of infarcted arterial and anatomical regions, and estimation of hydrocephalus present in acute stroke MRI. The complete ML system in this work runs locally in real time with minimal computational requirements. It is publicly available and readily usable by non-expert users.

    Advancing probabilistic and causal deep learning in medical image analysis

    The power and flexibility of deep learning have made it an indispensable tool for tackling modern machine learning problems. However, this flexibility comes at the cost of robustness and interpretability, which can lead to undesirable or even harmful outcomes. Deep learning models often fail to generalise to real-world conditions and produce unforeseen errors that hinder wide adoption in safety-critical domains such as healthcare. This thesis presents multiple works that address the reliability problems of deep learning in safety-critical domains by being aware of its vulnerabilities and incorporating more domain knowledge when designing and evaluating our algorithms. We start by showing how close collaboration with domain experts is necessary to achieve good results in a real-world clinical task: the multiclass semantic segmentation of traumatic brain injury (TBI) lesions in head CT. We continue by proposing an algorithm that models spatially coherent aleatoric uncertainty in segmentation tasks by considering the dependencies between pixels. The lack of proper uncertainty quantification is a robustness issue that is ubiquitous in deep learning, and tackling it is of the utmost importance if we want to deploy these systems in the real world. Lastly, we present a general framework for evaluating image counterfactual inference models in the absence of ground-truth counterfactuals. Counterfactuals are extremely useful to reason about models and data and to probe models for explanations or mistakes; as a result, their evaluation is critical for improving the interpretability of deep learning models.
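    One common way to model spatially coherent aleatoric uncertainty of the kind described above is to place a low-rank multivariate normal distribution over the per-pixel logits, so that sampled segmentations vary jointly across pixels rather than independently. The sketch below is a minimal, hypothetical PyTorch illustration of that general idea; the toy network, rank, shapes and data are assumptions and not the thesis implementation.

```python
# Minimal sketch: spatially coherent aleatoric uncertainty via a low-rank
# multivariate normal over per-pixel logits (illustrative assumption, not the
# thesis code). Samples from the distribution are whole logit maps, so the
# resulting segmentations vary coherently across pixels.
import torch
import torch.nn as nn

H = W = 32   # spatial size of the (toy) segmentation map
RANK = 10    # rank of the low-rank covariance factor

class LogitDistributionHead(nn.Module):
    """Predicts mean, low-rank factor and diagonal of the logit covariance."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        )
        self.mean = nn.Conv2d(16, 1, 1)
        self.cov_factor = nn.Conv2d(16, RANK, 1)
        self.cov_diag = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        feats = self.backbone(x)
        mu = self.mean(feats).flatten(1)                              # (B, H*W)
        factor = self.cov_factor(feats).flatten(2).transpose(1, 2)    # (B, H*W, RANK)
        diag = torch.nn.functional.softplus(self.cov_diag(feats)).flatten(1) + 1e-5
        return torch.distributions.LowRankMultivariateNormal(mu, factor, diag)

model = LogitDistributionHead()
image = torch.rand(2, 1, H, W)
dist = model(image)

# Each sample is a full logit map; thresholding gives one plausible segmentation.
logit_samples = dist.rsample((4,))                       # (4, B, H*W)
seg_samples = (logit_samples.sigmoid() > 0.5).reshape(4, 2, H, W)
print("segmentation samples:", seg_samples.shape)
```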

    Deep Interpretability Methods for Neuroimaging

    Brain dynamics are highly complex and yet hold the key to understanding brain function and dysfunction. The dynamics captured by resting-state functional magnetic resonance imaging data are noisy, high-dimensional, and not readily interpretable. The typical approach of reducing these data to low-dimensional features and focusing on the most predictive features comes with strong assumptions and can miss essential aspects of the underlying dynamics. In contrast, introspection of discriminatively trained deep learning models may uncover disorder-relevant elements of the signal at the level of individual time points and spatial locations. Nevertheless, the difficulty of reliable training on high-dimensional but small-sample datasets and the unclear relevance of the resulting predictive markers prevent the widespread use of deep learning in functional neuroimaging. In this dissertation, we address these challenges by proposing a deep learning framework that learns from high-dimensional dynamical data while maintaining stable, ecologically valid interpretations. The developed model is pre-trainable and alleviates the need to collect an enormous number of neuroimaging samples to achieve optimal training. We also provide a quantitative validation module, Retain and Retrain (RAR), that can objectively verify the higher predictability of the dynamics learned by the model. Results demonstrate that the proposed framework enables learning fMRI dynamics directly from small data and capturing compact, stable interpretations of features predictive of function and dysfunction. We also comprehensively review the deep interpretability literature in the neuroimaging domain; our analysis reveals the ongoing trend of interpretability practices in neuroimaging studies and identifies gaps that should be addressed for effective human-machine collaboration in this domain. This dissertation also proposes a post hoc interpretability method, Geometrically Guided Integrated Gradients (GGIG), that leverages geometric properties of the functional space as learned by a deep learning model. With extensive experiments and quantitative validation on the MNIST and ImageNet datasets, we demonstrate that GGIG outperforms integrated gradients (IG), a popular interpretability method in the literature. As GGIG is able to identify the contours of the discriminative regions in the input space, it may be useful in various medical imaging tasks where fine-grained localization as an explanation is beneficial.
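    The abstract compares GGIG against integrated gradients (IG). GGIG itself is not specified here, so the sketch below only illustrates the standard IG baseline: attributions are the input's difference from a baseline multiplied by the gradient averaged along a straight path between baseline and input. The toy classifier, inputs and step count are illustrative assumptions.

```python
# Minimal sketch of integrated gradients (the IG baseline named in the abstract),
# approximated with a Riemann sum over a straight path from baseline to input.
# The tiny classifier and random input are illustrative assumptions.
import torch
import torch.nn as nn

def integrated_gradients(model, x, baseline, target, steps=50):
    """Approximate IG with `steps` interpolation points between baseline and x."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1, 1, 1)
    interpolated = baseline + alphas * (x - baseline)      # (steps, C, H, W)
    interpolated.requires_grad_(True)
    outputs = model(interpolated)[:, target].sum()
    grads = torch.autograd.grad(outputs, interpolated)[0]
    avg_grads = grads.mean(dim=0)                          # average gradient along the path
    return (x - baseline).squeeze(0) * avg_grads           # attribution map

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy MNIST-like classifier
x = torch.rand(1, 1, 28, 28)
baseline = torch.zeros_like(x)
attr = integrated_gradients(model, x, baseline, target=3)
print("attribution map shape:", attr.shape)
```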

    Artificial intelligence in cancer imaging: Clinical challenges and applications

    Judgement, as one of the core tenets of medicine, relies upon the integration of multilayered data with nuanced decision making. Cancer offers a unique context for medical decisions given not only its variegated forms with evolution of disease but also the need to take into account the individual condition of patients, their ability to receive treatment, and their responses to treatment. Challenges remain in the accurate detection, characterization, and monitoring of cancers despite improved technologies. Radiographic assessment of disease most commonly relies upon visual evaluations, the interpretations of which may be augmented by advanced computational analyses. In particular, artificial intelligence (AI) promises to make great strides in the qualitative interpretation of cancer imaging by expert clinicians, including volumetric delineation of tumors over time, extrapolation of the tumor genotype and biological course from its radiographic phenotype, prediction of clinical outcome, and assessment of the impact of disease and treatment on adjacent organs. AI may automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection, management decisions on whether or not to administer an intervention, and subsequent observation to a yet to be envisioned paradigm. Here, the authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types (lung, brain, breast, and prostate) to illustrate how common clinical problems are being addressed. Although most studies evaluating AI applications in oncology to date have not been rigorously validated for reproducibility and generalizability, the results do highlight increasingly concerted efforts in pushing AI technology to clinical use and to impact future directions in cancer care.

    Symbiotic deep learning for medical image analysis with applications in real-time diagnosis for fetal ultrasound screening

    The last hundred years have seen a monumental rise in the power and capability of machines to perform intelligent tasks in the stead of previously human operators. This rise is not expected to slow down any time soon, and what this means for society and humanity as a whole remains to be seen. The overwhelming notion is that, with the right goals in mind, the growing influence of machines on our everyday tasks will enable humanity to give more attention to the truly groundbreaking challenges that we all face together. This will usher in a new age of human-machine collaboration in which humans and machines may work side by side to achieve greater heights for all of humanity. Intelligent systems are useful in isolation, but the true benefits of intelligent systems come to the fore in complex systems where the interaction between humans and machines can be made seamless, and it is this goal of symbiosis between human and machine, which may democratise complex knowledge, that motivates this thesis. In the recent past, data-driven methods have come to the fore and now represent the state of the art in many different fields. Alongside the shift from rule-based towards data-driven methods, we have also seen a shift in how humans interact with these technologies. Human-computer interaction is changing in response to data-driven methods, and new techniques must be developed to enable the same symbiosis between human and machine for data-driven methods as for previous formula-driven technology. We address five key challenges which need to be overcome for data-driven human-in-the-loop computing to reach maturity. These are (1) the 'Categorisation Challenge', where we examine existing work and form a taxonomy of the different methods being utilised for data-driven human-in-the-loop computing; (2) the 'Confidence Challenge', where data-driven methods must communicate interpretable beliefs in how confident their predictions are; (3) the 'Complexity Challenge', where reasoned communication becomes increasingly important as the complexity of tasks and of the methods to solve them increases; (4) the 'Classification Challenge', in which we look at how complex methods can be separated in order to provide greater reasoning in complex classification tasks; and finally (5) the 'Curation Challenge', where we challenge the assumptions around bottleneck creation for the development of supervised learning methods.

    Type 1 diabetes mellitus and the brain: influence of clinical complications and genetic factors on brain structure and cognitive function

    Type 1 diabetes mellitus is characterised by absolute insulin deficiency, chronic hyperglycaemia and intermittent hypoglycaemia consequent upon treatment with insulin. Severe hypoglycaemia, defined as hypoglycaemia sufficient to necessitate third-party intervention for recovery, commonly complicates insulin therapy, and repeated exposure may be detrimental to the brain. Microvascular disease, manifest as retinopathy, neuropathy or nephropathy, frequently complicates diabetes, the risk being related to long-term glucose control and increasing disease duration. Microvascular disease may also affect the cerebral circulation and could potentially compromise brain structure and intellectual performance. Type 1 diabetes commonly develops in childhood before full maturation of the central nervous system, and the developing brain may exhibit relative vulnerability to damage as a consequence of exposure to severe hypoglycaemia, or the development of diabetic ketoacidosis, in early childhood. Genetic factors influence the vulnerability of an individual to develop cognitive impairment following pathological processes known to disadvantage the central nervous system. Polymorphism of the Apolipoprotein-E gene has been identified as one such factor and is known to influence the prognosis and cognitive outcomes following a wide variety of cerebral insults. The studies contained within this thesis explore the long-term consequences of the clinical factors described above on brain structure and the cognitive performance of young adults with Type 1 diabetes mellitus of long duration. The effects of polymorphism of the Apolipoprotein-E gene on the cognitive performance of young adults who have Type 1 diabetes mellitus are also evaluated.