
    Deep generative modelling of the imaged human brain

    Human-machine symbiosis is a promising opportunity for the field of neurology, given that interpreting the imaged human brain is not a trivial feat for either human or machine. However, before machine learning systems can be used in real-world clinical situations, many issues with automated analysis must first be solved. In this thesis I aim to address what I consider the three biggest hurdles to the adoption of automated machine learning interpretative systems. For each issue, I first explain its importance within the overarching narratives of both neurology and machine learning, and then present my proposed solutions through the use of deep generative models of the imaged human brain. First, I address an uncontroversial and universal sign of intelligence: the ability to extrapolate knowledge to unseen cases. Human neuroradiologists have studied the anatomy of the healthy brain and can therefore, with some success, identify most pathologies present on an imaged brain, even without ever having been exposed to them before. Current discriminative machine learning systems, by contrast, require vast amounts of labelled data to identify diseases accurately. In this first part I provide a generative framework that allows machine learning models to leverage unlabelled data more efficiently, yielding better diagnoses with few or no labels. Second, I address a major ethical concern in medicine: the equitable evaluation of all patients, regardless of demographics or other identifying characteristics. This is, unfortunately, something that even human practitioners fail at, which makes the matter all the more pressing: unaddressed biases in data will become biases in the models. To address this concern I propose a framework in which a generative model synthesises demographically counterfactual brain imaging, reducing the propagation of demographic biases into discriminative models. Finally, I tackle the challenge of spatial anatomical inference, a task at the centre of the field of lesion-deficit mapping, which attempts to discover the true functional anatomy of the brain from brain lesions and their associated cognitive deficits. I provide a new Bayesian generative framework and implementation that greatly improves results on this challenge, hopefully paving part of the road towards a more complete understanding of the human brain.
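    As a toy illustration of the first theme above, leveraging unlabelled scans through a generative model, the sketch below pretrains a small variational autoencoder on unlabelled images and then fits a linear classifier on the learned latents with only a few labels. The architecture, layer sizes, and placeholder data are assumptions made purely for illustration and are not the models developed in the thesis.

```python
# Minimal semi-supervised sketch (illustrative only): learn a latent space from
# unlabelled images, then train a small classifier on the latents using the few
# labels available. Sizes and data are assumptions, not the thesis's models.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, in_dim=64 * 64, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    rec = F.mse_loss(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# One unsupervised pre-training step on placeholder "unlabelled scans".
vae = VAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
x_unlabelled = torch.rand(16, 64 * 64)
opt.zero_grad()
recon, mu, logvar = vae(x_unlabelled)
vae_loss(recon, x_unlabelled, mu, logvar).backward()
opt.step()

# A small classifier on the frozen latent means can then be fitted with only a
# handful of labelled examples.
clf = nn.Linear(32, 2)
```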

    AVATAR - Machine Learning Pipeline Evaluation Using Surrogate Model

    The evaluation of machine learning (ML) pipelines is essential during automatic ML pipeline composition and optimisation. Previous methods, such as the Bayesian-based and genetic-based optimisation implemented in Auto-Weka, Auto-sklearn and TPOT, evaluate pipelines by executing them. As a result, pipeline composition and optimisation with these methods requires a tremendous amount of time, which prevents them from exploring complex pipelines to find better predictive models. To further explore this research challenge, we have conducted experiments showing that many of the generated pipelines are invalid, and that executing them is unnecessary to determine whether they are good pipelines. To address this issue, we propose a novel method, AVATAR, that evaluates the validity of ML pipelines using a surrogate model. AVATAR accelerates automatic ML pipeline composition and optimisation by quickly discarding invalid pipelines. Our experiments show that AVATAR is more efficient at evaluating complex pipelines than traditional evaluation approaches that require executing them.
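    The core idea of checking pipeline validity without execution can be pictured as a symbolic compatibility check: each component declares what data properties it requires and produces, and a pipeline is valid only if every step's output satisfies the next step's input. The component descriptions below are hypothetical and do not reproduce AVATAR's actual surrogate model.

```python
# Illustrative sketch of execution-free validity checking: propagate data
# properties through the pipeline symbolically. Capability sets are invented
# for this example, not AVATAR's actual surrogate model.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    accepts: frozenset      # data properties required at the input
    produces: frozenset     # data properties guaranteed at the output

def pipeline_is_valid(pipeline, initial_data):
    """Return True if every step's requirements are met by the preceding step."""
    state = frozenset(initial_data)
    for comp in pipeline:
        if not comp.accepts <= state:   # a required property is missing
            return False
        state = comp.produces
    return True

imputer = Component("Imputer", frozenset({"numeric"}),
                    frozenset({"numeric", "no_missing"}))
scaler = Component("Scaler", frozenset({"numeric", "no_missing"}),
                   frozenset({"numeric", "no_missing", "scaled"}))
svm = Component("SVM", frozenset({"numeric", "no_missing"}),
                frozenset({"predictions"}))

print(pipeline_is_valid([imputer, scaler, svm], {"numeric"}))  # True
print(pipeline_is_valid([scaler, svm], {"numeric"}))           # False: missing values unhandled
```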

    Image classification over unknown and anomalous domains

    A longstanding goal in computer vision research is to develop methods that are simultaneously applicable to a broad range of prediction problems. In contrast to this, models often perform best when they are specialized to some task or data type. This thesis investigates the challenges of learning models that generalize well over multiple unknown or anomalous modes and domains in data, and presents new solutions for learning robustly in this setting. Initial investigations focus on normalization for distributions that contain multiple sources (e.g. images in different styles like cartoons or photos). Experiments demonstrate the extent to which existing modules, batch normalization in particular, struggle with such heterogeneous data, and a new solution is proposed that can better handle data from multiple visual modes by using differing sample statistics for each. While ideas to counter the overspecialization of models have been formulated in sub-disciplines of transfer learning, e.g. multi-domain and multi-task learning, these usually rely on the existence of meta information, such as task or domain labels. Relaxing this assumption gives rise to a new transfer learning setting, called latent domain learning in this thesis, in which training and inference are carried out over data from multiple visual domains without domain-level annotations. Customized solutions are required for this, as the performance of standard models degrades: a new data augmentation technique that interpolates between latent domains in an unsupervised way is presented, alongside a dedicated module that sparsely accounts for hidden domains in data without requiring domain labels to do so. In addition, the thesis studies the problem of classifying previously unseen or anomalous modes in data, a fundamental problem in one-class learning and anomaly detection in particular. While recent work has focused on developing self-supervised solutions for the one-class setting, this thesis formulates new methods based on transfer learning. Extensive experimental evidence demonstrates that a transfer-based perspective benefits new problems recently proposed in the anomaly detection literature, in particular challenging semantic detection tasks.
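    One way to picture normalization with per-mode statistics is a small module that keeps a separate batch-norm branch per latent mode and mixes them with a soft assignment predicted from the input itself. This is an illustrative sketch under assumed sizes and gating, not the module proposed in the thesis.

```python
# Illustrative sketch: normalise heterogeneous data with one set of statistics
# per latent mode, mixed by a soft gate. Sizes and gating are assumptions.
import torch
import torch.nn as nn

class SoftModeNorm(nn.Module):
    def __init__(self, channels, num_modes=3):
        super().__init__()
        self.norms = nn.ModuleList([nn.BatchNorm2d(channels) for _ in range(num_modes)])
        self.gate = nn.Linear(channels, num_modes)

    def forward(self, x):                          # x: [B, C, H, W]
        pooled = x.mean(dim=(2, 3))                # [B, C] per-image summary
        w = torch.softmax(self.gate(pooled), 1)    # [B, K] soft mode assignment
        normed = torch.stack([n(x) for n in self.norms], dim=1)  # [B, K, C, H, W]
        return (w[:, :, None, None, None] * normed).sum(dim=1)

x = torch.randn(8, 16, 32, 32)
print(SoftModeNorm(16)(x).shape)                   # torch.Size([8, 16, 32, 32])
```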

    Learning from Complex Neuroimaging Datasets

    Advancements in Magnetic Resonance Imaging (MRI) have allowed for the early diagnosis of neurodevelopmental disorders and neurodegenerative diseases. Neuroanatomical abnormalities in the cerebral cortex are often investigated by examining group-level differences of brain morphometric measures extracted from highly sampled cortical surfaces. However, group-level differences do not allow for the individual-level outcome prediction critical for application to clinical practice. Despite the success of MRI-based deep learning frameworks, critical issues have been identified: (1) extracting accurate and reliable local features from the cortical surface, (2) determining a parsimonious subset of cortical features for correct disease diagnosis, (3) learning directly from a non-Euclidean high-dimensional feature space, (4) improving the robustness of multi-task multi-modal models, and (5) identifying anomalies in imbalanced and heterogeneous settings. This dissertation describes novel methodological contributions to tackle the challenges above. First, I introduce a Laplacian-based method for quantifying local Extra-Axial Cerebrospinal Fluid (EA-CSF) from structural MRI. Next, I describe a deep learning approach for combining local EA-CSF with other morphometric cortical measures for early disease detection. Then, I propose a data-driven approach for extending convolutional learning to non-Euclidean manifolds such as cortical surfaces. I also present a unified framework for robust multi-task learning from imaging and non-imaging information. Finally, I propose a semi-supervised generative approach for the detection of samples from untrained classes in imbalanced and heterogeneous developmental datasets. The proposed methodological contributions are evaluated by applying them to the early detection of Autism Spectrum Disorder (ASD) in the first year of an infant's life; the aging human brain is also examined in the context of studying different stages of Alzheimer's Disease (AD).
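    The third contribution above, extending convolutional learning to cortical surfaces, can be pictured with a minimal graph-style convolution over a surface mesh: each vertex aggregates features from its mesh neighbours before a shared linear map. The aggregation rule, toy mesh, and feature sizes below are assumptions for illustration, not the data-driven operator proposed in the dissertation.

```python
# Illustrative mesh convolution: average neighbour features, concatenate with
# the vertex's own features, and apply a shared linear layer. Toy data only.
import torch
import torch.nn as nn

class MeshConv(nn.Module):
    def __init__(self, in_feats, out_feats):
        super().__init__()
        self.lin = nn.Linear(2 * in_feats, out_feats)   # self features + neighbour mean

    def forward(self, x, adj):
        # x: [V, F] per-vertex features, adj: [V, V] binary mesh adjacency.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh_mean = adj @ x / deg                      # average over neighbouring vertices
        return torch.relu(self.lin(torch.cat([x, neigh_mean], dim=1)))

V = 100                                                 # toy mesh with 100 vertices
adj = (torch.rand(V, V) < 0.05).float()
adj = ((adj + adj.T) > 0).float().fill_diagonal_(0)     # symmetric, no self-loops
features = torch.randn(V, 8)                            # e.g. thickness, curvature, EA-CSF
print(MeshConv(8, 16)(features, adj).shape)             # torch.Size([100, 16])
```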

    Classification of Sound Scenes and Events in Real-World Scenarios with Deep Learning Techniques

    The classification of sound events is a field of machine listening that is becoming increasingly interesting due to the large number of applications that could benefit from this technology. Unlike other fields of machine listening related to music information retrieval or speech recognition, sound event classification has a number of intrinsic problems. These problems are the polyphonic nature of most environmental sound recordings, the difference in the nature of each sound, the lack of temporal structure, and the addition of background noise and reverberation in the recording process. These problems are fields of study for the scientific community today. However, it should be noted that when a machine listening solution is deployed in real environments, a number of extra problems may arise. These problems are Open-Set Recognition (OSR), Few-Shot Learning (FSL) and consideration of system runtime (low complexity). OSR is defined as the problem that appears when an artificial intelligence system has to face an unknown situation in which classes unseen during the training stage are present at inference time. FSL corresponds to the problem that occurs when there are very few samples available for each considered class. Finally, since these systems are normally deployed on edge devices, execution time must be taken into account, as the less time the system takes to give a response, the better the experience perceived by the users. Solutions based on deep learning techniques for similar problems in the image domain have shown promising results. The most widespread solutions are those that implement Convolutional Neural Networks (CNNs). Therefore, many state-of-the-art audio systems propose to convert audio signals into a two-dimensional representation that can be treated as an image. The generation of internal maps is often done by the convolutional layers of the CNNs. However, these layers have a series of limitations that must be studied in order to propose techniques for improving the resulting feature maps. To this end, novel networks have been proposed that merge two different methods, residual learning and squeeze-and-excitation techniques. The results show an improvement in the accuracy of the system with the addition of only a small number of extra parameters. On the other hand, these solutions based on two-dimensional inputs can show a certain bias, since the choice of audio representation can be specific to a particular task. Therefore, a comparative study of different residual networks fed directly by the raw audio signal has been carried out. These solutions are known as end-to-end. While similar studies have been carried out in the literature in the image domain, the results suggest that the best-performing residual blocks for computer vision tasks may not be the same as those for audio classification. Regarding the FSL and OSR problems, an autoencoder-based framework capable of mitigating both problems together is proposed. This solution is capable of creating robust representations of these audio patterns from just a few samples, while being able to reject unwanted audio classes.
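    The combination of residual learning with squeeze-and-excitation mentioned above can be sketched as a single block operating on spectrogram inputs; the layer sizes, reduction ratio, and input shape below are assumptions and do not reproduce the exact blocks studied in the thesis.

```python
# Illustrative squeeze-and-excitation residual block for log-mel spectrograms:
# two convolutions, a channel-wise gate from global average pooling, and a
# residual connection. Sizes are assumptions for this sketch only.
import torch
import torch.nn as nn

class SEResBlock(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1, self.bn2 = nn.BatchNorm2d(channels), nn.BatchNorm2d(channels)
        self.se = nn.Sequential(                       # squeeze-and-excitation gate
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                              # x: [B, C, mel, time]
        h = torch.relu(self.bn1(self.conv1(x)))
        h = self.bn2(self.conv2(h))
        w = self.se(h.mean(dim=(2, 3)))                # squeeze: global average pool
        h = h * w[:, :, None, None]                    # excitation: rescale channels
        return torch.relu(h + x)                       # residual connection

spec = torch.randn(4, 32, 64, 128)                     # batch of log-mel spectrograms
print(SEResBlock(32)(spec).shape)                      # torch.Size([4, 32, 64, 128])
```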

    Digital Forensics AI: on Practicality, Optimality, and Interpretability of Digital Evidence Mining Techniques

    Digital forensics as a field has progressed alongside technological advancements over the years, just as digital devices have become more robust and sophisticated. However, criminals and attackers have devised means of exploiting the vulnerabilities or sophistication of these devices to carry out malicious activities in unprecedented ways, in the belief that electronic crimes can be committed without identities being revealed or trails being established. Several applications of artificial intelligence (AI) have demonstrated interesting and promising solutions to seemingly intractable societal challenges. This thesis aims to advance the application of AI techniques in digital forensic investigation. First, we experiment with a complex case scenario in which suspects corresponded by e-mail and suspiciously deleted certain communications, presumably to conceal evidence. The purpose is to demonstrate the efficacy of Artificial Neural Networks (ANNs) in learning and detecting communication patterns over time, and then predicting the possibility of missing communications along with their potential topics of discussion. To do this, we developed a novel approach alongside other existing models; the accuracy of our results is evaluated, and their performance on previously unseen data is measured. Second, we propose the term “Digital Forensics AI” (DFAI) to formalize the application of AI in digital forensics, highlighting the instruments that facilitate the best evidential outcomes and presentation mechanisms that are adaptable to the probabilistic output of AI models. Finally, we strengthen the case for AI in digital forensics by recommending methodologies and approaches for bridging trust gaps through the development of interpretable models that facilitate the admissibility of digital evidence in legal proceedings.
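    Purely as an illustration of the kind of temporal pattern modelling described above, the sketch below fits a small recurrent network to weekly e-mail counts and flags weeks where the observed volume falls far below the prediction as candidate gaps. The data, threshold, and architecture are invented and do not reproduce the models developed in the thesis.

```python
# Illustrative sketch only: model weekly e-mail counts with a small LSTM and
# flag weeks whose observed count is far below the prediction.
import torch
import torch.nn as nn

class VolumeModel(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, counts):                  # counts: [B, T, 1] weekly e-mail counts
        out, _ = self.rnn(counts)
        return self.head(out)                   # predicted count for each week

model = VolumeModel()
history = torch.rand(1, 52, 1) * 10             # one year of placeholder weekly counts
pred = model(history)

# Weeks where the observation is far below the prediction are candidate gaps
# (threshold of 5.0 is arbitrary for this sketch).
gap_score = (pred - history).squeeze()
suspicious_weeks = (gap_score > 5.0).nonzero().flatten()
print(suspicious_weeks)
```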

    Enabling cardiovascular multimodal, high dimensional, integrative analytics

    While the understanding of cardiovascular morbidity has traditionally relied on the acquisition and interpretation of health data, advances in health technologies have enabled us to collect far larger amounts of health data. This thesis explores the application of advanced analytics that use powerful mechanisms for integrating health data across different modalities and dimensions into a single, holistic environment to better understand different diseases, with a focus on cardiovascular conditions. Different statistical methodologies are applied across a number of case studies, supported by a novel methodology to integrate and simplify data collection. The work culminates in the different dataset modalities explaining different effects on morbidity: blood biomarkers, electrocardiogram recordings, RNA-Seq measurements, and different population effects piece together the understanding of a person's morbidity. More specifically, explainable artificial intelligence methods were employed on structured datasets from patients with atrial fibrillation to improve screening for the disease. Omics datasets, including RNA-sequencing and genotype datasets, were examined and new biomarkers were discovered, allowing a better understanding of atrial fibrillation. Electrocardiogram signal data were used to assess the early risk prediction of heart failure, enabling clinicians to use this novel approach to estimate future incidence. Population-level data were applied to the identification of associations and the temporal trajectory of diseases to better understand disease dependencies in different clinical cohorts.
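    As a generic illustration of explainable screening on structured (tabular) data, the sketch below fits a gradient-boosting classifier to synthetic data and ranks features by permutation importance. The data and model stand in for clinical variables and are not the thesis's actual pipeline.

```python
# Illustrative sketch: explainability via permutation importance on a
# gradient-boosting classifier trained on synthetic tabular data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)

# Features whose shuffling hurts performance most are the strongest drivers
# of the screening decision.
for i in imp.importances_mean.argsort()[::-1][:3]:
    print(f"feature {i}: {imp.importances_mean[i]:.3f}")
```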

    Next-generation energy systems for sustainable smart cities: Roles of transfer learning

    Smart cities attempt to reach net-zero emissions goals by reducing wasted energy while improving grid stability and meeting service demand. This is possible by adopting next-generation energy systems, which leverage artificial intelligence, the Internet of Things (IoT), and communication technologies to collect and analyze big data in real time and run city services effectively. However, training machine learning algorithms to perform various energy-related tasks in sustainable smart cities is a challenging data science task: these algorithms might not perform as expected, might take a long time to train, or might not have enough input data to generalize well. To that end, transfer learning (TL) has been proposed as a promising solution to alleviate these issues. To the best of the authors' knowledge, this paper presents the first review of the applicability of TL to energy systems, adopting a well-defined taxonomy of existing TL frameworks. Next, an in-depth analysis is carried out to identify the pros and cons of current techniques and discuss unsolved issues. Two case studies are then presented, illustrating the use of TL for (i) energy prediction with mobility data and (ii) load forecasting in sports facilities. Lastly, the paper ends with a discussion of future directions.
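    A common form of TL for such tasks can be sketched as follows: reuse a forecaster pretrained on a data-rich source domain, freeze its feature layers, and retrain only the output head on the small target dataset. The network shape, window size, and placeholder data below are assumptions used only to show the mechanics, not the paper's case studies.

```python
# Illustrative transfer-learning sketch for load forecasting: copy pretrained
# weights, freeze feature layers, and fine-tune the head on scarce target data.
import torch
import torch.nn as nn

def make_forecaster(window=24):
    return nn.Sequential(nn.Linear(window, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, 1))

source_model = make_forecaster()
# ... assume source_model has been trained on abundant source-domain load data.

target_model = make_forecaster()
target_model.load_state_dict(source_model.state_dict())   # transfer the weights

for layer in list(target_model.children())[:-1]:           # freeze feature layers
    for p in layer.parameters():
        p.requires_grad = False

opt = torch.optim.Adam(target_model[-1].parameters(), lr=1e-3)
x_small = torch.rand(32, 24)                                # few 24-hour load windows
y_small = torch.rand(32, 1)
opt.zero_grad()
loss = nn.functional.mse_loss(target_model(x_small), y_small)
loss.backward()
opt.step()
```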

    Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries

    This two-volume set, LNCS 12962 and 12963, constitutes the thoroughly refereed proceedings of the 7th International MICCAI Brainlesion Workshop, BrainLes 2021, as well as the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge, the Federated Tumor Segmentation (FeTS) Challenge, the Cross-Modality Domain Adaptation (CrossMoDA) Challenge, and the challenge on Quantification of Uncertainties in Biomedical Image Quantification (QUBIQ). These were held jointly with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021, in September 2021. The 91 revised papers presented in these volumes were selected from 151 submissions. Due to the COVID-19 pandemic, the conference was held virtually. This is an open access book.