
    Explainable Deep Learning

    The great success that Machine Learning and Deep Learning have achieved in areas that are strategic for our society, such as industry, defence and medicine, has led more and more organisations to invest in and explore the use of this technology. Machine Learning and Deep Learning algorithms and learned models can now be found in almost every area of our lives, from phones to smart home appliances to the cars we drive. This pervasive technology is now in constant contact with our lives, and we therefore have to engage with it. This is why eXplainable Artificial Intelligence (XAI) was born, one of the most active research trends in Deep Learning and Artificial Intelligence today. The idea behind this line of research is to build and/or design Deep Learning algorithms so that they are reliable, interpretable and comprehensible to humans. This necessity is due precisely to the fact that neural networks, the mathematical model underlying Deep Learning, act like a black box, making the internal reasoning they carry out to reach a decision incomprehensible and untrustworthy to humans. As we are delegating more and more important decisions to these mathematical models, integrating them into the most delicate processes of our society, such as medical diagnosis, autonomous driving or legal processes, it is very important to be able to understand the motivations that lead these models to produce certain results. The work presented in this thesis consists in studying and testing Deep Learning algorithms integrated with symbolic Artificial Intelligence techniques. This integration has a twofold purpose: to make the models more powerful, enabling them to carry out reasoning or constraining their behaviour in complex situations, and to make them interpretable. The thesis focuses on two macro topics: the explanations obtained through neuro-symbolic integration, and the exploitation of explanations to make Deep Learning algorithms more capable or intelligent. The first topic concentrates on the work done in experimenting with the integration of symbolic algorithms and neural networks. A first approach was to create a system that guides the training of the networks in order to find the best combination of hyper-parameters and thus automate the design of the networks themselves. This is done by integrating neural networks with Probabilistic Logic Programming (PLP), which makes it possible to exploit probabilistic rules induced from the behaviour of the networks during the training phase or inherited from the experience of experts in the field. These rules are triggered when a problem is detected during network training, which yields an explanation of what was done to improve the training once a particular issue was identified. A second approach was to make probabilistic logic systems cooperate with neural networks for medical diagnosis from heterogeneous data sources. The second topic addressed in this thesis concerns the exploitation of the explanations that can be obtained from neural networks. In particular, these explanations are used to create attention modules that help constrain or guide the networks, leading to improved performance. All the work developed during the PhD and described in this thesis has led to the publications listed in Chapter 14.2.
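
To make the rule-triggering mechanism described above concrete, the following is a minimal Python sketch (not the thesis's PLP implementation) of diagnostic rules that fire on symptoms observed during training and return a suggested hyper-parameter change together with a human-readable explanation; the rule names, thresholds and metric keys are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    name: str                          # problem the rule detects
    condition: Callable[[Dict], bool]  # fires when the symptom is observed
    action: str                        # suggested hyper-parameter change
    probability: float                 # confidence inherited from experts or past trainings

RULES: List[Rule] = [
    Rule("overfitting",
         lambda m: m["val_loss"] - m["train_loss"] > 0.15,
         "increase dropout or weight decay", 0.9),
    Rule("plateau",
         lambda m: abs(m["val_loss_delta"]) < 1e-4,
         "reduce the learning rate", 0.8),
    Rule("underfitting",
         lambda m: m["train_loss"] > 1.0 and m["epoch"] > 10,
         "increase model capacity or the learning rate", 0.7),
]

def diagnose(metrics: Dict) -> List[str]:
    """Return a human-readable explanation for every rule that fires on these metrics."""
    return [f"{r.name} detected (p={r.probability}): {r.action}"
            for r in sorted(RULES, key=lambda r: -r.probability)
            if r.condition(metrics)]

# Example: metrics collected at the end of a training epoch
print(diagnose({"train_loss": 0.20, "val_loss": 0.45,
                "val_loss_delta": 0.01, "epoch": 12}))

In the actual system such rules would be expressed and weighted in a probabilistic logic program rather than a plain Python table, but the flow is the same: a detected training problem triggers a corrective action and an explanation of why it was applied.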

    Biomedical Image Processing and Classification

    Biomedical image processing is an interdisciplinary field involving a variety of disciplines, e.g., electronics, computer science, physics, mathematics, physiology, and medicine. Several imaging techniques have been developed, providing many approaches to the study of the human body. Biomedical image processing is finding an increasing number of important applications in, for example, the study of the internal structure or function of an organ and the diagnosis or treatment of a disease. If associated with classification methods, it can support the development of computer-aided diagnosis (CAD) systems, which could help medical doctors in refining their clinical picture.

    Machine Learning Modelling of Critical Care Patients in the Intensive Care Units

    The ICU is a fast-paced, data-rich environment which treats the most critically ill patients. On average, over 15% of patients admitted to the ICU do not survive. Machine learning (ML) is therefore a valuable tool for optimising care and drawing insight in critical care. In addition, the early and accurate evaluation of severity at the time of admission is important for physicians. Such evaluations make patient management more effective, as they help predict which patients' conditions may worsen. Moreover, ML techniques could potentially enhance patients' experience in the clinical setting by providing medical alerts and insight into future events occurring during hospitalisation. The need for interpretable models is crucial in the ICU and the clinical setting, as it is vital to explain a decision that leads to any course of action related to an individual patient. This thesis primarily focuses on mortality and length-of-stay (LOS) forecasting and on atrial fibrillation (AF) classification in critical care. We cover multiple outcomes and modelling methods whilst using multiple cohorts throughout the research, and the analysis conducted throughout the thesis aims to create interpretable models for each modelling objective. In Chapter 3, we investigate three publicly available critical care databases containing multiple modalities of data and a wide range of parameters. We describe the processes and considerations required to create actionable data for analysis in the ICU. Furthermore, we compared the three data sources using traditional statistical and ML methods and evaluated their predictive performance. Based on 24 hours of sequential data, we achieved an AUC of 79.5% for ICU mortality prediction and a prediction error of approximately 1.3 hours for ICU LOS. In Chapter 4, we investigate a sepsis cohort and conduct three sub-studies. Firstly, we investigated sepsis subtypes and compared biomarkers using traditional modelling methods. Next, we compared our approach to commonly and routinely used ICU scoring systems such as APACHE IV and SOFA. Our tailored approach achieved superior performance with pulmonary and abdominal sepsis (AUC 0.74 and 0.71, respectively), displaying distinct individualities amongst the different sepsis groups. Next, we further expanded the analysis by comparing ML methods and inference approaches to our baseline model and ICU acuity scores, and extended it to other outcomes of interest (in-hospital/ICU mortality, in-hospital/ICU LOS) to gain a more holistic view of the sepsis derivatives. This research shows that nonlinear models such as RF and GBM commonly outperform ICU scoring methods such as APACHE IV and SOFA, as well as linear methods such as logistic/linear regression. Lastly, we extended the analysis in a multi-task learning framework for model optimisation and improved predictive performance. Our results showed superior performance with pulmonary, abdominal and renal/UTI sepsis (AUC 0.76, 0.77 and 0.73, respectively). Finally, Chapter 5 investigates the classification of AF in long-lead ECG waveforms from sepsis patients. We developed a deep neural network to classify AF ECGs from non-AF ECG cases, in conjunction with refining a method to gain insight from the neural network model. We achieved a predictive performance of 0.99 and 0.89 on the test and external validation data, respectively.
Inference from the model was achieved through saliency maps, dimensionality reduction methods and clustering, exploiting the features learned automatically by the developed model. We developed visualisations to help support the inference behind the classification of each ECG prediction. Overall, this research presents a wide range of novel approaches to predicting various outcomes of interest in the ICU. In addition, it demonstrates the applicability of ML in the ICU environment by providing insight and inference for diverse tasks, regardless of their complexity. With further development, the frameworks and approaches outlined in this thesis have the potential to be used in clinical practice as decision-support tools in the ICU, enabling, among other things, automated alerts and patient classification. The results generated in this thesis led to journal publications and to medical understanding gained from the insight provided by the developed ML frameworks.
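
As an illustration of the saliency-map idea described above, here is a minimal, hypothetical PyTorch sketch (not the thesis code): a toy 1D convolutional network stands in for the trained AF classifier, and the absolute gradient of the winning class score with respect to the input waveform gives a per-sample importance profile that can then be visualised or clustered.

import torch
import torch.nn as nn

# Toy stand-in for the trained AF / non-AF classifier (architecture is an assumption)
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, 2),                                  # two classes: non-AF, AF
)
model.eval()

ecg = torch.randn(1, 1, 5000, requires_grad=True)      # synthetic 10 s lead at 500 Hz
logits = model(ecg)
logits[0, logits.argmax()].backward()                  # gradient of the winning class score

saliency = ecg.grad.abs().squeeze()                    # per-sample importance
print("most influential samples:", saliency.topk(5).indices.sort().values.tolist())

Dimensionality reduction and clustering would then operate on such saliency profiles, or on the intermediate feature maps learned by the network, to group ECGs with similar decision evidence.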

    Anwendungen maschinellen Lernens für datengetriebene Prävention auf Populationsebene (Applications of Machine Learning for Data-Driven Prevention at the Population Level)

    Healthcare costs are systematically rising, and current therapy-focused healthcare systems are not sustainable in the long run. While disease prevention is a viable instrument for reducing costs and suffering, it requires risk modeling to stratify populations, identify high-risk individuals and enable personalized interventions. In current clinical practice, however, systematic risk stratification is limited: on the one hand, for the vast majority of endpoints, no risk models exist; on the other hand, available models focus on predicting a single disease at a time, rendering predictor collection burdensome. At the same time, the density of individual patient data is constantly increasing. Complex data modalities in particular, such as -omics measurements or images, may contain systemic information on future health trajectories relevant for multiple endpoints simultaneously. To date, however, this data is inaccessible for risk modeling, as no dedicated methods exist to extract clinically relevant information. This study built on recent advances in machine learning to investigate the applicability of four distinct data modalities not yet leveraged for risk modeling in primary prevention. For each data modality, a neural network-based survival model was developed to extract predictive information, scrutinize performance gains over commonly collected covariates, and pinpoint potential clinical utility. Notably, the developed methodology was able to integrate polygenic risk scores for cardiovascular prevention, outperforming existing approaches and identifying benefiting subpopulations. Investigating NMR metabolomics, the developed methodology allowed the prediction of future disease onset for many common diseases at once, indicating potential applicability as a drop-in replacement for commonly collected covariates. Extending the methodology to phenome-wide risk modeling, electronic health records were found to be a general source of predictive information with high systemic relevance for thousands of endpoints. Assessing retinal fundus photographs, the developed methodology identified the diseases whose health trajectories were most impacted by retinal information. In summary, the results demonstrate the capability of neural survival models to integrate complex data modalities for multi-disease risk modeling in primary prevention and illustrate the tremendous potential of machine learning models to move medical practice toward data-driven prevention at population scale.
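
As a rough, hedged sketch of what a neural network-based survival model can look like, the snippet below implements a DeepSurv-style Cox model in PyTorch: a small network maps covariates to a log hazard ratio and is trained with the negative Cox partial likelihood (Breslow approximation, ties ignored). The architecture, synthetic data and names are assumptions for illustration, not the methodology developed in the study.

import torch
import torch.nn as nn

class NeuralCox(nn.Module):
    """Small network mapping covariates to a per-individual log hazard ratio."""
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

def cox_partial_nll(log_hazard, time, event):
    """Negative Cox partial likelihood (Breslow approximation, no tie handling)."""
    order = torch.argsort(time, descending=True)       # risk set = everyone still at risk at t_i
    lh, ev = log_hazard[order], event[order]
    log_cum_hazard = torch.logcumsumexp(lh, dim=0)     # log of summed hazards over the risk set
    return -((lh - log_cum_hazard) * ev).sum() / ev.sum().clamp(min=1.0)

# Synthetic example: 128 individuals, 10 covariates, roughly 30 % observed events
x = torch.randn(128, 10)
time = torch.rand(128)                                 # follow-up time
event = (torch.rand(128) < 0.3).float()                # 1 = disease onset observed
model = NeuralCox(10)
loss = cox_partial_nll(model(x), time, event)
loss.backward()
print("partial-likelihood loss:", float(loss))

A multi-disease variant of this sketch would give the network one output per endpoint and sum the per-endpoint partial-likelihood losses, which is one way the multi-endpoint setting described above could be handled.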