72 research outputs found

    Towards Integration of Artificial Intelligence into Medical Devices as a Real-Time Recommender System for Personalised Healthcare: State-of-the-Art and Future Prospects

    Get PDF
    In the era of big data, artificial intelligence (AI) algorithms have the potential to revolutionize healthcare by improving patient outcomes and reducing healthcare costs. AI algorithms have frequently been used in healthcare for predictive modelling, image analysis and drug discovery. Moreover, as recommender systems, these algorithms have shown promising impacts on personalized healthcare provision. A recommender system learns the behaviour of the user and predicts their current preferences (i.e., makes recommendations) based on their previous preferences. Implementing AI in a recommender system improves prediction accuracy and helps address the cold-start and data-sparsity problems. However, most of these methods and algorithms have been tested in simulated settings, which cannot recapitulate the influencing factors of the real world. This review article systematically reviews prevailing methodologies in recommender systems and discusses AI algorithms as recommender systems specifically in the field of healthcare. It also discusses the most cutting-edge academic and practical contributions in the literature, identifies performance evaluation metrics, challenges in the implementation of AI as a recommender system, and the acceptance of AI-based recommender systems by clinicians. The findings of this article direct researchers and professionals to comprehend currently developed recommender systems and the future of medical devices integrated with real-time recommender systems for personalized healthcare.
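    The core recommender mechanic the abstract describes (learning past preferences to predict current ones) is easiest to see in a matrix-factorization sketch. The Python code below is a minimal, hypothetical illustration using only numpy; the function names and toy data are assumptions, not the article's method, which surveys many algorithm families.

```python
import numpy as np

def factorize(ratings, k=8, lr=0.01, reg=0.1, epochs=200, seed=0):
    """Minimal SGD matrix factorization over (user, item, rating) triples."""
    rng = np.random.default_rng(seed)
    n_users = 1 + max(u for u, _, _ in ratings)
    n_items = 1 + max(i for _, i, _ in ratings)
    P = 0.1 * rng.standard_normal((n_users, k))  # user latent factors
    Q = 0.1 * rng.standard_normal((n_items, k))  # item latent factors
    for _ in range(epochs):
        for u, i, r in ratings:
            pu = P[u].copy()                     # keep old value for both updates
            err = r - pu @ Q[i]                  # prediction error
            P[u] += lr * (err * Q[i] - reg * pu)
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

# Toy interactions; a real system would use patient/intervention feedback instead.
data = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0), (2, 2, 5.0)]
P, Q = factorize(data)
print("predicted preference of user 1 for item 2:", P[1] @ Q[2])
```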

    Medical Image Analysis using Deep Relational Learning

    Full text link
    In the past ten years, with the help of deep learning, and especially the rapid development of deep neural networks, medical image analysis has made remarkable progress. However, how to effectively use the relational information between various tissues or organs in medical images remains a very challenging and under-studied problem. In this thesis, we propose two novel solutions to this problem based on deep relational learning. First, we propose a context-aware fully convolutional network that effectively models implicit relational information between features to perform medical image segmentation. The network achieves state-of-the-art segmentation results on the Multi Modal Brain Tumor Segmentation 2017 (BraTS2017) and 2018 (BraTS2018) datasets. Subsequently, we propose a new hierarchical homography estimation network to achieve accurate medical image mosaicing by learning the explicit spatial relationship between adjacent frames. We use the UCL Fetoscopy Placenta dataset to conduct experiments, and our hierarchical homography estimation network outperforms other state-of-the-art mosaicing methods while generating robust and meaningful mosaicing results on unseen frames.
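    For readers unfamiliar with homography-based mosaicing, the sketch below shows a classical feature-matching baseline in Python with OpenCV. It is an assumed illustration of the underlying geometry only, not the thesis's hierarchical estimation network, which learns the homography directly from adjacent frames.

```python
import cv2
import numpy as np

def stitch_pair(frame_a, frame_b):
    """Warp frame_b into frame_a's coordinates via a RANSAC homography.
    Inputs are 8-bit grayscale frames. This is a classical feature-based
    baseline; a learned estimator would replace the matching stage."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)
    src = np.float32([kp_b[m.queryIdx].pt for m in matches[:200]]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches[:200]]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # robust to outliers
    h, w = frame_a.shape[:2]
    return cv2.warpPerspective(frame_b, H, (w, h))
```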

    Anwendungen maschinellen Lernens für datengetriebene Prävention auf Populationsebene (Applications of Machine Learning for Data-Driven Prevention at the Population Level)

    Get PDF
    Healthcare costs are rising systematically, and current therapy-focused healthcare systems are not sustainable in the long run. While disease prevention is a viable instrument for reducing costs and suffering, it requires risk modeling to stratify populations, identify high-risk individuals and enable personalized interventions. In current clinical practice, however, systematic risk stratification is limited: on the one hand, for the vast majority of endpoints, no risk models exist. On the other hand, available models focus on predicting a single disease at a time, rendering predictor collection burdensome. At the same time, the density of individual patient data is constantly increasing. Especially complex data modalities, such as -omics measurements or images, may contain systemic information on future health trajectories relevant for multiple endpoints simultaneously. However, to date, this data is inaccessible for risk modeling, as no dedicated methods exist to extract clinically relevant information. This study built on recent advances in machine learning to investigate the applicability of four distinct data modalities not yet leveraged for risk modeling in primary prevention. For each data modality, a neural network-based survival model was developed to extract predictive information, scrutinize performance gains over commonly collected covariates, and pinpoint potential clinical utility. Notably, the developed methodology was able to integrate polygenic risk scores for cardiovascular prevention, outperforming existing approaches and identifying benefiting subpopulations. Investigating NMR metabolomics, the developed methodology allowed the prediction of future disease onset for many common diseases at once, indicating potential applicability as a drop-in replacement for commonly collected covariates. Extending the methodology to phenome-wide risk modeling, electronic health records were found to be a general source of predictive information with high systemic relevance for thousands of endpoints. Assessing retinal fundus photographs, the developed methodology identified diseases where retinal information most impacted health trajectories. In summary, the results demonstrate the capability of neural survival models to integrate complex data modalities for multi-disease risk modeling in primary prevention and illustrate the tremendous potential of machine learning models to disrupt medical practice toward data-driven prevention at population scale.
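    A common way to build neural survival models of the kind the abstract describes is to train a network with the negative Cox partial log-likelihood (DeepSurv-style). The PyTorch sketch below is a minimal, assumed illustration; the architecture, toy data and names are hypothetical and not taken from the thesis.

```python
import torch
import torch.nn as nn

def cox_partial_likelihood_loss(risk, time, event):
    """Negative Cox partial log-likelihood (Breslow, no tie handling).
    risk: (n,) predicted log-hazard; time: (n,) follow-up; event: (n,) 0/1."""
    order = torch.argsort(time, descending=True)      # build risk sets by sorting
    risk, event = risk[order], event[order]
    log_cum_hazard = torch.logcumsumexp(risk, dim=0)  # log-sum over each risk set
    ll = (risk - log_cum_hazard)[event.bool()]        # contributions of observed events
    return -ll.mean()

# Hypothetical setup: 100 patients, 30 covariates (e.g., metabolomics features).
net = nn.Sequential(nn.Linear(30, 16), nn.ReLU(), nn.Linear(16, 1))
x = torch.randn(100, 30)
time = torch.rand(100)
event = (torch.rand(100) < 0.3).float()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    loss = cox_partial_likelihood_loss(net(x).squeeze(-1), time, event)
    loss.backward()
    opt.step()
```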

    Semi-automated learning strategies for large-scale segmentation of histology and other big bioimaging stacks and volumes

    Get PDF
    Labelled high-resolution datasets are becoming increasingly common and necessary in different areas of biomedical imaging. Examples include serial histology and ex-vivo MRI for atlas building, OCT for studying the human brain, and micro X-ray for tissue engineering. Labelling such datasets typically requires manual delineation of a very detailed set of regions of interest on a large number of sections or slices. This process is tedious, time-consuming, not reproducible and rather inefficient due to the high similarity of adjacent sections. In this thesis, I explore the potential of a semi-automated slice-level segmentation framework and a suggestive region-level framework which aim to speed up the segmentation of big bioimaging datasets. The thesis includes two well-validated, published, and widely used novel methods, and one algorithm which did not yield an improvement over the current state-of-the-art. The slice-wise method, SmartInterpol, consists of a probabilistic model for semi-automated segmentation of stacks of 2D images, in which the user manually labels a sparse set of sections (e.g., one every n sections) and lets the algorithm complete the segmentation of the other sections automatically. The proposed model integrates in a principled manner two families of segmentation techniques that have been very successful in brain imaging: multi-atlas segmentation and convolutional neural networks. Labelling every structure on a sparse set of slices is not necessarily optimal, so I also introduce a region-level active learning framework which requires the labeller to annotate one region of interest on one slice at a time. The framework exploits partial annotations, weak supervision, and realistic estimates of class- and section-specific annotation effort in order to greatly reduce the time it takes to produce accurate segmentations for large histological datasets. Although both frameworks were created targeting histological datasets, they have been successfully applied to other big bioimaging datasets, reducing labelling effort by up to 60–70% without compromising accuracy.
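    One simple way to complete a segmentation from a sparse set of labeled slices, in the spirit of (but much simpler than) SmartInterpol, is shape-based interpolation of signed distance maps. The sketch below is an assumed Python/scipy illustration, not the published probabilistic model, which combines multi-atlas segmentation with convolutional neural networks.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map: negative inside the structure, positive outside."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return outside - inside

def interpolate_slice(mask_lo, mask_hi, alpha):
    """Shape-based interpolation between two labeled slices.
    alpha in [0, 1] is the fractional position of the missing slice."""
    sdf = (1 - alpha) * signed_distance(mask_lo) + alpha * signed_distance(mask_hi)
    return sdf <= 0  # the zero level set gives the interpolated mask

# Two manually labeled slices (circles of different radius), one slice missing between.
yy, xx = np.mgrid[:64, :64]
slice_0 = (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2
slice_2 = (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
slice_1 = interpolate_slice(slice_0, slice_2, alpha=0.5)
print("interpolated area:", slice_1.sum())
```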

    Hemodynamic Quantifications By Contrast-Enhanced Ultrasound: From In-Vitro Modelling To Clinical Validation

    Get PDF

    Generalizable deep learning based medical image segmentation

    Get PDF
    Deep learning is revolutionizing medical image analysis and interpretation. However, its real-world deployment is often hindered by poor generalization to unseen domains (new imaging modalities and protocols). This lack of generalization ability is further exacerbated by the scarcity of labeled training datasets: data collection and annotation can be prohibitively expensive in terms of labor and cost, because label quality depends heavily on the expertise of radiologists. Additionally, unreliable predictions caused by poor model generalization pose safety risks to clinical downstream applications. To mitigate labeling requirements, we investigate and develop a series of techniques to strengthen the generalization ability and data efficiency of deep medical image computing models. We further improve model accountability and identify unreliable predictions made on out-of-domain data by designing probability calibration techniques. In the first and second parts of the thesis, we discuss two types of problems for handling unexpected domains: unsupervised domain adaptation and single-source domain generalization. For domain adaptation, we present a data-efficient technique that adapts a segmentation model trained on a labeled source domain (e.g., MRI) to an unlabeled target domain (e.g., CT), using a small number of unlabeled training images from the target domain. For domain generalization, we focus on both image reconstruction and segmentation. For image reconstruction, we design a simple and effective domain generalization technique for cross-domain MRI reconstruction by reusing image representations learned from natural image datasets. For image segmentation, we perform a causal analysis of the challenging cross-domain image segmentation problem. Guided by this causal analysis, we propose an effective data-augmentation-based generalization technique for single-source domains. The proposed method outperforms existing approaches on a large variety of cross-domain image segmentation scenarios. In the third part of the thesis, we present a novel self-supervised method for learning generic image representations that can be used to analyze unexpected objects of interest. The proposed method is designed together with a novel few-shot image segmentation framework that can segment unseen objects of interest by taking only a few labeled examples as references. Our few-shot framework demonstrates superior flexibility over conventional fully supervised models: it does not require any fine-tuning on novel objects of interest. We further build a publicly available, comprehensive evaluation environment for few-shot medical image segmentation. In the fourth part of the thesis, we present a novel probability calibration model. To ensure safety in clinical settings, a deep model is expected to alert human radiologists when it has low confidence, especially when confronted with out-of-domain data. To this end, we present a plug-and-play model that calibrates prediction probabilities on out-of-domain data, bringing them in line with the actual accuracy on the test data. We evaluate our method on both artifact-corrupted images and images from an unforeseen MRI scanning protocol. Our method demonstrates improved calibration accuracy compared with the state-of-the-art method. Finally, we summarize the major contributions and limitations of our work, and suggest future research directions that will benefit from the work in this thesis.
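    A standard instance of the probability-calibration idea discussed above is temperature scaling, which fits a single scalar on held-out data so that predicted probabilities track observed accuracy. The PyTorch sketch below illustrates that baseline under assumed names and toy data; the thesis's plug-and-play model for out-of-domain calibration is more elaborate than this.

```python
import torch
import torch.nn as nn

class TemperatureScaler(nn.Module):
    """Temperature scaling: divide logits by a single learned T > 0,
    fitted on held-out data to minimize the negative log-likelihood."""
    def __init__(self):
        super().__init__()
        self.log_t = nn.Parameter(torch.zeros(1))  # T = exp(log_t) keeps T > 0

    def forward(self, logits):
        return logits / self.log_t.exp()

    def fit(self, logits, labels, steps=100):
        opt = torch.optim.LBFGS([self.log_t], lr=0.1, max_iter=steps)
        nll = nn.CrossEntropyLoss()
        def closure():
            opt.zero_grad()
            loss = nll(self.forward(logits), labels)
            loss.backward()
            return loss
        opt.step(closure)
        return self

# Hypothetical held-out logits from an overconfident 4-class model.
val_logits = torch.randn(256, 4) * 3
val_labels = torch.randint(0, 4, (256,))
scaler = TemperatureScaler().fit(val_logits, val_labels)
calibrated = scaler(val_logits).softmax(dim=-1)
```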

    Deep Learning Models For Biomedical Data Analysis

    Get PDF
    The field of biomedical data analysis is a vibrant area of research dedicated to extracting valuable insights from a wide range of biomedical data sources, including biomedical images and genomics data. The emergence of deep learning, an artificial intelligence approach, presents significant prospects for enhancing biomedical data analysis and knowledge discovery. This dissertation explored innovative deep-learning methods for biomedical image processing and gene data analysis. During the COVID-19 pandemic, biomedical imaging data, including CT scans and chest x-rays, played a pivotal role in identifying COVID-19 cases by categorizing patient chest x-ray outcomes as COVID-19-positive or negative. While supervised deep learning methods have effectively recognized COVID-19 patterns in chest x-ray datasets, the availability of annotated training data remains limited. To address this challenge, the thesis introduced a semi-supervised deep learning model named ssResNet, built upon the Residual Neural Network (ResNet) architecture. The model combines supervised and unsupervised paths, incorporating a weighted supervised loss function to manage data imbalance. Strategies to diminish prediction uncertainty in deep learning models for critical applications like medical image processing are also explored, through an ensemble deep learning model integrating bagging and model calibration techniques. This ensemble model not only boosts biomedical image segmentation accuracy but also reduces prediction uncertainty, as validated on a comprehensive chest x-ray image segmentation dataset. Furthermore, the thesis introduced an ensemble model integrating Proformer and ensemble learning methodologies. This model constructs multiple independent Proformers for predicting gene expression; their predictions are combined through weighted averaging to generate final predictions. Experimental outcomes underscore the efficacy of this ensemble model in enhancing prediction performance across various metrics. In conclusion, this dissertation advances biomedical data analysis by harnessing the potential of deep learning techniques: it devises innovative approaches for processing biomedical images and gene data, paving the way for further progress in biomedical data analytics and its applications within clinical contexts. Index Terms: biomedical data analysis, COVID-19, deep learning, ensemble learning, gene data analytics, medical image segmentation, prediction uncertainty, Proformer, Residual Neural Network (ResNet), semi-supervised learning
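    The weighted-averaging step described for the Proformer ensemble can be sketched in a few lines of Python. The snippet below is an assumed illustration only; inverse-validation-error weighting is one common choice, and the dissertation does not specify this exact scheme.

```python
import numpy as np

def weighted_ensemble(predictions, val_errors):
    """Combine member predictions by weighted averaging.
    predictions: (n_models, n_samples) array of per-model outputs.
    val_errors:  (n_models,) validation errors; lower error -> higher weight."""
    weights = 1.0 / (np.asarray(val_errors) + 1e-8)  # inverse-error weighting
    weights /= weights.sum()                         # normalize weights to sum to 1
    return weights @ np.asarray(predictions)

# Three hypothetical gene-expression predictors and their validation MSEs.
preds = np.array([[1.0, 2.1, 0.4],
                  [1.2, 1.9, 0.5],
                  [0.8, 2.3, 0.3]])
print(weighted_ensemble(preds, val_errors=[0.10, 0.15, 0.30]))
```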

    Measurement of treatment response and survival prediction in malignant pleural mesothelioma

    Get PDF
    Malignant pleural mesothelioma (MPM) is a rare cancer of the mesothelial cells of the visceral and parietal pleurae that is heterogeneous in terms of biology, prognosis and response to systemic anti-cancer therapy (SACT). The primary tumour forms an unusual, complex shape, which makes survival prediction and response measurement uniquely challenging. Computed tomography (CT) imaging is the bedrock of radiological quantification and response assessment, but it has major limitations that translate into low sensitivity and high inter-observer variation when classifying response using the modified Response Evaluation Criteria In Solid Tumours (mRECIST). Magnetic resonance imaging (MRI) tools have been developed that overcome some of these problems, but the cost and availability of MRI mean that optimisation of CT, and better use of the data acquired by this method, are important short-term priorities. In this thesis, I conducted 3 studies focused on: 1) development of a semi-automated volumetric segmentation method for CT based on recently positive studies in MRI; 2) training and external validation of a deep learning artificial intelligence (AI) tool for fully automated volumetric segmentation based on CT data; and 3) use of non-tumour imaging features available from CT, related to altered body composition, for development of new prognostic models, which could assist in selecting patients for treatment and improving tolerance of treatment by targeting the systemic consequences of MPM. The aim of Chapter 3 is to develop a semi-automated MPM tumour volume segmentation method that would serve as the ground truth for the training of a fully automated AI algorithm. A semi-automated approach to pleural tumour segmentation had previously been developed using MRI scans, which calculated volumetric measurements from seed points (defined by differential tumour enhancement) placed within a pre-defined volume of pleural tumour. I extrapolated this MRI method to contrast-enhanced CT scans in 23 patients with MPM. Radiodensity values, defined in Hounsfield units (HU), were calculated for the different thoracic tissues by placing regions of interest (ROI) on visible areas of pleural tumour, with similar ROIs placed on other thoracic tissues. Pleural volume contours were drawn on axial CT slices and propagated throughout the volume by linear interpolation using volumetric software (Myrian® v2.4.3, Intrasense, Paris, France). Seed points based on the radiodensity range of pleural tumour were placed on representative areas of tumour, from which regions were grown. There was overlap in median thoracic tissue HU values: pleural tumour, 52 [IQR 46 to 60] HU; intercostal muscle, 20.4 [IQR 11.9 to 32.3] HU; diaphragm, 40.4 [IQR 26.4 to 56.4] HU; and pleural fluid, 11.8 [IQR 8.3 to 17.8] HU. There was also reduced definition between MPM tumour and neighbouring structures. The mean time taken to complete semi-automated volumetric segmentations for the 8 CT scans examined was 25 (SD 7) minutes. The semi-automated CT volumes were larger than the MRI volumes, with a mean difference between MRI and CT volumes of -457.6 cm3 (95% limits of agreement -2741 to +1826 cm3). The complex shape of MPM tumour and the overlapping HU values of thoracic tissues precluded HU threshold-based region growing, and meant that semi-automated volumetry using CT was not possible in this thesis.
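    For concreteness, HU threshold-based region growing, the approach that the overlapping tissue densities made infeasible here, can be sketched as a breadth-first flood fill that accepts neighbouring voxels whose radiodensity falls inside a window. The Python sketch below uses the tumour IQR quoted above as the window; the seeds and toy volume are assumptions for illustration.

```python
import numpy as np
from collections import deque

def region_grow(volume_hu, seeds, hu_min=46, hu_max=60):
    """Grow a 3D region from seed voxels, accepting 6-connected neighbours
    whose radiodensity lies inside a Hounsfield-unit window."""
    mask = np.zeros(volume_hu.shape, dtype=bool)
    queue = deque(seeds)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        if mask[z, y, x] or not (hu_min <= volume_hu[z, y, x] <= hu_max):
            continue
        mask[z, y, x] = True
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < mask.shape[0] and 0 <= ny < mask.shape[1]
                    and 0 <= nx < mask.shape[2] and not mask[nz, ny, nx]):
                queue.append((nz, ny, nx))
    return mask

# Toy CT volume: a tumour-like HU block embedded in fluid-like background.
vol = np.full((20, 64, 64), 12.0)
vol[8:12, 20:40, 20:40] = 52.0
tumour_mask = region_grow(vol, seeds=[(10, 30, 30)])
print("grown voxels:", tumour_mask.sum())
```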
    Chapter 4 describes a multicentre retrospective cohort study that developed and validated an automated AI algorithm, a deep learning Convolutional Neural Network (CNN), for volumetric MPM tumour segmentation. Due to the limitations of the semi-automated approach described in Chapter 3, manually annotated tumour volumes were used to train the CNN. The manual segmentation method ensured that all of the parietal pleural tumour was included in the respective volumes. Although the manual CT volumes were consistently smaller than the semi-automated MRI volumes (average difference between AI and human volumes 74.8 cm3), they were moderately correlated (Pearson's r=0.524, p=0.0103). In the external validation set, there was strong correlation (r=0.851, p<0.0001) and agreement (mean AI minus human volume difference of +31 cm3) between human and AI tumour volumes. AI segmentation errors (4/60 external validation set cases) were associated with complex anatomical features. There was agreement between human and AI volumetric responses in 20/30 (67%) cases, and between AI volumetric and mRECIST classification responses in 16/30 (55%) cases. Overall survival (OS) was shorter in patients with higher AI-defined pre-chemotherapy tumour volumes (HR=2.40, 95% CI 1.07 to 5.41, p=0.0114). Survival prediction in MPM is difficult due to the heterogeneity of the disease, and previous survival prediction models have not included measures of body composition, which are prognostic in other solid-organ cancers. In Chapter 5, I explore the impact on survival and response to treatment of loss of skeletal muscle and adipose tissue at the level of the third lumbar vertebra (L3), and loss of skeletal muscle at the fourth thoracic vertebra (T4), in patients with MPM receiving chemotherapy. Skeletal muscle and adipose tissue areas at L3 and T4 were quantified by manual delineation of the relevant muscle and fat groups using ImageJ software (U.S. National Institutes of Health, Bethesda, MD) on pre-chemotherapy and response assessment CT scans, with normalisation for height. Sarcopenia at L3 was not associated with shorter OS at the pre-chemotherapy (HR 1.49, 95% CI 0.95 to 2.52, p=0.077) or response assessment time points (HR 1.48, 95% CI 0.97 to 2.26, p=0.0536). A higher visceral fat index (VFI) measured at L3 was associated with shorter OS (HR 1.95, 95% CI 1.05 to 3.62, p=0.0067). In multivariate analysis, obesity was associated with improved OS (HR 0.36, 95% CI 0.20 to 0.65, p<0.001), while interval VFI loss (HR 1.81, 95% CI 1.04 to 3.13, p=0.035) was associated with reduced OS. Overall loss of skeletal muscle index at the fourth thoracic vertebra (T4SMI) during treatment was associated with poorer OS (HR 2.79, 95% CI 1.22 to 6.40, p<0.0001), as was loss of skeletal muscle index on the ipsilateral side of the tumour at the fourth thoracic vertebra (ipsilateral T4SMI) (HR 2.91, 95% CI 1.28 to 6.59, p<0.0001). In separate multivariate models, overall T4SMI muscle loss (HR 2.15, 95% CI 1.02 to 4.54, p=0.045) and ipsilateral T4SMI muscle loss (HR 2.85, 95% CI 1.17 to 6.94, p=0.021) were independent predictors of OS. Response to chemotherapy was not associated with decreasing skeletal muscle or adipose tissue indices.
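    The multivariate analyses in Chapter 5 are Cox proportional-hazards models, which can be reproduced in outline with the lifelines library in Python. The sketch below uses hypothetical column names and toy data; it only illustrates how hazard ratios like those quoted above are estimated, not the thesis's actual dataset or model specification.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical patient table; columns are illustrative, not the thesis's variables.
df = pd.DataFrame({
    "os_months":  [14.2, 7.5, 22.1, 9.8, 18.0, 5.3, 30.2, 12.6, 25.0, 8.1],
    "death":      [1, 1, 0, 1, 0, 1, 0, 1, 0, 1],  # 1 = event observed
    "vfi_loss":   [1, 0, 0, 1, 0, 1, 1, 0, 0, 1],  # interval VFI loss (yes/no)
    "t4smi_loss": [1, 1, 0, 0, 0, 1, 0, 0, 1, 1],  # T4 skeletal muscle index loss
    "obese":      [0, 0, 1, 0, 1, 0, 1, 1, 1, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death")
cph.print_summary()  # hazard ratios (exp(coef)) with 95% CIs, as reported above
```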