
    Privacy preserving distributed learning classifiers-Sequential learning with small sets of data

    Background: Artificial intelligence (AI) typically requires a significant amount of high-quality data to build reliable models, and gathering enough data within a single institution can be particularly challenging. In this study we investigated the impact of using sequential learning to exploit very small, siloed sets of clinical and imaging data to train AI models, and evaluated whether such models can match the performance of models trained on the same data pooled in a single centralized database. Methods: We propose a privacy-preserving distributed learning framework that learns sequentially from each dataset. The framework is applied to three machine learning algorithms: Logistic Regression, Support Vector Machines (SVM), and Perceptron. The models were evaluated on four open-source datasets (Breast cancer, Indian liver, NSCLC-Radiomics, and Stage III NSCLC). Findings: The proposed framework achieved predictive performance comparable to a centralized learning approach; pairwise DeLong tests showed no significant difference between the compared pairs for any dataset. Interpretation: Distributed learning helps preserve medical data privacy. We foresee this technology increasing the number of collaborative opportunities to develop robust AI, becoming the default solution in scenarios where collecting enough data from a single reliable source is logistically impossible. Distributed sequential learning provides a privacy-preserving means for institutions with small but clinically valuable datasets to collaboratively train predictive AI models that perform similarly to models built on a larger central dataset.
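    The sequential idea described above can be sketched with scikit-learn's `partial_fit`, passing a model from one simulated "institution" to the next so that only weights, never raw records, leave a site. This is a minimal illustration, not the authors' actual framework: the `SGDClassifier` (an SGD-trained logistic regression), the shard split, and the scaling step are all assumptions made for the sketch.

    ```python
    # Minimal sketch of sequential learning over siloed data shards.
    # Assumptions: SGDClassifier with log_loss stands in for the paper's
    # logistic regression; shards simulate three separate institutions.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import SGDClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Feature scaling; in a real federation, the scaling statistics would
    # also need privacy-aware handling.
    scaler = StandardScaler().fit(X_train)

    # Simulate three institutions, each holding a small private data shard.
    shards = np.array_split(np.arange(len(X_train)), 3)

    # Train sequentially: only the model state (never raw patient records)
    # would move between sites.
    model = SGDClassifier(loss="log_loss", random_state=0)
    classes = np.unique(y)
    for idx in shards:
        model.partial_fit(scaler.transform(X_train[idx]), y_train[idx],
                          classes=classes)

    print(f"held-out accuracy: {model.score(scaler.transform(X_test), y_test):.3f}")
    ```

    In this toy setting the sequentially trained model typically reaches accuracy close to a model fit on the pooled training set, mirroring the paper's finding of comparable performance.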

    A review in radiomics: Making personalized medicine a reality via routine imaging

    Radiomics is the quantitative analysis of standard-of-care medical imaging; the information obtained can be applied within clinical decision support systems to create diagnostic, prognostic, and/or predictive models. Radiomics analysis can be performed by extracting hand-crafted radiomics features or via deep learning algorithms. Radiomics has evolved tremendously over the last decade, becoming a bridge between imaging and precision medicine. It exploits sophisticated image analysis tools coupled with statistical elaboration to extract the wealth of information hidden inside medical images, such as computed tomography (CT), magnetic resonance (MR), and/or positron emission tomography (PET) scans routinely performed in everyday clinical practice. Many efforts have been devoted in recent years to the standardization and validation of radiomics approaches, to demonstrate their usefulness and robustness beyond reasonable doubt. However, the boom in publications and commercial applications of radiomics warrants caution and a proper understanding of all the factors involved, to avoid "scientific pollution" and overly enthusiastic claims by researchers and clinicians alike. For these reasons, the present review aims to be a guidebook of sorts, describing the process of radiomics, its pitfalls, challenges, and opportunities, along with its ability to improve clinical decision-making, from oncology and respiratory medicine to pharmacological and genotyping studies.
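    To make the "hand-crafted features" part of the pipeline concrete, the sketch below computes a few first-order intensity statistics over a segmented region of interest. It is a simplified illustration only: the feature set, histogram binning, and synthetic image are assumptions, and real studies would use a validated, standardization-compliant library (e.g. pyradiomics) rather than ad-hoc code.

    ```python
    # Minimal sketch of first-order radiomics features over a masked ROI.
    # The synthetic "scan" and feature choices are illustrative assumptions.
    import numpy as np

    def first_order_features(image: np.ndarray, mask: np.ndarray,
                             bins: int = 32) -> dict:
        """Return simple intensity statistics over the masked voxels."""
        roi = image[mask.astype(bool)]
        hist, _ = np.histogram(roi, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]  # drop empty bins to avoid log(0)
        return {
            "mean": float(roi.mean()),
            "std": float(roi.std()),
            "min": float(roi.min()),
            "max": float(roi.max()),
            "entropy": float(-(p * np.log2(p)).sum()),  # histogram entropy
        }

    # Toy example: a noisy 2D image with a brighter square "lesion".
    rng = np.random.default_rng(0)
    image = rng.normal(0.0, 1.0, size=(64, 64))
    mask = np.zeros((64, 64), dtype=bool)
    mask[20:40, 20:40] = True
    image[mask] += 5.0  # lesion intensities sit well above background

    features = first_order_features(image, mask)
    print(features)
    ```

    Features like these, extracted consistently across patients, form the tabular input that radiomics models then relate to diagnosis or outcome.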

    Deep learning for the fully automated segmentation of the inner ear on MRI

    Segmentation of anatomical structures is valuable in a variety of tasks, including 3D visualization, surgical planning, and quantitative image analysis. Manual segmentation is time-consuming and subject to intra- and inter-observer variability. To develop a deep-learning approach for the fully automated segmentation of the inner ear on MRI, a 3D U-net was trained on 944 MRI scans with manually segmented inner ears as the reference standard. The model was validated on an independent, multicentric dataset consisting of 177 MRI scans from three different centers, and was also evaluated on a clinical validation set containing eight MRI scans with severe changes in the morphology of the labyrinth. The 3D U-net achieved accurate segmentations (mean Dice Similarity Coefficient, DSC = 0.8790), with a high True Positive Rate (91.5%) and low False Discovery and False Negative Rates (14.8% and 8.49%, respectively) across images from the three centers. On the clinical validation dataset the model achieved a DSC of 0.8768. The proposed auto-segmentation model is equivalent to human readers and provides a reliable, consistent, and efficient method for inner ear segmentation that can be used in a variety of clinical applications, such as surgical planning and quantitative image analysis.
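    The Dice Similarity Coefficient reported above is a standard overlap metric, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch of its computation on binary masks, using toy arrays rather than real inner-ear segmentations:

    ```python
    # Minimal sketch: Dice Similarity Coefficient between an automated
    # segmentation and a manual reference. Masks here are toy examples.
    import numpy as np

    def dice(pred: np.ndarray, ref: np.ndarray) -> float:
        """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
        pred, ref = pred.astype(bool), ref.astype(bool)
        intersection = np.logical_and(pred, ref).sum()
        denom = pred.sum() + ref.sum()
        return 2.0 * intersection / denom if denom else 1.0

    ref = np.zeros((10, 10), dtype=bool)
    ref[2:8, 2:8] = True    # 36 reference voxels
    pred = np.zeros((10, 10), dtype=bool)
    pred[3:8, 2:8] = True   # 30 predicted voxels, all inside the reference

    print(f"DSC = {dice(pred, ref):.4f}")  # 2*30 / (30 + 36) = 0.9091
    ```

    A DSC of 1.0 means perfect overlap; values around 0.88, as reported for the inner-ear model, indicate close agreement with the manual reference.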