
    Tailored for Real-World: A Whole Slide Image Classification System Validated on Uncurated Multi-Site Data Emulating the Prospective Pathology Workload.

    The standard-of-care diagnostic procedure for suspected skin cancer is microscopic examination of hematoxylin & eosin stained tissue by a pathologist. Areas of high inter-pathologist discordance and rising biopsy rates necessitate higher efficiency and diagnostic reproducibility. We present and validate a deep learning system which classifies digitized dermatopathology slides into 4 categories. The system is developed using 5,070 images from a single lab, and tested on an uncurated set of 13,537 images from 3 test labs, using whole slide scanners manufactured by 3 different vendors. The system's use of deep-learning-based confidence scoring as a criterion to consider the result as accurate yields an accuracy of up to 98%, and makes it adoptable in a real-world setting. Without confidence scoring, the system achieved an accuracy of 78%. We anticipate that our deep learning system will serve as a foundation enabling faster diagnosis of skin cancer, identification of cases for specialist review, and targeted diagnostic classifications.
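The confidence-scoring criterion described above can be sketched as a simple selective-prediction rule: accept a classifier's output only when its top softmax probability clears a threshold, and measure accuracy on the accepted subset. This is a minimal illustration, not the authors' implementation; the threshold, class count, and data below are invented.

```python
import numpy as np

def accuracy_with_confidence(probs, labels, threshold):
    """Return (coverage, accuracy) on the subset of predictions
    whose top softmax probability is >= threshold."""
    probs = np.asarray(probs)
    labels = np.asarray(labels)
    preds = probs.argmax(axis=1)
    confident = probs.max(axis=1) >= threshold  # acceptance criterion
    coverage = confident.mean()                 # fraction of cases accepted
    if not confident.any():
        return coverage, float("nan")
    accuracy = (preds[confident] == labels[confident]).mean()
    return coverage, accuracy

# Toy example: 4-class probabilities for five slides
probs = [[0.97, 0.01, 0.01, 0.01],
         [0.40, 0.30, 0.20, 0.10],
         [0.05, 0.90, 0.03, 0.02],
         [0.25, 0.25, 0.25, 0.25],
         [0.02, 0.03, 0.92, 0.03]]
labels = [0, 1, 1, 2, 2]
cov, acc = accuracy_with_confidence(probs, labels, threshold=0.9)
```

Raising the threshold trades coverage for accuracy, which is the mechanism that lets such a system report, e.g., 98% accuracy on the accepted cases versus 78% overall.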

    A review of artificial intelligence in prostate cancer detection on imaging

    A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis, as well as current limitations including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.

    Reference tissue normalization of prostate MRI with automatic multi-organ deep learning pelvis segmentation

    Integrated master's thesis, Biomedical Engineering and Biophysics (Clinical Engineering and Medical Instrumentation), Universidade de Lisboa, Faculdade de Ciências, 2018. Prostate cancer is the most common cancer among male patients and the second leading cause of death from cancer in men (excluding non-melanoma skin cancer). Magnetic Resonance Imaging (MRI) is currently becoming the modality of choice for clinical staging of localized prostate cancer. However, MRI lacks intensity quantification, which hinders its diagnostic ability. The overall aim of this dissertation is to automate a novel normalization method that can potentially quantify general MR intensities, thus improving the diagnostic ability of MRI. Two prostate multi-parametric MRI cohorts, from 2012 and 2016, were used in this retrospective study. To improve the diagnostic ability of T2-weighted MRI, a novel multi-reference tissue normalization method was tested and automated. This method consists of computing the average intensity of the reference tissues and the corresponding normalized reference values to define a look-up table through interpolation. Since the method requires delineation of multiple reference tissues, an MRI-specific deep learning model, Aniso-3DUNET, was trained on manual segmentations and tested to automate this segmentation step. The output of the deep learning model, consisting of automatic segmentations, was validated and used in an automatic normalization approach. The effect of the manual and automatic normalization approaches on the diagnostic accuracy of T2-weighted intensities was determined with Receiver Operating Characteristic (ROC) analyses, and the Areas Under the Curve (AUC) were compared. The automatic segmentation of multiple reference tissues was validated with an average Dice score higher than 0.8 in the test phase.
Thereafter, the method demonstrated that normalized intensities yield improved diagnostic accuracy over raw intensities: with the manual approach, the AUC rose from 0.54 (raw) to 0.68 (normalized), and with the automatic approach, from 0.68 to 0.73. This study demonstrates that multi-reference tissue normalization improves quantification of T2-weighted images and diagnostic accuracy, possibly leading to a decrease in radiologists' interpretation variability. It is also possible to conclude that this novel T2-weighted MRI normalization method can be automated, making it clinically applicable.
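The look-up-table construction described above can be sketched as follows: average the raw intensity inside each reference-tissue mask, pair those averages with fixed normalized reference values, and map any intensity by linear interpolation between the pairs. This is a minimal sketch under assumed inputs; the tissue names and reference values below are invented for illustration, and the thesis defines its own reference set.

```python
import numpy as np

def build_normalizer(image, masks, reference_values):
    """masks: {tissue: boolean array}; reference_values: {tissue: float}.
    Returns a function mapping raw intensities to normalized ones."""
    # Sort tissues by their mean raw intensity so np.interp gets
    # monotonically increasing x-coordinates.
    tissues = sorted(masks, key=lambda t: image[masks[t]].mean())
    raw_means = np.array([image[masks[t]].mean() for t in tissues])
    norm_vals = np.array([reference_values[t] for t in tissues])
    # np.interp linearly interpolates between (raw mean, normalized) pairs,
    # which defines the look-up table.
    return lambda x: np.interp(x, raw_means, norm_vals)

# Toy 1D "image" with two hypothetical reference tissues
image = np.array([10., 10., 50., 50., 30.])
masks = {"muscle": np.array([1, 1, 0, 0, 0], bool),
         "fat":    np.array([0, 0, 1, 1, 0], bool)}
normalize = build_normalizer(image, masks, {"muscle": 0.0, "fat": 100.0})
normalized = normalize(image)  # a voxel at 30 falls halfway between the two means
```

Automating the method then amounts to replacing the manual masks with the output of the segmentation network before building the table.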

    Segment Anything Model (SAM) for Radiation Oncology

    In this study, we evaluate the performance of the Segment Anything Model (SAM) in clinical radiotherapy. We collected real clinical cases from four regions at the Mayo Clinic: prostate, lung, gastrointestinal, and head & neck, which are typical treatment sites in radiation oncology. For each case, we selected the OARs of concern in radiotherapy planning and compared the Dice and Jaccard outcomes between clinical manual delineation, automatic segmentation using SAM's "segment anything" mode, and automatic segmentation using SAM with box prompt. Our results indicate that SAM performs better in automatic segmentation for the prostate and lung regions, while its performance in the gastrointestinal and head & neck regions is relatively inferior. When considering the size of the organ and the clarity of its boundary, SAM displays better performance for larger organs with clear boundaries, such as the lung and liver, and worse for smaller organs with unclear boundaries, like the parotid and cochlea. These findings align with the generally accepted variations in difficulty level associated with manual delineation of different organs at different sites in clinical radiotherapy. Given that SAM, a single trained model, could handle the delineation of OARs in four regions, these results also demonstrate SAM's robust generalization capabilities in automatic segmentation for radiotherapy, i.e., achieving delineation of different radiotherapy OARs using a generic automatic segmentation model. SAM's generalization capabilities across different regions make it technically feasible to develop a generic model for automatic segmentation in radiotherapy.
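The Dice and Jaccard overlap metrics used to compare SAM's masks against clinical delineations can be sketched directly from their definitions; the toy masks below are illustrative, not clinical data.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index: |A∩B| / |A∪B|."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

# Toy binary masks standing in for a predicted and a manual contour
pred = [1, 1, 1, 0]
gt   = [0, 1, 1, 1]
d, j = dice(pred, gt), jaccard(pred, gt)
```

Both scores range from 0 (no overlap) to 1 (identical masks); Dice weights the intersection more heavily, which is why it is the more common reporting choice in radiotherapy contouring studies.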

    Implementation of a Commercial Deep Learning-Based Auto Segmentation Software in Radiotherapy: Evaluation of Effectiveness and Impact on Workflow

    Proper delineation of both target volumes and organs at risk is a crucial step in the radiation therapy workflow. This process is normally carried out manually by physicians and is therefore time-consuming. To improve efficiency, auto-contouring methods have been proposed. We assessed a specific commercial software package to investigate its impact on the radiotherapy workflow at four disease sites: head and neck, prostate, breast, and rectum. For the present study, we used a commercial deep learning-based auto-segmentation software, namely Limbus Contour (LC), Version 1.5.0 (Limbus AI Inc., Regina, SK, Canada). The software uses deep convolutional neural network models based on a U-net architecture, specific for each structure. Manual and automatic segmentation were compared on disease-specific organs at risk. Contouring time, geometrical performance (volume variation, Dice Similarity Coefficient-DSC, and center of mass shift), and dosimetric impact (DVH differences) were evaluated. With respect to time savings, the maximum advantage was seen in the setting of head and neck cancer, with a 65% time reduction. The average DSC was 0.72. The best agreement was found for lungs. Good results were highlighted for bladder, heart, and femoral heads. The most relevant dosimetric difference was in the rectal cancer case, where the mean volume covered by the 45 Gy isodose was 10.4 cm³ for manual contouring and 289.4 cm³ for automatic segmentation. Automatic contouring was able to significantly reduce the time required for the procedure, simplifying the workflow and reducing interobserver variability. Its implementation improved the radiation therapy workflow in our department.

    Fighting the scanner effect in brain MRI segmentation with a progressive level-of-detail network trained on multi-site data

    Many clinical and research studies of the human brain require an accurate structural MRI segmentation. While traditional atlas-based methods can be applied to volumes from any acquisition site, recent deep learning algorithms ensure very high accuracy only when tested on data from the same sites exploited in training (i.e., internal data). The performance degradation experienced on external data (i.e., unseen volumes from unseen sites) is due to the inter-site variabilities in intensity distributions induced by different MR scanner models, acquisition parameters, and unique artefacts. To mitigate this site-dependency, often referred to as the scanner effect, we propose LOD-Brain, a 3D convolutional neural network with progressive levels-of-detail (LOD) able to segment brain data from any site. Coarser network levels are responsible for learning a robust anatomical prior useful for identifying brain structures and their locations, while finer levels refine the model to handle site-specific intensity distributions and anatomical variations. We ensure robustness across sites by training the model on an unprecedentedly rich dataset aggregating data from open repositories: almost 27,000 T1w volumes from around 160 acquisition sites, at 1.5 - 3T, from a population spanning 8 to 90 years of age. Extensive tests demonstrate that LOD-Brain produces state-of-the-art results, with no significant difference in performance between internal and external sites, and robustness to challenging anatomical variations. Its portability opens the way for large-scale application across different healthcare institutions, patient populations, and imaging technology manufacturers. Code, model, and demo are available at the project website.