Federated learning enables big data for rare cancer boundary detection
Although machine learning (ML) has shown promise across disciplines, out-of-sample generalizability remains a concern. This is currently addressed by sharing multi-site data, but such centralization is challenging or infeasible to scale due to various limitations. Federated ML (FL) provides an alternative paradigm for accurate and generalizable ML by sharing only numerical model updates. Here we present the largest FL study to date, involving data from 71 sites across 6 continents, to generate an automatic tumor boundary detector for the rare disease of glioblastoma, reporting the largest such dataset in the literature (n = 6,314). We demonstrate a 33% delineation improvement for the surgically targetable tumor, and 23% for the complete tumor extent, over a publicly trained model. We anticipate our study to: 1) enable more healthcare studies informed by large and diverse data, ensuring meaningful results for rare diseases and underrepresented populations, 2) facilitate further analyses for glioblastoma by releasing our consensus model, and 3) demonstrate the effectiveness of FL at such scale and task complexity as a paradigm shift for multi-site collaborations, alleviating the need for data sharing.
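The core idea described above is that sites exchange only numerical model updates, never patient data. The following is a minimal sketch of one aggregation round in a FedAvg-style scheme (an assumption about the aggregation strategy, using a toy linear model; the site data and function names are hypothetical placeholders, not the study's actual training pipeline):

```python
import numpy as np

def local_update(global_weights, site_data, lr=0.01):
    """One step of local training at a site (toy linear model, squared loss)."""
    X, y = site_data
    preds = X @ global_weights
    grad = X.T @ (preds - y) / len(y)   # gradient of the mean squared error
    return global_weights - lr * grad   # updated local weights, shared instead of data

def federated_round(global_weights, sites):
    """Aggregate site updates weighted by local sample counts (FedAvg-style)."""
    total = sum(len(y) for _, y in sites)
    new_weights = np.zeros_like(global_weights)
    for X, y in sites:
        w_local = local_update(global_weights, (X, y))
        new_weights += (len(y) / total) * w_local
    return new_weights

# Synthetic stand-ins for per-site cohorts; only `weights` would ever leave a site.
rng = np.random.default_rng(0)
sites = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]
weights = np.zeros(4)
for _ in range(10):
    weights = federated_round(weights, sites)
```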
DENTEX: An Abnormal Tooth Detection with Dental Enumeration and Diagnosis Benchmark for Panoramic X-rays
Panoramic X-rays are frequently used in dentistry for treatment planning, but
their interpretation can be both time-consuming and prone to error. Artificial
intelligence (AI) has the potential to aid in the analysis of these X-rays,
thereby improving the accuracy of dental diagnoses and treatment plans.
Nevertheless, designing automated algorithms for this purpose poses significant
challenges, mainly due to the scarcity of annotated data and variations in
anatomical structure. To address these issues, the Dental Enumeration and
Diagnosis on Panoramic X-rays Challenge (DENTEX) has been organized in
association with the International Conference on Medical Image Computing and
Computer-Assisted Intervention (MICCAI) in 2023. This challenge aims to promote
the development of algorithms for multi-label detection of abnormal teeth,
using three types of hierarchically annotated data: partially annotated
quadrant data, partially annotated quadrant-enumeration data, and fully
annotated quadrant-enumeration-diagnosis data, inclusive of four different
diagnoses. In this paper, we present the results of evaluating participant
algorithms on the fully annotated data, additionally investigating performance
variation for quadrant, enumeration, and diagnosis labels in the detection of
abnormal teeth. The provision of this annotated dataset, alongside the results
of this challenge, may lay the groundwork for the creation of AI-powered tools
that can offer more precise and efficient diagnosis and treatment planning in
the field of dentistry. The evaluation code and datasets can be accessed at
https://github.com/ibrahimethemhamamci/DENTEX.
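To make the three hierarchical annotation levels concrete, a rough sketch of how such labels could be represented follows; the field names, numbering, and the example diagnosis value are illustrative assumptions, not the challenge's actual data schema:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical representation of DENTEX's three annotation levels:
# quadrant only, quadrant + enumeration, and quadrant + enumeration + diagnosis.

@dataclass
class ToothAnnotation:
    bbox: tuple[float, float, float, float]  # (x, y, width, height) in pixels
    quadrant: int                            # 1-4, available at every annotation level
    enumeration: Optional[int] = None        # tooth number within the quadrant, if annotated
    diagnosis: Optional[str] = None          # one of the four diagnosis classes, if annotated

# Example of a fully annotated abnormal tooth (values are made up).
ann = ToothAnnotation(bbox=(412.0, 233.5, 58.0, 90.0),
                      quadrant=3, enumeration=6, diagnosis="caries")
```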
Panoptica -- instance-wise evaluation of 3D semantic and instance segmentation maps
This paper introduces panoptica, a versatile and performance-optimized
package designed for computing instance-wise segmentation quality metrics from
2D and 3D segmentation maps. panoptica addresses the limitations of existing
metrics and provides a modular framework that complements the original
intersection over union-based panoptic quality with other metrics, such as the
distance metric Average Symmetric Surface Distance. The package is open-source,
implemented in Python, and accompanied by comprehensive documentation and
tutorials. panoptica employs a three-step metrics computation process to cover
diverse use cases. The efficacy of panoptica is demonstrated on various
real-world biomedical datasets, where an instance-wise evaluation is
instrumental for an accurate representation of the underlying clinical task.
Overall, we envision panoptica as a valuable tool facilitating in-depth
evaluation of segmentation methods.
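Instance-wise evaluation of the kind described above rests on matching predicted instances to reference instances and scoring the matches. A generic sketch of the standard IoU-based panoptic quality on integer-labeled instance maps is shown below; this illustrates the metric itself and is not panoptica's actual API:

```python
import numpy as np

def panoptic_quality(ref, pred, iou_threshold=0.5):
    """IoU-based panoptic quality for 2D/3D integer instance maps (0 = background)."""
    ref_ids = set(np.unique(ref)) - {0}
    pred_ids = set(np.unique(pred)) - {0}
    unmatched_pred = set(pred_ids)
    matched_ious = []
    for r in ref_ids:
        best_iou, best_p = 0.0, None
        for p in unmatched_pred:
            inter = np.logical_and(ref == r, pred == p).sum()
            union = np.logical_or(ref == r, pred == p).sum()
            iou = inter / union if union else 0.0
            if iou > best_iou:
                best_iou, best_p = iou, p
        if best_iou > iou_threshold:        # a match above threshold is a true positive
            matched_ious.append(best_iou)
            unmatched_pred.discard(best_p)
    tp = len(matched_ious)
    fp = len(unmatched_pred)
    fn = len(ref_ids) - tp
    denom = tp + 0.5 * fp + 0.5 * fn
    return sum(matched_ious) / denom if denom else 0.0
```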
Brain extraction on MRI scans in presence of diffuse glioma: Multi-institutional performance evaluation of deep learning methods and robust modality-agnostic training
Brain extraction, or skull-stripping, is an essential pre-processing step in neuroimaging that has a direct impact on the quality of all subsequent processing and analysis steps. It is also a key requirement in multi-institutional collaborations to comply with privacy-preserving regulations. Existing automated methods, including Deep Learning (DL) based methods that have obtained state-of-the-art results in recent years, have primarily targeted brain extraction without considering pathologically-affected brains. Accordingly, they perform sub-optimally when applied to magnetic resonance imaging (MRI) brain scans with apparent pathologies such as brain tumors. Furthermore, existing methods focus on using only T1-weighted MRI scans, even though multi-parametric MRI (mpMRI) scans are routinely acquired for patients with suspected brain tumors. In this study, we present a comprehensive performance evaluation of recent deep learning architectures for brain extraction, training models on mpMRI scans of pathologically-affected brains, with a particular focus on seeking a practically-applicable, low computational footprint approach, generalizable across multiple institutions, further facilitating collaborations. We identified a large retrospective multi-institutional dataset of n=3340 mpMRI brain tumor scans, with manually-inspected and approved gold-standard segmentations, acquired during standard clinical practice under varying acquisition protocols, both from private institutional data and public (TCIA) collections. To facilitate optimal utilization of rich mpMRI data, we further introduce and evaluate a novel "modality-agnostic training" technique that can be applied using any available modality, without the need for model retraining. Our results indicate that the modality-agnostic approach obtains accurate results, providing a generic and practical tool for brain extraction on scans with brain tumors.
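One plausible reading of "modality-agnostic training" is that the modality fed to the network is varied across training samples, so a single model can later run on whichever modality happens to be available. The sketch below illustrates only that sampling idea and is an assumption about the mechanism, not the paper's exact procedure; the modality names and function are hypothetical:

```python
import random

# Assumed illustration: per training sample, pick one of the available mpMRI
# modalities at random so the trained model does not depend on any single one.

MODALITIES = ["t1", "t1ce", "t2", "flair"]  # typical mpMRI channels (assumed)

def sample_training_input(case):
    """`case` maps modality names to image arrays; return one randomly chosen modality."""
    available = [m for m in MODALITIES if m in case]
    return case[random.choice(available)]

# At inference, any single available modality can then be passed to the same model
# without retraining, which is the practical benefit highlighted in the abstract.
```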
GaNDLF: A Generally Nuanced Deep Learning Framework for Scalable End-to-End Clinical Workflows in Medical Imaging
Deep Learning (DL) has greatly highlighted the potential impact of optimized machine learning in both the scientific and clinical communities. The advent of open-source DL libraries from major industrial entities, such as TensorFlow (Google), PyTorch (Facebook), and MXNet (Apache), further contributes to DL's promise of democratizing computational analytics. However, developing DL algorithms requires a substantial technical and specialized background, and the variability of implementation details hinders their reproducibility. Towards lowering this barrier and making the mechanism of DL development, training, and inference more stable, reproducible, and scalable, without requiring an extensive technical background, this manuscript proposes the Generally Nuanced Deep Learning Framework (GaNDLF). With built-in support for k-fold cross-validation, data augmentation, multiple modalities and output classes, and multi-GPU training, as well as the ability to work with both radiographic and histologic imaging, GaNDLF aims to provide an end-to-end solution for all DL-related tasks, to tackle problems in medical imaging and to provide a robust application framework for deployment in clinical workflows.
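The framework described above is configuration-driven, with k-fold cross-validation as a built-in capability. The sketch below shows, in generic terms, what a configuration-driven k-fold loop looks like; it is not GaNDLF's actual configuration schema or API, and all keys, names, and values are hypothetical:

```python
import numpy as np

# Generic, hypothetical configuration in the spirit of a config-driven workflow.
config = {
    "num_folds": 5,
    "modalities": ["t1", "t2"],
    "augmentations": ["flip", "rotate"],
    "batch_size": 4,
}

def train_one_fold(train_idx, val_idx, config):
    """Placeholder for per-fold model training; returns a dummy validation score."""
    return float(len(val_idx)) / (len(train_idx) + len(val_idx))

subjects = np.arange(100)                   # stand-in for a subject list
folds = np.array_split(np.random.permutation(subjects), config["num_folds"])
scores = []
for k in range(config["num_folds"]):
    val_idx = folds[k]
    train_idx = np.concatenate([folds[j] for j in range(config["num_folds"]) if j != k])
    scores.append(train_one_fold(train_idx, val_idx, config))
print(f"mean cross-validation score: {np.mean(scores):.3f}")
```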