    Segmentation of vestibular schwannoma from MRI, an open annotated dataset and baseline algorithm

    Automatic segmentation of vestibular schwannomas (VS) from magnetic resonance imaging (MRI) could significantly improve clinical workflow and assist patient management. We have previously developed a novel artificial intelligence framework based on a 2.5D convolutional neural network, achieving results equivalent to those of an independent human annotator. Here, we provide the first publicly available annotated imaging dataset of VS by releasing the data and annotations used in our prior work. This collection contains a labelled dataset of 484 MR images collected from 242 consecutive patients with a VS undergoing Gamma Knife Stereotactic Radiosurgery at a single institution. Data includes all segmentations and contours used in treatment planning and details of the administered dose. Implementation of our automated segmentation algorithm uses MONAI, a freely available open-source framework for deep learning in healthcare imaging. These data will facilitate the development and validation of automated segmentation frameworks for VS and may also be used to develop other multi-modal algorithmic models.
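
    The abstract notes that the segmentation implementation builds on MONAI. As a rough illustration only, the sketch below shows what a minimal MONAI training loop for a tumour-segmentation model might look like; the file names, the generic 3D UNet configuration, and the hyperparameters are placeholders and do not reproduce the paper's 2.5D architecture or the released dataset's layout.

```python
# Minimal MONAI segmentation sketch (hypothetical paths and settings,
# not the authors' published pipeline).
import torch
from monai.data import Dataset, DataLoader
from monai.losses import DiceLoss
from monai.networks.nets import UNet
from monai.transforms import (
    Compose, LoadImaged, EnsureChannelFirstd, ScaleIntensityd, ToTensord
)

# Hypothetical image/label pairs; replace with paths from the released dataset.
files = [{"image": "vs_001_T1.nii.gz", "label": "vs_001_seg.nii.gz"}]

transforms = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    ScaleIntensityd(keys=["image"]),
    ToTensord(keys=["image", "label"]),
])
loader = DataLoader(Dataset(data=files, transform=transforms), batch_size=1)

# A generic 3D UNet stands in for the paper's 2.5D network.
model = UNet(spatial_dims=3, in_channels=1, out_channels=2,
             channels=(16, 32, 64, 128), strides=(2, 2, 2))
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for batch in loader:
    optimizer.zero_grad()
    prediction = model(batch["image"])          # (B, 2, D, H, W) logits
    loss = loss_fn(prediction, batch["label"])  # label is (B, 1, D, H, W)
    loss.backward()
    optimizer.step()
```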

    A self-supervised learning strategy for postoperative brain cavity segmentation simulating resections

    PURPOSE: Accurate segmentation of brain resection cavities (RCs) aids in postoperative analysis and determining follow-up treatment. Convolutional neural networks (CNNs) are the state-of-the-art image segmentation technique, but require large annotated datasets for training. Annotation of 3D medical images is time-consuming, requires highly trained raters and may suffer from high inter-rater variability. Self-supervised learning strategies can leverage unlabeled data for training. METHODS: We developed an algorithm to simulate resections from preoperative magnetic resonance images (MRIs). We performed self-supervised training of a 3D CNN for RC segmentation using our simulation method. We curated EPISURG, a dataset comprising 430 postoperative and 268 preoperative MRIs from 430 refractory epilepsy patients who underwent resective neurosurgery. We fine-tuned our model on three small annotated datasets from different institutions and on the annotated images in EPISURG, comprising 20, 33, 19 and 133 subjects. RESULTS: The model trained on data with simulated resections obtained median (interquartile range) Dice score coefficients (DSCs) of 81.7 (16.4), 82.4 (36.4), 74.9 (24.2) and 80.5 (18.7) for each of the four datasets. After fine-tuning, DSCs were 89.2 (13.3), 84.1 (19.8), 80.2 (20.1) and 85.2 (10.8). For comparison, inter-rater agreement between human annotators from our previous study was 84.0 (9.9). CONCLUSION: We present a self-supervised learning strategy for 3D CNNs using simulated RCs to accurately segment real RCs on postoperative MRI. Our method generalizes well to data from different institutions, pathologies and modalities. Source code, segmentation models and the EPISURG dataset are available at https://github.com/fepegar/resseg-ijcars.
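
    The results above are reported as Dice scores on a 0-100 scale. As a point of reference only, the following self-contained sketch (not taken from the resseg code base) shows how such a score between a predicted and a reference binary cavity mask can be computed with NumPy; the toy masks are hypothetical.

```python
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice score (in %) between two binary 3D masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 100.0  # both masks empty: treated here as perfect agreement
    return 100.0 * 2.0 * intersection / denom

# Hypothetical toy masks standing in for postoperative cavity segmentations.
pred = np.zeros((64, 64, 64), dtype=bool); pred[20:40, 20:40, 20:40] = True
ref  = np.zeros((64, 64, 64), dtype=bool); ref[22:42, 20:40, 20:40] = True
print(f"DSC = {dice_score(pred, ref):.1f}")  # 90.0 for these toy masks
```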

    Diffusion MRI of the facial-vestibulocochlear nerve complex: a prospective clinical validation study

    Objectives: Surgical planning of vestibular schwannoma surgery would benefit greatly from a robust method of delineating the facial-vestibulocochlear nerve complex with respect to the tumour. This study aimed to optimise a multi-shell readout-segmented diffusion-weighted imaging (rs-DWI) protocol and develop a novel post-processing pipeline to delineate the facial-vestibulocochlear complex within the skull base region, evaluating its accuracy intraoperatively using neuronavigation and tracked electrophysiological recordings. Methods: In a prospective study of five healthy volunteers and five patients who underwent vestibular schwannoma surgery, rs-DWI was performed and colour tissue maps (CTM) and probabilistic tractography of the cranial nerves were generated. In patients, the average symmetric surface distance (ASSD) and 95% Hausdorff distance (HD-95) were calculated with reference to the neuroradiologist-approved facial nerve segmentation. The accuracy of patient results was assessed intraoperatively using neuronavigation and tracked electrophysiological recordings. Results: Using CTM alone, the facial-vestibulocochlear complex of healthy volunteer subjects was visualised on 9/10 sides. CTM were generated in all 5 patients with vestibular schwannoma enabling the facial nerve to be accurately identified preoperatively. The mean ASSD between the annotators’ two segmentations was 1.11 mm (SD 0.40) and the mean HD-95 was 4.62 mm (SD 1.78). The median distance from the nerve segmentation to a positive stimulation point was 1.21 mm (IQR 0.81–3.27 mm) and 2.03 mm (IQR 0.99–3.84 mm) for the two annotators, respectively. Conclusions: rs-DWI may be used to acquire dMRI data of the cranial nerves within the posterior fossa. Clinical relevance statement: Readout-segmented diffusion-weighted imaging and colour tissue mapping provide 1–2 mm spatially accurate imaging of the facial-vestibulocochlear nerve complex, enabling accurate preoperative localisation of the facial nerve. This study evaluated the technique in 5 healthy volunteers and 5 patients with vestibular schwannoma. Key Points: • Readout-segmented diffusion-weighted imaging (rs-DWI) with colour tissue mapping (CTM) visualised the facial-vestibulocochlear nerve complex on 9/10 sides in 5 healthy volunteer subjects. • Using rs-DWI and CTM, the facial nerve was visualised in all 5 patients with vestibular schwannoma and within 1.21–2.03 mm of the nerve’s true intraoperative location. • Reproducible results were obtained on different scanners.
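
    For reference, ASSD and HD-95 are surface-distance metrics between two segmentations. The sketch below is a simplified, hypothetical implementation (boundary voxels stand in for a proper surface, and HD-95 is taken as the 95th percentile of the pooled symmetric distances, one common convention); it is not the pipeline used in the study, and the voxel spacing and toy masks are placeholders.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def surface_points(mask: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Boundary voxels of a binary mask, in millimetres (simplified surface)."""
    border = mask & ~binary_erosion(mask)
    return np.argwhere(border) * np.asarray(spacing)

def assd_and_hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance and 95% Hausdorff distance in mm.

    Assumes both masks are non-empty.
    """
    pa, pb = surface_points(a, spacing), surface_points(b, spacing)
    d_ab = cKDTree(pb).query(pa)[0]  # each surface point of A to nearest of B
    d_ba = cKDTree(pa).query(pb)[0]  # each surface point of B to nearest of A
    all_d = np.concatenate([d_ab, d_ba])
    return all_d.mean(), np.percentile(all_d, 95)

# Hypothetical toy masks standing in for two annotators' nerve segmentations.
a = np.zeros((40, 40, 40), dtype=bool); a[10:30, 18:22, 18:22] = True
b = np.zeros((40, 40, 40), dtype=bool); b[12:32, 19:23, 18:22] = True
assd, hd95 = assd_and_hd95(a, b, spacing=(0.5, 0.5, 0.5))
print(f"ASSD = {assd:.2f} mm, HD-95 = {hd95:.2f} mm")
```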

    Why is the Winner the Best?

    International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multicenter study with all 80 competitions that were conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses performed based on comprehensive descriptions of the submitted algorithms linked to their rank as well as the underlying participation strategies revealed common characteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and post-processing (66%). The “typical” lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core general development strategies stood out for highly-ranked teams: the reflection of the metrics in the method design and the focus on analyzing and handling failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art but only 11% completely solved the respective domain problem. The insights of our study could help researchers (1) improve algorithm development strategies when approaching new problems, and (2) focus on open research questions revealed by this work.

    A922 Sequential measurement of 1 hour creatinine clearance (1-CRCL) in critically ill patients at risk of acute kidney injury (AKI)

    Meeting abstract.
