Segmentation of vestibular schwannoma from MRI, an open annotated dataset and baseline algorithm
Automatic segmentation of vestibular schwannomas (VS) from magnetic resonance imaging (MRI) could significantly improve clinical workflow and assist patient management. We have previously developed a novel artificial intelligence framework based on a 2.5D convolutional neural network, achieving results equivalent to those produced by an independent human annotator. Here, we provide the first publicly available annotated imaging dataset of VS by releasing the data and annotations used in our prior work. This collection contains a labelled dataset of 484 MR images collected from 242 consecutive patients with a VS undergoing Gamma Knife Stereotactic Radiosurgery at a single institution. The data include all segmentations and contours used in treatment planning, as well as details of the administered dose. Our automated segmentation algorithm is implemented using MONAI, a freely available open-source framework for deep learning in healthcare imaging. These data will facilitate the development and validation of automated segmentation frameworks for VS and may also be used to develop other multi-modal algorithmic models.
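As an illustration of what a MONAI-based inference pipeline might look like, the sketch below loads a scan, applies basic preprocessing, and runs sliding-window inference with a 3D U-Net. The network configuration, checkpoint name, and file path are hypothetical stand-ins, not the paper's released implementation (which uses a 2.5D network).

```python
import torch
from monai.inferers import sliding_window_inference
from monai.networks.nets import UNet
from monai.transforms import Compose, EnsureChannelFirst, LoadImage, ScaleIntensity

# Hypothetical 3D U-Net stand-in; the paper's framework uses a 2.5D CNN.
model = UNet(spatial_dims=3, in_channels=1, out_channels=2,
             channels=(16, 32, 64, 128), strides=(2, 2, 2), num_res_units=2)
model.load_state_dict(torch.load("vs_segmentation.pt"))  # hypothetical checkpoint
model.eval()

# Load one MR volume and normalise intensities to [0, 1].
preprocess = Compose([LoadImage(image_only=True), EnsureChannelFirst(), ScaleIntensity()])
image = preprocess("sub-001_ceT1.nii.gz").unsqueeze(0)  # add batch dimension

with torch.no_grad():
    logits = sliding_window_inference(image, roi_size=(128, 128, 32),
                                      sw_batch_size=1, predictor=model)
mask = logits.argmax(dim=1)  # voxel-wise VS mask
```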
Koos Classification of Vestibular Schwannoma via Image Translation-Based Unsupervised Cross-Modality Domain Adaptation
The Koos grading scale is a classification system for vestibular schwannoma (VS) used to characterize the tumor and its effects on adjacent brain structures. The Koos classification captures many of the characteristics that inform treatment decisions and is often used to determine treatment plans. Although both contrast-enhanced T1 (ceT1) scanning and high-resolution T2 (hrT2) scanning can be used for Koos classification, hrT2 scanning is gaining interest because of its higher safety and cost-effectiveness. However, in the absence of annotations for hrT2 scans, deep learning methods often suffer from performance degradation due to unsupervised learning. If ceT1 scans and their annotations can be used for unsupervised learning of hrT2 scans, the performance of Koos classification using unlabeled hrT2 scans will be greatly improved. In this regard, we propose an unsupervised cross-modality domain adaptation method based on image translation: annotated ceT1 scans are transformed into the hrT2 modality, and their annotations are reused to achieve supervised learning in the hrT2 modality. The VS and seven adjacent brain structures relevant to Koos classification are then segmented in the hrT2 scans. Finally, handcrafted features are extracted from the segmentation results, and the Koos grade is predicted using a random forest classifier. The proposed method ranked first on the Koos classification task of the Cross-Modality Domain Adaptation (crossMoDA 2022) challenge, with a macro-averaged mean absolute error (MA-MAE) of 0.2148 on the validation set and 0.26 on the test set.
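To make the final classification stage concrete, here is a minimal sketch of fitting a random forest on handcrafted features and computing the macro-averaged MAE. The feature matrix and grade labels are placeholders; the paper's actual features, derived from the VS and adjacent-structure segmentations, are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import mean_absolute_error

# Placeholder data: 8 hypothetical handcrafted features (e.g. tumour
# volume, contact with adjacent structures) and Koos grades 1-4.
rng = np.random.default_rng(0)
X = rng.random((120, 8))
y = rng.integers(1, 5, size=120)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
y_pred = clf.predict(X)

# Macro-averaged MAE: average the per-grade MAE so rare grades count equally.
ma_mae = np.mean([mean_absolute_error(y[y == g], y_pred[y == g])
                  for g in np.unique(y)])
print(f"MA-MAE: {ma_mae:.4f}")
```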
Scribble-based Domain Adaptation via Co-segmentation
Although deep convolutional networks have reached state-of-the-art performance in many medical image segmentation tasks, they typically demonstrate poor generalisation capability. To generalise from one domain (e.g. one imaging modality) to another, domain adaptation has to be performed. While supervised methods may lead to good performance, they require additional data to be fully annotated, which may not be an option in practice. In contrast, unsupervised methods do not need additional annotations but are usually unstable and hard to train. In this work, we propose a novel weakly-supervised method: instead of requiring detailed but time-consuming annotations, scribbles on the target domain are used to perform domain adaptation. This paper introduces a new formulation of domain adaptation based on structured learning and co-segmentation. Our method is easy to train thanks to the introduction of a regularised loss. The framework is validated on vestibular schwannoma segmentation (T1 to T2 scans). Our proposed method outperforms unsupervised approaches and achieves performance comparable to a fully-supervised approach.
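The core idea, supervision restricted to scribbled pixels plus a regularisation term, can be sketched as a partial cross-entropy combined with a generic smoothness penalty. This is an illustrative stand-in, not the paper's exact regularised loss.

```python
import torch
import torch.nn.functional as F

def scribble_loss(logits, scribbles, ignore_index=255, reg_weight=0.1):
    """Partial cross-entropy on scribbled pixels plus a smoothness term.

    logits: (B, C, H, W) network outputs; scribbles: (B, H, W) holding a
    class id on annotated pixels and `ignore_index` everywhere else.
    """
    # Supervision only where scribbles exist.
    ce = F.cross_entropy(logits, scribbles, ignore_index=ignore_index)
    # Generic total-variation smoothness on the softmax, standing in
    # for the paper's regularised loss.
    probs = logits.softmax(dim=1)
    tv = (probs[:, :, 1:, :] - probs[:, :, :-1, :]).abs().mean() \
       + (probs[:, :, :, 1:] - probs[:, :, :, :-1]).abs().mean()
    return ce + reg_weight * tv
```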
An Unpaired Cross-modality Segmentation Framework Using Data Augmentation and Hybrid Convolutional Networks for Segmenting Vestibular Schwannoma and Cochlea
The crossMoDA challenge aims to automatically segment the vestibular schwannoma (VS) tumor and cochlea regions of unlabeled high-resolution T2 scans by leveraging labeled contrast-enhanced T1 scans. The 2022 edition extends the segmentation task by including multi-institutional scans. In this work, we propose an unpaired cross-modality segmentation framework using data augmentation and hybrid convolutional networks. Considering the heterogeneous distributions and varying image sizes of multi-institutional scans, we apply min-max normalization to scale the intensities of all scans between -1 and 1, and use voxel-size resampling and center cropping to obtain fixed-size sub-volumes for training. We adopt two data augmentation methods for effectively learning the semantic information and generating realistic target-domain scans: generative and online data augmentation. For generative data augmentation, we use CUT and CycleGAN to generate two groups of realistic T2 volumes with different details and appearances for supervised segmentation training. For online data augmentation, we design a random tumor-signal-reducing method to simulate the heterogeneity of VS tumor signals. Furthermore, we utilize an advanced hybrid convolutional network with multi-dimensional convolutions to adaptively learn sparse inter-slice information and dense intra-slice information for accurate volumetric segmentation of the VS tumor and cochlea regions in anisotropic scans. On the crossMoDA 2022 validation dataset, our method produces promising results, achieving mean DSC values of 72.47% and 76.48% and ASSD values of 3.42 mm and 0.53 mm for the VS tumor and cochlea regions, respectively.
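A sketch of the intensity normalisation and fixed-size cropping described above, assuming a NumPy volume that has already been resampled to a common voxel size; the crop size is a hypothetical choice.

```python
import numpy as np

def preprocess(volume, crop=(256, 256, 64)):
    """Min-max normalise to [-1, 1], then centre-crop a sub-volume.

    Assumes `volume` was already resampled to the target voxel size
    (e.g. with SimpleITK).
    """
    v = volume.astype(np.float32)
    # Scale intensities into [-1, 1] per volume.
    v = 2.0 * (v - v.min()) / (v.max() - v.min() + 1e-8) - 1.0
    # Centre-crop a fixed-size training sub-volume.
    starts = [max((s - c) // 2, 0) for s, c in zip(v.shape, crop)]
    return v[tuple(slice(st, st + c) for st, c in zip(starts, crop))]
```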
Unsupervised Domain Adaptation for Vestibular Schwannoma and Cochlea Segmentation via Semi-supervised Learning and Label Fusion
Automatic methods to segment vestibular schwannoma (VS) tumors and the cochlea from magnetic resonance imaging (MRI) are critical to VS treatment planning. Although supervised methods have achieved satisfactory performance in VS segmentation, they require full annotations by experts, which is laborious and time-consuming. In this work, we aim to tackle the VS and cochlea segmentation problem in an unsupervised domain adaptation setting. Our proposed method leverages image-level domain alignment to minimize the domain divergence and semi-supervised training to further boost performance. Furthermore, we propose to fuse the labels predicted by multiple models via noisy label correction. In the MICCAI 2021 crossMoDA challenge, our results on the final evaluation leaderboard showed that the proposed method achieved promising segmentation performance, with mean Dice scores of 79.9% and 82.5% and ASSD of 1.29 mm and 0.18 mm for the VS tumor and cochlea, respectively. The cochlea ASSD achieved by our method outperformed all other competing methods as well as the supervised nnU-Net.
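A minimal sketch of the label-fusion step, averaging softmax probability maps from several trained models; the paper additionally applies noisy label correction, which is not reproduced here.

```python
import torch

def fuse_labels(prob_maps):
    """Fuse per-model probability maps into one label map.

    prob_maps: list of (C, D, H, W) softmax outputs, one per model.
    """
    mean_prob = torch.stack(prob_maps).mean(dim=0)  # model-wise average
    return mean_prob.argmax(dim=0)                  # fused hard labels
```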
CrossMoDA 2021 challenge: Benchmark of Cross-Modality Domain Adaptation techniques for Vestibular Schwannoma and Cochlea Segmentation
Domain adaptation (DA) has recently attracted strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality DA. The challenge's goal is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, diagnosis and surveillance in patients with VS are performed using contrast-enhanced T1 (ceT1) MRI. However, there is growing interest in using non-contrast sequences such as high-resolution T2 (hrT2) MRI. Therefore, we created an unsupervised cross-modality segmentation benchmark. The training set provides annotated ceT1 scans (N=105) and unpaired non-annotated hrT2 scans (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on the hrT2 scans provided in the testing set (N=137). A total of 16 teams submitted their algorithms for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice: VS 88.4%, cochleas 85.7%) and close to full supervision (median Dice: VS 92.5%, cochleas 87.7%). All top-performing methods made use of an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source images.
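Dice is the headline metric in the benchmark results above; a minimal reference implementation for binary masks:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    # Convention: two empty masks are a perfect match.
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0
```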
Integrated navigation and visualisation for skull base surgery
Skull base surgery involves the management of tumours located on the underside of the brain and the base of the skull. Skull base tumours are intricately associated with several critical neurovascular structures, making surgery challenging and high risk. Vestibular schwannoma (VS) is a benign nerve sheath tumour arising from one of the vestibular nerves and is the commonest pathology encountered in skull base surgery. The goal of modern VS surgery is maximal tumour removal whilst preserving neurological function and maintaining quality of life, but despite advanced neurosurgical techniques, facial nerve paralysis remains a potentially devastating complication of this surgery. This thesis describes the development and integration of various advanced navigation and visualisation techniques to increase the precision and accuracy of skull base surgery. A novel Diffusion Magnetic Resonance Imaging (dMRI) acquisition and processing protocol for imaging the facial nerve in patients with VS was developed to improve preoperative delineation of the facial nerve. An automated Artificial Intelligence (AI)-based framework was developed to segment VS from MRI scans. A user-friendly navigation system capable of integrating dMRI and tractography of the facial nerve, 3D tumour segmentation and intraoperative 3D ultrasound was developed and validated using an anatomically realistic acoustic phantom model of a head including the skull, brain and VS. The optical properties of five types of human brain tumour (meningioma, pituitary adenoma, schwannoma, low- and high-grade glioma) and nine different types of healthy brain tissue were examined across a wavelength spectrum of 400 nm to 800 nm in order to inform the development of an Intraoperative Hyperspectral Imaging (iHSI) system. Finally, functional and technical requirements of an iHSI system were established, and a prototype was developed and tested in a first-in-patient study.
Semi-Supervised Segmentation of Radiation-Induced Pulmonary Fibrosis from Lung CT Scans with Multi-Scale Guided Dense Attention
Computed Tomography (CT) plays an important role in monitoring radiation-induced Pulmonary Fibrosis (PF), where accurate segmentation of the PF lesions is highly desired for diagnosis and treatment follow-up. However, the task is challenged by ambiguous boundaries, irregular shapes, the varied position and size of the lesions, and the difficulty of acquiring a large set of annotated volumetric images for training. To overcome these problems, we propose a novel convolutional neural network called PF-Net and incorporate it into a semi-supervised learning framework based on Iterative Confidence-based Refinement And Weighting of pseudo Labels (I-CRAWL). Our PF-Net combines 2D and 3D convolutions to deal with CT volumes with large inter-slice spacing, and uses multi-scale guided dense attention to segment complex PF lesions. For semi-supervised learning, our I-CRAWL employs pixel-level uncertainty-based confidence-aware refinement to improve the accuracy of pseudo labels of unannotated images, and uses image-level uncertainty for confidence-based image weighting to suppress low-quality pseudo labels in an iterative training process. Extensive experiments with CT scans of rhesus macaques with radiation-induced PF showed that: 1) PF-Net achieved higher segmentation accuracy than existing 2D, 3D and 2.5D neural networks, and 2) I-CRAWL outperformed state-of-the-art semi-supervised learning methods for the PF lesion segmentation task. Our method has the potential to improve the diagnosis of PF and the clinical assessment of side effects of radiotherapy for lung cancers.
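As an illustration of image-level uncertainty weighting, the following sketch scores one unannotated image by the mean entropy of its softmax output; I-CRAWL's actual weighting scheme may differ in detail.

```python
import torch

def image_confidence(prob, eps=1e-8):
    """Map one image's softmax output (C, D, H, W) to a weight in [0, 1].

    Mean voxel-wise entropy serves as image-level uncertainty; high
    entropy yields a low weight, suppressing low-quality pseudo labels.
    """
    entropy = -(prob * (prob + eps).log()).sum(dim=0).mean()
    max_entropy = torch.log(torch.tensor(float(prob.shape[0])))
    return float(1.0 - entropy / max_entropy)
```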