Learning Site-specific Styles for Multi-institutional Unsupervised Cross-modality Domain Adaptation
Unsupervised cross-modality domain adaptation is a challenging task in
medical image analysis, and it becomes more challenging when source and target
domain data are collected from multiple institutions. In this paper, we present
our solution to tackle the multi-institutional unsupervised domain adaptation
for the crossMoDA 2023 challenge. First, we perform unpaired image translation
to translate the source domain images to the target domain, where we design a
dynamic network to generate synthetic target domain images with controllable,
site-specific styles. Afterwards, we train a segmentation model using the
synthetic images and further reduce the domain gap by self-training. Our
solution achieved 1st place in both the validation and testing phases of the
challenge. The code repository is publicly available at
https://github.com/MedICL-VU/crossmoda2023.
Comment: crossMoDA 2023 challenge 1st place solution
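The "controllable, site-specific styles" idea can be illustrated with a conditional instance-normalization sketch. This is not the authors' implementation; the function name, array shapes, and per-site affine parameters are assumptions — the point is only that one network can emit different styles by switching a site index:

```python
import numpy as np

def site_conditional_instance_norm(feat, site_id, gammas, betas, eps=1e-5):
    """Normalize a feature map per channel, then apply a site-specific
    affine transform (gamma, beta) selected by site_id.

    feat:   (C, H, W) feature map
    gammas: (num_sites, C) per-site scales
    betas:  (num_sites, C) per-site shifts
    """
    mu = feat.mean(axis=(1, 2), keepdims=True)
    var = feat.var(axis=(1, 2), keepdims=True)
    normed = (feat - mu) / np.sqrt(var + eps)
    g = gammas[site_id][:, None, None]
    b = betas[site_id][:, None, None]
    return g * normed + b
```

Changing `site_id` swaps in a different learned scale/shift pair, so the same generator backbone can produce synthetic images in several site-specific styles.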
Unsupervised Domain Adaptation for Vestibular Schwannoma and Cochlea Segmentation via Semi-supervised Learning and Label Fusion
Automatic methods to segment the vestibular schwannoma (VS) tumors and the
cochlea from magnetic resonance imaging (MRI) are critical to VS treatment
planning. Although supervised methods have achieved satisfactory performance in
VS segmentation, they require full annotations by experts, which is laborious
and time-consuming. In this work, we aim to tackle the VS and cochlea
segmentation problem in an unsupervised domain adaptation setting. Our proposed
method leverages both the image-level domain alignment to minimize the domain
divergence and semi-supervised training to further boost the performance.
Furthermore, we propose to fuse the labels predicted from multiple models via
noisy label correction. In the MICCAI 2021 crossMoDA challenge, our results on
the final evaluation leaderboard showed that our method achieved promising
segmentation performance, with mean Dice scores of 79.9% and 82.5% and ASSDs
of 1.29 mm and 0.18 mm for the VS tumor and cochlea, respectively. The cochlea
ASSD achieved by our method outperformed all other competing methods as
well as the supervised nnU-Net.
Comment: Accepted by MICCAI 2021 BrainLes Workshop. arXiv admin note:
substantial text overlap with arXiv:2109.0627
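The simplest form of fusing labels predicted by multiple models is a per-voxel majority vote; the paper's noisy label correction is more involved, so this is only an illustrative baseline (the function name is made up):

```python
import numpy as np

def fuse_labels(predictions):
    """Fuse integer label maps from several models by per-voxel majority vote.

    predictions: sequence of equally-shaped integer label arrays,
                 one per model. Returns one fused label array.
    """
    preds = np.stack(predictions)          # (num_models, ...)
    num_classes = int(preds.max()) + 1
    # Count, for each class, how many models voted for it at each voxel.
    votes = np.stack([(preds == c).sum(axis=0) for c in range(num_classes)])
    return votes.argmax(axis=0)            # class with the most votes wins
```

On ties, `argmax` picks the lowest class index; a real system would break ties with model confidences or, as in the paper, correct noisy labels before fusing.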
Parameter Optimization for Image Denoising Based on Block Matching and 3D Collaborative Filtering
Clinical MRI images are generally corrupted by random noise during acquisition, which blurs subtle structural features. Many denoising methods have been proposed to remove noise from corrupted images, but at the expense of distorted structural features. There is therefore always a compromise between removing noise and preserving structural information. For a specific denoising method, it is crucial to tune its parameters so that the best tradeoff can be obtained. In this paper, we define several cost functions to assess the quality of noise removal and of structure preservation in the denoised image. The Strength Pareto Evolutionary Algorithm 2 (SPEA2) is utilized to simultaneously optimize these cost functions by modifying the parameters of the denoising method. The effectiveness of the algorithm is demonstrated by applying the proposed optimization procedure to image denoising with block matching and 3D collaborative filtering (BM3D). Experimental results show that the proposed optimization algorithm can significantly improve the performance of image denoising methods in terms of both noise removal and structure preservation.
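Because noise removal and structure preservation are competing costs, SPEA2 searches for Pareto-optimal parameter settings rather than a single best one. A minimal sketch of the underlying Pareto-front extraction over candidate cost vectors (illustrative only — SPEA2 itself adds fitness assignment, an external archive, and truncation):

```python
import numpy as np

def pareto_front(costs):
    """Return indices of non-dominated points, minimizing all objectives.

    costs: (num_candidates, num_objectives) array-like, e.g. one row of
           (noise_residual_cost, structure_distortion_cost) per parameter set.
    """
    costs = np.asarray(costs, dtype=float)
    front = []
    for i, c in enumerate(costs):
        # c is dominated if some other point is <= in every objective
        # and strictly < in at least one.
        dominated = np.any(
            np.all(costs <= c, axis=1) & np.any(costs < c, axis=1)
        )
        if not dominated:
            front.append(i)
    return front
```

Each surviving index is a denoising parameter set for which no other candidate is simultaneously better at removing noise and at preserving structure.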
Baseline Photos and Confident Annotation Improve Automated Detection of Cutaneous Graft-Versus-Host Disease
Cutaneous erythema is used in the diagnosis and response assessment of cutaneous chronic graft-versus-host disease (cGVHD). The development of objective erythema evaluation methods remains a challenge. We used a pre-trained neural network to segment cGVHD erythema by detecting changes relative to a patient's registered baseline photo. We fixed this change detection algorithm on human annotations from a single photo pair, using either a traditional approach or markings of definitely affected ("Do Not Miss", DNM) and definitely unaffected skin ("Do Not Include", DNI). The fixed algorithm was applied to each of the remaining 47 test photo pairs from six follow-up sessions of one patient. We used both the Dice index and the opinion of two board-certified dermatologists to evaluate the algorithm's performance. The change detection algorithm correctly assigned 80% of the pixels, regardless of whether it was fixed on traditional (median accuracy: 0.77, interquartile range 0.62–0.87) or DNM/DNI segmentations (0.81, 0.65–0.89). When the algorithm was fixed on markings by different annotators, DNM/DNI achieved more consistent outputs (median Dice indices: 0.94–0.96) than the traditional method (0.73–0.81). Compared to viewing only rash photos, the addition of baseline photos improved the reliability of the dermatologists' scoring: the inter-rater intraclass correlation coefficient increased from 0.19 (95% confidence interval lower bound: 0.06) to 0.51 (lower bound: 0.35). In conclusion, a change detection algorithm accurately assigned erythema in longitudinal photos of cGVHD. Its reliability was significantly improved by exclusively using confident human segmentations to fix the algorithm, and baseline photos improved agreement between the two dermatologists when assessing algorithm performance.
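The Dice index used above to compare annotator segmentations is the standard overlap measure between two binary masks; a minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * inter / total if total else 1.0
```

A value of 1.0 means identical masks and 0.0 means no overlap, which is why median Dice indices of 0.94–0.96 across annotators indicate highly consistent outputs.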
Anatomical texture patterns identify cerebellar distinctions between essential tremor and Parkinson's disease
Voxel-based morphometry is an established technique to study focal structural brain differences in neurologic disease. More recently, texture-based analysis methods have enabled a pattern-based assessment of group differences, at the patch level rather than at the voxel level, allowing a more sensitive localization of structural differences between patient populations. In this study, we propose a texture-based approach to identify structural differences between the cerebellum of patients with Parkinson's disease (n = 280) and essential tremor (n = 109). We analyzed anatomical differences of the cerebellum among patients using two features: T1-weighted MRI intensity, and a texture-based similarity feature. Our results show anatomical differences between groups that are localized to the inferior part of the cerebellar cortex. Both the T1-weighted intensity and texture showed differences in lobules VIII and IX, vermis VIII and IX, and the middle peduncle, but the texture analysis revealed additional differences in the dentate nucleus, lobules VI and VII, and vermis VI and VII. This comparison emphasizes how T1-weighted intensity and texture-based methods can provide complementary anatomical structure analyses. While texture-based similarity shows high sensitivity for gray matter differences, T1-weighted intensity shows sensitivity for the detection of white matter differences.
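A patch-level texture-similarity feature of the kind described can be sketched as normalized cross-correlation between intensity patches. This is an assumption for illustration — the study's exact similarity feature may differ:

```python
import numpy as np

def patch_similarity(p, q, eps=1e-8):
    """Normalized cross-correlation between two same-shaped intensity patches.

    Returns a value in [-1, 1]; 1 means identical texture pattern
    (up to brightness/contrast), -1 a perfectly inverted one.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p - p.mean()                     # remove mean brightness
    q = q - q.mean()
    denom = np.sqrt((p ** 2).sum() * (q ** 2).sum()) + eps
    return float((p * q).sum() / denom)
```

Because the mean is subtracted, the feature responds to the spatial pattern within a patch rather than its absolute T1-weighted intensity, which is what makes it complementary to a plain intensity comparison.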
CrossMoDA 2021 challenge: Benchmark of Cross-Modality Domain Adaptation techniques for Vestibular Schwannoma and Cochlea Segmentation
Domain Adaptation (DA) has recently raised strong interests in the medical
imaging community. While a large variety of DA techniques has been proposed for
image segmentation, most of these techniques have been validated either on
private datasets or on small publicly available datasets. Moreover, these
datasets mostly addressed single-class problems. To tackle these limitations,
the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in
conjunction with the 24th International Conference on Medical Image Computing
and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large
and multi-class benchmark for unsupervised cross-modality DA. The challenge's
goal is to segment two key brain structures involved in the follow-up and
treatment planning of vestibular schwannoma (VS): the VS and the cochleas.
Currently, the diagnosis and surveillance in patients with VS are performed
using contrast-enhanced T1 (ceT1) MRI. However, there is growing interest in
using non-contrast sequences such as high-resolution T2 (hrT2) MRI. Therefore,
we created an unsupervised cross-modality segmentation benchmark. The training
set provides annotated ceT1 (N=105) and unpaired non-annotated hrT2 (N=105).
The aim was to automatically perform unilateral VS and bilateral cochlea
segmentation on hrT2 as provided in the testing set (N=137). A total of 16
teams submitted their algorithm for the evaluation phase. The level of
performance reached by the top-performing teams is strikingly high (best median
Dice - VS:88.4%; Cochleas:85.7%) and close to full supervision (median Dice -
VS:92.5%; Cochleas:87.7%). All top-performing methods made use of an
image-to-image translation approach to transform the source-domain images into
pseudo-target-domain images. A segmentation network was then trained using
these generated images and the manual annotations provided for the source
image.
Comment: Submitted to Medical Image Analysis
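The common pipeline — translate source-domain images into pseudo-target-domain images, then train a segmentation network on them with the source labels — can be caricatured with histogram matching as a crude, non-learned stand-in for image-to-image translation (illustrative only; the challenge entries used learned GAN-based translators):

```python
import numpy as np

def match_histogram(source, reference):
    """Map source intensities so their distribution matches the reference's.

    A classical stand-in for learned translation: after mapping, a
    source-domain (e.g. ceT1-like) image takes on the intensity
    statistics of the target domain (e.g. hrT2-like).
    """
    s_vals, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True
    )
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size   # CDF of source intensities
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source intensity, find the reference intensity at the same
    # CDF quantile.
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return mapped[s_idx].reshape(source.shape)
```

The translated images keep the source geometry (so the source annotations still align) while adopting target-domain intensity statistics, which is the property the segmentation-training stage relies on.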