Fine-Grained Unsupervised Cross-Modality Domain Adaptation for Vestibular Schwannoma Segmentation
Domain adaptation has gained wide acceptance for transferring styles across
vendors and centers and for filling gaps between modalities. However,
multi-center applications remain difficult to adapt because of intra-domain
differences. We introduce a fine-grained unsupervised domain adaptation
framework to facilitate cross-modality segmentation of the vestibular
schwannoma (VS) and cochlea. We propose to use a vector to control the
generator so that it synthesizes a fake image with given features; by searching
the feature dictionary, we can then apply diverse augmentations to the dataset.
This diversity augmentation increases the performance and robustness of the
segmentation model. On the CrossMoDA validation-phase leaderboard, our method
achieved mean Dice scores of 0.765 and 0.836 on VS and cochlea, respectively.
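The leaderboard's Dice scores measure voxel overlap between a predicted and a reference segmentation mask. A minimal sketch of the standard metric (the function and variable names are illustrative, not taken from the paper's code):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Soerensen-Dice overlap between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 1-D masks: 2 overlapping voxels, 3 predicted + 3 true -> Dice = 2*2/6
pred = np.array([1, 1, 1, 0, 0, 0])
true = np.array([0, 1, 1, 1, 0, 0])
print(round(dice_score(pred, true), 3))  # 0.667
```

The small `eps` only guards against division by zero when both masks are empty.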
Unsupervised Cross-Modality Domain Adaptation for Vestibular Schwannoma Segmentation and Koos Grade Prediction based on Semi-Supervised Contrastive Learning
Domain adaptation has been widely adopted to transfer styles across
multi-vendors and multi-centers, as well as to complement the missing
modalities. In this challenge, we proposed an unsupervised domain adaptation
framework for cross-modality vestibular schwannoma (VS) and cochlea
segmentation and Koos grade prediction. We learn the shared representation from
both ceT1 and hrT2 images and recover another modality from the latent
representation, and we also utilize proxy tasks of VS segmentation and brain
parcellation to restrict the consistency of image structures in domain
adaptation. After generating missing modalities, the nnU-Net model is utilized
for VS and cochlea segmentation, while a semi-supervised contrastive learning
pre-training approach is employed to improve Koos grade prediction. On the
CrossMoDA validation-phase leaderboard, our method ranked 4th in Task 1 with a
mean Dice score of 0.8394 and 2nd in Task 2 with a Macro-Averaged Mean Squared
Error of 0.3941. Our code is available at
https://github.com/fiy2W/cmda2022.superpolymerization
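Task 2 is scored with a Macro-Averaged Mean Squared Error. One plausible reading of that metric (our assumption; the challenge's exact definition may differ) averages per-grade MSEs so that rare Koos grades count as much as common ones:

```python
import numpy as np

def macro_average_mse(y_true, y_pred) -> float:
    """Mean of per-grade MSEs, so each Koos grade contributes equally
    regardless of how many cases it has (class-imbalance-robust)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    per_grade = [np.mean((y_pred[y_true == g] - g) ** 2)
                 for g in np.unique(y_true)]
    return float(np.mean(per_grade))

# Toy predictions over Koos grades 1-4
y_true = [1, 1, 2, 3, 4]
y_pred = [1, 2, 2, 3, 3]
print(macro_average_mse(y_true, y_pred))  # 0.375
```

Here grade 1 contributes an MSE of 0.5, grades 2 and 3 contribute 0, and grade 4 contributes 1, giving a macro average of 0.375.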
Deep Learning-Based Segmentation of Breast Masses in Dedicated Breast CT Imaging: Radiomic Feature Stability Between Radiologists and Artificial Intelligence
A deep learning (DL) network for 2D-based breast mass segmentation in unenhanced dedicated breast CT images was developed and validated, and its robustness in radiomic feature stability and diagnostic performance was compared to manual annotations by multiple radiologists. 93 mass-like lesions were extensively augmented and used to train the network (n = 58 masses), which was then tested (n = 35 masses) against the manual ground truth of a qualified breast radiologist experienced in breast CT imaging, using the Conformity coefficient (a value of 1 indicating perfect performance). Stability and diagnostic power of 672 radiomic descriptors were investigated between the computerized segmentation and 4 radiologists' annotations for the 35 test-set cases. Feature stability and diagnostic performance in discriminating benign from malignant cases were quantified using the intraclass correlation coefficient (ICC) and multivariate analysis of variance (MANOVA), performed for each segmentation (4 radiologists and the DL algorithm). DL-based segmentation resulted in a Conformity of 0.85 ± 0.06 against the annotated ground truth. For the stability analysis, although only modest agreement was found among the four radiologists' annotations (Conformity 0.78 ± 0.03), over 90% of all radiomic features were stable (ICC > 0.75) across the multiple segmentations. All MANOVA analyses were statistically significant (p ≤ 0.05), with all dimensions equal to 1 and Wilks' lambda ≤ 0.35. In conclusion, DL-based mass segmentation in dedicated breast CT images can achieve high segmentation performance and was demonstrated to provide stable radiomic descriptors with discriminative power comparable to expert radiologist annotations in the classification of benign and malignant tumors.
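The stability criterion (ICC > 0.75) can be sketched with a standard two-way random-effects intraclass correlation. The paper does not state which ICC variant was used, so ICC(2,1) for absolute agreement is our assumption here:

```python
import numpy as np

def icc_2_1(x: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    x has shape (n_subjects, k_raters), one radiomic feature per call."""
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)
    ss_err = np.sum((x - grand) ** 2) - ms_rows * (n - 1) - ms_cols * (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Five lesions, one feature value per segmentation (4 raters, e.g. 3
# radiologists plus the DL model); near-identical rows -> high agreement
x = np.array([[9.0, 8.8, 9.1, 9.0],
              [7.1, 7.0, 7.2, 7.1],
              [5.0, 5.1, 4.9, 5.0],
              [3.2, 3.0, 3.1, 3.2],
              [1.0, 1.1, 0.9, 1.0]])
print(icc_2_1(x) > 0.75)  # True: this feature would count as "stable"
```

With between-lesion variance far exceeding between-rater variance, the ICC approaches 1 and the feature clears the 0.75 stability threshold.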
Automated 3-D Ultrasound Elastography of the Breast: An In Vivo Validation Study
Objective: Studies have indicated that adding 2-D quasi-static elastography to B-mode ultrasound imaging improves the specificity of malignant lesion detection, as malignant lesions are often stiffer (increased strain ratio) than benign lesions. The method is limited by its user dependency, however, and is therefore unsuitable for breast screening. To overcome this limitation, we implemented quasi-static elastography in an automated breast volume scanner (ABVS), an operator-independent 3-D ultrasound system that is especially useful for screening women with dense breasts. The study aim was to investigate whether 3-D quasi-static elastography implemented in a clinically used ABVS can discriminate between benign and malignant breast lesions. Methods: Volumetric breast ultrasound radiofrequency data sets of 82 patients were acquired before and after automated transducer lifting. Lesions were annotated, and strain was calculated using an in-house-developed strain algorithm. Two strain-ratio types were calculated per lesion, using axial and maximal principal strain (i.e., strain in the dominant direction). Results: Forty-four lesions were detected: 9 carcinomas, 23 cysts and 12 other benign lesions. Using maximal principal strain ratios, a significant difference was found between malignant (median: 1.7, range: [1.0–3.2]) and benign (1.0, [0.6–1.9]) lesions. The axial strain ratio did not reveal a significant difference between benign (0.6, [–12.7 to 4.9]) and malignant lesions (0.8, [–3.5 to 5.1]). Conclusion: Three-dimensional strain imaging was successfully implemented on a clinically used ABVS to obtain, visualize and analyze in vivo strain images in three dimensions. Results revealed that maximal principal strain ratios are significantly higher in malignant than in benign lesions.
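A strain ratio in this context is commonly computed as the mean strain in reference tissue divided by the mean strain in the lesion, so stiffer (less deforming) lesions score higher. A toy sketch under that common convention (the paper's in-house algorithm may differ in detail):

```python
import numpy as np

def strain_ratio(strain_map: np.ndarray,
                 lesion_mask: np.ndarray,
                 reference_mask: np.ndarray) -> float:
    """Reference-to-lesion strain ratio: stiff lesions deform less,
    so a higher ratio points toward malignancy."""
    lesion = np.abs(strain_map[lesion_mask]).mean()
    reference = np.abs(strain_map[reference_mask]).mean()
    return reference / lesion

# Toy 2-D strain field: the lesion deforms half as much as its surroundings
strain = np.full((8, 8), 0.02)
lesion_mask = np.zeros((8, 8), dtype=bool)
lesion_mask[3:5, 3:5] = True
strain[lesion_mask] = 0.01
ref_mask = ~lesion_mask
print(strain_ratio(strain, lesion_mask, ref_mask))  # 2.0
```

The same function applies unchanged whether the strain map holds axial strain or maximal principal strain; only the input field differs between the two ratio types the study compares.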
DisAsymNet: Disentanglement of Asymmetrical Abnormality on Bilateral Mammograms using Self-adversarial Learning
Asymmetry is a crucial characteristic of bilateral mammograms (Bi-MG) when
abnormalities are developing, and it is widely used by radiologists for
diagnosis. The question of what the symmetrical Bi-MG would look like once the
asymmetrical abnormalities have been removed has not yet received strong
attention in the development of mammography algorithms. Addressing this
question could provide valuable insights into mammographic anatomy and aid
diagnostic interpretation. Hence, we propose a novel framework, DisAsymNet,
which uses asymmetrical-abnormality-transformer-guided self-adversarial
learning to disentangle abnormalities from the symmetric Bi-MG. In addition,
our method is partially guided by randomly synthesized abnormalities. We
conduct experiments on three public datasets and one in-house dataset and
demonstrate that our method outperforms existing methods on abnormality
classification, segmentation, and localization tasks. Additionally, the
reconstructed normal mammograms provide more interpretable visual cues for
clinical diagnosis. The code will be made publicly available.
Localizing the Recurrent Laryngeal Nerve via Ultrasound with a Bayesian Shape Framework
Tumor infiltration of the recurrent laryngeal nerve (RLN) is a contraindication for robotic thyroidectomy and can be difficult to detect via standard laryngoscopy. Ultrasound (US) is a viable alternative for RLN detection due to its safety and ability to provide real-time feedback. However, the small size of the RLN, with a diameter typically less than 3 mm, poses significant challenges to accurate localization. In this work, we propose a knowledge-driven framework for RLN localization that mimics the standard approach surgeons take to identify the RLN from its surrounding organs. We construct a prior anatomical model based on the inherent relative spatial relationships between organs. Through Bayesian shape alignment (BSA), we obtain candidate coordinates for the center of a region of interest (ROI) that encloses the RLN. The ROI allows a reduced field of view for determining the refined centroid of the RLN using a dual-path identification network based on multi-scale semantic information. Experimental results indicate that the proposed method achieves superior hit rates and substantially smaller distance errors than state-of-the-art methods.
GSMorph: Gradient Surgery for cine-MRI Cardiac Deformable Registration
Deep learning-based deformable registration methods have been widely
investigated in diverse medical applications. Learning-based deformable
registration relies on weighted objective functions trading off registration
accuracy and smoothness of the deformation field. Therefore, they inevitably
require tuning the hyperparameter for optimal registration performance. Tuning
the hyperparameters is highly computationally expensive and introduces
undesired dependencies on domain knowledge. In this study, we construct a
registration model based on the gradient surgery mechanism, named GSMorph, to
achieve a hyperparameter-free balance on multiple losses. In GSMorph, we
reformulate the optimization procedure by projecting the gradient of similarity
loss orthogonally to the plane associated with the smoothness constraint,
rather than additionally introducing a hyperparameter to balance these two
competing terms. Furthermore, our method is model-agnostic and can be merged
into any deep registration network without introducing extra parameters or
slowing down inference. We compared our method with state-of-the-art (SOTA)
deformable registration approaches on two publicly available cardiac MRI
datasets. GSMorph proves superior to five SOTA learning-based registration
models and two conventional registration techniques, SyN and Demons, in both
registration accuracy and smoothness.
Comment: Accepted at MICCAI 202
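The abstract describes the gradient surgery only at a high level. The following is a toy, PCGrad-style sketch of the projection idea on flat gradient vectors; the conflict test, the function name, and the unweighted final sum are our illustrative assumptions, not GSMorph's exact formulation:

```python
import numpy as np

def surgery_step(g_sim: np.ndarray, g_smooth: np.ndarray) -> np.ndarray:
    """Hyperparameter-free combination of two loss gradients: when they
    conflict (negative inner product), remove the component of the
    similarity gradient that opposes the smoothness gradient by
    projecting it onto the plane orthogonal to g_smooth."""
    dot = np.dot(g_sim, g_smooth)
    if dot < 0:  # the two objectives pull in conflicting directions
        g_sim = g_sim - dot / np.dot(g_smooth, g_smooth) * g_smooth
    return g_sim + g_smooth  # no weighting hyperparameter needed

g_sim = np.array([1.0, -1.0])
g_smooth = np.array([0.0, 1.0])     # conflicts with g_sim (dot = -1)
update = surgery_step(g_sim, g_smooth)
print(update)  # [1. 1.]: the opposing component has been removed
```

Because the projection is computed from the gradients themselves, no accuracy-versus-smoothness weight has to be tuned, which is the property the abstract emphasizes.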