Design of a multimodal rendering system
This paper addresses the rendering of aligned regular multimodal
datasets. It presents a general framework of multimodal data fusion
that includes several data merging methods. We also analyze the
requirements of a rendering system able to provide these different
fusion methods. On the basis of these requirements, we propose a novel
design for a multimodal rendering system. The design has been
implemented and has proved to be efficient and flexible.
A fuzzy feature fusion method for auto-segmentation of gliomas with multi-modality diffusion and perfusion magnetic resonance images in radiotherapy
The diffusion and perfusion magnetic resonance (MR) images can provide functional information about
tumour and enable more sensitive detection of the tumour extent. We aimed to develop a fuzzy feature
fusion method for auto-segmentation of gliomas in radiotherapy planning using multi-parametric
functional MR images including apparent diffusion coefficient (ADC), fractional anisotropy (FA) and
relative cerebral blood volume (rCBV). For each functional modality, one histogram-based fuzzy model
was created to transform the image volume into a fuzzy feature space. Based on the fuzzy fusion result
of the three fuzzy feature spaces, regions with a high possibility of belonging to tumour were generated
automatically. The auto-segmentations of tumour in structural MR images were added to the final auto-segmented gross tumour volume (GTV). For evaluation, one radiation oncologist delineated GTVs for
nine patients with all modalities. Comparisons between manually delineated and auto-segmented GTVs
showed that the mean volume difference was 8.69% (±5.62%), the mean Dice's similarity coefficient
(DSC) was 0.88 (±0.02), and the mean sensitivity and specificity of auto-segmentation were 0.87 (±0.04)
and 0.98 (±0.01), respectively. High accuracy and efficiency can be achieved with the new method,
which shows the potential of utilizing functional multi-parametric MR images for target definition in
precision radiation treatment planning for patients with gliomas.
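The pipeline above (a per-modality histogram-based fuzzy model, fusion of the three fuzzy feature spaces, and thresholding of high-possibility regions, evaluated with Dice's similarity coefficient) can be sketched in NumPy. The cumulative-histogram membership function, the mean fusion operator, and the threshold value below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def histogram_fuzzy_membership(volume, bins=64):
    """Map voxel intensities to [0, 1] memberships via the cumulative
    histogram (a hypothetical stand-in for the per-modality fuzzy model)."""
    hist, edges = np.histogram(volume, bins=bins)
    cdf = np.cumsum(hist).astype(float) / volume.size
    idx = np.clip(np.digitize(volume, edges[1:-1]), 0, bins - 1)
    return cdf[idx]

def fuzzy_fuse(memberships):
    """Fuse the per-modality fuzzy feature spaces (ADC, FA, rCBV);
    a simple averaging operator is assumed here."""
    return np.mean(memberships, axis=0)

def segment(fused, threshold=0.8):
    """Label voxels whose fused possibility of belonging to tumour is high."""
    return fused >= threshold

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```

In practice each membership array would come from one registered functional MR volume, and the auto-segmented structural-MR tumour mask would be added to the thresholded result to form the final GTV.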
Machine Learning Models to automate Radiotherapy Structure Name Standardization
Structure name standardization is a critical problem in Radiotherapy planning systems to correctly identify the various Organs-at-Risk, Planning Target Volumes and 'Other' organs for monitoring present and future medications. Physicians often label anatomical structure sets in Digital Imaging and Communications in Medicine (DICOM) images with nonstandard random names. Hence, the standardization of these names for the Organs at Risk (OARs), Planning Target Volumes (PTVs), and 'Other' organs is a vital problem. Prior works considered traditional machine learning approaches on structure sets with moderate success. We compare both traditional methods and deep neural network-based approaches on the multimodal vision-language prostate cancer patient data, compiled from the radiotherapy centers of the US Veterans Health Administration (VHA) and Virginia Commonwealth University (VCU) for structure name standardization. These de-identified data comprise 16,290 prostate structures. Our method integrates the multimodal textual and imaging data with Convolutional Neural Network (CNN)-based deep learning approaches such as CNN, Visual Geometry Group (VGG) network, and Residual Network (ResNet) and shows improved results in prostate radiotherapy structure name standardization. Our proposed deep neural network-based approach on the multimodal vision-language prostate cancer patient data provides state-of-the-art results for structure name standardization. Evaluation with macro-averaged F1 score shows that our CNN model with single-modal textual data usually performs better than previous studies. We also experimented with various combinations of multimodal data (masked images, masked dose) besides textual data. The models perform well on textual data alone, while the addition of imaging data shows that deep neural networks achieve better performance using information present in other modalities.
Our pipeline can successfully standardize the Organs-at-Risk and the Planning Target Volumes, which are of utmost interest to the clinicians, and simultaneously performs very well on the 'Other' organs. We performed comprehensive experiments by varying input data modalities to show that using masked images and masked dose data with text outperforms the combination of other input modalities. We also undersampled the majority class, i.e., the 'Other' class, at different degrees and conducted extensive experiments to demonstrate that a small amount of majority class undersampling is essential for superior performance. Overall, our proposed integrated, deep neural network-based architecture for prostate structure name standardization can solve several challenges associated with multimodal data. The VGG network on the masked image-dose data combined with CNNs on the text data performs the best and presents the state of the art in this domain.
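The textual branch of such a model, a character-level CNN over structure names, can be sketched as a 1-D convolution over character embeddings followed by ReLU and max-over-time pooling. The vocabulary, the fixed random embedding table, and the filter shapes below are hypothetical stand-ins for the learned parameters of the actual model:

```python
import numpy as np

# Hypothetical character vocabulary; the real model's embeddings and
# class set (OAR / PTV / 'Other') are learned, not fixed as here.
VOCAB = "abcdefghijklmnopqrstuvwxyz0123456789_ "

def embed(name, dim=8, seed=0):
    """Look up a fixed random embedding vector for each character."""
    rng = np.random.default_rng(seed)
    table = rng.standard_normal((len(VOCAB), dim))
    idx = [VOCAB.index(c) for c in name.lower() if c in VOCAB]
    return table[idx]                          # shape: (n_chars, dim)

def conv1d_maxpool(x, filters):
    """1-D convolution over the character axis followed by max-over-time
    pooling: the feature extractor at the core of a text CNN."""
    k, dim, n_f = filters.shape                # kernel width, embed dim, #filters
    windows = np.stack([x[i:i + k] for i in range(len(x) - k + 1)])
    feats = np.einsum('wkd,kdf->wf', windows, filters)
    return np.maximum(feats, 0).max(axis=0)    # ReLU, then max pool

# A nonstandard clinical label, as a physician might enter it:
name = "lt_femoral_head"
x = embed(name)
filters = np.random.default_rng(1).standard_normal((3, 8, 16))
features = conv1d_maxpool(x, filters)          # 16-dim feature vector per name
```

In the full pipeline this feature vector would feed a softmax classifier over standardized names and be concatenated with VGG/ResNet features from the masked image and dose channels.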
Organ-focused mutual information for nonrigid multimodal registration of liver CT and Gd–EOB–DTPA-enhanced MRI
Accurate detection of liver lesions is of great importance in hepatic surgery planning. Recent studies have shown that the detection rate of liver lesions is significantly higher in gadoxetic acid-enhanced magnetic resonance imaging (Gd–EOB–DTPA-enhanced MRI) than in contrast-enhanced portal-phase computed tomography (CT); however, the latter remains essential because of its high specificity, good performance in estimating liver volumes and better vessel visibility. To characterize liver lesions using both the above image modalities, we propose a multimodal nonrigid registration framework using organ-focused mutual information (OF-MI). This proposal tries to improve mutual information (MI)-based registration by adding spatial information, benefiting from the availability of expert liver segmentation in clinical protocols. The incorporation of an additional information channel containing liver segmentation information was studied. A dataset of real clinical images and simulated images was used in the validation process. A Gd–EOB–DTPA-enhanced MRI simulation framework is presented. To evaluate results, warping index errors were calculated for the simulated data, and landmark-based and surface-based errors were calculated for the real data. An improvement of the registration accuracy for OF-MI as compared with MI was found for both simulated and real datasets. Statistical significance of the difference was tested and confirmed in the simulated dataset (p < 0.01).
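A minimal sketch of the underlying similarity measure: mutual information computed from a joint intensity histogram, plus one plausible way an expert liver segmentation could focus it on the organ of interest. The blend of global and mask-restricted MI and the weight `alpha` are assumptions for illustration, not the paper's OF-MI formulation:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information estimated from the joint intensity histogram
    of two images of equal size."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)        # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of image b
    nz = pxy > 0                               # avoid log(0) terms
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def organ_focused_mi(fixed, moving, liver_mask, alpha=0.5):
    """Hedged sketch of an organ-focused MI: blend the global MI with MI
    restricted to the expert liver segmentation (the extra spatial
    channel). `alpha` is an assumed weighting, not the paper's."""
    global_mi = mutual_information(fixed, moving)
    organ_mi = mutual_information(fixed[liver_mask], moving[liver_mask])
    return (1 - alpha) * global_mi + alpha * organ_mi
```

A nonrigid registration loop would maximize this measure over the deformation parameters, resampling `moving` (the MRI) into the CT frame at each iteration.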
- …