Ensemble CNN Networks for GBM Tumors Segmentation Using Multi-parametric MRI
Glioblastomas are the most aggressive, fast-growing primary brain cancers, originating in the glial cells of the brain. Accurate identification of the malignant tumor and its sub-regions remains one of the most challenging problems in medical image segmentation. The Brain Tumor Segmentation (BraTS) challenge has been a popular benchmark for automatic glioblastoma segmentation algorithms since its initiation. This year, the BraTS 2021 challenge provides the largest multi-parametric MRI (mpMRI) dataset to date, covering 2,000 pre-operative patients. In this paper, we propose a new aggregation of two deep learning frameworks, DeepSeg and nnU-Net, for automatic glioblastoma recognition in pre-operative mpMRI. Our ensemble method obtains Dice similarity scores of 92.00, 87.33, and 84.10 and Hausdorff distances of 3.81, 8.91, and 16.02 for the enhancing tumor, tumor core, and whole tumor regions, respectively, on the BraTS 2021 validation set, ranking us among the top ten teams. These experimental findings provide evidence that the method can be readily applied clinically, thereby aiding brain cancer prognosis, therapy planning, and therapy response monitoring. A Docker image for reproducing our segmentation results is available online at https://hub.docker.com/r/razeineldin/deepseg21
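The abstract does not state how the two frameworks' predictions are fused; a minimal sketch of one common ensembling scheme, averaging per-class softmax probability maps before taking the argmax, on hypothetical toy data:

```python
import numpy as np

# Hypothetical per-voxel class probabilities from two models (stand-ins for
# DeepSeg and nnU-Net outputs) on a tiny 4x4 slice with 3 classes.
rng = np.random.default_rng(0)
p_deepseg = rng.dirichlet(np.ones(3), size=(4, 4))  # shape (4, 4, 3)
p_nnunet = rng.dirichlet(np.ones(3), size=(4, 4))

# Probability averaging: mean of class distributions, then argmax per voxel.
p_ensemble = (p_deepseg + p_nnunet) / 2.0
labels = np.argmax(p_ensemble, axis=-1)  # final ensemble segmentation map
```

The averaged maps remain valid probability distributions, so the same argmax decision rule applies as for a single model.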
Self-supervised iRegNet for the Registration of Longitudinal Brain MRI of Diffuse Glioma Patients
Reliable and accurate registration of patient-specific brain magnetic
resonance imaging (MRI) scans containing pathologies is challenging due to
tissue appearance changes. This paper describes our contribution to the
longitudinal brain MRI registration task of the Brain Tumor Sequence
Registration Challenge 2022 (BraTS-Reg 2022). We developed an enhanced
self-supervised learning-based method that extends iRegNet. In particular,
incorporating a self-supervised learning paradigm as well as several minor
modifications to the network pipeline allows the enhanced iRegNet method to
achieve respectable results. Experimental findings show that the enhanced
self-supervised model improves the initial mean (median) registration
absolute error (MAE) from 8.20 (7.62) mm to the lowest value of 3.51 (3.50) mm
for the training set, while achieving an MAE of 2.93 (1.63) mm for the
validation set. Additional qualitative validation of this study was conducted
by overlaying pre-post MRI pairs before and after the deformable registration.
The proposed method scored 5th place during the testing phase of the MICCAI
BraTS-Reg 2022 challenge. The Docker image to reproduce our BraTS-Reg
submission results will be publicly available. Comment: Accepted in the MICCAI
BraTS-Reg 2022 Challenge (as part of the BrainLes workshop proceedings
distributed by Springer LNCS).
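The mean and median registration absolute errors reported above are summary statistics over per-landmark Euclidean distances; a minimal sketch of their computation on invented toy landmark coordinates (in millimetres):

```python
import numpy as np

# Toy landmark sets (assumed, not challenge data): positions in the fixed
# scan and the corresponding positions after registration, in mm.
fixed = np.array([[10.0, 20.0, 30.0], [15.0, 25.0, 35.0], [12.0, 22.0, 28.0]])
warped = np.array([[11.0, 21.0, 30.5], [15.5, 24.0, 34.0], [13.0, 22.5, 29.0]])

# Euclidean error per landmark, then the two summary statistics.
errors = np.linalg.norm(fixed - warped, axis=1)
mean_err, median_err = errors.mean(), np.median(errors)
```

The same statistics computed before registration (on the unregistered landmarks) give the "initial" errors that the method is measured against.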
DeepSeg: Deep Neural Network Framework for Automatic Brain Tumor Segmentation using Magnetic Resonance FLAIR Images
Purpose: Gliomas are the most common and aggressive type of brain tumors due
to their infiltrative nature and rapid progression. Distinguishing tumor
boundaries from healthy cells remains a challenging task in the clinical
routine. The Fluid-Attenuated Inversion Recovery (FLAIR) MRI modality can
provide the physician with information about tumor infiltration. Therefore,
this paper proposes a new generic deep learning architecture, namely DeepSeg,
for fully automated detection and segmentation of brain lesions using FLAIR
MRI data.
Methods: The developed DeepSeg is a modular, decoupled framework. It consists
of two connected core parts based on an encoding and decoding relationship.
The encoder part is a convolutional neural network (CNN) responsible for
spatial information extraction. The resulting semantic map is fed into the
decoder part to obtain the full-resolution probability map. Based on a
modified U-Net architecture, different CNN models such as Residual Neural
Network (ResNet), Dense Convolutional Network (DenseNet), and NASNet have been
utilized in this study.
Results: The proposed deep learning architectures have been successfully
tested and evaluated online on the MRI dataset of the Brain Tumor Segmentation
(BraTS 2019) challenge, including 336 cases as training data and 125 cases as
validation data. The Dice and Hausdorff distance scores of the obtained
segmentation results are about 0.81 to 0.84 and 9.8 to 19.7, respectively.
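The Dice similarity coefficient used in these results is a standard overlap measure between binary masks; a minimal sketch on toy 2D masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy masks (assumed): prediction overlaps the ground truth partially.
gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True      # ground-truth lesion, 16 pixels
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 3:7] = True    # predicted lesion, 16 pixels, shifted by one

score = dice(gt, pred)   # overlap is 3x3 = 9 px, so 2*9/32 = 0.5625
```

A Dice of 1.0 means perfect overlap and 0.0 no overlap; the complementary Hausdorff distance instead measures the worst-case boundary disagreement.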
Conclusion: This study showed the feasibility and comparative performance of
applying different deep learning models in a new DeepSeg framework for
automated brain tumor segmentation in FLAIR MR images. The proposed DeepSeg is
open-source and freely available at https://github.com/razeineldin/DeepSeg/.
Comment: Accepted to International Journal of Computer Assisted Radiology and
Surgery.
Slicer-DeepSeg: Open-Source Deep Learning Toolkit for Brain Tumour Segmentation
Purpose
Computerized medical image processing assists neurosurgeons in localizing tumours precisely and plays a key role in modern image-guided neurosurgery. Hence, we developed a new open-source toolkit, namely Slicer-DeepSeg, for efficient and automatic brain tumour segmentation based on deep learning methodologies, to aid clinical brain research.
Methods
Our developed toolkit consists of three main components. First, Slicer-DeepSeg extends the 3D Slicer application and thus provides support for multiple input/output data formats and 3D visualization libraries. Second, Slicer core modules offer powerful image processing and analysis utilities. Third, the Slicer-DeepSeg extension provides a customized GUI for brain tumour segmentation using deep learning-based methods.
Results
The developed Slicer-DeepSeg was validated using a public dataset of high-grade glioma patients. The results showed that our proposed platform considerably outperforms other 3D Slicer cloud-based approaches.
Conclusions
The developed Slicer-DeepSeg allows the development of novel AI-assisted medical applications in neurosurgery. Moreover, it can enhance the outcomes of computer-aided diagnosis of brain tumours. The open-source Slicer-DeepSeg is available at github.com/razeineldin/Slicer-DeepSeg
Deep automatic segmentation of brain tumours in interventional ultrasound data
Intraoperative imaging can assist neurosurgeons in delineating brain tumours and surrounding brain structures. Interventional ultrasound (iUS) is a convenient modality with fast scan times. However, iUS data may suffer from noise and artefacts which limit their interpretation during brain surgery. In this work, we use two deep learning networks, namely UNet and TransUNet, for automatic and accurate segmentation of brain tumours in iUS data. Experiments were conducted on a dataset of 27 iUS volumes. The outcomes show that combining a transformer with UNet is advantageous, providing efficient segmentation by modelling long-range dependencies within each iUS image. In particular, the enhanced TransUNet was able to predict cavity segmentation in iUS data with an inference rate of more than 125 FPS. These promising results suggest that deep learning networks can be successfully deployed to assist neurosurgeons in the operating room
Mechanistic within-host models of the asexual Plasmodium falciparum infection: a review and analytical assessment
BACKGROUND: Malaria blood-stage infection length and intensity are important drivers of disease and transmission; however, the underlying mechanisms of parasite growth and the host's immune response during infection remain largely unknown. Over the last 30 years, several mechanistic mathematical models of malaria parasite within-host dynamics have been published and used in malaria transmission models. METHODS: Mechanistic within-host models of parasite dynamics were identified through a review of published literature. For a subset of these, model code was reproduced and descriptive statistics compared between the models using fitted data. Through simulation and model analysis, key features of the models were compared, including assumptions on growth, immune response components, variant switching mechanisms, and inter-individual variability. RESULTS: The assessed within-host malaria models generally replicate infection dynamics in malaria-naive individuals. However, there are substantial differences between the model dynamics after disease onset, and models do not always reproduce the late-infection parasitaemia data from within-host infections used for calibration. Models have attempted to capture the considerable variability in parasite dynamics between individuals by including stochastic parasite multiplication rates; variant switching dynamics leading to immune escape; variable effects of the host immune responses; or via probabilistic events. For models that capture realistic lengths of infection, model representations of innate immunity explain early peaks in infection density that cause clinical symptoms, and model representations of antibody immune responses control the length of infection. Models differed in their assumptions concerning variant switching dynamics, reflecting uncertainty in the underlying mechanisms of variant switching revealed by recent clinical data during early infection. Overall, given the scarcity of biological evidence, there is limited support for complex models. CONCLUSIONS: This study suggests that much of the inter-individual variability observed in clinical malaria infections has traditionally been attributed in models to random variability, rather than mechanistic disease dynamics. Thus, it is proposed that newly developed models should assume simple immune dynamics that minimally capture mechanistic understandings and avoid over-parameterization and large stochasticity which inaccurately represent unknown disease mechanisms
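The mechanistic ingredients compared above (a multiplication rate, an innate response that caps the early peak, an antibody response that ends the infection) can be illustrated with a toy discrete-cycle simulation. This is an invented minimal form with assumed parameter values, not any of the reviewed models:

```python
import numpy as np

# Assumed toy parameters, one step = one 48h asexual replication cycle.
r = 10.0          # parasite multiplication rate per cycle
k_innate = 1e4    # density at which the innate response halves growth
ab_rate = 0.15    # per-cycle ramp-up of the antibody kill term

p, antibody, traj = 10.0, 0.0, []
for cycle in range(30):
    growth = r / (1.0 + p / k_innate)     # innate immunity caps the peak
    p = p * growth * np.exp(-antibody)    # antibody response kills parasites
    antibody += ab_rate                   # adaptive immunity builds over time
    traj.append(p)
```

Even this caricature reproduces the qualitative shape discussed in the review: a rapid rise to an early peak limited by innate immunity, followed by a decline as the antibody term grows.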
Explainability of deep neural networks for MRI analysis of brain tumors
Purpose
Artificial intelligence (AI), in particular deep neural networks, has achieved remarkable results for medical image analysis in several applications. Yet the lack of explainability of deep neural models is considered the principal barrier to applying these methods in clinical practice.
Methods
In this study, we propose a NeuroXAI framework for explainable AI of deep learning networks to increase the trust of medical experts. NeuroXAI implements seven state-of-the-art explanation methods providing visualization maps to help make deep learning models transparent.
Results
NeuroXAI has been applied to two of the most widely investigated problems in brain imaging analysis, i.e., image classification and segmentation using the magnetic resonance (MR) modality. Visual attention maps of multiple XAI methods have been generated and compared for both applications. Another experiment demonstrated that NeuroXAI can provide information flow visualization on internal layers of a segmentation CNN.
Conclusion
Due to its open architecture, ease of implementation, and scalability to new XAI methods, NeuroXAI could be utilized to assist radiologists and medical professionals in the detection and diagnosis of brain tumors in the clinical routine of cancer patients. The code of NeuroXAI is publicly accessible at https://github.com/razeineldin/NeuroXAI
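The abstract does not list which seven explanation methods NeuroXAI implements; as a framework-free illustration of how an attribution map is built, here is a sketch of occlusion sensitivity (a classic XAI technique, not necessarily one of NeuroXAI's) against a hypothetical toy model:

```python
import numpy as np

def occlusion_map(model, image, patch=2):
    """Score drop when each patch is masked out = importance of that patch."""
    base = model(image)
    heat = np.zeros_like(image)
    for i in range(0, image.shape[0], patch):
        for j in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # mask out one patch
            heat[i:i + patch, j:j + patch] = base - model(occluded)
    return heat

# Toy "model" (assumed): scores an image by the mean of its central region,
# so only central pixels should matter.
toy_model = lambda x: float(x[2:6, 2:6].mean())
img = np.ones((8, 8))
heat = occlusion_map(toy_model, img)
```

Regions whose occlusion drops the score most get the highest heat, which is the same kind of per-pixel relevance that gradient-based attention maps visualize.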
Cellulose lattice strains and stress transfer in native and delignified wood
Small specimens of spruce wood with different degrees of delignification were studied using in-situ tensile tests and simultaneous synchrotron X-ray diffraction to reveal the effect of delignification and densification on their tensile properties at a relative humidity of 70–80%. In addition to mechanical properties, these analyses yield the ratio of strains in the cellulose crystals and in the bulk, which reflects the stress transfer to crystalline cellulose. While the specific modulus of elasticity slightly increases from native wood upon partial or complete delignification, the lattice strain ratio does not show a significant change. This could indicate a compensatory effect between the decomposition of the amorphous matrix by delignification and a tighter packing of cellulose crystals that would increase the stress transfer. The reduced strain to failure and maximum lattice strain of delignified specimens suggest that the removal of lignin affects the stress-strain behavior, with fracture at lower strain levels
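The lattice strain ratio discussed above follows from the standard diffraction definition of lattice strain, eps = (d - d0)/d0, divided by the macroscopic strain. A sketch with purely illustrative numbers (not from the experiment):

```python
# Lattice strain from an X-ray diffraction peak shift and the resulting
# strain ratio. All numbers below are assumed, for illustration only.
d0 = 1.0340   # unstrained cellulose d-spacing (arbitrary units)
d = 1.0355    # d-spacing measured under tensile load

eps_lattice = (d - d0) / d0   # strain in the cellulose crystals
eps_bulk = 0.0060             # macroscopic (bulk) tensile strain, assumed

strain_ratio = eps_lattice / eps_bulk  # fraction of bulk strain transferred
```

A ratio below one means the crystals strain less than the bulk, i.e. part of the deformation is taken up by the amorphous matrix rather than transferred to crystalline cellulose.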
iRegNet: Non-rigid Registration of MRI to Interventional US for Brain-Shift Compensation using Convolutional Neural Networks
Accurate and safe neurosurgical intervention can be affected by intra-operative tissue deformation, known as brain-shift. In this study, we propose an automatic, fast, and accurate deformable method, called iRegNet, for registering pre-operative magnetic resonance images to intra-operative ultrasound volumes to compensate for brain-shift. iRegNet is a robust end-to-end deep learning approach for the non-linear registration of MRI-iUS images in the context of image-guided neurosurgery. The pre-operative MRI (as moving image) and the iUS (as fixed image) are first fed into our convolutional neural network, after which a non-rigid transformation field is estimated. The MRI image is then transformed using the output displacement field into the iUS coordinate system. Extensive experiments have been conducted on two multi-location databases, BITE and RESECT. Quantitatively, iRegNet reduced the mean landmark errors from pre-registration values of 4.18 ± 1.84 mm and 5.35 ± 4.19 mm to the lowest values of 1.47 ± 0.61 mm and 0.84 ± 0.16 mm for the BITE and RESECT datasets, respectively. Additional qualitative validation was conducted by two expert neurosurgeons through overlaying MRI-iUS pairs before and after the deformable registration. Experimental findings show that the proposed iRegNet is fast and achieves accuracies outperforming state-of-the-art approaches. Furthermore, iRegNet delivers competitive results even on previously unseen images, as proof of its generality, and can therefore be valuable in intra-operative neurosurgical guidance
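The final step described above, transforming the moving image with the estimated displacement field, can be sketched framework-free in 2D with nearest-neighbour resampling (iRegNet's actual implementation and interpolation scheme are not specified here):

```python
import numpy as np

def warp(moving, field):
    """Resample a 2D image through a dense displacement field.

    field[..., 0] / field[..., 1] are per-pixel y/x displacements telling
    each output pixel where to sample from in the moving image.
    """
    h, w = moving.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(ys + field[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + field[..., 1]).astype(int), 0, w - 1)
    return moving[src_y, src_x]

# Toy moving image and a uniform 1-pixel rightward sampling shift (assumed).
img = np.arange(16.0).reshape(4, 4)
field = np.zeros((4, 4, 2))
field[..., 1] = 1.0
warped = warp(img, field)
```

In the registration pipeline the network outputs `field`, and the warped MRI is then overlaid on the fixed iUS volume for visual and landmark-based validation.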
Multivariate risks and depth-trimmed regions
We describe a general framework for measuring risks, where the risk measure
takes values in an abstract cone. It is shown that this approach naturally
includes the classical risk measures and set-valued risk measures and yields a
natural definition of vector-valued risk measures. Several main constructions
of risk measures are described in this abstract axiomatic framework.
It is shown that the concept of depth-trimmed (or central) regions from
multivariate statistics is closely related to the definition of risk measures.
In particular, the halfspace trimming corresponds to the Value-at-Risk, while
the zonoid trimming yields the expected shortfall. In the abstract framework,
it is shown how to establish a two-way correspondence between risk measures
and depth-trimmed regions. It is also demonstrated how the lattice structure
of the space of risk values influences this relationship. Comment: 26 pages.
Substantially revised version with a number of new results added.
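In the univariate case, the two classical risk measures named above have direct empirical estimators; a minimal sketch on simulated losses (losses taken as positive numbers, a sign convention assumed here):

```python
import numpy as np

# Simulated loss sample (assumed standard normal, for illustration).
rng = np.random.default_rng(42)
losses = rng.normal(loc=0.0, scale=1.0, size=100_000)

alpha = 0.95
var = np.quantile(losses, alpha)       # Value-at-Risk: the alpha-quantile
es = losses[losses >= var].mean()      # expected shortfall: mean tail loss
```

Expected shortfall always dominates Value-at-Risk at the same level, mirroring the fact that the zonoid-trimmed region is contained in the halfspace-trimmed one.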