6 research outputs found
Hetero-Modal Variational Encoder-Decoder for Joint Modality Completion and Segmentation
We propose a new deep learning method for tumour segmentation when dealing
with missing imaging modalities. Instead of producing one network for each
possible subset of observed modalities or using arithmetic operations to
combine feature maps, our hetero-modal variational 3D encoder-decoder
independently embeds all observed modalities into a shared latent
representation. Missing data and the tumour segmentation can then be generated from
this embedding. In our scenario, the input is a random subset of modalities. We
demonstrate that the optimisation problem can be seen as a mixture sampling. In
addition to this, we introduce a new network architecture building upon both
the 3D U-Net and the Multi-Modal Variational Auto-Encoder (MVAE). Finally, we
evaluate our method on BraTS2018 using subsets of the imaging modalities as
input. Our model outperforms the current state-of-the-art method for dealing
with missing modalities and achieves similar performance to the subset-specific
equivalent networks.
Comment: Accepted at MICCAI 201
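The fusion described above (independent per-modality encoders, a shared latent space, and a random subset of modalities at each training step) can be sketched in NumPy. This is a toy illustration under stated assumptions, not the paper's code: the linear "encoders" and the precision-weighted (product-of-experts-style) fusion stand in for the 3D convolutional encoders and the MVAE-style combination the paper builds on.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W, b):
    """Toy per-modality encoder: a linear map to (mu, logvar)."""
    h = x @ W + b
    d = h.shape[-1] // 2
    return h[..., :d], h[..., d:]

def fuse_gaussians(mus, logvars):
    """Precision-weighted (product-of-experts-style) fusion of the
    observed modalities' Gaussian posteriors into one shared latent."""
    precisions = [np.exp(-lv) for lv in logvars]
    total_prec = sum(precisions)
    mu = sum(p * m for p, m in zip(precisions, mus)) / total_prec
    return mu, -np.log(total_prec)

n_modalities, x_dim, z_dim = 4, 8, 3
params = [(rng.standard_normal((x_dim, 2 * z_dim)), np.zeros(2 * z_dim))
          for _ in range(n_modalities)]
x = rng.standard_normal((n_modalities, x_dim))

# Training step: draw a random non-empty subset of modalities,
# embed each observed one independently, then fuse into one latent.
subset = [i for i in range(n_modalities) if rng.random() < 0.5] or [0]
mus, logvars = zip(*(encode(x[i], *params[i]) for i in subset))
mu_z, logvar_z = fuse_gaussians(list(mus), list(logvars))
z = mu_z + np.exp(0.5 * logvar_z) * rng.standard_normal(z_dim)
```

Because each modality is embedded independently before fusion, the same network handles any subset of inputs, which is what removes the need for one network per modality combination.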
Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation
In large studies involving multi-protocol Magnetic Resonance Imaging (MRI),
one or more sub-modalities may be missing for a given patient owing to
poor quality (e.g. imaging artifacts), failed acquisitions, or halfway-interrupted
imaging examinations. In some cases, certain protocols are
unavailable due to limited scan time or to retrospectively harmonise the
imaging protocols of two independent studies. Missing image modalities pose a
challenge to segmentation frameworks as complementary information contributed
by the missing scans is then lost. In this paper, we propose a novel model,
Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute
one or more missing sub-modalities for a patient scan. MGP-VAE places a
Gaussian Process (GP) prior on the Variational Autoencoder (VAE) latent space to
exploit correlations across subjects/patients and sub-modalities. Instead of designing one
network for each possible subset of present sub-modalities or using frameworks
to mix feature maps, missing data can be generated from a single model based on
all the available samples. We show the applicability of MGP-VAE on brain tumor
segmentation where one, two, or three of the four sub-modalities may be missing.
Our experiments against competitive segmentation baselines with missing
sub-modality on BraTS'19 dataset indicate the effectiveness of the MGP-VAE
model for segmentation tasks.
Comment: Accepted in MICCAI 202
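The imputation idea behind a GP prior can be illustrated with standard GP conditioning. A minimal NumPy sketch, not the MGP-VAE implementation: the latent codes are 1-D toys, and the (subject, modality) index features and RBF kernel are illustrative assumptions standing in for the learned correlation structure.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel between rows of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(1)

# Hypothetical setup: each row is a (subject_id, modality_id) feature,
# so the GP prior can correlate codes across subjects and modalities.
train_idx = np.array([[0., 0.], [0., 1.], [0., 2.], [1., 0.], [1., 1.]])
z_train = rng.standard_normal(len(train_idx))   # observed latent codes (toy, 1-D)
test_idx = np.array([[1., 2.]])                 # subject 1 is missing modality 2

K = rbf(train_idx, train_idx) + 1e-6 * np.eye(len(train_idx))  # jitter for stability
k_star = rbf(test_idx, train_idx)

# Standard GP posterior mean and variance at the missing entry.
alpha = np.linalg.solve(K, z_train)
z_missing_mean = k_star @ alpha
z_missing_var = rbf(test_idx, test_idx) - k_star @ np.linalg.solve(K, k_star.T)
```

The posterior mean borrows strength from the same subject's other modalities and from other subjects' scans of the missing modality, which is the sense in which one model based on all available samples can generate the missing data.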
Modality Attention and Sampling Enables Deep Learning with Heterogeneous Marker Combinations in Fluorescence Microscopy
Fluorescence microscopy allows for a detailed inspection of cells, cellular
networks, and anatomical landmarks by staining with a variety of
carefully-selected markers visualized as color channels. Quantitative
characterization of structures in acquired images often relies on automatic
image analysis methods. Despite the success of deep learning methods in other
vision applications, their potential for fluorescence image analysis remains
underexploited. One reason lies in the considerable workload required to train
accurate models, which are normally specific for a given combination of
markers, and therefore applicable to a very restricted number of experimental
settings. We herein propose Marker Sampling and Excite, a neural network
approach with a modality sampling strategy and a novel attention module that
together enable (i) flexible training with heterogeneous datasets with
combinations of markers and (ii) successful application of learned models to
arbitrary subsets of markers prospectively. We show that our single neural
network solution performs comparably to an upper bound scenario where an
ensemble of many networks is naïvely trained for each possible marker
combination separately. In addition, we demonstrate the feasibility of our
framework in high-throughput biological analysis by revising a recent
quantitative characterization of bone marrow vasculature in 3D confocal
microscopy datasets. Not only can our work substantially facilitate the use of
deep learning in fluorescence microscopy analysis, but it can also be utilized
in other fields with incomplete data acquisitions and missing modalities.
Comment: 17 pages, 5 figures, 3 pages supplement (3 figures)
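The two ingredients above, marker (modality) sampling during training and an attention module driven by marker availability, can be sketched as follows. All names and shapes are hypothetical; the gating MLP is a squeeze-and-excite-style stand-in for the paper's attention module, not its actual architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def excite(features, mask, W1, W2):
    """Hypothetical attention sketch: map the binary marker-availability
    mask to per-channel gates and rescale the feature channels."""
    gates = sigmoid(np.tanh(mask @ W1) @ W2)      # one gate per channel
    return features * gates[:, None, None]

channels, H, W = 5, 4, 4
img = rng.standard_normal((channels, H, W))       # toy multi-marker image

# Marker sampling: draw a random non-empty subset of markers and
# zero out the missing channels, as if they were never acquired.
mask = (rng.random(channels) < 0.6).astype(float)
if mask.sum() == 0:
    mask[0] = 1.0
x = img * mask[:, None, None]

W1 = rng.standard_normal((channels, 8))
W2 = rng.standard_normal((8, channels))
y = excite(x, mask, W1, W2)
```

Training on randomly masked marker subsets is what lets a single network later accept arbitrary subsets of markers prospectively, instead of requiring one network per marker combination.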
A Review on Brain Tumor Segmentation Based on Deep Learning Methods with Federated Learning Techniques
Brain tumors have become a severe medical complication in recent years due to their high fatality rate. Radiologists segment tumors manually, which is time-consuming, error-prone, and expensive. In recent years, automated segmentation based on deep learning has demonstrated promising results in solving computer vision problems such as image classification and segmentation. Brain tumor segmentation has recently become a prevalent task in medical imaging, determining the tumor location, size, and shape with automated methods. Many researchers have worked on various machine and deep learning approaches to determine the most optimal solution using convolutional methodologies. In this review paper, we discuss the most effective segmentation techniques based on datasets that are widely used and publicly available. We also present a survey of federated learning methodologies that enhance global segmentation performance while ensuring privacy. A comprehensive literature review of more than 100 papers summarizes the most recent techniques in segmentation and multi-modality information. Finally, we concentrate on unsolved problems in brain tumor segmentation and on client-based federated model training strategies. Based on this review, future researchers will understand the optimal solution path for these issues.
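The client-based federated training strategy the review discusses is typically realized with federated averaging (FedAvg): each site trains locally and shares only model weights, which the server aggregates weighted by local dataset size. A minimal sketch with made-up tensors, not any specific framework:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: aggregate client model weights,
    weighted by each client's local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * w for c, w in zip(coeffs, client_weights))

# Hypothetical round: three hospitals train locally on their own
# scans and send only weight tensors to the server (no raw images).
rng = np.random.default_rng(3)
clients = [rng.standard_normal((4, 4)) for _ in range(3)]
sizes = [120, 80, 200]            # local dataset sizes (made up)
global_w = fedavg(clients, sizes)
```

Because only parameters leave each site, the raw patient scans never do, which is how such a scheme improves global segmentation performance while preserving privacy.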
Medical Image Computing and Computer Assisted Intervention – MICCAI 2019: 22nd International Conference, Shenzhen, China, October 13–17, 2019, Proceedings, Part II
The six-volume set LNCS 11764, 11765, 11766, 11767, 11768, and 11769 constitutes the refereed proceedings of the 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2019, held in Shenzhen, China, in October 2019. The 539 revised full papers presented were carefully reviewed and selected from 1730 submissions in a double-blind review process. The papers are organized in the following topical sections: Part I: optical imaging; endoscopy; microscopy. Part II: image segmentation; image registration; cardiovascular imaging; growth, development, atrophy and progression. Part III: neuroimage reconstruction and synthesis; neuroimage segmentation; diffusion weighted magnetic resonance imaging; functional neuroimaging (fMRI); miscellaneous neuroimaging. Part IV: shape; prediction; detection and localization; machine learning; computer-aided diagnosis; image reconstruction and synthesis. Part V: computer assisted interventions; MIC meets CAI. Part VI: computed tomography; X-ray imaging.

Image Segmentation -- Searching Learning Strategy with Reinforcement Learning for 3D Medical Image Segmentation -- Comparative Evaluation of Hand-Engineered and Deep-Learned Features for Neonatal Hip Bone Segmentation in Ultrasound -- Unsupervised Quality Control of Image Segmentation based on Bayesian Learning -- One Network To Segment Them All: A General, Lightweight System for Accurate 3D Medical Image Segmentation -- 'Project & Excite' Modules for Segmentation of Volumetric Medical Scans -- Assessing Reliability and Challenges of Uncertainty Estimations for Medical Image Segmentation -- Learning Cross-Modal Deep Representations for Multi-Modal MR Image Segmentation -- Extreme Points Derived Confidence Map as a Cue For Class-Agnostic Segmentation Using Deep Neural Network -- Hetero-Modal Variational Encoder-Decoder for Joint Modality Completion and Segmentation -- Instance Segmentation from Volumetric Biomedical Images without Voxel-Wise
Labeling -- Optimizing the Dice Score and Jaccard Index for Medical Image Segmentation: Theory & Practice -- Dual Adaptive Pyramid Network for Cross-Stain Histopathology Image Segmentation -- HD-Net: Hybrid Discriminative Network for Prostate Segmentation in MR Images -- PHiSeg: Capturing Uncertainty in Medical Image Segmentation -- Neural Style Transfer Improves 3D Cardiovascular MR Image Segmentation on Inconsistent Data -- Supervised Uncertainty Quantification for Segmentation with Multiple Annotations -- 3D Tiled Convolution for Effective Segmentation of Volumetric Medical Images -- Hyper-Pairing Network for Multi-Phase Pancreatic Ductal Adenocarcinoma Segmentation -- Statistical intensity- and shape-modeling to automate cerebrovascular segmentation from TOF-MRA data -- Segmentation of Vessels in Ultra High Frequency Ultrasound Sequences using Contextual Memory -- Accurate Esophageal Gross Tumor Volume Segmentation in PET/CT using Two-Stream Chained 3D Deep Network Fusion -- Mixed-Supervised Dual-Network for Medical Image Segmentation -- Fully Automated Pancreas Segmentation with Two-stage 3D Convolutional Neural Networks -- Globally Guided Progressive Fusion Network for 3D Pancreas Segmentation -- Automatic Segmentation of Muscle Tissue and Inter-muscular Fat in Thigh and Calf MRI Images -- Resource Optimized Neural Architecture Search for 3D Medical Image Segmentation -- Radiomics-guided GAN for Segmentation of Liver Tumor without Contrast Agents -- Liver Segmentation in Magnetic Resonance Imaging via Mean Shape Fitting with Fully Convolutional Neural Networks -- Unsupervised Domain Adaptation via Disentangled Representations: Application to Cross-Modality Liver Segmentation -- Automatic Segmentation of Vestibular Schwannoma from T2-Weighted MRI by Deep Spatial Attention with Hardness-Weighted Loss -- Learning Shape Representation on Sparse Point Clouds for Volumetric Image Segmentation -- Collaborative Multi-agent Learning for MR Knee Articular Cartilage 
Segmentation -- 3D U2-Net: A 3D Universal U-Net for Multi-Domain Medical Image Segmentation -- Impact of Adversarial Examples on Deep Learning Segmentation Models -- Multi-Resolution Path CNN with Deep Supervision for Intervertebral Disc Localization and Segmentation -- Automatic paraspinal muscle segmentation in patients with lumbar pathology using deep convolutional neural network -- Constrained Domain Adaptation for Segmentation -- Image Registration -- Image-and-Spatial Transformer Networks for Structure-Guided Image Registration -- Probabilistic Multilayer Regularization Network for Unsupervised 3D Brain Image Registration -- A deep learning approach to MR-less spatial normalization for tau PET images -- TopAwaRe: Topology-Aware Registration -- Multimodal Data Registration for Brain Structural Association Networks -- Dual-Stream Pyramid Registration Network -- A Cooperative Autoencoder for Population-Based Regularization of CNN Image Registration -- Conditional Segmentation in Lieu of Image Registration -- On the applicability of registration uncertainty -- DeepAtlas: Joint Semi-Supervised Learning of Image Registration and Segmentation -- Linear Time Invariant Model based Motion Correction (LiMo-Moco) of Dynamic Radial Contrast Enhanced MRI -- Incompressible image registration using divergence-conforming B-splines -- Cardiovascular Imaging -- Direct Quantification for Coronary Artery Stenosis Using Multiview Learning -- Bayesian Optimization on Large Graphs via a Graph Convolutional Generative Model: Application in Cardiac Model Personalization -- Discriminative Coronary Artery Tracking via 3D CNN in Cardiac CT Angiography -- Multi-modality Whole-Heart and Great Vessel Segmentation in Congenital Heart Disease using Deep Neural Networks and Graph Matching -- Harmonic Balance Techniques in Cardiovascular Fluid Mechanics -- Deep learning within a priori temporal feature spaces for large-scale dynamic MR image reconstruction: Application to 5-D cardiac MR 
Multitasking -- k-t NEXT: Dynamic MR Image Reconstruction Exploiting Spatio-temporal Correlations -- Model-based reconstruction for highly accelerated first-pass perfusion cardiac MRI -- Learning Shape Priors for Robust Cardiac MR Segmentation from Multi-view images -- Right Ventricle Segmentation in Short-Axis MRI Using A Shape Constrained Dense Connected U-net -- Self-Supervised Learning for Cardiac MR Image Segmentation by Anatomical Position Prediction -- A Fine-Grain Error Map Prediction and Segmentation Quality Assessment Framework for Whole-Heart Segmentation -- Cardiac Segmentation from LGE MRI Using Deep Neural Network Incorporating Shape and Spatial Priors -- Curriculum semi-supervised segmentation -- A Multi-modal Network for Cardiomyopathy Death Risk Prediction with CMR Images and Clinical Information -- 3D Cardiac Shape Prediction with Deep Neural Networks: Simultaneous Use of Images and Patient Metadata -- Discriminative Consistent Domain Generation for Semi-supervised Learning -- Uncertainty-aware Self-ensembling Model for Semi-supervised 3D Left Atrium Segmentation -- MSU-Net: Multiscale Statistical U-Net for Real-time 3D Cardiac MRI Video Segmentation -- The Domain Shift Problem of Medical Image Segmentation and Vendor-Adaptation by Unet-GAN -- Cardiac MRI Segmentation with Strong Anatomical Guarantees -- Decompose-and-Integrate Learning for Multi-class Segmentation in Medical Images -- Missing Slice Imputation in Population CMR Imaging via Conditional Generative Adversarial Nets -- Unsupervised Standard Plane Synthesis in Population Cine MRI via Cycle-Consistent Adversarial Networks -- Data Efficient Unsupervised Domain Adaptation for Cross-Modality Image Segmentation -- Recurrent Aggregation Learning for Multi-View Echocardiographic Sequences Segmentation -- Echocardiography View Classification Using Quality Transfer Star Generative Adversarial Networks -- Dual-view Joint Estimation of Left Ventricular Ejection Fraction with Uncertainty Modelling 
in Echocardiograms -- Frame Rate Up-Conversion in Echocardiography Using a Conditioned Variational Autoencoder and Generative Adversarial Model -- Annotation-Free Cardiac Vessel Segmentation via Knowledge Transfer from Retinal Images -- DeepAAA: clinically applicable and generalizable detection of abdominal aortic aneurysm using deep learning -- Texture-based classification of significant stenosis in CCTA multi-view images of coronary arteries -- Fourier Spectral Dynamic Data Assimilation: Interlacing CFD with 4D flow MRI -- Quality Control-Driven Image Segmentation Towards Reliable Automatic Image Analysis in Large-Scale Cardiovascular Magnetic Resonance Aortic Cine Imaging -- HFA-Net: 3D Cardiovascular Image Segmentation with Asymmetrical Pooling and Content-Aware Fusion -- Spectral CT based training dataset generation and augmentation for conventional CT vascular segmentation -- Context-Aware Inductive Bias Learning for Vessel Border Detection in Multi-modal Intracoronary Imaging -- Growth, Development, Atrophy and Progression -- Neural parameters estimation for brain tumor growth modeling -- Learning-Guided Infinite Network Atlas Selection for Predicting Longitudinal Brain Network Evolution from a Single Observation -- Deep Probabilistic Modeling of Glioma Growth -- Surface-Volume Consistent Construction of Longitudinal Atlases for the Early Developing Brains -- Variational Autoencoder for Regression: Application to Brain Aging Analysis -- Early Development of Infant Brain Complex Network -- Revealing Developmental Regionalization of Infant Cerebral Cortex Based on Multiple Cortical Properties -- Continually Modeling Alzheimer's Disease Progression via Deep Multi-Order Preserving Weight Consolidation -- Disease Knowledge Transfer across Neurodegenerative Diseases.