U-Net: Convolutional Networks for Biomedical Image Segmentation
There is broad consensus that successful training of deep networks requires
many thousands of annotated training samples. In this paper, we present a network
and training strategy that relies on the strong use of data augmentation to use
the available annotated samples more efficiently. The architecture consists of
a contracting path to capture context and a symmetric expanding path that
enables precise localization. We show that such a network can be trained
end-to-end from very few images and outperforms the prior best method (a
sliding-window convolutional network) on the ISBI challenge for segmentation of
neuronal structures in electron microscopic stacks. Using the same network
trained on transmitted light microscopy images (phase contrast and DIC) we won
the ISBI cell tracking challenge 2015 in these categories by a large margin.
Moreover, the network is fast. Segmentation of a 512x512 image takes less than
a second on a recent GPU. The full implementation (based on Caffe) and the
trained networks are available at
http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net
Comment: conditionally accepted at MICCAI 2015
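To make the contracting/expanding idea concrete, here is a toy, NumPy-only sketch of the data flow only (no learned convolutions; in the real network each level applies convolutions, and the expanding path concatenates skip features rather than adding them):

```python
import numpy as np

def downsample(x):
    # 2x2 max pooling: the contracting path halves spatial resolution
    # at each level to capture wider context
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample(x):
    # nearest-neighbour upsampling: the expanding path restores resolution
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_like(x, depth=2):
    # Contracting path: store feature maps for the skip connections.
    skips = []
    for _ in range(depth):
        skips.append(x)
        x = downsample(x)
    # Expanding path: upsample and fuse with the stored skip features,
    # which is what gives the architecture its precise localization.
    for skip in reversed(skips):
        x = upsample(x) + skip  # the real U-Net concatenates, then convolves
    return x

img = np.random.rand(512, 512)
out = unet_like(img)
```

The symmetric shape flow (512 → 256 → 128 → 256 → 512 here) is the "U" the name refers to.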
Automatically Designing CNN Architectures for Medical Image Segmentation
Deep neural network architectures have traditionally been designed and
explored with human expertise in a long-lasting trial-and-error process. This
process requires a huge amount of time, expertise, and resources. To address this
tedious problem, we propose a novel algorithm that automatically finds optimal
hyperparameters of a deep network architecture. We specifically focus on
designing neural architectures for the medical image segmentation task. Our
proposed method is based on policy gradient reinforcement learning, in which
the reward is a segmentation evaluation utility (i.e., the Dice index). We show
the efficacy of the proposed method and its low computational cost in
comparison with state-of-the-art medical image segmentation
networks. We also present a new architecture design, a densely connected
encoder-decoder CNN, as a strong baseline architecture to apply the proposed
hyperparameter search algorithm. We apply the proposed algorithm to each layer
of the baseline architectures. As an application, we train the proposed system
on cine cardiac MR images from Automated Cardiac Diagnosis Challenge (ACDC)
MICCAI 2017. Starting from a baseline segmentation architecture, the resulting
network architecture obtains the state-of-the-art results in accuracy without
performing any trial-and-error architecture design or close supervision of
hyperparameter changes.
Comment: Accepted to Machine Learning in Medical Imaging (MLMI 2018)
PHT-bot: Deep-Learning based system for automatic risk stratification of COPD patients based upon signs of Pulmonary Hypertension
Chronic Obstructive Pulmonary Disease (COPD) is a leading cause of morbidity
and mortality worldwide. Identifying those at highest risk of deterioration
would allow more effective distribution of preventative and surveillance
resources. Secondary pulmonary hypertension is a manifestation of advanced
COPD, which can be reliably diagnosed by the main Pulmonary Artery (PA) to
Ascending Aorta (Ao) ratio. In effect, a PA diameter to Ao diameter ratio of
greater than 1 has been demonstrated to be a reliable marker of increased
pulmonary arterial pressure. Although clinically valuable and readily
visualized, the manual assessment of the PA and the Ao diameters is time
consuming and under-reported. The present study describes a non-invasive method
to measure the diameters of both the Ao and the PA from contrast-enhanced chest
Computed Tomography (CT). The solution applies deep learning techniques in
order to select the correct axial slice to measure, and to segment both
arteries. The system achieves test Pearson correlation coefficient scores of
93% for the Ao and 92% for the PA. To the best of our knowledge, it is the
first such fully automated solution.
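The risk marker described above reduces to a simple ratio check once both diameters are measured; a minimal sketch (hypothetical diameters, in millimetres):

```python
def pht_risk_marker(pa_diameter_mm, ao_diameter_mm):
    # A main Pulmonary Artery (PA) to Ascending Aorta (Ao) diameter ratio
    # greater than 1 is the reported marker of increased pulmonary
    # arterial pressure in advanced COPD
    ratio = pa_diameter_mm / ao_diameter_mm
    return ratio, ratio > 1.0

# Example measurements (made up for illustration):
ratio, at_risk = pht_risk_marker(33.0, 30.0)
```

The deep-learning pipeline's job is to automate the two measurements feeding this check: selecting the correct axial CT slice and segmenting both arteries.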
An Adversarial Super-Resolution Remedy for Radar Design Trade-offs
Radar is of vital importance in many fields, such as autonomous driving,
safety and surveillance applications. However, it suffers from stringent
constraints on its design parametrization leading to multiple trade-offs. For
example, the bandwidth in FMCW radars is inversely proportional to both the
maximum unambiguous range and range resolution. In this work, we introduce a
new method for circumventing radar design trade-offs. We propose the use of
recent advances in computer vision, more specifically generative adversarial
networks (GANs), to enhance low-resolution radar acquisitions into higher
resolution counterparts while maintaining the advantages of the low-resolution
parametrization. The capability of the proposed method was evaluated on the
velocity resolution and range-azimuth trade-offs in micro-Doppler signatures
and FMCW uniform linear array (ULA) radars, respectively.
Comment: Accepted at EUSIPCO 2019, 5 pages
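For the range-resolution side of the trade-off, the standard FMCW relation ΔR = c / (2B) shows why a narrower chirp bandwidth B coarsens the range cell (a textbook formula, not code from the paper):

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    # ΔR = c / (2B): halving the chirp bandwidth doubles the range
    # resolution cell, which is the kind of trade-off the GAN-based
    # super-resolution is meant to circumvent
    return C / (2.0 * bandwidth_hz)

low_res = range_resolution_m(150e6)   # 150 MHz -> ~1.0 m cells
high_res = range_resolution_m(600e6)  # 600 MHz -> ~0.25 m cells
```

A GAN trained to map low-resolution acquisitions to high-resolution counterparts would, in effect, let a radar keep the cheaper 150 MHz parametrization while approximating 600 MHz output quality.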
Impact of adversarial examples on deep learning models for biomedical image segmentation
Deep learning models, which are increasingly being used in the field of medical image analysis, come with a major security risk, namely, their vulnerability to adversarial examples. Adversarial examples are carefully crafted samples that force machine learning models to make mistakes at test time. These malicious samples have been shown to be highly effective at misleading classification models. However, research on the influence of adversarial examples on segmentation is significantly lacking. Given that a large portion of medical imaging problems are effectively segmentation problems, we analyze the impact of adversarial examples on deep learning-based image segmentation models. Specifically, we expose the vulnerability of these models to adversarial examples by proposing the Adaptive Segmentation Mask Attack (ASMA). This novel algorithm makes it possible to craft targeted adversarial examples that come with (1) high intersection-over-union rates between the target adversarial mask and the prediction and (2) perturbation that is, for the most part, invisible to the naked eye. We lay out experimental and visual evidence with results obtained on the ISIC skin lesion segmentation challenge and the problem of glaucoma optic disc segmentation. An implementation of this algorithm and additional examples can be found at https://github.com/utkuozbulak/adaptive-segmentation-mask-attack
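Criterion (1) above is the ordinary intersection-over-union between binary masks; a minimal sketch of how that success metric is measured (illustrative arrays, not the paper's data):

```python
import numpy as np

def iou(mask_a, mask_b, eps=1e-7):
    # Intersection-over-union between a target adversarial mask and the
    # model's prediction; a targeted attack such as ASMA tries to drive
    # this value up while keeping the input perturbation imperceptible
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return (inter + eps) / (union + eps)

target = np.array([[1, 1], [1, 0]])
pred = np.array([[1, 1], [0, 0]])
score = iou(target, pred)  # 2 / 3
```

An IoU near 1 between the attacker's chosen mask and the model's output means the segmentation network has been fully steered to the target.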
Brain Tumor Synthetic Segmentation in 3D Multimodal MRI Scans
The magnetic resonance (MR) analysis of brain tumors is widely used for
diagnosis and examination of tumor subregions. The overlapping area among the
intensity distribution of healthy, enhancing, non-enhancing, and edema regions
makes automatic segmentation a challenging task. Here, we show that a
convolutional neural network trained on high-contrast images can transform the
intensity distribution of brain lesions within their internal subregions.
Specifically, a generative adversarial network (GAN) is extended to synthesize
high-contrast images. A comparison of these synthetic images and real images of
brain tumor tissue in MR scans showed significant segmentation improvement and
decreased the number of real channels for segmentation. The synthetic images
are used as a substitute for real channels and can bypass real modalities in
the multimodal brain tumor segmentation framework. Segmentation results on
BraTS 2019 dataset demonstrate that our proposed approach can efficiently
segment the tumor areas. Finally, we predict patient survival time from
volumetric features of the tumor subregions together with the age of each case,
using several regression models.
Recommended from our members
Monaural speech separation with deep learning using phase modelling and capsule networks
The removal of background noise from speech audio is a problem with high practical relevance. A variety of deep learning approaches have been applied to it in recent years, most of which operate on a magnitude spectrogram representation of a noisy recording to estimate the isolated speaking voice. This work investigates ways to include phase information, which is commonly discarded: firstly within a convolutional neural network (CNN) architecture, and secondly by applying capsule networks, to our knowledge the first time capsules have been used in source separation. We present a Circular Loss function, which takes into account the periodic nature of phase. Our results show that the inclusion of phase information leads to an improvement in the quality of speech separation. We also find that in our experiments convolutional neural networks outperform capsule networks at speech separation.
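One common way to make a loss respect the 2π-periodicity of phase is a cosine-based distance; a sketch of such a circular loss (the paper's exact formulation may differ):

```python
import numpy as np

def circular_loss(phase_pred, phase_true):
    # 1 - cos(Δφ) treats phases 2π apart as identical, so the loss does
    # not penalize a prediction that merely wraps around the circle.
    # A plain squared error on raw angles would get this wrong.
    return np.mean(1.0 - np.cos(phase_pred - phase_true))

same = circular_loss(np.array([0.0]), np.array([2 * np.pi]))  # ≈ 0
far = circular_loss(np.array([0.0]), np.array([np.pi]))       # = 2 (max)
```

The loss is 0 when predicted and true phases coincide modulo 2π and maximal when they are π apart, which is exactly the geometry phase estimation needs.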
KLUM@GTAP: Introducing biophysical aspects of land-use decisions into a general equilibrium model: A coupling experiment
In this paper the global agricultural land use model KLUM is coupled to an extended version of the computable general equilibrium model (CGE) GTAP in order to consistently assess the integrated impacts of climate change on global cropland allocation and its implication for economic development. The methodology is innovative as it introduces dynamic economic land-use decisions based also on the biophysical aspects of land into a state-of-the-art CGE; it further allows the projection of resulting changes in cropland patterns on a spatially more explicit level. A convergence test and illustrative future simulations underpin the robustness and potentials of the coupled system. Reference simulations with the uncoupled models emphasize the impact and relevance of the coupling; the results of coupled and uncoupled simulations can differ by several hundred percent.
Keywords: land-use change, computable general equilibrium modeling, integrated assessment, climate change
3D-BEVIS: Bird's-Eye-View Instance Segmentation
Recent deep learning models achieve impressive results on 3D scene analysis
tasks by operating directly on unstructured point clouds. Much progress has
been made in object classification and semantic segmentation. However,
the task of instance segmentation is less explored. In this work, we present
3D-BEVIS, a deep learning framework for 3D semantic instance segmentation on
point clouds. Following the idea of previous proposal-free instance
segmentation approaches, our model learns a feature embedding and groups the
obtained feature space into semantic instances. Current point-based methods
scale linearly with the number of points by processing local sub-parts of a
scene individually. However, to perform instance segmentation by clustering,
globally consistent features are required. Therefore, we propose to combine
local point geometry with global context information from an intermediate
bird's-eye-view representation.
Comment: camera-ready version for GCPR '19
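The proposal-free grouping step can be illustrated with a greedy distance-threshold clustering over learned embeddings (a toy stand-in for the actual clustering; the point is that globally consistent embeddings are what make such grouping possible):

```python
import numpy as np

def group_instances(embeddings, threshold=0.5):
    # Greedy grouping: points whose embeddings lie within `threshold`
    # of an existing cluster centre receive that instance id; everything
    # else starts a new instance. This mimics proposal-free instance
    # segmentation by clustering a learned feature space.
    labels = -np.ones(len(embeddings), dtype=int)
    centres = []
    for i, e in enumerate(embeddings):
        for cid, c in enumerate(centres):
            if np.linalg.norm(e - c) < threshold:
                labels[i] = cid
                break
        else:
            centres.append(e)
            labels[i] = len(centres) - 1
    return labels

# Two nearby points (one object) and one distant point (another object):
emb = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0]])
labels = group_instances(emb)
```

If the embeddings were computed per sub-part of the scene without global consistency, the same object could land in different clusters, which is the failure mode the bird's-eye-view context is meant to prevent.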