Brain Tumor Synthetic Segmentation in 3D Multimodal MRI Scans
The magnetic resonance (MR) analysis of brain tumors is widely used for
diagnosis and examination of tumor subregions. The overlapping area among the
intensity distribution of healthy, enhancing, non-enhancing, and edema regions
makes the automatic segmentation a challenging task. Here, we show that a
convolutional neural network trained on high-contrast images can transform the
intensity distribution of brain lesions within their internal subregions.
Specifically, a generative adversarial network (GAN) is extended to synthesize
high-contrast images. A comparison of these synthetic images and real images of
brain tumor tissue in MR scans showed a significant segmentation improvement
while reducing the number of real channels needed for segmentation. The synthetic images
are used as a substitute for real channels and can bypass real modalities in
the multimodal brain tumor segmentation framework. Segmentation results on the
BraTS 2019 dataset demonstrate that our proposed approach can efficiently
segment the tumor areas. Finally, we predict patient survival time from
volumetric features of the tumor subregions, together with the age of each
case, using several regression models.
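The survival-prediction step described above can be sketched as a regression from volumetric features and age to survival days. The sketch below uses ordinary least squares as an illustrative stand-in for the paper's "several regression models"; the feature set and scale are assumptions, not the paper's exact configuration.

```python
import numpy as np

def fit_survival(features, days):
    """Fit survival time (days) from per-case features, e.g. tumor
    sub-region volumes plus age, via ordinary least squares
    (an illustrative stand-in for the paper's regression models)."""
    X = np.hstack([features, np.ones((len(features), 1))])  # add bias column
    coef, *_ = np.linalg.lstsq(X, days, rcond=None)
    return coef

def predict_survival(coef, features):
    X = np.hstack([features, np.ones((len(features), 1))])
    return X @ coef

# toy example: 3 volumetric features + age, exactly linear ground truth
rng = np.random.default_rng(0)
feats = rng.random((50, 4))
true_w = np.array([100.0, -40.0, 25.0, -3.0])
days = feats @ true_w + 200.0
coef = fit_survival(feats, days)
pred = predict_survival(coef, feats)
```

On exactly linear toy data, least squares recovers the generating weights, so the predictions match the targets to numerical precision.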
Massively Parallel Video Networks
We introduce a class of causal video understanding models that aims to
improve efficiency of video processing by maximising throughput, minimising
latency, and reducing the number of clock cycles. Leveraging operation
pipelining and multi-rate clocks, these models perform a minimal amount of
computation (e.g. as few as four convolutional layers) for each frame per
timestep to produce an output. The models are still very deep, with dozens of
such operations being performed but in a pipelined fashion that enables
depth-parallel computation. We illustrate the proposed principles by applying
them to existing image architectures and analyse their behaviour on two video
tasks: action recognition and human keypoint localisation. The results show
that a significant degree of parallelism, and implicitly speedup, can be
achieved with little loss in performance.
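The pipelining idea can be illustrated with a minimal simulation: at every timestep each stage processes what the previous stage produced one timestep earlier, so per-timestep work is a single layer per stage (parallelizable across depth), while outputs trail the input by depth-1 steps. The `+1` layer is a trivial stand-in for a convolutional block, an assumption for illustration only.

```python
def layer(x):
    # stand-in for one convolutional block (hypothetical: just adds 1)
    return x + 1

def pipelined_run(frames, depth):
    """Simulate a depth-parallel pipeline: regs[i] is the activation
    sitting between stage i and stage i+1."""
    regs = [None] * depth
    outputs = []
    for frame in frames:
        new_regs = [None] * depth
        new_regs[0] = layer(frame)          # stage 0 consumes the new frame
        for i in range(1, depth):
            if regs[i - 1] is not None:     # stage i consumes last step's output
                new_regs[i] = layer(regs[i - 1])
        regs = new_regs
        outputs.append(regs[-1])            # None until the pipeline fills
    return outputs

outs = pipelined_run(list(range(10)), depth=4)
```

The first `depth - 1` outputs are `None` (pipeline fill latency); thereafter each timestep emits the fully processed result of the frame from `depth - 1` steps earlier.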
Intelligent Data Networking for the Earth System Science Community
Earth system science (ESS) research is generally very data intensive. To enable detailed discovery of, and transparent access to, the data stored in heterogeneous and organisationally separated data centres, common data and metadata community interfaces are needed. This paper describes the development of a coherent data discovery and data access infrastructure for the ESS community in Germany. To comprehensively and consistently describe the characteristics of geographic data required for their discovery (discovery metadata) and for their usage (use metadata), the ISO 19115 standard is adopted. Web-service technology is used to hide the details of heterogeneous data access mechanisms and preprocessing implementations. The commitment to international standards and the modular character of the approach facilitate the expandability of the infrastructure as well as its interoperability with international partners and other communities.
Improving Whole Slide Segmentation Through Visual Context - A Systematic Study
While challenging, the dense segmentation of histology images is a necessary
first step to assess changes in tissue architecture and cellular morphology.
Although specific convolutional neural network architectures have been applied
with great success to the problem, few effectively incorporate visual context
information from multiple scales. With this paper, we present a systematic
comparison of different architectures to assess how including multi-scale
information affects segmentation performance. A publicly available breast
cancer dataset and a locally collected prostate cancer dataset are used for
this study. The results support our hypothesis that visual context and scale
play a crucial role in histology image classification problems.
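The multi-scale context idea can be sketched as extracting concentric crops at several magnifications around a location and downsampling them to a common grid before feeding them to a network. The patch sizes, scales, and strided downsampling below are illustrative assumptions, not the architectures compared in the paper.

```python
import numpy as np

def multiscale_patches(img, center, size, scales=(1, 2, 4)):
    """Crop concentric patches at several scales around `center` and
    downsample each (by simple striding) to a common size x size grid,
    so wider context enters at coarser resolution."""
    cy, cx = center
    out = []
    for s in scales:
        half = size * s // 2
        patch = img[cy - half:cy + half, cx - half:cx + half]
        out.append(patch[::s, ::s])         # stride-s subsampling
    return np.stack(out)                    # (n_scales, size, size)

img = np.arange(64 * 64).reshape(64, 64)
ms = multiscale_patches(img, (32, 32), 8)
```

Each channel of the stack covers a progressively larger field of view at the same pixel budget, which is the trade-off the paper's comparison probes.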
End-to-End Boundary Aware Networks for Medical Image Segmentation
Fully convolutional neural networks (CNNs) have proven to be effective at
representing and classifying textural information, thus transforming image
intensity into output class masks that achieve semantic image segmentation. In
medical image analysis, however, expert manual segmentation often relies on the
boundaries of anatomical structures of interest. We propose boundary aware CNNs
for medical image segmentation. Our networks are designed to account for organ
boundary information, both by providing a special network edge branch and
edge-aware loss terms, and they are trainable end-to-end. We validate their
effectiveness on the task of brain tumor segmentation using the BraTS 2018
dataset. Our experiments reveal that our approach yields more accurate
segmentation results, which makes it promising for more extensive application
to medical image segmentation.
Comment: Accepted to MICCAI Machine Learning in Medical Imaging (MLMI 2019).
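The combination of a region loss with an edge-aware loss term can be sketched as below. The boundary extraction (shift-and-compare), binary cross-entropy terms, and the 0.5 edge weight are simplified assumptions standing in for the paper's edge branch and loss design.

```python
import numpy as np

def boundary_map(mask):
    """1-pixel ground-truth edge map via shift-and-compare
    (a simple stand-in for morphological boundary extraction)."""
    b = np.zeros(mask.shape, dtype=bool)
    b[:-1, :] |= mask[:-1, :] != mask[1:, :]
    b[:, :-1] |= mask[:, :-1] != mask[:, 1:]
    return b.astype(float)

def bce(p, t, eps=1e-7):
    """Binary cross-entropy between predicted probabilities and targets."""
    p = np.clip(p, eps, 1 - eps)
    return float(-(t * np.log(p) + (1 - t) * np.log(1 - p)).mean())

def boundary_aware_loss(pred_mask, pred_edge, gt_mask, w_edge=0.5):
    # region term + weighted edge term; the weight is an assumption
    return bce(pred_mask, gt_mask) + w_edge * bce(pred_edge, boundary_map(gt_mask > 0.5))

gt = np.zeros((6, 6)); gt[2:4, 2:4] = 1.0
loss_perfect = boundary_aware_loss(gt, boundary_map(gt > 0.5), gt)
loss_wrong = boundary_aware_loss(1 - gt, 1 - boundary_map(gt > 0.5), gt)
```

A perfect prediction drives both terms near zero, while the edge term specifically penalises masks whose interior is right but whose contour is misplaced.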
Synaptic partner prediction from point annotations in insect brains
High-throughput electron microscopy allows recording of large stacks of
neural tissue with sufficient resolution to extract the wiring diagram of the
underlying neural network. Current efforts to automate this process focus
mainly on the segmentation of neurons. However, in order to recover a wiring
diagram, synaptic partners need to be identified as well. This is especially
challenging in insect brains like Drosophila melanogaster, where one
presynaptic site is associated with multiple postsynaptic elements. Here we
propose a 3D U-Net architecture to directly identify pairs of voxels that are
pre- and postsynaptic to each other. To that end, we formulate the problem of
synaptic partner identification as a classification problem on long-range
edges between voxels to encode both the presence of a synaptic pair and its
direction. This formulation allows us to directly learn from synaptic point
annotations instead of more expensive voxel-based synaptic cleft or vesicle
annotations. We evaluate our method on the MICCAI 2016 CREMI challenge and
improve over the current state of the art, producing 3% fewer errors than the
next best method.
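The long-range-edge formulation can be sketched as labelling directed edges from each presynaptic point over a fixed set of offsets, positive exactly when the edge lands on an annotated partner. This is a simplified point-level stand-in for the paper's voxelwise formulation; the pairing dictionary and offset list are assumptions.

```python
def edge_labels(pairs, offsets):
    """Binary labels for directed long-range edges (p, p + o).
    `pairs` maps each presynaptic point (z, y, x) to the set of its
    annotated postsynaptic partner points (hypothetical input format)."""
    labels = {}
    for p, posts in pairs.items():
        for o in offsets:
            q = (p[0] + o[0], p[1] + o[1], p[2] + o[2])
            labels[(p, q)] = q in posts   # edge encodes presence + direction
    return labels

labels = edge_labels(
    {(0, 0, 0): {(0, 0, 2), (0, 1, 0)}},          # one pre site, two partners
    [(0, 0, 2), (0, 1, 0), (1, 0, 0)],            # candidate offsets
)
```

Because the edge is directed (pre to post), the same voxel pair in the opposite order would be a different, negatively labelled edge.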
TuNet: End-to-end Hierarchical Brain Tumor Segmentation using Cascaded Networks
Glioma is one of the most common types of brain tumors; it arises in the
glial cells in the human brain and in the spinal cord. In addition to having a
high mortality rate, glioma treatment is also very expensive. Hence, automatic
and accurate segmentation and measurement from the early stages are critical in
order to prolong the survival rates of the patients and to reduce the costs of
the treatment. In the present work, we propose a novel end-to-end cascaded
network for semantic segmentation that utilizes the hierarchical structure of
the tumor sub-regions with ResNet-like blocks and Squeeze-and-Excitation
modules after each convolution and concatenation block. By utilizing
cross-validation, an average ensemble technique, and a simple post-processing
technique, we obtained dice scores of 88.06, 80.84, and 80.29, and Hausdorff
Distances (95th percentile) of 6.10, 5.17, and 2.21 for the whole tumor, tumor
core, and enhancing tumor, respectively, on the online test set.
Comment: Accepted at MICCAI BrainLes 2019.
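The Squeeze-and-Excitation module used after each convolution and concatenation block can be sketched in a few lines: global-average-pool the channels, pass through a small two-layer bottleneck, and rescale each channel by a learned sigmoid gate. The weight shapes and reduction ratio below are illustrative assumptions.

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    """Squeeze-and-Excitation on a (C, H, W) feature map.
    w1: (C//r, C) and w2: (C, C//r) are the bottleneck FC weights."""
    s = x.mean(axis=(1, 2))                      # squeeze: global average pool -> (C,)
    z = np.maximum(w1 @ s, 0.0)                  # excitation: FC + ReLU
    g = 1.0 / (1.0 + np.exp(-(w2 @ z)))          # FC + sigmoid -> per-channel gates
    return x * g[:, None, None]                  # rescale channels by their gates

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))               # 8 channels, reduction r = 4
w1 = rng.standard_normal((2, 8))
w2 = rng.standard_normal((8, 2))
y = squeeze_excite(x, w1, w2)
```

Since each gate lies strictly between 0 and 1, the module can only attenuate channels, never amplify them, which is what makes it a cheap channel-attention mechanism.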
Improving the Segmentation of Anatomical Structures in Chest Radiographs using U-Net with an ImageNet Pre-trained Encoder
Accurate segmentation of anatomical structures in chest radiographs is
essential for many computer-aided diagnosis tasks. In this paper we investigate
the latest fully convolutional architectures for the task of multi-class
segmentation of the lung fields, heart, and clavicles in a chest radiograph. In
addition, we explore the influence of using different loss functions in the
training process of a neural network for semantic segmentation. We evaluate all
models on a common benchmark of 247 X-ray images from the JSRT database and
ground-truth segmentation masks from the SCR dataset. Our best-performing
architecture is a modified U-Net that benefits from pre-trained encoder
weights. This model outperformed the current state-of-the-art methods tested on
the same benchmark, with Jaccard overlap scores of 96.1% for lung fields, 90.6%
for heart and 85.5% for clavicles.
Comment: Presented at the First International Workshop on Thoracic Image
Analysis (TIA), MICCAI 2018.
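The Jaccard overlap used to score these segmentations is simply intersection over union of the binary masks; a minimal sketch (the both-empty convention below is an assumption):

```python
import numpy as np

def jaccard(pred, target):
    """Jaccard (IoU) overlap between two binary masks, as used to
    score lung-field, heart, and clavicle segmentations."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect (a convention)
    return np.logical_and(pred, target).sum() / union

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
score = jaccard(a, b)  # intersection 2, union 4 -> 0.5
```

Note that Jaccard is stricter than the Dice coefficient on the same masks (J = D / (2 - D)), so the 96.1% lung-field score corresponds to an even higher Dice value.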
Unsupervised Holistic Image Generation from Key Local Patches
We introduce a new problem of generating an image based on a small number of
key local patches without any geometric prior. In this work, key local patches
are defined as informative regions of the target object or scene. This is a
challenging problem since it requires generating realistic images and
predicting locations of parts at the same time. We construct adversarial
networks to tackle this problem. A generator network generates a fake image as
well as a mask based on the encoder-decoder framework. On the other hand, a
discriminator network aims to detect fake images. The network is trained with
three losses to consider spatial, appearance, and adversarial information. The
spatial loss determines whether the locations of predicted parts are correct.
Input patches are restored in the output image without much modification due to
the appearance loss. The adversarial loss ensures output images are realistic.
The proposed network is trained without supervisory signals since no labels of
key parts are required. Experimental results on six datasets demonstrate that
the proposed algorithm performs favorably on challenging objects and scenes.
Comment: 16 pages.
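Of the three losses, the appearance term can be sketched as a masked L1 penalty that keeps the key input patches intact in the generated image. The `patch_canvas`/`mask` representation (patches pasted at their locations, with a binary mask marking those pixels) is an assumed simplification of the paper's formulation.

```python
import numpy as np

def appearance_loss(output, patch_canvas, mask):
    """Masked L1 penalty: the generated image should reproduce the key
    input patches where mask == 1 (names and layout are assumptions)."""
    m = mask.astype(float)
    denom = max(m.sum(), 1.0)                     # avoid division by zero
    return float((np.abs(output - patch_canvas) * m).sum() / denom)

canvas = np.zeros((4, 4)); canvas[1:3, 1:3] = 0.7   # one pasted key patch
mask = (canvas > 0).astype(float)
faithful = canvas.copy()            # generator restored the patch exactly
altered = np.full((4, 4), 0.2)      # generator overwrote the patch
```

Outside the mask the generator is unconstrained by this term; the adversarial loss is what pushes those free pixels toward realism.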