24 research outputs found
A deep level set method for image segmentation
This paper proposes a novel image segmentation approach that integrates fully
convolutional networks (FCNs) with a level set model. Compared with an FCN
alone, the integrated method can incorporate smoothing and prior information
to achieve a more accurate segmentation. Furthermore, rather than using the
level set model as a post-processing tool, we integrate it into the training
phase to fine-tune the FCN. This allows the use of unlabeled data during
training in a semi-supervised setting. Using two types of medical imaging data
(liver CT and left ventricle MRI data), we show that the integrated method
achieves good performance even when little training data is available,
outperforming the FCN or the level set model alone.
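The abstract describes refining an FCN's output with a level set model. As a toy illustration only (not the paper's formulation; the probability map, the averaging-based smoothing, and all values below are my own assumptions), one can treat the FCN's probability map as an initial level-set function phi = p - 0.5, smooth phi iteratively, and re-threshold at zero — spurious isolated responses are removed while large coherent regions survive:

```python
# Toy sketch: level-set-style refinement of an FCN probability map.
# The FCN output p in [0, 1] becomes phi = p - 0.5; smoothing phi and
# thresholding at 0 mimics the regularizing effect the paper attributes
# to the level set model. All numbers here are illustrative.

def smooth_step(phi):
    """One smoothing iteration: replace each value with the mean of its
    in-bounds 3x3 neighborhood."""
    h, w = len(phi), len(phi[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [phi[ii][jj]
                    for ii in range(max(0, i - 1), min(h, i + 2))
                    for jj in range(max(0, j - 1), min(w, j + 2))]
            out[i][j] = sum(vals) / len(vals)
    return out

def refine(prob_map, iterations=3):
    """Refine a probability map into a binary mask via smoothing."""
    phi = [[p - 0.5 for p in row] for row in prob_map]
    for _ in range(iterations):
        phi = smooth_step(phi)
    return [[1 if v > 0 else 0 for v in row] for v_row, row in zip(phi, phi) for row in [v_row]][:0] or \
           [[1 if v > 0 else 0 for v in row] for row in phi]

# A 5x5 map: a solid foreground block plus one spurious noisy pixel at (4, 4).
prob = [[0.9, 0.9, 0.9, 0.1, 0.1],
        [0.9, 0.9, 0.9, 0.1, 0.1],
        [0.9, 0.9, 0.9, 0.1, 0.1],
        [0.1, 0.1, 0.1, 0.1, 0.1],
        [0.1, 0.1, 0.1, 0.1, 0.9]]
mask = refine(prob)
# The isolated pixel is smoothed away; the interior of the block survives.
```

This captures only the smoothing role of the level set term; the paper's actual contribution is folding such a term into the FCN training loss so that unlabeled images can also be used.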
Neuron Segmentation Using Deep Complete Bipartite Networks
In this paper, we consider the problem of automatically segmenting neuronal
cells in dual-color confocal microscopy images. This problem is a key task in
various quantitative analysis applications in neuroscience, such as tracing
cell genesis in Danio rerio (zebrafish) brains. Deep learning, especially using
fully convolutional networks (FCN), has profoundly changed segmentation
research in biomedical imaging. We face two major challenges in this problem.
First, neuronal cells may form dense clusters, making it difficult to correctly
identify all individual cells (even to human experts). Consequently,
segmentation results of the known FCN-type models are not accurate enough.
Second, pixel-wise ground truth is difficult to obtain. Only a limited amount
of approximate instance-wise annotation can be collected, which makes the
training of FCN models quite cumbersome. We propose a new FCN-type deep
learning model, called deep complete bipartite networks (CB-Net), and a new
scheme for leveraging approximate instance-wise annotation to train our
pixel-wise prediction model. Evaluated using seven real datasets, our proposed
new CB-Net model outperforms the state-of-the-art FCN models and produces
neuron segmentation results of remarkable quality.
Comment: MICCAI 201
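The abstract does not detail the CB-Net architecture, but the name "complete bipartite" suggests a skip-connection pattern (this is my reading, not a claim from the paper): where a U-Net connects encoder level i only to decoder level i, a complete bipartite scheme would connect every encoder level to every decoder level:

```python
# Hedged sketch of the connectivity the name "complete bipartite" suggests
# (an assumption, not the paper's exact layer design). Each pair (i, j)
# denotes a connection from encoder level i to decoder level j.

def unet_skips(levels):
    """U-Net style: one-to-one skip connections."""
    return [(i, i) for i in range(levels)]

def complete_bipartite_skips(levels):
    """Complete bipartite: every encoder level feeds every decoder level."""
    return [(i, j) for i in range(levels) for j in range(levels)]

uskips = unet_skips(3)                      # 3 connections
cbskips = complete_bipartite_skips(3)       # 9 connections
```

The denser connectivity would let each decoder stage see features at every scale, which is plausibly why the model copes better with densely clustered cells.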
Hierarchical multi-class segmentation of glioma images using networks with multi-level activation function
For many segmentation tasks, especially in biomedical imaging, the
topological prior is vital information that is useful to exploit.
Containment/nesting is a typical inter-class geometric relationship. In the
MICCAI brain tumor segmentation challenge, with its three hierarchically
nested classes 'whole tumor', 'tumor core', and 'active tumor', we introduce
this nested-class relationship into a 3D-residual-U-Net architecture. The
network comprises a context aggregation pathway and a localization pathway:
the former encodes increasingly abstract representations of the input as the
network deepens, and the latter recombines these representations with
shallower features to precisely localize the domain of interest. The
nested-class prior is incorporated by proposing a multi-level activation
function and its corresponding loss function. The model is trained on the
BraTS 2018 training dataset, with 20% of the data held out as a validation
set to determine parameters. Once the parameters are fixed, we retrain the
model on the whole training dataset. The performance achieved on the
validation leaderboard is 86%, 77%, and 72% Dice scores for the whole tumor,
enhancing tumor, and tumor core classes, without relying on ensembles or
complicated post-processing steps. Based on the same state-of-the-art network
architecture, the accuracy on the nested class (enhancing tumor) improves
from 69% to 72% compared with the traditional softmax-based method, which is
blind to the topological prior.
Comment: 12 pages, first version
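One way to see why a single multi-level activation can enforce nesting (a sketch of the general idea only; the threshold values and names below are illustrative assumptions, not the paper's exact activation or loss): if each voxel gets one scalar output and the classes are ordered background < whole tumor < tumor core < enhancing tumor, then thresholding that scalar guarantees containment by construction — any voxel labeled "enhancing tumor" necessarily clears the lower thresholds too:

```python
# Illustrative ordinal encoding of hierarchically nested classes on a
# single output channel. Thresholds and class names are assumptions for
# the sketch; the paper defines its own multi-level activation and loss.

THRESHOLDS = [0.25, 0.5, 0.75]  # illustrative cut points on [0, 1]
CLASSES = ["background", "whole_tumor", "tumor_core", "enhancing_tumor"]

def ordinal_label(y):
    """Map a scalar activation in [0, 1] to a nested class name."""
    level = sum(y >= t for t in THRESHOLDS)
    return CLASSES[level]

def nested_masks(activations):
    """Binary masks per nested region; each mask contains the next one."""
    labels = [[ordinal_label(y) for y in row] for row in activations]
    masks = {}
    for k, name in enumerate(CLASSES[1:], start=1):
        masks[name] = [[1 if CLASSES.index(lab) >= k else 0 for lab in row]
                       for row in labels]
    return masks

acts = [[0.1, 0.3, 0.6],
        [0.3, 0.8, 0.6],
        [0.1, 0.3, 0.1]]
m = nested_masks(acts)
```

In contrast, a per-class softmax treats the three labels as unrelated, so nothing prevents an "enhancing tumor" voxel from falling outside the predicted "tumor core" — the topology blindness the abstract criticizes.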
Stacked fully convolutional networks with multi-channel learning: application to medical image segmentation
The automated segmentation of regions of interest (ROIs) in medical imaging is a fundamental requirement for the derivation of high-level semantics for image analysis in clinical decision support systems. Traditional segmentation approaches, such as region-based methods, depend heavily upon hand-crafted features and a priori knowledge of the user. As such, these methods are difficult to adopt within a clinical environment. Recently, methods based on fully convolutional networks (FCN) have achieved great success in the segmentation of general images. FCNs leverage a large labeled dataset to hierarchically learn the features that best correspond to the shallow appearance as well as the deep semantics of the images. However, when applied to medical images, FCNs usually produce coarse ROI detection and poor boundary definitions, primarily due to the limited amount of labeled training data and limited constraints of label agreement among neighboring similar pixels. In this paper, we propose a new stacked FCN architecture with multi-channel learning (SFCN-ML). We embed the FCN in a stacked architecture to learn the foreground ROI features and background non-ROI features separately and then integrate these different channels to produce the final segmentation result. In contrast to traditional FCN methods, our SFCN-ML architecture enables the visual attributes and semantics derived from both the fore- and background channels to be iteratively learned and inferred. We conducted extensive experiments on three public datasets with a variety of visual challenges. Our results show that our SFCN-ML is more effective and robust than a routine FCN and its variants, and other state-of-the-art methods.
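The final integration step — combining a foreground channel and a separately learned background channel — can be sketched minimally as follows (an illustrative fusion rule of my own, not the SFCN-ML architecture; the scores below are made up): each channel scores its own hypothesis per pixel, and the final mask is taken where foreground evidence dominates after per-pixel normalization:

```python
# Minimal sketch of two-channel fusion: a foreground channel scores ROI
# evidence, a background channel independently scores non-ROI evidence,
# and the two are normalized per pixel. Scores and the fusion rule are
# illustrative assumptions, not the paper's method.

def fuse(fg, bg, eps=1e-8):
    """Normalize the two channel scores per pixel; keep pixels where the
    foreground share exceeds 0.5."""
    h, w = len(fg), len(fg[0])
    return [[1 if fg[i][j] / (fg[i][j] + bg[i][j] + eps) > 0.5 else 0
             for j in range(w)] for i in range(h)]

# The foreground channel is unsure at (0, 1); the weak background score
# there resolves the pixel in favor of foreground.
fg_scores = [[0.9, 0.5, 0.2],
             [0.8, 0.6, 0.1]]
bg_scores = [[0.1, 0.2, 0.9],
             [0.1, 0.3, 0.9]]
mask = fuse(fg_scores, bg_scores)
```

The point of learning the background channel explicitly, rather than treating it as 1 minus the foreground, is that ambiguous foreground scores can be disambiguated by independent non-ROI evidence, which is the intuition behind the stacked design.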