Microscopy Cell Segmentation via Convolutional LSTM Networks
Live cell microscopy sequences exhibit complex spatial structures and
complicated temporal behaviour, making their analysis a challenging task.
Considering the cell segmentation problem, which plays a significant role in
this analysis, the spatial properties of the data can be captured using
Convolutional Neural Networks (CNNs). Recent approaches show promising
segmentation results using convolutional encoder-decoders such as the U-Net.
Nevertheless, these methods are limited by their inability to incorporate
temporal information, which can facilitate the segmentation of individual
touching cells or of partially visible cells. In order to exploit cell
dynamics, we propose a novel segmentation architecture which integrates
Convolutional Long Short Term Memory (C-LSTM) with the U-Net. The network's
unique architecture allows it to capture a multi-scale, compact,
spatio-temporal encoding in the C-LSTMs' memory units. The method was
evaluated on the Cell Tracking Challenge and achieved state-of-the-art results
(1st on the Fluo-N2DH-SIM+ and 2nd on the DIC-C2DL-HeLa datasets). The code is
freely available at: https://github.com/arbellea/LSTM-UNet.git
Comment: Accepted to ISBI 201
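The core of such a design is the C-LSTM update, in which the usual LSTM gates are computed by convolutions over the image grid instead of dense matrix products. Below is a minimal pure-Python sketch of one step on a tiny 1-D grid; the kernel names, sizes, and single-channel setup are illustrative assumptions, not the authors' implementation.

```python
import math

def conv1d(x, k):
    """'Same' 1-D convolution with zero padding (kernel length 3)."""
    pad = [0.0] + list(x) + [0.0]
    return [sum(k[j] * pad[i + j] for j in range(3)) for i in range(len(x))]

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def convlstm_step(x, h, c, kernels):
    """One C-LSTM step: input (i), forget (f), output (o) and candidate (g)
    gates are obtained by convolving the current frame x and hidden state h."""
    gates = {}
    for g in ("i", "f", "o", "g"):
        pre = [a + b for a, b in zip(conv1d(x, kernels["x" + g]),
                                     conv1d(h, kernels["h" + g]))]
        act = math.tanh if g == "g" else sigmoid
        gates[g] = [act(v) for v in pre]
    # Standard LSTM state update, applied pixel-wise.
    c_new = [f * cv + i * gv for f, cv, i, gv in
             zip(gates["f"], c, gates["i"], gates["g"])]
    h_new = [o * math.tanh(cv) for o, cv in zip(gates["o"], c_new)]
    return h_new, c_new
```

Feeding successive frames through `convlstm_step` and decoding the final hidden state is what lets a segmentation head see cell dynamics rather than a single frame.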
Extracting 3D Vascular Structures from Microscopy Images using Convolutional Recurrent Networks
Vasculature is known to be of key biological significance, especially in the
study of cancer. As such, considerable effort has been focused on the automated
measurement and analysis of vasculature in medical and pre-clinical images. In
tumors in particular, the vascular networks may be extremely irregular and the
appearance of the individual vessels may not conform to classical descriptions
of vascular appearance. Typically, vessels are extracted either by a
segmentation-and-thinning pipeline or by direct tracking. Neither of these
methods is well suited to microscopy images of tumor vasculature. In order to
address this, we propose a method to directly extract a medial representation
of the vessels using Convolutional Neural Networks. We then show that these
two-dimensional centerlines can be meaningfully extended into 3D in anisotropic
and complex microscopy images using the recently popularized Convolutional Long
Short-Term Memory units (ConvLSTM). We demonstrate the effectiveness of this
hybrid convolutional-recurrent architecture over both 2D and 3D convolutional
comparators.
Comment: The article has been submitted to IEEE TM
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes an expanded discussion section and a reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st 201
Conditional Generative Refinement Adversarial Networks for Unbalanced Medical Image Semantic Segmentation
We propose a new generative adversarial architecture to mitigate the
imbalanced-data problem in medical image semantic segmentation, where the
majority of pixels belong to healthy regions and only a few belong to lesion or
non-healthy regions. A model trained with imbalanced data tends to be biased
toward the healthy data, which is not desired in clinical applications, and the
outputs predicted by such networks have high precision but low sensitivity. We
propose a new conditional generative refinement network with three components:
a generative, a discriminative, and a refinement network, which together
mitigate the unbalanced data problem through ensemble learning. The generative
network learns to segment at the pixel level by getting feedback from the
discriminative network in the form of true positive and true negative maps.
The refinement network, in turn, learns to predict the false positive and
false negative masks produced by the generative network, which is of
significant value, especially in medical applications. The final semantic
segmentation masks are then composed from the outputs of the three networks.
The proposed architecture shows state-of-the-art results on LiTS-2017 for
liver lesion segmentation and on two microscopic cell segmentation datasets,
MDA231 and PhC-HeLa. We have also achieved competitive results on BraTS-2017
for brain tumour segmentation.
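The abstract does not spell out how the final masks are composed from the three outputs. One plausible reading, sketched here with hypothetical binary masks, is to remove pixels that the refinement network flags as false positives and add back pixels it flags as false negatives:

```python
def compose_mask(pred, false_pos, false_neg):
    """Fuse a binary prediction with predicted FP/FN masks (flattened lists).

    This fusion rule is an illustrative assumption, not the paper's exact
    composition: FP pixels are removed, FN pixels are restored.
    """
    out = []
    for p, fp, fn in zip(pred, false_pos, false_neg):
        v = max(0, p - fp)  # drop pixels flagged as false positives
        v = max(v, fn)      # restore pixels flagged as false negatives
        out.append(v)
    return out
```

For example, a pixel predicted as lesion but flagged as a false positive ends up as background, while a missed lesion pixel flagged as a false negative is recovered.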
Instance Segmentation of Biological Images Using Harmonic Embeddings
We present a new instance segmentation approach tailored to biological
images, where instances may correspond to individual cells, organisms or plant
parts. Unlike instance segmentation for user photographs or road scenes, in
biological data object instances may be particularly densely packed, the
appearance variation may be particularly low, and the processing power may be
restricted, while, on the other hand, the variability of the sizes of
individual instances may be limited. The proposed approach successfully
addresses these peculiarities.
Our approach describes each object instance using an expectation of a limited
number of sine waves with frequencies and phases adjusted to particular object
sizes and densities. At train time, a fully-convolutional network is trained to
predict the object embeddings at each pixel using a simple pixelwise regression
loss, while at test time the instances are recovered using clustering in the
embedding space. In the experiments, we show that our approach outperforms
previous embedding-based instance segmentation approaches on a number of
biological datasets, achieving state-of-the-art results on the popular CVPPP
benchmark. This excellent performance is combined with the computational
efficiency needed for deployment to domain specialists.
The source code of the approach is available at
https://github.com/kulikovv/harmonic
Comment: Accepted as oral to CVPR 202
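As a rough illustration of sine-wave embeddings (the frequency set and exact parameterization below are invented for the sketch, not taken from the paper), each pixel location can be mapped to a vector of sines and cosines; during training, all pixels of one instance would be pulled toward that instance's target vector, so test-time clustering in this space separates the instances:

```python
import math

def harmonic_embedding(x, y, freqs):
    """Map a pixel coordinate to sine/cosine features at several frequencies.

    A hypothetical target parameterization: real harmonic embeddings adjust
    frequencies and phases to object sizes and densities.
    """
    emb = []
    for f in freqs:
        emb += [math.sin(f * x), math.cos(f * x),
                math.sin(f * y), math.cos(f * y)]
    return emb
```

Because sines are periodic, nearby instances can receive well-separated targets while the embedding dimension stays small.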
Automating assessment of human embryo images and time-lapse sequences for IVF treatment
As the number of couples using In Vitro Fertilization (IVF) treatment to give birth increases, so too does the need for robust tools to assist embryologists in selecting the highest-quality embryos for implantation. Quality scores assigned to embryonic structures are critical markers for predicting the implantation potential of human blastocyst-stage embryos. The timing at which embryos reach certain cell and development stages in vitro also provides valuable information about their developmental progress and potential to become a positive pregnancy. The current workflow of grading blastocysts by visual assessment is susceptible to subjectivity between embryologists. Visually verifying when the embryo cell stage increases is tedious, and confirming the onset of later development stages is also prone to subjective assessment. This thesis proposes methods to automate embryo image and time-lapse sequence assessment to provide objective evaluation of blastocyst structure quality, cell counting, and timing of development stages.
Instance Segmentation by Deep Coloring
We propose a new and, arguably, very simple reduction of instance
segmentation to semantic segmentation. This reduction allows training
feed-forward non-recurrent deep instance segmentation systems in an end-to-end
fashion using architectures that have been proposed for semantic segmentation.
Our approach proceeds by introducing a fixed number of labels (colors) and then
dynamically assigning object instances to those labels during training
(coloring). A standard semantic segmentation objective is then used to train a
network that can color previously unseen images. At test time, individual
object instances can be recovered from the output of the trained convolutional
network using simple connected component analysis. In the experimental
validation, the coloring approach is shown to be capable of solving diverse
instance segmentation tasks arising in autonomous driving (the Cityscapes
benchmark), plant phenotyping (the CVPPP leaf segmentation challenge), and
high-throughput microscopy image analysis.
The source code is publicly available:
https://github.com/kulikovv/DeepColoring
Comment: 10 pages, 6 figures, 3 tables
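The test-time recovery step is plain connected component analysis on the predicted color map: two pixels belong to the same instance iff they share a color and are adjacent. A minimal 4-connected labelling sketch (color 0 is assumed to mean background, which is a convention chosen for this sketch):

```python
def connected_components(colors):
    """Label 4-connected same-color regions of a 2-D color map.

    Returns a map of component ids (-1 for background, assumed color 0)
    and the number of components found.
    """
    h, w = len(colors), len(colors[0])
    comp = [[-1] * w for _ in range(h)]
    n = 0
    for sy in range(h):
        for sx in range(w):
            if colors[sy][sx] == 0 or comp[sy][sx] != -1:
                continue  # skip background and already-labelled pixels
            comp[sy][sx] = n
            stack = [(sy, sx)]
            while stack:  # depth-first flood fill within one color
                y, x = stack.pop()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w and comp[ny][nx] == -1
                            and colors[ny][nx] == colors[y][x]):
                        comp[ny][nx] = n
                        stack.append((ny, nx))
            n += 1
    return comp, n
```

Two instances of the same color are still separated as long as they do not touch, which is exactly what the dynamic color assignment during training encourages.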
Spatial-Temporal Mitosis Detection in Phase-Contrast Microscopy via Likelihood Map Estimation by 3DCNN
Automated mitotic detection in time-lapse phase-contrast microscopy provides
rich information for cell behavior analysis, and thus several mitosis
detection methods have been proposed. However, these methods still have two
problems: 1) they cannot detect multiple mitosis events when they are closely
placed, and 2) they do not consider annotation gaps, which may occur since the
appearances of mitotic cells are very similar before and after the annotated
frame. In this paper, we propose a novel mitosis detection method that can
detect multiple mitosis events in a candidate sequence and mitigate the human
annotation gap by estimating a spatiotemporal likelihood map with a 3DCNN.
During training, the loss gradually decreases with the gap size between ground
truth and estimation, which mitigates the annotation gaps. Our method
outperformed the compared methods in terms of F1-score on a challenging
dataset that contains data acquired under four different conditions.
Comment: 5 pages, 6 figures, Accepted in EMBC 202
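One simple way to build such a gap-tolerant training target along the time axis (a Gaussian centred on the annotated frame; the shape and the width `sigma` are illustrative assumptions, not the paper's exact formulation) is:

```python
import math

def likelihood_target(num_frames, annotated_frame, sigma=2.0):
    """Gaussian-shaped temporal likelihood target.

    Frames near the annotated mitosis frame still receive high target
    values, so a detection a frame or two off incurs only a small loss.
    """
    return [math.exp(-((t - annotated_frame) ** 2) / (2 * sigma ** 2))
            for t in range(num_frames)]
```

A spatiotemporal version applies the same idea jointly over x, y and t, which is what a 3DCNN can regress directly.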
A Survey on Deep Learning-based Architectures for Semantic Segmentation on 2D images
Semantic segmentation is the pixel-wise labelling of an image. Since the
problem is defined at the pixel level, determining image-level class labels
alone is not sufficient; they must be localised at the original image pixel
resolution. Boosted by the extraordinary ability of convolutional neural
networks (CNNs) to create semantic, high-level and hierarchical image
features, a large number of deep learning-based 2D semantic segmentation
approaches have been proposed within the last decade. In this survey, we
mainly focus on the recent scientific developments in semantic segmentation,
specifically on deep learning-based methods using 2D images. We start with an
analysis of the public image sets and leaderboards for 2D semantic
segmentation, with an overview of the techniques employed in performance
evaluation. In examining the evolution of the field, we chronologically
categorise the approaches into three main periods, namely the pre- and early
deep learning era, the fully convolutional era, and the post-FCN era. We
technically analyse the solutions put forward in terms of solving the
fundamental problems of the field, such as fine-grained localisation and scale
invariance. Before drawing our conclusions, we present a table of methods from
all the mentioned eras, with a brief summary of each approach explaining its
contribution to the field. We conclude the survey by discussing the current
challenges of the field and to what extent they have been solved.
Comment: Updated with new studies
Semi-supervised estimation of event temporal length for cell event detection
Cell event detection in cell videos is essential for monitoring cellular
behavior over extended time periods. Deep learning methods have shown great
success in the detection of cell events due to their ability to capture more
discriminative features of cellular processes than traditional methods. In
particular, convolutional long short-term memory (LSTM) models, which exploit
the changes in cell events observable in video sequences, are the
state of the art for mitosis detection in cell videos. However, their
limitations are the determination of the input sequence length, which is often
performed empirically, and the need for a large annotated training dataset,
which is expensive to prepare. We propose a novel semi-supervised method of
optimal length detection for mitosis detection with two key contributions: (i)
an unsupervised step for learning the spatial and temporal locations of cells
in their normal stage and for approximating the distribution of temporal
lengths of cell events, and (ii) a step that infers, from that distribution,
an optimal input sequence length and a minimal number of annotated frames for
training an LSTM model for each particular video. We evaluated our method on
detecting mitosis in densely packed stem cells in phase-contrast microscopy
videos. Our experimental data show that increasing the input sequence length
of an LSTM can lead to a decrease in performance. Our results also show that,
by approximating the optimal input sequence length of the tested video, a
model trained with only 18 annotated frames achieved F1-scores of 0.880-0.907,
which are 10% higher than those of other published methods trained with a full
set of 110 annotated frames.
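A toy version of step (ii) is to choose the input window as a coverage quantile of the estimated event-length distribution; this percentile rule is an illustrative simplification of the paper's inference, and the `coverage` parameter is an assumed knob:

```python
import math

def optimal_sequence_length(event_lengths, coverage=0.95):
    """Shortest input window covering `coverage` of observed event durations.

    A simple empirical-quantile rule sketched for illustration; the paper's
    estimator over the learned length distribution is more involved.
    """
    s = sorted(event_lengths)
    idx = max(0, math.ceil(coverage * len(s)) - 1)
    return s[idx]
```

A sub-1.0 coverage keeps the window short by ignoring rare, very long events, which matters because overly long LSTM inputs were observed to hurt performance.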