Reconstructing the Forest of Lineage Trees of Diverse Bacterial Communities Using Bio-inspired Image Analysis
Cell segmentation and tracking allow us to extract a plethora of cell
attributes from bacterial time-lapse cell movies, thus promoting computational
modeling and simulation of biological processes down to the single-cell level.
However, to successfully analyze complex cell movies, which image multiple
interacting bacterial clones as they grow and merge into overcrowded bacterial
communities with thousands of cells in the field of view, segmentation results
must be near perfect to warrant good tracking results.
We introduce here a fully automated closed-loop bio-inspired computational
strategy that exploits prior knowledge about the expected structure of a
colony's lineage tree to locate and correct segmentation errors in analyzed
movie frames. We show that this correction strategy is effective, resulting in
improved cell tracking and consequently trustworthy deep colony lineage trees.
Our image analysis approach has the unique capability to keep tracking cells
even after clonal subpopulations merge in the movie. This enables the
reconstruction of the complete Forest of Lineage Trees (FLT) representation of
evolving multi-clonal bacterial communities. Moreover, the percentage of valid
cell trajectories extracted from the image analysis almost doubles after
segmentation correction. This wealth of trustworthy data extracted from a
complex cell movie analysis enables single-cell analytics as a tool for
addressing compelling questions for human health, such as understanding the
role of single-cell stochasticity in antibiotic resistance without losing sight
of the inter-cellular interactions and microenvironment effects that may shape
it.
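The abstract gives no implementation details, but the core idea of using the expected lineage-tree structure as a prior to spot segmentation errors can be sketched as follows; the Cell structure, the two-daughter rule, and the area-shrinkage threshold below are hypothetical illustrations, not the authors' pipeline.

```python
# Minimal sketch (not the authors' code): using the binary-division prior of a
# bacterial lineage tree to flag cells with likely segmentation errors.
from dataclasses import dataclass, field

@dataclass
class Cell:
    cell_id: str
    frame: int
    area: float
    daughters: list = field(default_factory=list)  # Cell objects in the next frame

def flag_suspect_cells(cells):
    """Return ids of cells whose local tree structure violates the prior."""
    suspects = []
    for cell in cells:
        if len(cell.daughters) > 2:
            suspects.append(cell.cell_id)      # over-segmentation: more than two "daughters"
        elif len(cell.daughters) == 1:
            d = cell.daughters[0]
            if d.area < 0.5 * cell.area:       # sudden shrinkage: likely a missed or wrong split
                suspects.append(cell.cell_id)
    return suspects

# Example: a cell wrongly split into three fragments is flagged for correction.
m = Cell("m", 0, 100.0)
m.daughters = [Cell("d1", 1, 30.0), Cell("d2", 1, 30.0), Cell("d3", 1, 30.0)]
print(flag_suspect_cells([m]))  # ['m']
```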
Road Network Reconstruction from Satellite Images with Machine Learning Supported by Topological Methods
Automatic extraction of road networks from satellite images is a goal that can
benefit and even enable new technologies. In recent years, methods that combine
machine learning (ML) and computer vision have been proposed which make the task
semi-automatic by requiring the user to provide curated training samples. The
process can be fully automated if training samples can be produced
algorithmically. Of course, this requires a robust algorithm that can
reconstruct the road networks from satellite images reliably so that the output
can be fed as training samples. In this work, we develop such a technique by
infusing a persistence-guided discrete Morse based graph reconstruction
algorithm into an ML framework.
We elucidate our contributions in two phases. First, in a semi-automatic
framework, we combine a discrete-Morse based graph reconstruction algorithm
with an existing CNN framework to segment input satellite images. We show that
this leads to reconstructions with better connectivity and less noise. Next, in
a fully automatic framework, we leverage the power of the discrete-Morse based
graph reconstruction algorithm to train a CNN from a collection of images
without labelled data and use the same algorithm to produce the final output
from the segmented images created by the trained CNN. We apply the
discrete-Morse based graph reconstruction algorithm iteratively to improve the
accuracy of the CNN. We show promising experimental results of this new
framework on datasets from the SpaceNet Challenge. Comment: 26 pages, 13 figures, ACM SIGSPATIAL 201
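A rough sketch of the fully automatic framework described above: the loop alternates between the topology-guided graph reconstruction and CNN training, with each component stubbed out so only the control flow is shown. The function names and stub bodies are placeholders, not the paper's code.

```python
# Schematic of the fully automatic loop: pseudo-labels come from the
# discrete-Morse reconstruction, a CNN is trained on them, and the
# reconstruction is re-run on the CNN output to refine the labels.
def discrete_morse_reconstruct(segmentation):
    return {"graph_from": segmentation}        # stub: would return a road graph

def train_cnn(images, pseudo_labels):
    return {"trained_on": len(pseudo_labels)}  # stub: would return a trained model

def cnn_segment(cnn, image):
    return f"seg({image})"                     # stub: would return a probability map

def automatic_training_loop(images, n_rounds=3):
    # Round 0: pseudo-labels come straight from the topology-based
    # reconstruction, so no manually labelled data is needed.
    pseudo_labels = [discrete_morse_reconstruct(img) for img in images]
    cnn = None
    for _ in range(n_rounds):
        cnn = train_cnn(images, pseudo_labels)
        # Re-run the graph reconstruction on the CNN's cleaner segmentations
        # to obtain improved pseudo-labels for the next round.
        pseudo_labels = [discrete_morse_reconstruct(cnn_segment(cnn, img))
                         for img in images]
    # Final road graphs come from the reconstruction on the last CNN output.
    return [discrete_morse_reconstruct(cnn_segment(cnn, img)) for img in images]

print(automatic_training_loop(["img_0", "img_1"]))
```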
DeepCut: Joint Subset Partition and Labeling for Multi Person Pose Estimation
This paper considers the task of articulated human pose estimation of
multiple people in real world images. We propose an approach that jointly
solves the tasks of detection and pose estimation: it infers the number of
persons in a scene, identifies occluded body parts, and disambiguates body
parts between people in close proximity to each other. This joint formulation
is in contrast to previous strategies that address the problem by first
detecting people and subsequently estimating their body pose. We propose a
partitioning and labeling formulation of a set of body-part hypotheses
generated with CNN-based part detectors. Our formulation, an instance of an
integer linear program, implicitly performs non-maximum suppression on the set
of part candidates and groups them to form configurations of body parts
respecting geometric and appearance constraints. Experiments on four different
datasets demonstrate state-of-the-art results for both single person and multi
person pose estimation. Models and code available at
http://pose.mpi-inf.mpg.de. Comment: Accepted at IEEE Conference on Computer
Vision and Pattern Recognition (CVPR 2016).
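To make the integer-linear-program formulation more concrete, here is a heavily simplified toy instance written with PuLP (an arbitrary choice of solver front end, not something the paper specifies): binary variables select part candidates and pairwise variables group selected candidates, so weak detections are implicitly suppressed and compatible parts are clustered. The real DeepCut ILP additionally assigns body-part labels and enforces transitivity constraints, which this sketch omits; all costs below are invented numbers.

```python
# Toy subset-selection-and-grouping ILP, loosely in the spirit of the joint
# formulation described above. Negative costs encourage selection/grouping.
import pulp

candidates = ["head_a", "head_b", "shoulder_a"]               # hypothetical detections
unary = {"head_a": -2.0, "head_b": 0.5, "shoulder_a": -1.5}   # detection costs
pairwise = {("head_a", "shoulder_a"): -1.0,                   # plausible same-person pair
            ("head_b", "shoulder_a"): 2.0}                    # implausible pair

prob = pulp.LpProblem("toy_deepcut", pulp.LpMinimize)
x = {d: pulp.LpVariable(f"x_{d}", cat="Binary") for d in candidates}
y = {p: pulp.LpVariable(f"y_{p[0]}_{p[1]}", cat="Binary") for p in pairwise}

# Objective: cost of selected candidates plus cost of grouping decisions.
prob += pulp.lpSum(unary[d] * x[d] for d in candidates) + \
        pulp.lpSum(pairwise[p] * y[p] for p in pairwise)

# A pair can only be grouped if both of its candidates are selected.
for (d, e), var in y.items():
    prob += var <= x[d]
    prob += var <= x[e]

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print({d: int(x[d].value()) for d in candidates})   # weak 'head_b' is dropped
print({p: int(y[p].value()) for p in pairwise})     # head_a and shoulder_a grouped
```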
Deep Fully Residual Convolutional Neural Network for Semantic Image Segmentation
The goal of semantic image segmentation is to partition the pixels of an image into semantically meaningful parts and to classify those parts according to a predefined label set. Although object recognition
models have recently achieved remarkable performance, even surpassing the human ability to recognize
objects, semantic segmentation models still lag behind. One reason semantic segmentation is a relatively
hard problem is that it requires image understanding at the pixel level while considering global
context, as opposed to object recognition. Another challenge is transferring the knowledge of an object
recognition model to the task of semantic segmentation. In this thesis, we delineate some of the
main challenges we faced in approaching semantic image segmentation with machine learning algorithms.
Our main focus was on how to use deep learning algorithms for this task, since they require the
least amount of feature engineering and have been shown to scale to large
datasets while exhibiting remarkable performance. More precisely, we worked on a variation of convolutional
neural networks (CNN) suitable for the semantic segmentation task. We proposed a model called deep
fully residual convolutional networks (DFRCN) to tackle this problem. Utilizing residual learning makes
training deep models feasible, which ultimately leads to a rich, powerful visual representation.
Our model also benefits from skip connections, which ease the propagation of information from the
encoder module to the decoder module. This enables our model to have fewer parameters in the
decoder module while also achieving better performance. We also benchmarked the most effective variant
of the proposed model on a semantic segmentation benchmark.
We first make a thorough review of current high-performance models and of the problems one might
face when trying to replicate them, which mainly arise from the lack of sufficient information provided.
Then, we describe our own novel method, which we call the deep fully residual convolutional
network (DFRCN). We show that our method exhibits state-of-the-art performance on a challenging
benchmark for aerial image segmentation.
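The abstract names two architectural ingredients, residual blocks and encoder-to-decoder skip connections. The PyTorch snippet below is a generic miniature of those two ideas, not the DFRCN architecture itself; layer sizes and the toy encoder/decoder layout are illustrative assumptions.

```python
# Minimal illustration of a residual block (identity shortcut around two
# convolutions) and an encoder-to-decoder skip connection.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.conv2(self.relu(self.conv1(x)))
        return self.relu(out + x)               # identity shortcut eases training

class TinyEncoderDecoder(nn.Module):
    def __init__(self, channels=16, num_classes=3):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.enc = ResidualBlock(channels)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = ResidualBlock(channels)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        # The decoder sees upsampled features concatenated with encoder features
        # (the skip connection), so it can stay small while recovering detail.
        self.dec = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.head = nn.Conv2d(channels, num_classes, 1)

    def forward(self, x):
        e = self.enc(self.stem(x))
        b = self.bottleneck(self.down(e))
        d = self.dec(torch.cat([self.up(b), e], dim=1))   # skip connection
        return self.head(d)                                # per-pixel class scores

logits = TinyEncoderDecoder()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 3, 64, 64])
```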
Learning to Find Good Correspondences
We develop a deep architecture to learn to find good correspondences for
wide-baseline stereo. Given a set of putative sparse matches and the camera
intrinsics, we train our network in an end-to-end fashion to label the
correspondences as inliers or outliers, while simultaneously using them to
recover the relative pose, as encoded by the essential matrix. Our architecture
is based on a multi-layer perceptron operating on pixel coordinates rather than
directly on the image, and is thus simple and small. We introduce a novel
normalization technique, called Context Normalization, which allows us to
process each data point separately while imbuing it with global information,
and also makes the network invariant to the order of the correspondences. Our
experiments on multiple challenging datasets demonstrate that our method is
able to drastically improve the state of the art with little training data. Comment: CVPR 2018 (Oral).
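A minimal sketch of the Context Normalization idea as described above: each feature channel is normalized with statistics computed over the full set of correspondences of an instance, so every match is still processed separately while carrying global information, and the operation is invariant to the ordering of the matches. This is an illustrative re-implementation, not the authors' code.

```python
# Normalize each feature channel across the set of putative correspondences.
import torch

def context_normalize(features, eps=1e-5):
    """features: (batch, num_correspondences, channels)."""
    mean = features.mean(dim=1, keepdim=True)   # statistics over the set of matches
    std = features.std(dim=1, keepdim=True)
    return (features - mean) / (std + eps)

feats = torch.randn(2, 1000, 128)               # 1000 putative matches, 128-d features
normed = context_normalize(feats)
perm = torch.randperm(1000)
# Permuting the correspondences permutes the output the same way (order invariance).
print(torch.allclose(context_normalize(feats[:, perm]), normed[:, perm], atol=1e-5))
```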
Generative Face Completion
In this paper, we propose an effective face completion algorithm using a deep
generative model. Different from well-studied background completion, the face
completion task is more challenging as it often requires generating
semantically new pixels for the missing key components (e.g., eyes and mouths)
that contain large appearance variations. Unlike existing nonparametric
algorithms that search for patches to synthesize, our algorithm directly
generates contents for missing regions based on a neural network. The model is
trained with a combination of a reconstruction loss, two adversarial losses and
a semantic parsing loss, which ensures pixel faithfulness and local-global
content consistency. With extensive experimental results, we demonstrate
qualitatively and quantitatively that our model is able to deal with a large
area of missing pixels in arbitrary shapes and generate realistic face
completion results. Comment: Accepted by CVPR 201
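The training objective described above combines four terms; the PyTorch sketch below shows one plausible way to assemble such a combination. The loss weights, the L1/BCE choices, and the number of parsing classes are illustrative assumptions, not values from the paper.

```python
# Sketch of a combined objective: reconstruction + global/local adversarial
# terms + semantic parsing loss, computed for the generator's completed image.
import torch
import torch.nn.functional as F

def completion_loss(completed, target,
                    d_global_logits, d_local_logits,
                    parsing_logits, parsing_target,
                    w_adv=0.3, w_parse=0.05):
    rec = F.l1_loss(completed, target)                        # pixel faithfulness
    adv_global = F.binary_cross_entropy_with_logits(
        d_global_logits, torch.ones_like(d_global_logits))    # fool the whole-face critic
    adv_local = F.binary_cross_entropy_with_logits(
        d_local_logits, torch.ones_like(d_local_logits))      # fool the local-patch critic
    parse = F.cross_entropy(parsing_logits, parsing_target)   # local-global consistency
    return rec + w_adv * (adv_global + adv_local) + w_parse * parse

# Dummy tensors just to show the call signature (11 parsing classes assumed).
loss = completion_loss(torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128),
                       torch.randn(1, 1), torch.randn(1, 1),
                       torch.randn(1, 11, 128, 128),
                       torch.randint(0, 11, (1, 128, 128)))
print(loss.item())
```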
Footprints and Free Space from a Single Color Image
Understanding the shape of a scene from a single color image is a formidable
computer vision task. However, most methods aim to predict the geometry of
surfaces that are visible to the camera, which is of limited use when planning
paths for robots or augmented reality agents. Such agents can only move when
grounded on a traversable surface, which we define as the set of classes that
humans can also walk over, such as grass, footpaths and pavement. Models which
predict beyond the line of sight often parameterize the scene with voxels or
meshes, which can be expensive to use in machine learning frameworks.
We introduce a model to predict the geometry of both visible and occluded
traversable surfaces, given a single RGB image as input. We learn from stereo
video sequences, using camera poses, per-frame depth and semantic segmentation
to form training data, which is used to supervise an image-to-image network. We
train models from the KITTI driving dataset, the indoor Matterport dataset, and
from our own casually captured stereo footage. We find that a surprisingly low
bar for spatial coverage of training scenes is required. We validate our
algorithm against a range of strong baselines, and include an assessment of our
predictions for a path-planning task. Comment: Accepted to CVPR 2020 as an oral presentation.
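As a small illustration of the traversability definition above (classes humans can walk over, such as grass, footpaths and pavement), the snippet below collapses a per-pixel semantic label map into the kind of binary traversability mask that could supervise such a model. The class ids are made-up placeholders, not the paper's label set.

```python
# Collapse a semantic segmentation into a binary traversability mask.
import numpy as np

CLASS_IDS = {"grass": 0, "footpath": 1, "pavement": 2, "building": 3, "car": 4}
TRAVERSABLE = {CLASS_IDS["grass"], CLASS_IDS["footpath"], CLASS_IDS["pavement"]}

def traversability_mask(semantic_labels):
    """semantic_labels: (H, W) integer class map -> (H, W) boolean mask."""
    return np.isin(semantic_labels, list(TRAVERSABLE))

labels = np.array([[0, 3],
                   [2, 4]])
print(traversability_mask(labels))   # [[ True False], [ True False]]
```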
Image Classification with Deep Learning in the Presence of Noisy Labels: A Survey
Image classification systems recently made a giant leap with the advancement
of deep neural networks. However, these systems require an excessive amount of
labeled data to be adequately trained. Gathering a correctly annotated dataset
is not always feasible due to several factors, such as the cost of the
labeling process or the difficulty of correctly classifying data, even for
experts. Because of these practical challenges, label noise is a common problem
in real-world datasets, and numerous methods to train deep neural networks with
label noise are proposed in the literature. Although deep neural networks are
known to be relatively robust to label noise, their tendency to overfit data
makes them vulnerable to memorizing even random noise. Therefore, it is crucial
to consider the existence of label noise and to develop algorithms that counter
its adverse effects so that deep neural networks can be trained effectively. Even though
an extensive survey of machine learning techniques under label noise exists,
the literature lacks a comprehensive survey of methodologies centered
explicitly around deep learning in the presence of noisy labels. This paper
aims to present these algorithms while categorizing them into one of the two
subgroups: noise model based and noise model free methods. Algorithms in the
first group aim to estimate the noise structure and use this information to
avoid the adverse effects of noisy labels. In contrast, methods in the second
group try to design inherently noise-robust algorithms by using
approaches such as robust losses, regularizers, or other learning paradigms.
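As one concrete example of the noise-model-based family, the sketch below applies forward loss correction with a label-noise transition matrix: predicted clean-class probabilities are pushed through the assumed noise model before the loss against the noisy labels is computed. The transition matrix is invented for illustration; methods in this family estimate it from data.

```python
# Forward loss correction: T[i, j] is the assumed probability that a sample of
# clean class i is observed with noisy label j.
import torch
import torch.nn.functional as F

T = torch.tensor([[0.9, 0.1, 0.0],   # class 0 mislabelled as class 1 10% of the time
                  [0.0, 0.8, 0.2],
                  [0.1, 0.0, 0.9]])

def forward_corrected_loss(logits, noisy_targets):
    clean_probs = F.softmax(logits, dim=1)
    noisy_probs = clean_probs @ T                 # push predictions through the noise model
    return F.nll_loss(torch.log(noisy_probs + 1e-12), noisy_targets)

loss = forward_corrected_loss(torch.randn(4, 3), torch.tensor([0, 1, 2, 1]))
print(loss.item())
```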