Going Deeper with Convolutional Neural Network for Intelligent Transportation
Over the last several decades, computer vision researchers have worked to find good features for solving different tasks: object recognition, object detection, object segmentation, activity recognition, and so forth. Ideal features transform raw pixel intensity values into a representation in which these computer vision problems are easier to solve. Recently, deep features from convolutional neural networks (CNNs) have attracted many researchers to problems across computer vision. In the supervised setting, these feature hierarchies are trained to solve specific problems by minimizing an objective function for each task. More recently, features learned from large-scale image datasets have proved to be very effective and generic across many computer vision tasks; features learned for a recognition task can be reused for object detection. This work aims to uncover the principles that lead to these generic feature representations in transfer learning, which does not require retraining on each new dataset but instead transfers the rich features a CNN has learned from the ImageNet dataset. We begin by summarizing related prior work, particularly in object recognition, object detection, and segmentation. We then bring deep features to computer vision tasks in intelligent transportation systems. First, we apply deep features to object detection, especially vehicle detection. Second, to make full use of objectness proposals, we apply a proposal generator to road marking detection and recognition. Third, to understand the traffic scene as a whole, we introduce deep features into road scene understanding. We evaluate each task on different public datasets and show that our framework is robust.
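A minimal sketch of this kind of feature transfer, assuming PyTorch and torchvision (version 0.13 or later); the ResNet-50 backbone and the small linear head are illustrative choices, not the exact networks used in the thesis.

```python
# Minimal sketch of ImageNet feature transfer (assumes torch/torchvision).
# The backbone and classifier below are illustrative, not the thesis' setup.
import torch
import torch.nn as nn
from torchvision import models

# Load a CNN pretrained on ImageNet and drop its classification head,
# keeping the convolutional trunk as a fixed feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])
feature_extractor.eval()
for p in feature_extractor.parameters():
    p.requires_grad = False  # transfer the features without retraining the trunk

# A small task-specific head (e.g., vehicle vs. background) trained on top.
classifier = nn.Linear(2048, 2)

with torch.no_grad():
    images = torch.randn(8, 3, 224, 224)          # stand-in for detection crops
    feats = feature_extractor(images).flatten(1)  # (8, 2048) generic deep features
logits = classifier(feats)                        # only this head needs training
```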
Improving Deep Representation Learning with Complex and Multimodal Data.
Representation learning has emerged as a way to learn meaningful representations from data and has enabled breakthroughs in many applications, including visual object recognition, speech recognition, and text understanding. However, learning representations from complex high-dimensional sensory data is challenging, since there exist many irrelevant factors of variation (e.g., data transformations, random noise). On the other hand, to build an end-to-end prediction system for structured output variables, one needs to incorporate probabilistic inference to properly model a mapping from a single input to the possible configurations of the output variables. This thesis addresses the limitations of current representation learning in two parts.
The first part discusses efficient learning algorithms for invariant representations based on restricted Boltzmann machines (RBMs). Noting the difficulty of learning, we develop an efficient initialization method for sparse and convolutional RBMs. On top of that, we develop variants of the RBM that learn representations invariant to data transformations such as translation, rotation, or scale variation, by pooling the filter responses of the input data after a transformation, and invariant to irrelevant patterns such as random or structured noise, by jointly performing feature selection and feature learning. We demonstrate improved performance on visual object recognition and weakly supervised foreground object segmentation.
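A minimal sketch of the transformation-pooling idea, assuming NumPy and SciPy; the random filters, rotation set, and max-pooling are illustrative stand-ins for the RBM variants described above.

```python
# Sketch: invariance by pooling filter responses over transformed inputs.
# Purely illustrative; random filters stand in for learned RBM weights.
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)
filters = rng.standard_normal((16, 8, 8))  # 16 "learned" 8x8 filters
image = rng.standard_normal((8, 8))        # a toy 8x8 input patch

def responses(patch):
    # Filter responses for one patch: dot product with each filter.
    return filters.reshape(16, -1) @ patch.ravel()

# Pool (max) each filter's response across rotated copies of the input,
# so the pooled feature changes little when the input itself is rotated.
angles = [0, 90, 180, 270]
pooled = np.max(
    [responses(rotate(image, a, reshape=False)) for a in angles], axis=0
)
print(pooled.shape)  # (16,) rotation-pooled feature vector
```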
The second part discusses conditional graphical models and learning frameworks for structured output variables that use deep generative models as priors. First, we combine the best properties of the CRF and the RBM to enforce both local and global (e.g., object shape) consistency for visual object segmentation. Furthermore, we develop a deep conditional generative model of structured output variables that is an end-to-end system trainable by backpropagation, and we demonstrate the importance of a global prior and probabilistic inference for visual object segmentation. Second, we develop a novel multimodal learning framework by casting the problem as structured output representation learning, where the output is one data modality to be predicted from the others, and vice versa. We explain how our method can be more effective than maximum likelihood learning and demonstrate state-of-the-art performance on visual-text and visual-only recognition tasks.
PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113549/1/kihyuks_1.pd
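A minimal sketch of a conditional deep generative model of this flavor (a conditional VAE), assuming PyTorch; the layer sizes, Gaussian reparameterization, and MSE data term are illustrative assumptions, not the thesis' exact architecture.

```python
# Minimal conditional VAE sketch: model p(y|x) with latent z, trainable
# end to end by backpropagation. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    def __init__(self, x_dim=64, y_dim=32, z_dim=8, h=128):
        super().__init__()
        self.enc = nn.Linear(x_dim + y_dim, h)   # recognition net q(z|x,y)
        self.mu = nn.Linear(h, z_dim)
        self.logvar = nn.Linear(h, z_dim)
        self.dec = nn.Sequential(                # generation net p(y|x,z)
            nn.Linear(x_dim + z_dim, h), nn.ReLU(), nn.Linear(h, y_dim))

    def forward(self, x, y):
        h = F.relu(self.enc(torch.cat([x, y], dim=1)))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        y_hat = self.dec(torch.cat([x, z], dim=1))
        recon = F.mse_loss(y_hat, y)                          # data term
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl                                     # negative ELBO

model = CVAE()
loss = model(torch.randn(16, 64), torch.randn(16, 32))
loss.backward()  # the whole structured-prediction system trains by backprop
```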
Self-Organization of Spiking Neural Networks for Visual Object Recognition
On one hand, the visual system has the ability to differentiate between very similar objects. On the other hand, we can also recognize the same object in images that vary drastically, due to different viewing angle, distance, or illumination. The ability to recognize the same object under different viewing conditions is called invariant object recognition. Such object recognition capabilities are not immediately available after birth, but are acquired through learning by experience in the visual world.

In many viewing situations, different views of the same object are seen in a temporal sequence, e.g. when we are moving an object in our hands while watching it. This creates temporal correlations between successive retinal projections that can be used to associate different views of the same object. Theorists have therefore proposed a synaptic plasticity rule with a built-in memory trace (trace rule).
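One common formulation of such a trace rule, following Földiák (1991); the symbols here are generic choices, not the dissertation's own notation:

```latex
% Hebbian learning with an exponentially decaying activity trace:
% the weight update uses a running average of the postsynaptic activity,
% so temporally adjacent inputs are bound to the same output unit.
\begin{aligned}
\bar{y}(t) &= (1-\eta)\,\bar{y}(t-1) + \eta\, y(t) \\
\Delta w_j &= \alpha\, \bar{y}(t)\, x_j(t)
\end{aligned}
```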
In this dissertation I present spiking neural network models that offer possible explanations for the learning of invariant object representations. These models are based on the following hypotheses:
1. Instead of a synaptic trace rule, persistent firing of recurrently connected groups of neurons can serve as a memory trace for invariance learning.
2. Short-range excitatory lateral connections enable learning of self-organizing topographic maps that represent temporal as well as spatial correlations.
3. When trained with sequences of object views, such a network can learn representations that enable invariant object recognition by clustering different views of the same object within a local neighborhood.
4. Learning of representations for very similar stimuli can be enabled by adaptive inhibitory feedback connections.
The study presented in chapter 3.1 details an implementation of a spiking neural network to test the first three hypotheses. This network was tested with stimulus sets that were designed in two feature dimensions to separate the impact of temporal and spatial correlations on learned topographic maps. The emerging topographic maps showed patterns that were dependent on the temporal order of object views during training. Our results show that pooling over local neighborhoods of the topographic map enables invariant recognition.

Chapter 3.2 focuses on the fourth hypothesis. There we examine how adaptive feedback inhibition (AFI) can improve the ability of a network to discriminate between very similar patterns. The results show that with AFI, learning is faster, and the network learns selective representations for stimuli with higher levels of overlap than without AFI.
The results of chapter 3.1 suggest a functional role for the topographic object representations that are known to exist in the inferotemporal cortex, and a mechanism for the development of such representations. The AFI model implements one aspect of predictive coding: subtraction of a prediction from the actual input of a system. Its successful implementation in a biologically plausible network of spiking neurons shows that predictive coding can play a role in cortical circuits.
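A toy sketch of that predictive-coding aspect, assuming NumPy; the linear prediction and single feedback step are illustrative, not the dissertation's spiking implementation.

```python
# Toy predictive-coding step: subtract a top-down prediction from the input
# so only the unpredicted residual drives further activity and learning.
# Illustrative only; the dissertation's AFI model uses spiking neurons.
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((10, 4)) * 0.1  # feedback weights (prediction)
x = rng.standard_normal(10)             # input pattern
y = rng.random(4)                       # current higher-level activity

prediction = W @ y                      # top-down prediction of the input
residual = x - prediction               # feedback inhibition: error signal
print(np.linalg.norm(x), np.linalg.norm(residual))
```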
Learning to Generate and Refine Object Proposals
Visual object recognition is a fundamental and challenging problem in computer vision. To build a practical recognition system, one is first confronted with high computational complexity due to the enormous search space within an image, which is caused by large variations in object appearance, pose, and mutual occlusion, as well as other environmental factors. To reduce the search complexity, modern object recognition systems usually first generate a moderate set of image regions that are likely to contain an object, regardless of its category. These possible object regions are called object proposals, object hypotheses, or object candidates, and they can be used for downstream classification or global reasoning in many different vision tasks such as object detection, segmentation, and tracking.
This thesis addresses the problem of object proposal generation, including bounding box and segment proposal generation, in real-world scenarios. In particular, we investigate representation learning for object proposal generation with 3D cues and contextual information, aiming to propose higher-quality object candidates with higher object recall, better boundary coverage, and fewer candidates. We focus on three main issues: 1) how can we incorporate additional geometric and high-level semantic context information into proposal generation for stereo images? 2) how do we generate object segment proposals for stereo images with learned representations and a learned grouping process? and 3) how can we learn a context-driven representation to refine segment proposals efficiently?
In this thesis, we propose a series of solutions to each of these problems. We first propose a semantic-context- and depth-aware object proposal generation method: we design a set of new cues to encode objectness, then train an efficient random forest classifier to re-rank the initial proposals and linear regressors to fine-tune their locations.
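A minimal sketch of this re-rank-and-refine pattern, assuming scikit-learn; the feature vectors, labels, and regression targets are synthetic placeholders, not the thesis' actual cues.

```python
# Sketch: re-rank box proposals with a random forest and refine their
# coordinates with linear regressors. Features/labels are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
feats = rng.standard_normal((500, 20))       # objectness cues per proposal
is_object = rng.integers(0, 2, 500)          # 1 if proposal overlaps an object
box_offsets = rng.standard_normal((500, 4))  # target (dx, dy, dw, dh)

ranker = RandomForestClassifier(n_estimators=100).fit(feats, is_object)
refiner = LinearRegression().fit(feats, box_offsets)

scores = ranker.predict_proba(feats)[:, 1]   # new objectness scores
order = np.argsort(-scores)                  # re-ranked proposal order
offsets = refiner.predict(feats)             # per-proposal location refinement
top100 = order[:100]                         # keep the best-scoring proposals
```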
Next, we extend the task to segment proposal generation in the same stereo setting and develop a learning-based segment proposal generation method for stereo images. Our method uses learned deep features and designed geometric features to represent a region, and it learns a similarity network to guide the superpixel grouping process; we also learn a ranking network to predict an objectness score for each segment proposal.
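A minimal sketch of a learned pairwise similarity score guiding superpixel merging, assuming PyTorch; the descriptor size, network shape, and greedy merge threshold are illustrative assumptions.

```python
# Sketch: a small network scores whether two adjacent superpixels belong to
# the same object; grouping greedily merges pairs scoring above a threshold.
# Dimensions and the threshold are illustrative assumptions.
import torch
import torch.nn as nn

similarity = nn.Sequential(    # takes two concatenated region descriptors
    nn.Linear(2 * 64, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

region_a = torch.randn(1, 64)  # deep + geometric features, region A
region_b = torch.randn(1, 64)  # deep + geometric features, region B
score = similarity(torch.cat([region_a, region_b], dim=1))
if score.item() > 0.5:         # merge decision during grouping
    print("merge regions A and B")
```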
To address the third problem, we take a transformation-based approach that improves the quality of a given segment candidate pool using context information: we propose an efficient deep network that learns affine transformations to warp an initial object mask towards a nearby object region, based on a novel feature pooling strategy.
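A minimal sketch of warping a mask with a predicted affine transform, assuming PyTorch; the hard-coded 2x3 matrix stands in for the regressor network described above.

```python
# Sketch: warp a segment mask by a learned 2x3 affine transform using a
# spatial-transformer-style sampling grid. The transform is a placeholder
# for the network's predicted parameters.
import torch
import torch.nn.functional as F

mask = torch.zeros(1, 1, 32, 32)
mask[:, :, 8:24, 8:24] = 1.0               # initial square object mask

theta = torch.tensor([[[1.0, 0.0, 0.2],    # "predicted" affine params:
                       [0.0, 1.0, -0.1]]]) # here, a small translation
grid = F.affine_grid(theta, mask.shape, align_corners=False)
warped = F.grid_sample(mask, grid, align_corners=False)
print(warped.sum().item())  # mask shifted toward the nearby object region
```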
Finally, we extend our affine warping approach to the more general object-mask alignment problem, and in particular to refining a set of segment proposals. We design an end-to-end deep spatial transformer network that learns free-form deformations (FFDs) to non-rigidly warp a shape mask towards the ground truth, based on a multi-level dual mask feature pooling strategy. We evaluate all our approaches on several publicly available object recognition datasets and show superior performance.
Visual pathways from the perspective of cost functions and multi-task deep neural networks
Vision research has been shaped by the seminal insight that we can understand the higher-tier visual cortex from the perspective of multiple functional pathways with different goals. In this paper, we try to give a computational account of the functional organization of this system by reasoning from the perspective of multi-task deep neural networks. Machine learning has shown that tasks become easier to solve when they are decomposed into subtasks with their own cost functions. We hypothesize that the visual system optimizes multiple cost functions of unrelated tasks, and that this causes the emergence of a ventral pathway dedicated to vision for perception and a dorsal pathway dedicated to vision for action. To evaluate the functional organization of multi-task deep neural networks, we propose a method that measures the contribution of a unit towards each task, applying it to two networks that have been trained on either two related or two unrelated tasks, using an identical stimulus set. Results show that the network trained on the unrelated tasks exhibits a decreasing degree of feature-representation sharing towards higher-tier layers, while the network trained on related tasks uniformly shows a high degree of sharing. We conjecture that the method we propose can be used to analyze the anatomical and functional organization of the visual system and beyond. We predict that the degree to which tasks are related is a good descriptor of the degree to which they share downstream cortical units.
Comment: 16 pages, 5 figures
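The abstract does not spell out its contribution measure; one common ablation-style proxy, sketched here under that assumption with PyTorch, lesions a unit and records how much each task's loss degrades.

```python
# Sketch of a plausible per-unit task-contribution measure: zero out one
# hidden unit at a time and record each task's loss increase. The paper's
# exact measure is not given in the abstract; this proxy is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

shared = nn.Linear(16, 32)                     # shared trunk
heads = [nn.Linear(32, 1), nn.Linear(32, 1)]   # one head per task
x = torch.randn(64, 16)
targets = [torch.randn(64, 1), torch.randn(64, 1)]

def task_losses(mask):
    h = torch.relu(shared(x)) * mask           # mask lesions chosen units
    return [F.mse_loss(head(h), t).item() for head, t in zip(heads, targets)]

base = task_losses(torch.ones(32))
contribution = torch.zeros(32, 2)
for u in range(32):
    mask = torch.ones(32)
    mask[u] = 0.0                              # lesion unit u
    lesioned = task_losses(mask)
    for t in range(2):                         # loss increase = contribution
        contribution[u, t] = lesioned[t] - base[t]
print(contribution.abs().argmax(dim=0))        # most task-relevant unit per task
```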
The Neural Representation Benchmark and its Evaluation on Brain and Machine
A key requirement for the development of effective learning representations is their evaluation and comparison to representations we know to be effective. In natural sensory domains, the community has viewed the brain as a source of inspiration and as an implicit benchmark for success. However, it has not been possible to test representational learning algorithms directly against the representations contained in neural systems. Here, we propose a new benchmark for visual representations on which we have directly tested the neural representation in multiple visual cortical areas in macaque (utilizing data from [Majaj et al., 2012]), and on which any computer vision algorithm that produces a feature space can be tested. The benchmark measures the effectiveness of the neural or machine representation by computing the classification loss on the ordered eigendecomposition of a kernel matrix [Montavon et al., 2011]. In our analysis we find that the neural representation in visual area IT is superior to that in visual area V4. In our analysis of representational learning algorithms, we find that three-layer models approach the representational performance of V4 and that the algorithm in [Le et al., 2012] surpasses the performance of V4. Impressively, we find that a recent supervised algorithm [Krizhevsky et al., 2012] achieves performance comparable to that of IT for an intermediate level of image variation difficulty, and surpasses IT at a higher difficulty level. We believe this result represents a major milestone: it is the first learning algorithm we have found that exceeds our current estimate of IT representation performance. We hope that this benchmark will assist the community in matching the representational performance of visual cortex and will serve as an initial rallying point for further correspondence between representations derived in brains and machines.
Comment: The v1 version contained incorrectly computed kernel analysis curves and KA-AUC values for V4, IT, and the HT-L3 models. They have been corrected in this version.
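A simplified sketch of kernel analysis in the spirit of [Montavon et al., 2011], assuming NumPy; the kernel choice, loss, and normalization are simplifications, not the benchmark's exact protocol.

```python
# Simplified kernel-analysis sketch: eigendecompose a kernel matrix over the
# features and measure how well the labels are reconstructed from the leading
# eigenvectors. Details here are assumptions, not the exact benchmark recipe.
import numpy as np

rng = np.random.default_rng(0)
feats = rng.standard_normal((200, 50))    # one representation's feature space
labels = (feats[:, 0] > 0).astype(float)  # toy binary labels

# RBF kernel over the representation's feature space.
sq = ((feats[:, None] - feats[None, :]) ** 2).sum(-1)
K = np.exp(-sq / sq.mean())

# Ordered eigendecomposition: eigenvectors sorted by decreasing eigenvalue.
w, v = np.linalg.eigh(K)
v = v[:, np.argsort(-w)]

# Loss curve: label reconstruction error from the top-d eigenvectors.
curve = []
for d in range(1, 51):
    proj = v[:, :d] @ (v[:, :d].T @ labels)  # projection onto leading subspace
    curve.append(np.mean((labels - proj) ** 2))
auc = np.trapz(curve) / len(curve)           # lower area = better representation
print(round(auc, 4))
```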
ShapeCodes: Self-Supervised Feature Learning by Lifting Views to Viewgrids
We introduce an unsupervised feature learning approach that embeds 3D shape information into a single-view image representation. The main idea is a self-supervised training objective that, given only a single 2D image, requires all unseen views of the object to be predictable from the learned features. We implement this idea as an encoder-decoder convolutional neural network. The network maps an input image of unknown category and unknown viewpoint to a latent space, from which a deconvolutional decoder can best "lift" the image to its complete viewgrid showing the object from all viewing angles. Our class-agnostic training procedure encourages the representation to capture fundamental shape primitives and semantic regularities in a data-driven manner, without manual semantic labels. Our results on two widely used shape datasets show that 1) our approach successfully learns to perform "mental rotation" even for objects unseen during training, and 2) the learned latent space is a powerful representation for object recognition, outperforming several existing unsupervised feature learning methods.
Comment: To appear at ECCV 201
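A minimal sketch of the lift-to-viewgrid objective, assuming PyTorch; the tiny network, image size, and 12-cell viewgrid are illustrative placeholders, not the paper's architecture.

```python
# Sketch: encode one view to a latent shape code, then decode the complete
# viewgrid (all views at once). Sizes are illustrative assumptions.
import torch
import torch.nn as nn

V = 12  # viewgrid cells, e.g. 4 azimuths x 3 elevations

encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
    nn.Flatten(), nn.Linear(32 * 8 * 8, 64))               # latent shape code

decoder = nn.Sequential(
    nn.Linear(64, V * 32 * 32), nn.Unflatten(1, (V, 32, 32)))

single_view = torch.randn(8, 1, 32, 32)  # one 2D view, unknown viewpoint
viewgrid_gt = torch.randn(8, V, 32, 32)  # all views (training target)

latent = encoder(single_view)
viewgrid = decoder(latent)               # predicted views from every angle
loss = nn.functional.mse_loss(viewgrid, viewgrid_gt)
loss.backward()                          # self-supervised: no manual labels
```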