Unsupervised Feature Learning by Deep Sparse Coding
In this paper, we propose a new unsupervised feature learning framework,
namely Deep Sparse Coding (DeepSC), that extends sparse coding to a multi-layer
architecture for visual object recognition tasks. The main innovation of the
framework is that it connects the sparse-encoders from different layers by a
sparse-to-dense module. The sparse-to-dense module is a composition of a local
spatial pooling step and a low-dimensional embedding process, which takes
advantage of the spatial smoothness information in the image. As a result, the
new method is able to learn several levels of sparse representation of the
image which capture features at a variety of abstraction levels and
simultaneously preserve the spatial smoothness between the neighboring image
patches. Combining the feature representations from multiple layers, DeepSC
achieves state-of-the-art performance on multiple object recognition tasks.
Comment: 9 pages, submitted to ICLR
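The sparse-to-dense module described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes non-overlapping average pooling for the local spatial pooling step and PCA (via SVD) as a stand-in for the low-dimensional embedding.

```python
import numpy as np

def sparse_to_dense(codes, pool_size=2, embed_dim=8):
    """Sketch of a sparse-to-dense module: local spatial pooling of sparse
    codes followed by a low-dimensional linear embedding.

    codes: (H, W, K) sparse codes over a grid of image patches.
    Returns an (H//pool_size, W//pool_size, embed_dim) dense feature map.
    """
    H, W, K = codes.shape
    Hp, Wp = H // pool_size, W // pool_size
    # Local spatial pooling: average codes over non-overlapping windows,
    # exploiting spatial smoothness between neighbouring patches.
    pooled = codes[:Hp * pool_size, :Wp * pool_size].reshape(
        Hp, pool_size, Wp, pool_size, K).mean(axis=(1, 3))
    # Low-dimensional embedding via PCA on the pooled codes
    # (an illustrative choice, not necessarily the paper's embedding).
    flat = pooled.reshape(-1, K)
    flat = flat - flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    dense = flat @ vt[:embed_dim].T
    return dense.reshape(Hp, Wp, embed_dim)
```

The dense output can then be fed to the sparse encoder of the next layer, which is how the module connects consecutive layers.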
Cross-convolutional-layer Pooling for Image Recognition
Recent studies have shown that a Deep Convolutional Neural Network (DCNN)
pretrained on a large image dataset can be used as a universal image
descriptor, and that doing so leads to impressive performance for a variety of
image classification tasks. Most of these studies adopt activations from a
single DCNN layer, usually the fully-connected layer, as the image
representation. In this paper, we propose a novel way to extract image
representations from two consecutive convolutional layers: one layer is
utilized for local feature extraction and the other serves as guidance to pool
the extracted features. By taking different viewpoints of convolutional layers,
we further develop two schemes to realize this idea. The first one directly
uses convolutional layers from a DCNN. The second one applies the pretrained
CNN on densely sampled image regions and treats the fully-connected activations
of each image region as convolutional feature activations. We then train
another convolutional layer on top of that as the pooling-guidance
convolutional layer. By applying our method to three popular visual
classification tasks, we find that the first scheme tends to perform better on
applications that require strong discrimination of subtle object patterns within
small regions, while the second excels in cases that require discrimination of
category-level patterns. Overall, the proposed method achieves superior
performance over existing ways of extracting image representations from a DCNN.
Comment: Fixed typos. Journal extension of arXiv:1411.7466. Accepted to IEEE
Transactions on Pattern Analysis and Machine Intelligence
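The core idea, pooling one layer's local features under the guidance of the next layer's channels, can be sketched as below. This assumes the two activation maps are already spatially aligned (in practice the guidance layer would need upsampling or dense extraction, as in the paper's second scheme); each guidance channel acts as a soft spatial pooling mask.

```python
import numpy as np

def cross_layer_pooling(features, guidance):
    """Pool local features from one conv layer using the channels of a
    subsequent layer as spatial pooling weights.

    features: (H, W, D) activations used as local descriptors.
    guidance: (H, W, K) activations whose K channels serve as soft masks.
    Returns a (K * D,) image representation.
    """
    H, W, D = features.shape
    K = guidance.shape[-1]
    f = features.reshape(-1, D)   # (H*W, D) local features
    g = guidance.reshape(-1, K)   # (H*W, K) per-position pooling weights
    pooled = g.T @ f              # (K, D): one weighted sum per channel
    return pooled.reshape(-1)
```

Concatenating the K pooled vectors yields a single descriptor whose dimensionality grows with the number of guidance channels, so the paper-style pipeline would typically follow this with normalization or dimensionality reduction.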
Deep Contrast Learning for Salient Object Detection
Salient object detection has recently witnessed substantial progress due to
powerful features extracted using deep convolutional neural networks (CNNs).
However, existing CNN-based methods operate at the patch level instead of the
pixel level. Resulting saliency maps are typically blurry, especially near the
boundary of salient objects. Furthermore, image patches are treated as
independent samples even when they are overlapping, giving rise to significant
redundancy in computation and storage. In this CVPR 2016 paper, we propose an
end-to-end deep contrast network to overcome the aforementioned limitations.
Our deep network consists of two complementary components, a pixel-level fully
convolutional stream and a segment-wise spatial pooling stream. The first
stream directly produces a saliency map with pixel-level accuracy from an input
image. The second stream extracts segment-wise features very efficiently, and
better models saliency discontinuities along object boundaries. Finally, a
fully connected CRF model can be optionally incorporated to improve spatial
coherence and contour localization in the fused result from these two streams.
Experimental results demonstrate that our deep model significantly improves the
state of the art.
Comment: To appear in CVPR 2016
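The fusion of the two streams can be illustrated with a minimal sketch. This is an assumption-laden simplification: it broadcasts each segment's score from the segment-wise stream onto its pixels and takes a weighted average with the pixel-level map, omitting the optional CRF refinement entirely.

```python
import numpy as np

def fuse_streams(pixel_map, segment_labels, segment_scores, w=0.5):
    """Fuse a pixel-level saliency map with segment-wise saliency scores.

    pixel_map:      (H, W) saliency in [0, 1] from the pixel-level stream.
    segment_labels: (H, W) integer segment id per pixel.
    segment_scores: (S,) saliency score per segment.
    w:              weight on the pixel-level stream (illustrative).
    """
    # Broadcast each segment's score to all of its pixels via fancy indexing.
    segment_map = segment_scores[segment_labels]
    return w * pixel_map + (1.0 - w) * segment_map
```

In the paper's full pipeline a fully connected CRF would then sharpen the fused map along object boundaries; the weighted average above only conveys the complementary roles of the two streams.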
Do Deep Neural Networks Suffer from Crowding?
Crowding is a visual effect suffered by humans, in which an object that can
be recognized in isolation can no longer be recognized when other objects,
called flankers, are placed close to it. In this work, we study the effect of
crowding in artificial Deep Neural Networks for object recognition. We analyze
both standard deep convolutional neural networks (DCNNs) and a new variant of
DCNNs that is 1) multi-scale and 2) has convolution filter sizes that change
depending on the eccentricity with respect to the center of fixation.
Such networks, that we call eccentricity-dependent, are a computational model
of the feedforward path of the primate visual cortex. Our results reveal that
the eccentricity-dependent model, trained on target objects in isolation, can
recognize such targets in the presence of flankers, if the targets are near the
center of the image, whereas DCNNs cannot. Also, for all tested networks, when
trained on targets in isolation, we find that recognition accuracy of the
networks decreases the closer the flankers are to the target and the more
flankers there are. We find that visual similarity between the target and
flankers also plays a role and that pooling in early layers of the network
leads to more crowding. Additionally, we show that incorporating the flankers
into the images of the training set does not improve performance under crowding.
Comment: CBMM memo
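The stimuli used in crowding experiments of this kind can be sketched as follows. The function and its parameters are illustrative, not taken from the paper: it places a target at the image center and one flanker on each side at a controllable distance, which is the quantity the recognition accuracy is measured against.

```python
import numpy as np

def place_flankers(target, canvas_size, flanker, distance):
    """Compose a test image with a centred target and two flankers placed
    `distance` pixels to its left and right (flankers falling outside the
    canvas are simply skipped).
    """
    H, W = canvas_size
    h, w = target.shape
    canvas = np.zeros((H, W))
    cy, cx = H // 2 - h // 2, W // 2 - w // 2
    canvas[cy:cy + h, cx:cx + w] = target          # centred target
    fh, fw = flanker.shape
    fy = H // 2 - fh // 2
    for fx in (cx - distance - fw, cx + w + distance):
        if 0 <= fx and fx + fw <= W:
            canvas[fy:fy + fh, fx:fx + fw] = flanker
    return canvas
```

Sweeping `distance` (and the number of flankers) over such images while measuring target-recognition accuracy reproduces the kind of experiment the abstract describes.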
Improving Spatial Codification in Semantic Segmentation
This paper explores novel approaches for improving the spatial codification
for the pooling of local descriptors to solve the semantic segmentation
problem. We propose to partition the image into three regions for each object
to be described: Figure, Border and Ground. This partition aims at minimizing
the influence of the image context on the object description and vice versa by
introducing an intermediate zone around the object contour. Furthermore, we
also propose a richer visual descriptor of the object by applying a Spatial
Pyramid over the Figure region. Two novel Spatial Pyramid configurations are
explored: Cartesian-based and crown-based Spatial Pyramids. We test these
approaches with state-of-the-art techniques and show that they improve the
Figure-Ground based pooling in the Pascal VOC 2011 and 2012 semantic
segmentation challenges.
Comment: Paper accepted at the IEEE International Conference on Image
Processing, ICIP 2015. Quebec City, 27-30 September. Project page:
https://imatge.upc.edu/web/publications/improving-spatial-codification-semantic-segmentatio
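The Figure/Border/Ground partition can be sketched from a binary object mask by eroding and dilating it, so that an intermediate Border zone straddles the object contour. This is a minimal numpy sketch under stated assumptions: a 4-connected structuring element and an illustrative `border_width`, neither of which is specified by the abstract.

```python
import numpy as np

def _dilate(mask, iterations):
    # Binary dilation with a 4-connected (cross-shaped) structuring element.
    m = mask.copy()
    for _ in range(iterations):
        p = np.pad(m, 1)
        m = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
             | p[1:-1, :-2] | p[1:-1, 2:])
    return m

def figure_border_ground(mask, border_width=3):
    """Partition an image into Figure / Border / Ground regions from a
    binary object mask, inserting an intermediate zone around the contour.

    Returns an (H, W) label map: 0 = Ground, 1 = Border, 2 = Figure.
    """
    mask = mask.astype(bool)
    inner = ~_dilate(~mask, border_width)  # erosion = dilation of complement
    outer = _dilate(mask, border_width)
    labels = np.zeros(mask.shape, dtype=np.uint8)
    labels[outer] = 1                      # band straddling the contour
    labels[inner] = 2                      # shrunken interior -> Figure
    return labels
```

Local descriptors would then be pooled separately per region, with the Spatial Pyramid (Cartesian- or crown-based) applied only inside the Figure region, as the abstract proposes.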