On the representativeness of convolutional neural networks layers
Convolutional Neural Networks (CNNs) are the most popular deep network models, owing to their applicability and success in image processing. Although plenty of effort has been put into designing and training better discriminative CNNs, little is yet known about the internal features these models learn. Questions such as what specific knowledge is encoded within CNN layers, and how it can be used for purposes other than discrimination, remain to be answered. To advance the resolution of these questions, in this work we extract features from CNN layers, building vector representations from CNN activations. The resulting vector embedding is used to represent first images and then known image classes. On these representations we perform an unsupervised clustering process, with the goal of studying the hidden semantics captured in the embedding space. Several abstract entities the network was never taught emerge in this process, effectively defining a taxonomy of knowledge as perceived by the CNN. We evaluate and interpret these sets using WordNet, while studying the different behaviours exhibited by the layers of a CNN model according to their depth. Our results indicate that, while top (i.e., deeper) layers provide the most representative space, low layers also define descriptive dimensions. This work was partially supported by the IBM/BSC Technology Center for Supercomputing (Joint Study Agreement, No. W156463), by the Spanish Government through Programa Severo Ochoa (SEV-2015-0493), by the Spanish Ministry of Science and Technology through the TIN2015-65316-P project and by the Generalitat de Catalunya (contracts 2014-SGR-1051).
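As a rough illustration of the pipeline this abstract describes, the sketch below embeds images using the activations of one convolutional layer of a pretrained network and clusters the resulting vectors. The model (VGG-16), the layer index, the image paths, and the number of clusters are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch: embed each image with activations from one CNN layer,
# then cluster the embeddings to inspect the semantics they capture.
# Model, layer index, paths, and k are illustrative assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.cluster import KMeans
from PIL import Image

model = models.vgg16(weights="IMAGENET1K_V1").eval()
preprocess = T.Compose([
    T.Resize((224, 224)), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def layer_embedding(image_path, layer_index=28):
    """Average-pool one convolutional layer's activations into a vector."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feats = model.features[:layer_index + 1](x)   # (1, C, H, W)
    return feats.mean(dim=(2, 3)).squeeze(0).numpy()  # (C,)

# Cluster the per-image embeddings; the emergent groups can then be
# interpreted (e.g., against WordNet) as the abstract describes.
paths = ["img1.jpg", "img2.jpg", "img3.jpg"]          # hypothetical image paths
embeddings = [layer_embedding(p) for p in paths]
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(embeddings)
```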
User Constrained Thumbnail Generation using Adaptive Convolutions
Thumbnails are widely used all over the world as a preview for digital
images. In this work we propose a deep neural framework to generate thumbnails
of any size and aspect ratio, even for values unseen during training, with high
accuracy and precision. We use Global Context Aggregation (GCA) and a modified
Region Proposal Network (RPN) with adaptive convolutions to generate thumbnails
in real time. GCA is used to selectively attend and aggregate the global
context information from the entire image while the RPN is used to predict
candidate bounding boxes for the thumbnail image. Adaptive convolution
eliminates the problem of generating thumbnails of various aspect ratios by
using filter weights dynamically generated from the aspect ratio information.
The experimental results indicate the superior performance of the proposed
model over existing state-of-the-art techniques.Comment: International Conference on Acoustics, Speech, and Signal
Processing(ICASSP), 201
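One way to read the adaptive-convolution idea is sketched below: a small fully connected network produces the convolution's filter weights from the requested aspect ratio, so a single model can serve thumbnail shapes unseen during training. The layer sizes and tensor shapes are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of an "adaptive convolution" whose filter weights are
# generated dynamically from the target aspect ratio.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, k
        # Small network mapping the scalar aspect ratio to a full weight tensor.
        self.weight_gen = nn.Sequential(
            nn.Linear(1, 64), nn.ReLU(),
            nn.Linear(64, out_ch * in_ch * k * k),
        )

    def forward(self, x, aspect_ratio):
        w = self.weight_gen(aspect_ratio.view(1, 1))
        w = w.view(self.out_ch, self.in_ch, self.k, self.k)
        return F.conv2d(x, w, padding=self.k // 2)

layer = AdaptiveConv2d(in_ch=256, out_ch=256)
feat = torch.randn(1, 256, 32, 32)           # feature map from a backbone (assumed)
out = layer(feat, torch.tensor(16.0 / 9.0))  # filters conditioned on a 16:9 target
```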
Deep Active Learning for Named Entity Recognition
Deep learning has yielded state-of-the-art performance on many natural
language processing tasks including named entity recognition (NER). However,
this typically requires large amounts of labeled data. In this work, we
demonstrate that the amount of labeled training data can be drastically reduced
when deep learning is combined with active learning. While active learning is
sample-efficient, it can be computationally expensive since it requires
iterative retraining. To speed this up, we introduce a lightweight architecture
for NER, viz., the CNN-CNN-LSTM model consisting of convolutional character and
word encoders and a long short-term memory (LSTM) tag decoder. The model
achieves nearly state-of-the-art performance on standard datasets for the task
while being computationally much more efficient than the best-performing models. We
carry out incremental active learning during the training process and are
able to nearly match state-of-the-art performance with just 25% of the
original training data.
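The generic active-learning loop the abstract builds on can be sketched as follows. The least-confidence scoring rule, the query batch size, and the `model.fit` / `best_sequence_prob` / `oracle.annotate` helpers are hypothetical stand-ins for the paper's CNN-CNN-LSTM tagger and annotation process.

```python
# Hedged sketch of one uncertainty-sampling round of active learning.
import numpy as np

def active_learning_round(model, labeled, unlabeled, oracle, batch_size=100):
    """One round: retrain on the labeled set, then query the most uncertain
    sentences. `model` and `oracle` are hypothetical stand-ins."""
    model.fit(labeled)                                   # retrain the tagger
    # Least-confidence score: 1 - probability of the model's best tag sequence.
    scores = np.array([1.0 - model.best_sequence_prob(x) for x in unlabeled])
    query = np.argsort(-scores)[:batch_size]             # most uncertain first
    labeled.extend(oracle.annotate(unlabeled[i]) for i in query)
    picked = {int(i) for i in query}
    remaining = [x for i, x in enumerate(unlabeled) if i not in picked]
    return labeled, remaining
```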
An accurate retrieval through R-MAC+ descriptors for landmark recognition
The landmark recognition problem is far from being solved, but with the use
of features extracted from intermediate layers of Convolutional Neural Networks
(CNNs), excellent results have been obtained. In this work, we propose some
improvements on the creation of R-MAC descriptors in order to make the
newly-proposed R-MAC+ descriptors more representative than the previous ones.
However, the main contribution of this paper is a novel retrieval technique,
that exploits the fine representativeness of the MAC descriptors of the
database images. Using these descriptors, called "db regions", during the
retrieval stage greatly improves performance. The proposed method is
retrieval stage, the performance is greatly improved. The proposed method is
tested on different public datasets: Oxford5k, Paris6k and Holidays. It
outperforms the state-of-the-art results on Holidays and achieves excellent
results on Oxford5k and Paris6k, surpassed only by approaches based on
fine-tuning strategies.
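For readers unfamiliar with MAC/R-MAC descriptors, a simplified version is sketched below: max-pool a convolutional feature map over a grid of regions at several scales, L2-normalize each regional vector, sum, and normalize again. The region grid is a coarse approximation of the standard R-MAC sampling, and PCA-whitening is omitted.

```python
# Simplified R-MAC-style descriptor over a (C, H, W) conv feature map.
import numpy as np

def l2n(v, eps=1e-8):
    return v / (np.linalg.norm(v) + eps)

def rmac(feat, scales=(1, 2, 3)):
    """feat: (C, H, W) activations from the last convolutional layer."""
    C, H, W = feat.shape
    desc = np.zeros(C)
    for s in scales:
        hs, ws = H // s, W // s                    # region size at this scale
        for i in range(s):
            for j in range(s):
                region = feat[:, i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]
                desc += l2n(region.max(axis=(1, 2)))   # MAC of the region
    return l2n(desc)

descriptor = rmac(np.random.rand(512, 24, 32))   # stand-in for real activations
```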
Fast-AT: Fast Automatic Thumbnail Generation using Deep Neural Networks
Fast-AT is an automatic thumbnail generation system based on deep neural
networks. It is a fully-convolutional deep neural network, which learns
specific filters for thumbnails of different sizes and aspect ratios. During
inference, the appropriate filter is selected depending on the dimensions of
the target thumbnail. Unlike most previous work, Fast-AT does not utilize
saliency but addresses the problem directly. In addition, it eliminates the
need to conduct region search on the saliency map. The model generalizes to
thumbnails of different sizes including those with extreme aspect ratios and
can generate thumbnails in real time. A data set of more than 70,000 thumbnail
annotations was collected to train Fast-AT. We show competitive results in
comparison to existing techniques.
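A minimal sketch of the filter-selection idea is given below: the network keeps one prediction head per aspect-ratio bucket and, at inference, picks the head closest to the requested thumbnail's aspect ratio. The bucket set, head design, and tensor shapes are illustrative assumptions rather than the Fast-AT architecture.

```python
# Hedged sketch: per-aspect-ratio heads, selected from target dimensions.
import torch
import torch.nn as nn

class ThumbnailHeads(nn.Module):
    """One 1x1 conv head per aspect-ratio bucket; the head matching the
    requested thumbnail dimensions is selected at inference time."""
    def __init__(self, in_ch=256, ratios=(0.5, 1.0, 1.5, 2.0)):
        super().__init__()
        self.register_buffer("ratios", torch.tensor(ratios))
        # Each head regresses thumbnail crop-box parameters (x, y, w, h).
        self.heads = nn.ModuleList(
            nn.Conv2d(in_ch, 4, kernel_size=1) for _ in ratios)

    def forward(self, feat, target_w, target_h):
        ratio = target_w / target_h
        idx = int(torch.argmin((self.ratios - ratio).abs()))
        return self.heads[idx](feat)

heads = ThumbnailHeads()
feat = torch.randn(1, 256, 14, 14)                # backbone feature map (assumed)
box_map = heads(feat, target_w=120, target_h=90)  # picks the head nearest 4:3
```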
Mid-level Deep Pattern Mining
Mid-level visual element discovery aims to find clusters of image patches
that are both representative and discriminative. In this work, we study this
problem from the perspective of pattern mining while relying on the recently
popularized Convolutional Neural Networks (CNNs). Specifically, we find that
for an image patch, activations extracted from the first fully-connected layer
of CNNs have two appealing properties which enable their seamless integration
with pattern mining. Patterns are then discovered from a large number of CNN
activations of image patches through the well-known association rule mining.
When we retrieve and visualize image patches with the same pattern,
surprisingly, they are not only visually similar but also semantically
consistent. We apply our approach to scene and object classification tasks, and
demonstrate that our approach outperforms all previous works on mid-level
visual element discovery by a sizeable margin with far fewer elements being
used. Our approach also outperforms or matches recent works using CNNs for these
tasks. Source code of the complete system is available online.
Comment: Published in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 201
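The pattern-mining step can be approximated as follows: the indices of each patch's strongest fully connected activations form a "transaction", and frequent itemsets are mined with Apriori. The value of k, the support threshold, and the use of mlxtend are illustrative choices, with random activations as a stand-in for real CNN features.

```python
# Hedged sketch: association-rule-style mining over binarized CNN activations.
import numpy as np
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori

def to_transaction(fc_activation, k=20):
    """The indices of the k largest activations act as the patch's 'items'."""
    return [int(i) for i in np.argsort(-fc_activation)[:k]]

# One row per image patch; random values stand in for real first-FC-layer
# activations (e.g., a 4096-d vector per patch).
fc_activations = np.random.rand(1000, 4096)
transactions = [to_transaction(a) for a in fc_activations]

encoder = TransactionEncoder()
onehot = pd.DataFrame(encoder.fit(transactions).transform(transactions),
                      columns=encoder.columns_)
# Frequent activation patterns; each pattern groups patches that fire the
# same set of neurons, which the paper finds to be semantically consistent.
patterns = apriori(onehot, min_support=0.01, use_colnames=True)
```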
Suggestive Annotation: A Deep Active Learning Framework for Biomedical Image Segmentation
Image segmentation is a fundamental problem in biomedical image analysis.
Recent advances in deep learning have achieved promising results on many
biomedical image segmentation benchmarks. However, due to large variations in
biomedical images (different modalities, image settings, objects, noise, etc.),
applying deep learning to a new application usually requires a new set of
training data. This can incur a great deal of annotation effort and cost,
because only biomedical experts can annotate effectively, and often there are
too many instances in images (e.g., cells) to annotate. In this paper, we aim
to address the following question: With limited effort (e.g., time) for
annotation, what instances should be annotated in order to attain the best
performance? We present a deep active learning framework that combines a fully
convolutional network (FCN) with active learning to significantly reduce
annotation effort by making judicious suggestions on the most effective
annotation areas. We utilize uncertainty and similarity information provided by
FCN and formulate a generalized version of the maximum set cover problem to
determine the most representative and uncertain areas for annotation. Extensive
experiments using the 2015 MICCAI Gland Challenge dataset and a lymph node
ultrasound image segmentation dataset show that, using annotation suggestions
by our method, state-of-the-art segmentation performance can be achieved by
using only 50% of the training data.
Comment: Accepted at MICCAI 201
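A simplified version of the suggestion step is sketched below: among the most uncertain images, greedily pick a subset that covers the unlabeled pool, where an image counts as covered if it is similar enough to an already selected one. The similarity threshold, candidate-pool size, and budget are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: greedy max-cover selection over uncertain candidates.
import numpy as np

def suggest_annotations(similarity, uncertainty, budget=16, top_uncertain=64,
                        sim_threshold=0.8):
    """similarity: (N, N) pairwise similarities (e.g., cosine over FCN features);
    uncertainty: (N,) per-image uncertainty scores (e.g., from an FCN ensemble)."""
    candidates = [int(i) for i in np.argsort(-uncertainty)[:top_uncertain]]
    selected = []
    covered = np.zeros(len(uncertainty), dtype=bool)
    for _ in range(budget):
        # Greedy step: pick the candidate that newly covers the most images.
        gains = [np.sum((similarity[c] >= sim_threshold) & ~covered)
                 for c in candidates]
        best = candidates[int(np.argmax(gains))]
        selected.append(best)
        covered |= similarity[best] >= sim_threshold
        candidates.remove(best)
    return selected

N = 200
sim = np.random.rand(N, N)
sim = (sim + sim.T) / 2                       # symmetric similarity stand-in
picks = suggest_annotations(sim, np.random.rand(N))
```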