Class-Weighted Convolutional Features for Visual Instance Search
Image retrieval in realistic scenarios targets large dynamic datasets of
unlabeled images. In these cases, training or fine-tuning a model every time
new images are added to the database is neither efficient nor scalable.
Convolutional neural networks trained for image classification over large
datasets have been proven effective feature extractors for image retrieval. The
most successful approaches are based on encoding the activations of
convolutional layers, as they convey the image spatial information. In this
paper, we go beyond this spatial information and propose a local-aware encoding
of convolutional features based on semantic information predicted in the target
image. To this end, we obtain the most discriminative regions of an image using
Class Activation Maps (CAMs). CAMs are based on the knowledge contained in the
network; our approach therefore has the additional advantage of not
requiring external information. In addition, we use CAMs to generate object
proposals during an unsupervised re-ranking stage after a first fast search.
Our experiments on two publicly available instance retrieval datasets,
Oxford5k and Paris6k, demonstrate the competitiveness of our approach, which
outperforms the current state-of-the-art when using off-the-shelf models
trained on ImageNet. The source code and model used in this paper are publicly
available at http://imatge-upc.github.io/retrieval-2017-cam/.
Comment: To appear in the British Machine Vision Conference (BMVC), September 2017.
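The CAM-weighted encoding the abstract describes can be made concrete in a few lines. Below is a minimal sketch, assuming an off-the-shelf torchvision ResNet-50; the top-k class selection, sum-pooling, and l2 normalization are illustrative choices rather than the paper's exact pipeline.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

# Off-the-shelf ImageNet classifier; nothing is fine-tuned for retrieval.
model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1).eval()
backbone = torch.nn.Sequential(*list(model.children())[:-2])  # up to last conv block

def cam_weighted_descriptor(image, top_k=8):
    """Encode one image as a sum of CAM-weighted, sum-pooled conv features."""
    with torch.no_grad():
        x = image.unsqueeze(0)                            # (1, 3, H, W), normalized
        feats = backbone(x)[0]                            # (C, h, w) conv activations
        classes = model(x).topk(top_k, dim=1).indices[0]  # most confident classes
        W = model.fc.weight                               # (1000, C): per-class CAM weights
        descriptors = []
        for c in classes:
            cam = (W[c].view(-1, 1, 1) * feats).sum(0).clamp(min=0)  # (h, w) map
            cam = cam / (cam.max() + 1e-8)                # normalize map to [0, 1]
            d = (feats * cam).sum(dim=(1, 2))             # spatially weighted sum-pooling
            descriptors.append(F.normalize(d, dim=0))
        return F.normalize(torch.stack(descriptors).sum(0), dim=0)
```

The key point is that the classifier's own fully connected weights double as per-channel CAM weights, so the discriminative regions come from knowledge already inside the network, with no external annotation or extra model.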
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 2017.
Online Multi-Stage Deep Architectures for Feature Extraction and Object Recognition
Multi-stage visual architectures have recently found success in achieving high classification accuracies over image datasets with large variations in pose, lighting, and scale. Inspired by techniques currently at the forefront of deep learning, such architectures are typically composed of one or more layers of preprocessing, feature encoding, and pooling to extract features from raw images. Training these components traditionally relies on large sets of patches extracted from a potentially large image dataset. In this context, high-dimensional feature space representations are often helpful for obtaining the best classification performance and providing a higher degree of invariance to object transformations. Large datasets with high-dimensional features, however, complicate the implementation of visual architectures in memory-constrained environments.

This dissertation constructs online learning replacements for the components within a multi-stage architecture and demonstrates that the proposed replacements (namely fuzzy competitive clustering, an incremental covariance estimator, and a multi-layer neural network) can offer performance competitive with their offline batch counterparts while providing a reduced memory footprint. The online nature of this solution allows for the development of a method for adjusting parameters within the architecture via stochastic gradient descent. Testing over multiple datasets shows the potential benefits of this methodology when appropriate priors on the initial parameters are unknown. Alternatives to batch-based decompositions for a whitening preprocessing stage, which take advantage of natural image statistics and allow simple dictionary learners to work well in the problem domain, are also explored. Expansions of the architecture using additional pooling statistics and multiple layers are presented and indicate that larger codebook sizes are not the only path to higher classification accuracies. Experimental results from these expansions further indicate the important role of sparsity and appropriate encodings within multi-stage visual feature extraction architectures.
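Of the proposed online replacements, the incremental covariance estimator is the easiest to make concrete. Here is a minimal sketch using a Welford-style streaming update feeding an assumed ZCA whitening step; the class, method names, and epsilon are hypothetical, not the dissertation's code.

```python
import numpy as np

class IncrementalCovariance:
    """Running mean/covariance, so whitening never needs the full patch set in memory."""

    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.M2 = np.zeros((dim, dim))  # accumulated outer products of deviations

    def update(self, x):
        """Welford-style update with one patch vector x of shape (dim,)."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.M2 += np.outer(delta, x - self.mean)  # uses old and new mean

    def covariance(self):
        return self.M2 / max(self.n - 1, 1)

    def zca_whitener(self, eps=1e-2):
        """ZCA whitening matrix from the current covariance estimate."""
        vals, vecs = np.linalg.eigh(self.covariance())
        return vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
```

Each patch updates the estimate in O(dim^2) memory, independent of how many patches are seen, which is exactly the property an online replacement for a batch whitening decomposition needs.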
Towards Agile Text Classifiers for Everyone
Text-based safety classifiers are widely used for content moderation and
increasingly to tune generative language model behavior - a topic of growing
concern for the safety of digital assistants and chatbots. However, different
policies require different classifiers, and safety policies themselves improve
from iteration and adaptation. This paper introduces and evaluates methods for
agile text classification, whereby classifiers are trained using small,
targeted datasets that can be quickly developed for a particular policy.
Experimenting with seven datasets from three safety-related domains, comprising 15
annotation schemes, led to our key finding: prompt-tuning large language
models, like PaLM 62B, with a labeled dataset of as few as 80 examples can
achieve state-of-the-art performance. We argue that this enables a paradigm
shift for text classification, especially for models supporting safer online
discourse. Instead of collecting millions of examples to attempt to create
universal safety classifiers over months or years, classifiers could be tuned
using small datasets, created by individuals or small organizations, tailored
for specific use cases, and iterated on and adapted in the time-span of a day.
Comment: Findings of EMNLP 2023.
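As a concrete picture of agile prompt-tuning, here is a minimal sketch using Hugging Face's peft library as a stand-in, since PaLM 62B is not publicly tunable; the base model, virtual-token count, learning rate, and toy examples are all illustrative assumptions.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PromptTuningConfig, TaskType, get_peft_model

# Stand-in for PaLM 62B: any Hugging Face sequence classifier works for the sketch.
base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
tok = AutoTokenizer.from_pretrained("roberta-base")

# Soft prompt: only ~20 virtual-token embeddings are trained; the LM stays frozen,
# which is why a small, targeted dataset can specialize it to one policy.
config = PromptTuningConfig(task_type=TaskType.SEQ_CLS, num_virtual_tokens=20)
model = get_peft_model(base, config)

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=3e-3
)
# Tiny toy set; in the paper's setting this would be ~80 policy-specific examples.
examples = [("this post is fine", 0), ("clear policy violation", 1)]

model.train()
for epoch in range(10):
    for text, label in examples:
        batch = tok(text, return_tensors="pt")
        out = model(**batch, labels=torch.tensor([label]))
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

Because gradients only flow into the handful of virtual-token embeddings, tuning takes minutes on a single accelerator, which is what makes the day-scale iterate-and-adapt loop the abstract argues for plausible.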
Learning Finer-class Networks for Universal Representations
Many real-world visual recognition use-cases cannot directly benefit from
state-of-the-art CNN-based approaches because of a lack of annotated
data. The usual approach to deal with this is to transfer a representation
pre-learned on a large annotated source-task onto a target-task of interest.
This raises the question of how "universal" the original representation is,
that is, how directly it adapts to many different target-tasks. To
improve such universality, the state-of-the-art approach consists of training
networks on a diversified source problem, modified by adding either generic or
specific categories to the initial set. In this vein, we propose
a method that exploits finer classes than the most specific existing ones, for
which no annotation is available. We rely on unsupervised learning and a
bottom-up split-and-merge strategy. We show that our method learns more
universal representations than the state-of-the-art, leading to significantly
better results on 10 target-tasks from multiple domains, using several network
architectures, either alone or combined with networks learned at a coarser
semantic level.
Comment: British Machine Vision Conference (BMVC) 2018.
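The bottom-up split-and-merge idea can be sketched in a few lines over pre-extracted features. The k-means split, cosine merge criterion, and threshold below are illustrative assumptions, not the paper's exact strategy.

```python
import numpy as np
from sklearn.cluster import KMeans

def split_and_merge(features, labels, k_split=2, merge_thresh=0.9):
    """Split each annotated class into finer pseudo-classes, then merge near-duplicates."""
    finer_labels = np.full(len(features), -1)
    centroids, next_id = [], 0
    # Split: cluster within each original class to hypothesize finer classes
    # without any extra annotation.
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        km = KMeans(n_clusters=min(k_split, len(idx)), n_init=10).fit(features[idx])
        for sub in range(km.n_clusters):
            finer_labels[idx[km.labels_ == sub]] = next_id
            centroids.append(km.cluster_centers_[sub])
            next_id += 1
    # Merge: collapse finer classes whose centroids are nearly identical (cosine).
    # Greedy single pass for brevity; a union-find would handle merge chains cleanly.
    C = np.stack(centroids)
    C = C / np.linalg.norm(C, axis=1, keepdims=True)
    sim = C @ C.T
    for i in range(len(C)):
        for j in range(i + 1, len(C)):
            if sim[i, j] > merge_thresh:
                finer_labels[finer_labels == j] = i
    return finer_labels
```

The resulting finer pseudo-labels can then serve as the training targets for the source network, which is the sense in which the source problem is diversified without any new annotation.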