Deep Divergence-Based Approach to Clustering
A promising direction in deep learning research consists in learning
representations and simultaneously discovering cluster structure in unlabeled
data by optimizing a discriminative loss function. As opposed to supervised
deep learning, this line of research is in its infancy, and how to design and
optimize suitable loss functions to train deep neural networks for clustering
is still an open question. Our contribution to this emerging field is a new
deep clustering network that leverages the discriminative power of
information-theoretic divergence measures, which have been shown to be
effective in traditional clustering. We propose a novel loss function that
incorporates geometric regularization constraints, thus avoiding degenerate
structures of the resulting clustering partition. Experiments on synthetic
benchmarks and real datasets show that the proposed network achieves
competitive performance with respect to other state-of-the-art methods, scales
well to large datasets, and does not require pre-training steps.
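The core recipe described above — a discriminative clustering loss built from divergence-style terms plus a regularizer against degenerate partitions — can be sketched in miniature. The following is an illustrative toy objective, not the paper's actual loss: per-sample entropy stands in for a divergence term that sharpens soft assignments, and a balance term penalizes collapsing all points into one cluster.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def clustering_loss(logits, reg=0.1):
    """Toy divergence-flavored clustering objective (illustrative only).

    - per-sample entropy pushes soft assignments toward one-hot,
      a crude stand-in for maximizing divergence between clusters;
    - a balance regularizer on the mean assignment discourages the
      degenerate partition where every point lands in one cluster.
    """
    a = softmax(logits)                      # soft assignments, shape (n, k)
    sample_entropy = -np.sum(a * np.log(a + 1e-12), axis=1).mean()
    mean_a = a.mean(axis=0)                  # overall cluster usage
    k = a.shape[1]
    # KL(mean_a || uniform): zero when cluster sizes are perfectly balanced
    balance = np.sum(mean_a * np.log(mean_a * k + 1e-12))
    return sample_entropy + reg * balance
```

A confident, balanced assignment scores lower than both a uniform (uncertain) one and a confident-but-collapsed one, which is the behavior the regularization is meant to enforce.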
Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images
Deep neural networks (DNNs) have recently been achieving state-of-the-art
performance on a variety of pattern-recognition tasks, most notably visual
classification problems. Given that DNNs are now able to classify objects in
images with near-human-level performance, questions naturally arise as to what
differences remain between computer and human vision. A recent study revealed
that changing an image (e.g. of a lion) in a way imperceptible to humans can
cause a DNN to label the image as something else entirely (e.g. mislabeling a
lion a library). Here we show a related result: it is easy to produce images
that are completely unrecognizable to humans, but that state-of-the-art DNNs
believe to be recognizable objects with 99.99% confidence (e.g. labeling with
certainty that white noise static is a lion). Specifically, we take
convolutional neural networks trained to perform well on either the ImageNet or
MNIST datasets and then find images with evolutionary algorithms or gradient
ascent that DNNs label with high confidence as belonging to each dataset class.
It is possible to produce images totally unrecognizable to human eyes that DNNs
believe with near certainty are familiar objects, which we call "fooling
images" (more generally, fooling examples). Our results shed light on
interesting differences between human vision and current DNNs, and raise
questions about the generality of DNN computer vision.
Comment: To appear at CVPR 2015
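The gradient-ascent route to fooling images can be illustrated on a toy model. The sketch below is an assumption-laden stand-in (a random linear softmax "classifier" over a 10-pixel input, not a trained CNN): starting from near-noise, it ascends the gradient of the target class's log-probability with respect to the *input*, driving confidence toward certainty on an input no human would recognize as anything.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: fixed random weights for a
# linear softmax model over a 10-pixel "image" and 3 classes.
W = rng.normal(size=(10, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fool(target, steps=1000, lr=0.05):
    """Gradient ascent on the INPUT to maximize target-class confidence."""
    x = rng.normal(size=10) * 0.01           # start from near-noise
    for _ in range(steps):
        p = softmax(x @ W)
        # gradient of log p[target] w.r.t. x for a linear softmax model
        grad = W[:, target] - W @ p
        x += lr * grad
    return softmax(x @ W)[target]
```

Because the input is unconstrained, the ascent reliably drives the model's confidence in the target class close to 1 even though the resulting input is structureless noise — the essence of a fooling example.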
Input Fast-Forwarding for Better Deep Learning
This paper introduces a new architectural framework, known as input
fast-forwarding, that can enhance the performance of deep networks. The main
idea is to incorporate a parallel path that sends representations of input
values forward to deeper network layers. This scheme is substantially different
from "deep supervision" in which the loss layer is re-introduced to earlier
layers. The parallel path provided by fast-forwarding enhances the training
process in two ways. First, it enables the individual layers to combine
higher-level information (from the standard processing path) with lower-level
information (from the fast-forward path). Second, this new architecture reduces
the problem of vanishing gradients substantially because the fast-forwarding
path provides a shorter route for gradient backpropagation. In order to
evaluate the utility of the proposed technique, a Fast-Forward Network (FFNet),
with 20 convolutional layers along with parallel fast-forward paths, has been
created and tested. The paper presents empirical results that demonstrate
improved learning capacity of FFNet due to fast-forwarding, as compared to
GoogLeNet (with deep supervision) and CaffeNet, which are 4x and 18x larger in
size, respectively. All of the source code and deep learning models described
in this paper will be made available to the entire research community.
Comment: Accepted at the 14th International Conference on Image Analysis and Recognition (ICIAR) 2017, Montreal, Canada
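The two-path idea can be shown with a minimal forward pass. This is a hypothetical sketch (weight names and sizes are invented, not FFNet's architecture): a cheap projection of the raw input is carried forward in parallel with the standard path, and a deeper layer consumes the concatenation of both, so higher-level features are combined with low-level input information and gradients have a shorter route back to the input.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, W1, W2, Wff, W3):
    """Two-path forward pass (illustrative; all weights are hypothetical).

    Standard path:      x -> h1 -> h2
    Fast-forward path:  x -> f   (one cheap projection of the input)
    A deeper layer sees [h2, f]: higher-level features plus a short
    route back to the raw input values.
    """
    h1 = relu(x @ W1)            # standard path, layer 1
    h2 = relu(h1 @ W2)           # standard path, layer 2
    f = relu(x @ Wff)            # fast-forward projection of the input
    merged = np.concatenate([h2, f])
    return merged @ W3           # deeper layer consumes both paths
```

Note how this differs from deep supervision: no auxiliary loss is attached to the early layers; the input representation itself is routed forward.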
Learning Edge Representations via Low-Rank Asymmetric Projections
We propose a new method for embedding graphs while preserving directed edge
information. Learning such continuous-space vector representations (or
embeddings) of nodes in a graph is an important first step for using network
information (from social networks, user-item graphs, knowledge bases, etc.) in
many machine learning tasks.
Unlike previous work, we (1) explicitly model an edge as a function of node
embeddings, and we (2) propose a novel objective, the "graph likelihood", which
contrasts information from sampled random walks with non-existent edges.
Individually, both of these contributions improve the learned representations,
especially when there are memory constraints on the total size of the
embeddings. When combined, our contributions enable us to significantly improve
the state-of-the-art by learning more concise representations that better
preserve the graph structure.
We evaluate our method on a variety of link-prediction tasks, including social
networks, collaboration networks, and protein interactions, showing that our
proposed method learns representations with error reductions of up to 76% and
55% on directed and undirected graphs, respectively. In addition, we show that
the representations learned by our method are quite space-efficient, producing
embeddings which have higher structure-preserving accuracy but are 10 times
smaller.
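Modeling an edge as an explicit function of node embeddings, with direction preserved, can be sketched as follows. The functional form here is an assumption for illustration: a low-rank matrix M = L @ R scores a directed pair through a bilinear map, and because M is not symmetric, score(u → v) generally differs from score(v → u).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def edge_score(emb, L, R, u, v):
    """Directed edge score via a low-rank asymmetric projection.

    Hypothetical form: score(u -> v) = sigmoid(emb[u] @ (L @ R) @ emb[v]),
    where M = L @ R has rank at most r << d and is not symmetric, so
    score(u -> v) != score(v -> u) in general -- the edge is an explicit
    function of the node embeddings, and edge direction is preserved.
    """
    M = L @ R                     # (d, d) matrix of rank <= r
    return sigmoid(emb[u] @ M @ emb[v])
```

Storing L and R instead of a full d-by-d matrix is what keeps the parameterization concise under memory constraints.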
Collaborative Layer-wise Discriminative Learning in Deep Neural Networks
Intermediate features at different layers of a deep neural network are known
to be discriminative for visual patterns of different complexities. However,
most existing works ignore such cross-layer heterogeneities when classifying
samples of different complexities. For example, if a training sample has
already been correctly classified at a specific layer with high confidence, we
argue that it is unnecessary to force the remaining layers to classify this
sample correctly; a better strategy is to encourage those layers to focus on
other samples.
In this paper, we propose a layer-wise discriminative learning method to
enhance the discriminative capability of a deep network by allowing its layers
to work collaboratively for classification. Towards this target, we introduce
multiple classifiers on top of multiple layers. Each classifier not only tries
to correctly classify the features from its input layer, but also coordinates
with other classifiers to jointly maximize the final classification
performance. Guided by the other companion classifiers, each classifier learns
to concentrate on certain training examples and boosts the overall performance.
Allowing for end-to-end training, our method can be conveniently embedded into
state-of-the-art deep networks. Experiments with multiple popular deep
networks, including Network in Network, GoogLeNet and VGGNet, on object
classification benchmarks of various scales, including CIFAR100, MNIST and
ImageNet, and scene classification benchmarks, including MIT67, SUN397 and
Places205,
demonstrate the effectiveness of our method. In addition, we also analyze the
relationship between the proposed method and classical conditional random
fields models.
Comment: To appear in ECCV 2016; may be subject to minor changes before the camera-ready version.
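The collaborative weighting idea — later classifiers deprioritizing samples that an earlier layer already handles confidently — can be captured in a toy loss. The weighting scheme below is an invented simplification, not the paper's formulation: each layer's cross-entropy for a sample is scaled down by the best correct confidence any earlier layer achieved on it.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def collaborative_loss(layer_logits, label):
    """Toy collaborative layer-wise loss (illustrative weighting only).

    layer_logits: list of per-layer classifier logits for one sample.
    Each layer's cross-entropy is down-weighted by how confidently an
    earlier layer already classified the sample correctly, freeing
    later classifiers to concentrate on samples still being missed.
    """
    total, handled = 0.0, 0.0    # 'handled' = best correct confidence so far
    for logits in layer_logits:
        p = softmax(logits)
        ce = -np.log(p[label] + 1e-12)
        total += (1.0 - handled) * ce
        if np.argmax(p) == label:
            handled = max(handled, p[label])
    return total
```

A sample nailed by the first classifier contributes almost nothing at later layers, while a sample every layer misses keeps its full loss throughout — the cross-layer division of labor the abstract argues for.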