33,452 research outputs found
Topological Gradient-based Competitive Learning
Topological learning is a broad research area that aims to uncover the mutual spatial relationships between the elements of a set. Some of the most common and oldest approaches involve unsupervised competitive neural networks. However, these methods are not based on gradient optimization, which has been proven to provide striking results in feature extraction, including in unsupervised learning. Unfortunately, by focusing mostly on algorithmic efficiency and accuracy, deep clustering techniques pair overly complex feature extractors with trivial algorithms in their top layer. The aim of this work is to present a novel comprehensive theory that bridges competitive learning with gradient-based learning, thus allowing the use of extremely powerful deep neural networks for feature extraction and projection, combined with the remarkable flexibility and expressiveness of competitive learning. In this paper we fully demonstrate the theoretical equivalence of two novel gradient-based competitive layers. Preliminary experiments show how the dual approach, trained on the transpose of the input matrix, i.e. X^T, leads to a faster convergence rate and higher training accuracy in both low- and high-dimensional scenarios.
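The idea of a gradient-based competitive layer can be illustrated with a minimal sketch: prototypes updated by gradient descent on a hard-competition quantization loss. This is a generic construction, not the paper's actual layers; the function name and the learning rate are illustrative. The dual approach described above would apply the same update to X.T, competing over features instead of samples.

```python
import numpy as np

def gradient_competitive_step(X, W, lr=0.5):
    """One gradient-descent step of a competitive (vector-quantization) layer.

    Minimizes the mean quantization loss
        L(W) = (1/n) * sum_i min_j ||x_i - w_j||^2
    by moving each winning prototype (row of W) toward the samples it wins.
    A generic gradient-based competitive update, not the paper's exact rule.
    """
    n = len(X)
    # Squared distances between samples (rows of X) and prototypes (rows of W)
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1)   # shape (n, k)
    winners = d2.argmin(axis=1)                           # hard competition
    grad = np.zeros_like(W)
    for i, j in enumerate(winners):
        grad[j] += (2.0 / n) * (W[j] - X[i])              # subgradient of the min
    return W - lr * grad

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                             # samples
W = rng.normal(size=(3, 2))                               # prototypes
for _ in range(50):
    W = gradient_competitive_step(X, W)
```

Because the whole update is a (sub)gradient step on a single scalar loss, the layer can sit on top of any differentiable feature extractor and be trained end to end.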
Dynamic learning rates for continual unsupervised learning.
The dilemma between stability and plasticity is crucial in machine learning, especially when non-stationary input
distributions are considered. This issue can be addressed by continual learning in order to alleviate catastrophic forgetting. This
strategy has been previously proposed for supervised and reinforcement learning models. However, little attention has been devoted
to unsupervised learning. This work presents a dynamic learning rate framework for unsupervised neural networks that can handle
non-stationary distributions. So that the model can adapt to the input as its characteristics change, a varying learning rate that depends not merely on the training step but on the reconstruction error is proposed. In the experiments, different configurations of classical competitive neural networks, self-organizing maps, and growing neural gas with either per-neuron or per-network dynamic learning rates have been tested. Experimental results on document clustering tasks demonstrate the suitability of the proposal for real-world problems.
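One way such an error-dependent rate can be realized is sketched below: a winner-take-all update whose step size grows with the winner's reconstruction error. This is an illustration of the idea, not the paper's exact rule; the function name and the saturating form of the rate are assumptions.

```python
import numpy as np

def error_driven_lr_update(x, W, base_lr=0.5):
    """Winner-take-all update with a learning rate that depends on the
    current reconstruction error rather than on the training step.

    Sketch only: the worse the winning unit reconstructs the input, the
    larger the step toward it, bounded above by base_lr. Mutates W in place.
    """
    d2 = ((W - x) ** 2).sum(axis=1)       # squared error of each unit
    j = d2.argmin()                       # best-matching unit
    err = d2[j]                           # winner's reconstruction error
    lr = base_lr * err / (1.0 + err)      # error-dependent rate in [0, base_lr)
    W[j] += lr * (x - W[j])               # move the winner toward the input
    return W, err
```

On a drifting input stream the error, and hence the rate, rises (plasticity), while on familiar data both stay small (stability), which is exactly the trade-off the abstract describes.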
Competitive Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation
We address the unsupervised learning of several interconnected problems in
low-level vision: single view depth prediction, camera motion estimation,
optical flow, and segmentation of a video into the static scene and moving
regions. Our key insight is that these four fundamental vision problems are
coupled through geometric constraints. Consequently, learning to solve them
together simplifies the problem because the solutions can reinforce each other.
We go beyond previous work by exploiting geometry more explicitly and
segmenting the scene into static and moving regions. To that end, we introduce
Competitive Collaboration, a framework that facilitates the coordinated
training of multiple specialized neural networks to solve complex problems.
Competitive Collaboration works much like expectation-maximization, but with
neural networks that act as both competitors to explain pixels that correspond
to static or moving regions, and as collaborators through a moderator that
assigns pixels to be either static or independently moving. Our novel method
integrates all these problems in a common framework and simultaneously reasons
about the segmentation of the scene into moving objects and the static
background, the camera motion, depth of the static scene structure, and the
optical flow of moving objects. Our model is trained without any supervision
and achieves state-of-the-art performance among joint unsupervised methods on
all sub-problems.
Comment: CVPR 201
Dynamic Optimal Training for Competitive Neural Networks
This paper introduces an unsupervised learning algorithm for optimal training of competitive neural networks. The learning rule of this algorithm is derived from the minimization of a new objective criterion using the gradient descent technique. Its learning rate and competition difficulty are dynamically adjusted throughout the iterations. Numerical results that illustrate the performance of this algorithm in unsupervised pattern classification and image compression are also presented, discussed, and compared to those provided by other well-known algorithms on several examples of real test data.
Spiking Inception Module for Multi-layer Unsupervised Spiking Neural Networks
Spiking Neural Network (SNN), as a brain-inspired approach, is attracting
attention due to its potential to produce ultra-high-energy-efficient hardware.
Competitive learning based on Spike-Timing-Dependent Plasticity (STDP) is a
popular method to train an unsupervised SNN. However, previous unsupervised
SNNs trained through this method are limited to a shallow network with only one
learnable layer and cannot achieve satisfactory results when compared with
multi-layer SNNs. In this paper, we ease this limitation as follows: 1) we propose a Spiking Inception (Sp-Inception) module, inspired by the Inception module in the Artificial Neural Network (ANN) literature; this module is trained through STDP-based competitive learning and outperforms the baseline modules in learning capability, learning efficiency, and robustness; 2) we propose a Pooling-Reshape-Activate (PRA) layer to make the Sp-Inception module stackable; 3) we stack multiple Sp-Inception modules to construct multi-layer SNNs. Our algorithm outperforms the baseline algorithms on the hand-written digit classification task and reaches state-of-the-art results on the MNIST dataset among existing unsupervised SNNs.
Comment: Published at the 2020 International Joint Conference on Neural Networks (IJCNN); extended from arXiv:2001.0168
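The STDP-based competitive learning mentioned above rests on a local, spike-timing-dependent weight update. A textbook pair-based STDP rule is sketched below to illustrate the kind of update such modules are trained with; it is not the paper's implementation, and the parameter values are illustrative.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate the synapse if the presynaptic spike
    precedes the postsynaptic one (LTP), depress it otherwise (LTD).
    Times are in milliseconds; the weight is clipped to [0, 1].
    """
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * np.exp(-dt / tau)    # pre before post -> potentiation
    else:
        w -= a_minus * np.exp(dt / tau)    # post before pre -> depression
    return float(np.clip(w, 0.0, 1.0))
```

Competition is typically added via winner-take-all lateral inhibition: only the first neuron to spike for a given input applies this update, so neurons specialize on different input patterns without any labels.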
End-to-End Kernel Learning with Supervised Convolutional Kernel Networks
In this paper, we introduce a new image representation based on a multilayer
kernel machine. Unlike traditional kernel methods where data representation is
decoupled from the prediction task, we learn how to shape the kernel with
supervision. We proceed by first proposing improvements of the
recently-introduced convolutional kernel networks (CKNs) in the context of
unsupervised learning; then, we derive backpropagation rules to take advantage
of labeled training data. The resulting model is a new type of convolutional
neural network, where optimizing the filters at each layer is equivalent to
learning a linear subspace in a reproducing kernel Hilbert space (RKHS). We
show that our method achieves reasonably competitive performance for image
classification on some standard "deep learning" datasets such as CIFAR-10 and
SVHN, and also for image super-resolution, demonstrating the applicability of
our approach to a large variety of image-related tasks.
Comment: to appear in Advances in Neural Information Processing Systems (NIPS)
Deep Hashing Network for Unsupervised Domain Adaptation
In recent years, deep neural networks have emerged as a dominant machine
learning tool for a wide variety of application domains. However, training a
deep neural network requires a large amount of labeled data, which is an
expensive process in terms of time, labor and human expertise. Domain
adaptation or transfer learning algorithms address this challenge by leveraging
labeled data in a different, but related source domain, to develop a model for
the target domain. Further, the explosive growth of digital data has posed a
fundamental challenge concerning its storage and retrieval. Due to its storage
and retrieval efficiency, recent years have witnessed a wide application of
hashing in a variety of computer vision applications. In this paper, we first
introduce a new dataset, Office-Home, to evaluate domain adaptation algorithms.
The dataset contains images of a variety of everyday objects from multiple
domains. We then propose a novel deep learning framework that can exploit
labeled source data and unlabeled target data to learn informative hash codes,
to accurately classify unseen target data. To the best of our knowledge, this
is the first research effort to exploit the feature learning capabilities of
deep neural networks to learn representative hash codes to address the domain
adaptation problem. Our extensive empirical studies on multiple transfer tasks
corroborate the usefulness of the framework in learning efficient hash codes
which outperform existing competitive baselines for unsupervised domain
adaptation.
Comment: CVPR 201