Bottom-Up and Top-Down Reasoning with Hierarchical Rectified Gaussians
Convolutional neural nets (CNNs) have demonstrated remarkable performance in
recent years. Such approaches tend to work in a unidirectional, bottom-up,
feed-forward fashion. However, practical experience and biological evidence
tell us that feedback plays a crucial role, particularly for detailed spatial
understanding tasks. This work explores bidirectional architectures that also
reason with top-down feedback: neural units are influenced by both lower and
higher-level units.
We do so by treating units as rectified latent variables in a quadratic
energy function, which can be seen as a hierarchical Rectified Gaussian (RG)
model. We show that RGs can be optimized with a quadratic program (QP), which
can in turn be optimized with a recurrent neural network (with rectified
linear units). This allows RGs to be trained with GPU-optimized gradient
descent. From
a theoretical perspective, RGs help establish a connection between CNNs and
hierarchical probabilistic models. From a practical perspective, RGs are well
suited for detailed spatial tasks that can benefit from top-down reasoning. We
illustrate them on the challenging task of keypoint localization under
occlusions, where local bottom-up evidence may be misleading. We demonstrate
state-of-the-art results on challenging benchmarks.
Comment: To appear in CVPR 2016
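As a rough illustration of the recurrent-network view of inference sketched in this abstract, the toy Python snippet below unrolls bottom-up/top-down updates with rectified linear units for a two-layer model. The weights W1 and W2, the input x, and the number of refinement steps are placeholder assumptions, not the paper's architecture.

```python
import numpy as np

# Toy sketch (assumed shapes and weights): rectified latent units updated
# by both bottom-up and top-down signals, mimicking coordinate updates on
# a quadratic energy with nonnegativity constraints.

def relu(a):
    return np.maximum(a, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal(64)                # toy input features
W1 = 0.1 * rng.standard_normal((32, 64))   # layer-1 weights (assumed)
W2 = 0.1 * rng.standard_normal((16, 32))   # layer-2 weights (assumed)

# bottom-up pass initializes the units, as in a feed-forward CNN
h1 = relu(W1 @ x)
h2 = relu(W2 @ h1)

# unrolled recurrent refinement: each unit now sees both lower-level
# (bottom-up) and higher-level (top-down) activity before rectification
for _ in range(10):
    h1 = relu(W1 @ x + W2.T @ h2)
    h2 = relu(W2 @ h1)
```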
DISCO Nets: DISsimilarity COefficient Networks
We present a new type of probabilistic model which we call DISsimilarity
COefficient Networks (DISCO Nets). DISCO Nets allow us to efficiently sample
from a posterior distribution parametrised by a neural network. During
training, DISCO Nets are learned by minimising the dissimilarity coefficient
between the true distribution and the estimated distribution. This allows us to
tailor the training to the loss related to the task at hand. We empirically
show that (i) by modeling uncertainty on the output value, DISCO Nets
outperform equivalent non-probabilistic predictive networks and (ii) DISCO Nets
accurately model the uncertainty of the output, outperforming existing
probabilistic models based on deep neural networks.
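To make the training objective concrete, here is a hedged sketch of an empirical dissimilarity coefficient for one training example: the expected task loss between the ground truth and K sampled predictions, minus a diversity term over sample pairs. The names disco_loss and delta, the squared-error task loss, and the beta weight are illustrative assumptions rather than the paper's exact estimator.

```python
import numpy as np

def disco_loss(y_true, samples, delta, beta=1.0):
    """Empirical dissimilarity coefficient over K sampled predictions."""
    K = len(samples)
    # expected task loss between the ground truth and each sample
    data_term = sum(delta(y_true, s) for s in samples) / K
    # expected pairwise loss among samples (diversity term)
    pairs = [(i, j) for i in range(K) for j in range(K) if i != j]
    diversity = sum(delta(samples[i], samples[j]) for i, j in pairs) / len(pairs)
    return data_term - 0.5 * beta * diversity

# toy usage with a squared-error task loss; in a real model the samples
# would come from a network fed with different noise vectors
delta = lambda a, b: float(np.sum((a - b) ** 2))
y = np.array([1.0, 2.0])
rng = np.random.default_rng(0)
samples = [y + rng.normal(size=2) for _ in range(4)]
print(disco_loss(y, samples, delta))
```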
To go deep or wide in learning?
To achieve acceptable performance for AI tasks, one can either use
sophisticated feature extraction methods as the first layer in a two-layered
supervised learning model, or learn the features directly using a deep
(multi-layered) model. While the first approach is very problem-specific, the
second approach incurs computational overheads in learning multiple layers
and fine-tuning the model. In this paper, we propose an approach called wide
learning, based on arc-cosine kernels, which learns a single layer of infinite
width. We propose exact and inexact learning strategies for wide learning and
show that wide learning with a single layer outperforms both single-layer and
deep architectures of finite width on some benchmark datasets.
Comment: 9 pages, 1 figure; accepted for publication at the Seventeenth
International Conference on Artificial Intelligence and Statistics
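As a concrete reading of "a single layer of infinite width": the degree-1 arc-cosine kernel of Cho and Saul is the known closed form for an infinitely wide layer of rectified linear units, and plugging it into a kernel method gives one plausible exact learning strategy. The kernel formula below is standard; the kernel-ridge-regression setup and the lam hyperparameter are assumptions for illustration, not necessarily the paper's exact method.

```python
import numpy as np

def arc_cosine_kernel(X, Y):
    """Degree-1 arc-cosine kernel: k(x, y) = (1/pi) * |x||y| * J1(theta),
    with J1(theta) = sin(theta) + (pi - theta) * cos(theta)."""
    nx = np.linalg.norm(X, axis=1, keepdims=True)
    ny = np.linalg.norm(Y, axis=1, keepdims=True)
    cos = np.clip((X @ Y.T) / (nx * ny.T), -1.0, 1.0)
    theta = np.arccos(cos)
    J1 = np.sin(theta) + (np.pi - theta) * np.cos(theta)
    return (nx * ny.T) * J1 / np.pi

# toy regression with kernel ridge on top of the "infinite-width layer";
# lam is an assumed regularization hyperparameter
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)

lam = 1e-2
K = arc_cosine_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # exact solve
y_hat = K @ alpha                                     # in-sample predictions
```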
Weakly Supervised Audio Source Separation via Spectrum Energy Preserved Wasserstein Learning
Separating audio mixtures into individual instrument tracks has been a
long-standing challenge. We introduce a novel weakly supervised audio source
separation approach based on deep adversarial learning. Specifically, our loss
function adopts the Wasserstein distance, which directly measures the
distributional distance between the separated sources and the real sources for
each individual source. Moreover, a global regularization term is added to
enforce the spectrum-energy-preservation property regardless of the
separation. Unlike state-of-the-art weakly supervised models, which often
involve deliberately devised constraints or careful model selection, our
approach needs little prior model specification on the data and can be
learned straightforwardly in an end-to-end fashion. We show that the proposed
method performs competitively against state-of-the-art weakly supervised
methods on public benchmarks.
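A minimal sketch of how such an objective could be assembled, assuming hypothetical per-source critics (each trained, WGAN-style, to estimate the Wasserstein distance to its real-source distribution), spectrogram-shaped arrays, and an assumed regularization weight lam; this illustrates the loss structure, not the paper's implementation.

```python
import numpy as np

def separator_loss(sources, mixture, critics, lam=1.0):
    """Adversarial term per source plus a global energy-preservation term."""
    # each separated source should score highly under its own critic,
    # which stands in for the Wasserstein distance to the real sources
    adv = -sum(critic(s).mean() for critic, s in zip(critics, sources))
    # global regularizer: separated spectra should sum back to the
    # mixture, preserving spectrum energy regardless of the separation
    energy_gap = np.sum(sources, axis=0) - mixture
    return adv + lam * float(np.mean(energy_gap ** 2))

# toy usage with linear critics on magnitude-spectrogram frames
rng = np.random.default_rng(0)
mix = rng.random((4, 128))                   # (frames, bins), assumed shapes
srcs = [0.5 * mix, 0.5 * mix]                # trivial two-way split
crits = [lambda s, w=rng.standard_normal(128): s @ w for _ in range(2)]
print(separator_loss(srcs, mix, crits))
```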