Generalized multi-scale stacked sequential learning for multi-class classification
In many classification problems, neighbor data labels have inherent sequential relationships. Sequential learning algorithms benefit from these relationships in order to improve generalization. In this paper, we revise the multi-scale sequential learning approach (MSSL) to apply it to the multi-class case (MMSSL). We introduce the error-correcting output codes framework in the MSSL classifiers and propose a formulation for calculating confidence maps from the margins of the base classifiers. In addition, we propose an MMSSL compression approach which reduces the number of features in the extended data set without a loss in performance. The proposed methods are tested on several databases, showing significant performance improvement compared to classical approaches.
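As an illustrative sketch (not the paper's exact formulation), the following shows how an error-correcting output codes (ECOC) ensemble can be decoded and how base-classifier margins can be turned into per-class confidences, in the spirit of the confidence maps described above. The coding matrix, the logistic squashing, and the L1 decoding are all assumptions for illustration.

```python
import numpy as np

def ecoc_decode(margins, coding_matrix):
    """margins: (n_samples, n_binary) raw margins of the binary classifiers.
    coding_matrix: (n_classes, n_binary) with entries in {-1, +1}.
    Returns predicted class per sample and a soft per-class confidence map."""
    # Soft "bit" in (0, 1) from each margin via a logistic squashing.
    probs = 1.0 / (1.0 + np.exp(-margins))                  # (n, b)
    targets = (coding_matrix + 1) / 2.0                     # {-1,+1} -> {0,1}
    # Negative L1 distance to each class codeword acts as a class score.
    scores = -np.abs(probs[:, None, :] - targets[None, :, :]).sum(axis=2)
    # Softmax over class scores gives a normalized confidence map.
    confidence = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return scores.argmax(axis=1), confidence

# Toy example: 3 classes with a one-vs-all coding (3 binary problems).
coding = np.array([[ 1, -1, -1],
                   [-1,  1, -1],
                   [-1, -1,  1]], dtype=float)
margins = np.array([[ 2.0, -1.5, -0.5],   # strongly class 0
                    [-1.0,  0.2,  1.8]])  # leaning class 2
pred, conf = ecoc_decode(margins, coding)  # pred -> [0, 2]
```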
Modelling Sequential Music Track Skips using a Multi-RNN Approach
Modelling sequential music skips provides streaming companies the ability to
better understand the needs of the user base, resulting in a better user
experience by reducing the need to manually skip certain music tracks. This
paper describes the solution of the University of Copenhagen DIKU-IR team in
the 'Spotify Sequential Skip Prediction Challenge', where the task was to
predict the skip behaviour of the second half in a music listening session
conditioned on the first half. We model this task using a Multi-RNN approach
consisting of two distinct stacked recurrent neural networks, where one network
focuses on encoding the first half of the session and the other network focuses
on utilizing the encoding to make sequential skip predictions. The encoder
network is initialized by a learned session-wide music encoding, and both of
them utilize a learned track embedding. Our final model is a majority-voted
ensemble of individually trained models and ranked 2nd out of 45
participating teams in the competition with a mean average accuracy of 0.641
and an accuracy on the first skip prediction of 0.807. Our code is released at
https://github.com/Varyn/WSDM-challenge-2019-spotify.
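The reported scores can be made concrete with a small sketch of the session-level metric. Under our reading of the challenge's definition, a session's average accuracy is AA = (1/T) * sum_i A(i) * L(i), where A(i) is the accuracy of the first i predictions and L(i) indicates whether prediction i itself is correct; the mean average accuracy averages AA over test sessions.

```python
def average_accuracy(pred, truth):
    """Session-level average accuracy (sketch of the challenge metric)."""
    assert len(pred) == len(truth)
    correct_so_far, total = 0, 0.0
    for i, (p, t) in enumerate(zip(pred, truth), start=1):
        if p == t:
            correct_so_far += 1
            total += correct_so_far / i   # A(i) * L(i), with L(i) = 1 here
    return total / len(pred)

# Perfect predictions give AA = 1.0; an early mistake is penalized more
# than a late one, since it lowers A(i) for every later correct position.
print(average_accuracy([1, 0, 1], [1, 0, 1]))  # 1.0
print(average_accuracy([0, 0, 1], [1, 0, 1]))  # (1/2 + 2/3) / 3
```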
Tensorized Self-Attention: Efficiently Modeling Pairwise and Global Dependencies Together
Neural networks equipped with self-attention have parallelizable computation,
light-weight structure, and the ability to capture both long-range and local
dependencies. Further, their expressive power and performance can be boosted by
using a vector to measure pairwise dependency, but this requires expanding
the alignment matrix into a tensor, which results in memory and computation
bottlenecks. In this paper, we propose a novel attention mechanism called
"Multi-mask Tensorized Self-Attention" (MTSA), which is as fast and as
memory-efficient as a CNN, but significantly outperforms previous
CNN-/RNN-/attention-based models. MTSA 1) captures both pairwise (token2token)
and global (source2token) dependencies by a novel compatibility function
composed of dot-product and additive attentions, 2) uses a tensor to represent
the feature-wise alignment scores for better expressive power but only requires
parallelizable matrix multiplications, and 3) combines multi-head with
multi-dimensional attentions, and applies a distinct positional mask to each
head (subspace), so the memory and computation can be distributed to multiple
heads, each with sequential information encoded independently. The experiments
show that a CNN/RNN-free model based on MTSA achieves state-of-the-art or
competitive performance on nine NLP benchmarks with compelling memory- and
time-efficiency.
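A hedged sketch of the kind of compatibility function described above: a dot-product term scores pairwise (token2token) dependencies, while an additive, parameterized term contributes global (source2token) scores. The shapes, the tanh additive branch, and the way the two terms are combined are illustrative assumptions, not the paper's code.

```python
import numpy as np

def combined_attention(Q, K, V, w):
    """Q, K, V: (n, d) query/key/value matrices; w: (d,) additive-branch params."""
    d = Q.shape[1]
    pairwise = Q @ K.T / np.sqrt(d)              # (n, n) token2token scores
    global_scores = np.tanh(K) @ w               # (n,)  source2token scores
    scores = pairwise + global_scores[None, :]   # broadcast the additive term
    # Numerically stable softmax over the key axis.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V                           # (n, d) attended output

rng = np.random.default_rng(0)
n, d = 4, 8
out = combined_attention(rng.normal(size=(n, d)), rng.normal(size=(n, d)),
                         rng.normal(size=(n, d)), rng.normal(size=d))
```

Only plain matrix multiplications are needed, which is the property MTSA exploits to keep the tensorized scores parallelizable.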
Stacking-based Deep Neural Network: Deep Analytic Network on Convolutional Spectral Histogram Features
Stacking-based deep neural network (S-DNN) generally denotes an architecture
that resembles a deep neural network (DNN) in its very deep, feedforward
structure. The typical S-DNN aggregates a variable number of individually
learnable modules in series to assemble a DNN-like alternative for the
targeted object recognition tasks. This work likewise devises an S-DNN
instantiation,
dubbed deep analytic network (DAN), on top of the spectral histogram (SH)
features. The DAN learning principle relies on ridge regression, and some key
DNN constituents, specifically, rectified linear unit, fine-tuning, and
normalization. The DAN aptitude is scrutinized on three repositories of varying
domains, including FERET (faces), MNIST (handwritten digits), and CIFAR10
(natural objects). The empirical results reveal that DAN improves on the SH
baseline performance when stacked sufficiently deep.
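A minimal sketch of a ridge-regression stacking step in the spirit of DAN: each module solves a closed-form ridge regression to one-hot class targets, applies a ReLU, and its output (here concatenated with the input) feeds the next module. The feature re-use, the random stand-in features, and the omission of normalization and fine-tuning are simplifying assumptions.

```python
import numpy as np

def ridge_fit(X, Y, lam=1e-2):
    """Closed-form ridge regression: W = (X'X + lam*I)^{-1} X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def dan_layer(X, Y_onehot, lam=1e-2):
    """One analytically learned module: ridge solution followed by ReLU."""
    W = ridge_fit(X, Y_onehot, lam)
    return np.maximum(X @ W, 0.0)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 16))              # stand-in for SH features
y = rng.integers(0, 3, size=100)
Y = np.eye(3)[y]                            # one-hot class targets
H1 = dan_layer(X, Y)                        # first analytic module
H2 = dan_layer(np.hstack([X, H1]), Y)       # stack a second module on top
pred = H2.argmax(axis=1)
```

Because each module has a closed-form solution, the whole stack trains without backpropagation, which is the appeal of the analytic-learning design.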