SOM-VAE: Interpretable Discrete Representation Learning on Time Series
High-dimensional time series are common in many domains. Since human
cognition is not optimized to work well in high-dimensional spaces, these areas
could benefit from interpretable low-dimensional representations. However, most
representation learning algorithms for time series data are difficult to
interpret. This is due to non-intuitive mappings from data features to salient
properties of the representation and non-smoothness over time. To address this
problem, we propose a new representation learning framework building on ideas
from interpretable discrete dimensionality reduction and deep generative
modeling. This framework allows us to learn discrete representations of time
series, which give rise to smooth and interpretable embeddings with superior
clustering performance. We introduce a new way to overcome the
non-differentiability in discrete representation learning and present a
gradient-based version of the traditional self-organizing map algorithm that is
more performant than the original. Furthermore, to allow for a probabilistic
interpretation of our method, we integrate a Markov model in the representation
space. This model uncovers the temporal transition structure, improves
clustering performance even further and provides additional explanatory
insights as well as a natural representation of uncertainty. We evaluate our
model in terms of clustering performance and interpretability on static
(Fashion-)MNIST data, a time series of linearly interpolated (Fashion-)MNIST
images, a chaotic Lorenz attractor system with two macro states, as well as on
a challenging real world medical time series application on the eICU data set.
Our learned representations compare favorably with competitor methods and
facilitate downstream tasks on the real world data.
Comment: Accepted for publication at the Seventh International Conference on Learning Representations (ICLR 2019).
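The Markov model that the abstract integrates into the representation space can be sketched as a simple first-order transition estimate over the discrete SOM states. The function name, the toy state sequence, and the additive smoothing prior below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def fit_transition_matrix(state_sequence, n_states, smoothing=1.0):
    """Estimate a first-order Markov transition matrix over discrete
    SOM node assignments observed along a time series.

    smoothing: additive (Laplace) prior so unseen transitions keep
    non-zero probability, giving a simple representation of uncertainty.
    """
    counts = np.full((n_states, n_states), smoothing)
    for prev, nxt in zip(state_sequence[:-1], state_sequence[1:]):
        counts[prev, nxt] += 1.0
    # Normalise each row into a probability distribution over next states.
    return counts / counts.sum(axis=1, keepdims=True)

# Hypothetical sequence of SOM node indices over time
seq = [0, 0, 1, 1, 2, 2, 0]
T = fit_transition_matrix(seq, n_states=3)
print(T.shape)  # (3, 3); each row sums to 1
```

Rows of the resulting matrix can then be inspected to read off the temporal transition structure between learned clusters.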
Topological Neural Discrete Representation Learning à la Kohonen
Unsupervised learning of discrete representations from continuous ones in
neural networks (NNs) is the cornerstone of several applications today. Vector
Quantisation (VQ) has become a popular method to achieve such representations,
in particular in the context of generative models such as Variational
Auto-Encoders (VAEs). For example, the exponential moving average-based VQ
(EMA-VQ) algorithm is often used. Here we study an alternative VQ algorithm
based on the learning rule of Kohonen Self-Organising Maps (KSOMs; 1982) of
which EMA-VQ is a special case. In fact, KSOM is a classic VQ algorithm which
is known to offer two potential benefits over the latter: empirically, KSOM is
known to perform faster VQ, and discrete representations learned by KSOM form a
topological structure on the grid whose nodes are the discrete symbols,
resulting in an artificial version of the topographic map in the brain. We
revisit these properties by using KSOM in VQ-VAEs for image processing. In
particular, our experiments show that, while the speed-up compared to
well-configured EMA-VQ is only observable at the beginning of training, KSOM is
generally much more robust than EMA-VQ, e.g., w.r.t. the choice of
initialisation schemes. Our code is public.
Comment: Two first authors.
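The Kohonen learning rule used here for vector quantisation can be sketched as follows; the function name, grid size, and learning-rate values are illustrative assumptions rather than the paper's configuration. With the neighbourhood width shrunk toward zero, only the best-matching unit is updated, recovering an EMA-VQ / online k-means style rule:

```python
import numpy as np

def ksom_step(codebook, grid_coords, x, lr=0.1, sigma=1.0):
    """One Kohonen SOM update used as vector quantisation (VQ).

    codebook:    (K, D) code vectors (the discrete symbols)
    grid_coords: (K, 2) positions of the symbols on the SOM grid
    x:           (D,) input vector to quantise

    The best-matching unit (BMU) and its grid neighbours move toward x,
    weighted by a Gaussian neighbourhood over grid distance, so the
    learned symbols form a topological structure on the grid.
    """
    bmu = int(np.argmin(((codebook - x) ** 2).sum(axis=1)))
    d2 = ((grid_coords - grid_coords[bmu]) ** 2).sum(axis=1)
    h = np.exp(-d2 / (2.0 * sigma ** 2))          # neighbourhood weights
    codebook += lr * h[:, None] * (x - codebook)  # pull codes toward input
    return bmu

# 2x2 grid of 2-D codes (illustrative sizes)
rng = np.random.default_rng(0)
codes = rng.normal(size=(4, 2))
grid = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
bmu = ksom_step(codes, grid, np.array([1.0, 1.0]))
```

Setting `sigma` very small makes `h` effectively one-hot at the BMU, which is the sense in which EMA-VQ is a special case of this rule.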