
    SOM-VAE: Interpretable Discrete Representation Learning on Time Series

    High-dimensional time series are common in many domains. Since human cognition is not optimized to work well in high-dimensional spaces, these areas could benefit from interpretable low-dimensional representations. However, most representation learning algorithms for time series data are difficult to interpret, due to non-intuitive mappings from the data features to salient properties of the representation and to non-smoothness over time. To address this problem, we propose a new representation learning framework that builds on ideas from interpretable discrete dimensionality reduction and deep generative modeling. The framework allows us to learn discrete representations of time series that give rise to smooth, interpretable embeddings with superior clustering performance. We introduce a new way to overcome the non-differentiability of discrete representation learning and present a gradient-based version of the traditional self-organizing map algorithm that outperforms the original. Furthermore, to allow a probabilistic interpretation of our method, we integrate a Markov model in the representation space. This model uncovers the temporal transition structure, further improves clustering performance, and provides additional explanatory insights as well as a natural representation of uncertainty. We evaluate our model in terms of clustering performance and interpretability on static (Fashion-)MNIST data, a time series of linearly interpolated (Fashion-)MNIST images, a chaotic Lorenz attractor system with two macro states, and a challenging real-world medical time series application on the eICU data set. Our learned representations compare favorably with competitor methods and facilitate downstream tasks on the real-world data.
    Comment: Accepted for publication at the Seventh International Conference on Learning Representations (ICLR 2019)
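    The gradient-based self-organizing map the abstract mentions can be illustrated in a few lines. The following is a minimal sketch, not the authors' SOM-VAE implementation: it nudges the winning codebook node and its grid neighbors toward each encoding by gradient descent on a squared-distance loss. The grid size, learning rate, and toy encodings are all assumptions.

```python
# Minimal sketch of a gradient-based self-organizing map update, in the spirit
# of the SOM-VAE idea (not the authors' code). Shapes and names are assumptions.
import numpy as np

rng = np.random.default_rng(0)
H, W, d = 4, 4, 2                      # 4x4 SOM grid of d-dimensional nodes
codebook = rng.normal(size=(H, W, d))  # e_{i,j}: trainable grid nodes
lr = 0.1

def grid_neighbors(i, j):
    """Winner plus its up/down/left/right neighbors on the grid."""
    cand = [(i, j), (i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return [(a, b) for a, b in cand if 0 <= a < H and 0 <= b < W]

def som_step(z):
    """One gradient step on the squared distance between the encoding z
    and the winning node plus its neighbors (the SOM loss term)."""
    flat = codebook.reshape(-1, d)
    k = np.argmin(((flat - z) ** 2).sum(axis=1))  # winner index
    i, j = divmod(k, W)
    for a, b in grid_neighbors(i, j):
        # Gradient of ||e_{a,b} - z||^2 w.r.t. e_{a,b} is 2 (e_{a,b} - z).
        codebook[a, b] -= lr * 2.0 * (codebook[a, b] - z)
    return (i, j)

for z in rng.normal(size=(200, d)):  # toy encodings standing in for encoder outputs
    som_step(z)
```

    In the full model this loss term would be added to the reconstruction and Markov-transition losses and optimized jointly with the encoder.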

    Evolino for recurrent support vector machines

    Traditional Support Vector Machines (SVMs) need pre-wired finite time windows to predict and classify time series. They lack the internal state necessary to deal with sequences involving arbitrarily long-term dependencies. Here we introduce a new class of recurrent, truly sequential, SVM-like devices with internal adaptive states, trained by a novel method called EVOlution of systems with KErnel-based outputs (Evoke), an instance of the recent Evolino class of methods. Evoke evolves recurrent neural networks to detect and represent temporal dependencies while using quadratic programming/support vector regression to produce precise outputs. Evoke is the first SVM-based mechanism to learn to classify a context-sensitive language. It also outperforms recent state-of-the-art gradient-based recurrent neural networks (RNNs) on various time series prediction tasks.
    Comment: 10 pages, 2 figures
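    A rough sketch of the Evolino/Evoke division of labor described above, under stated assumptions: a simple tanh RNN with fixed random recurrent weights (standing in for the evolved ones) provides the internal state, and support vector regression fits the readout. The use of scikit-learn's SVR and the toy sine task are illustrative choices, not the paper's setup.

```python
# Sketch of the Evolino/Evoke split: a recurrent net provides an internal state,
# and support vector regression fits the readout. Random recurrent weights here
# stand in for evolved ones; sklearn's SVR is an assumed stand-in.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_hidden = 20
W_in = rng.normal(scale=0.5, size=(n_hidden, 1))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))

def run_rnn(u):
    """Unroll a simple tanh RNN over the input sequence, returning all states."""
    h = np.zeros(n_hidden)
    states = []
    for x in u:
        h = np.tanh(W_in @ np.array([x]) + W_rec @ h)
        states.append(h.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a noisy sine wave.
t = np.linspace(0, 20, 500)
series = np.sin(t) + 0.05 * rng.normal(size=t.size)
states = run_rnn(series[:-1])
readout = SVR(kernel="rbf", C=10.0).fit(states, series[1:])
pred = readout.predict(states)
print("train MSE:", np.mean((pred - series[1:]) ** 2))
```

    In Evoke proper, the recurrent weights would be evolved: each candidate network is scored by the residual error after fitting its SVR readout, and the best candidates are mutated for the next generation.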

    AI Methods in Algorithmic Composition: A Comprehensive Survey

    Algorithmic composition is the partial or total automation of the process of music composition by using computers. Since the 1950s, different computational techniques related to Artificial Intelligence have been used for algorithmic composition, including grammatical representations, probabilistic methods, neural networks, symbolic rule-based systems, constraint programming and evolutionary algorithms. This survey aims to be a comprehensive account of research on algorithmic composition, presenting a thorough view of the field for researchers in Artificial Intelligence.
    This study was partially supported by a grant for the MELOMICS project (IPT-300000-2010-010) from the Spanish Ministerio de Ciencia e Innovación, and a grant for the CAUCE project (TSI-090302-2011-8) from the Spanish Ministerio de Industria, Turismo y Comercio. The first author was supported by a grant for the GENEX project (P09-TIC-5123) from the Consejería de Innovación y Ciencia de Andalucía.
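    As a concrete taste of one family the survey covers, probabilistic methods, the sketch below generates a melody with a first-order Markov chain over pitches. The transition table is invented for illustration and is not drawn from any system the survey describes.

```python
# Minimal illustration of probabilistic algorithmic composition: a first-order
# Markov chain over pitch classes. The transition table is made up for this example.
import random

transitions = {
    "C": ["D", "E", "G"],
    "D": ["C", "E"],
    "E": ["D", "F", "G"],
    "F": ["E", "G"],
    "G": ["C", "E", "F"],
}

def compose(start="C", length=16, seed=42):
    """Walk the chain: each next pitch is drawn uniformly from the
    current pitch's allowed successors."""
    random.seed(seed)
    melody = [start]
    for _ in range(length - 1):
        melody.append(random.choice(transitions[melody[-1]]))
    return melody

print(" ".join(compose()))
```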

    Speech and neural network dynamics


    Using Recurrent Neural Networks To Forecasting of Forex

    This paper reports empirical evidence that a neural network model is applicable to the statistically reliable prediction of foreign exchange rates. Time series data and technical indicators, such as the moving average, are fed to neural nets to capture the underlying "rules" of the movement in currency exchange rates. The trained recurrent neural networks forecast the exchange rates between the American Dollar and four other major currencies: the Japanese Yen, the Swiss Franc, the British Pound and the Euro. Various statistical estimates of forecast quality were carried out. The results show that the neural networks are able to produce forecasts with a coefficient of multiple determination no worse than 0.65. Linear and nonlinear statistical characteristics of the data, such as the Kolmogorov-Smirnov test and the Hurst exponent, were calculated and analyzed for each currency.
    Comment: 23 pages, 13 figures
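    Of the preprocessing steps named above, the Hurst exponent is easy to sketch. The following estimates it via standard rescaled-range (R/S) analysis; the window sizes and the white-noise test series are assumptions for illustration, not the paper's procedure.

```python
# Hurst exponent via rescaled-range (R/S) analysis: the slope of
# log(R/S) against log(window size) estimates H.
import numpy as np

def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
    series = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(series) - n + 1, n):
            chunk = series[start:start + n]
            dev = np.cumsum(chunk - chunk.mean())  # cumulative deviation from mean
            r = dev.max() - dev.min()              # range of the deviation
            s = chunk.std()                        # standard deviation
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_n, log_rs, 1)[0]         # H is the fitted slope

noise = np.random.default_rng(0).normal(size=2000)
print("H for white noise (expected near 0.5):", hurst_rs(noise))
```

    Values of H above 0.5 indicate a persistent (trending) series, below 0.5 an anti-persistent one, which is why the exponent is a useful screen before training a forecaster on a currency series.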

    A Hybrid Approach for Time Series Forecasting Using Deep Learning and Nonlinear Autoregressive Neural Networks

    During recent decades, several studies in the field of weather forecasting have produced various promising forecasting models. Nevertheless, the accuracy of the predictions remains a challenge. In this paper a new forecasting approach is proposed: it implements a deep neural network based on powerful feature extraction. The model is capable of learning the irregular structure, non-linear trends and significant representations as features from the data. It is a six-layer deep architecture with four hidden layers of Restricted Boltzmann Machines (RBMs). The features extracted from the last hidden layer are pre-processed to support the accuracy achieved by the forecaster. The forecaster is a two-layer ANN model with 35 hidden units for predicting the future intervals; it captures the correlations and regression patterns of the current sample relative to the previous terms by using the learnt deep hierarchical representations of the data as its input.
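    A hedged sketch of the pipeline described above: stacked RBMs extract features layer by layer, and a small ANN forecasts the next value. The layer sizes echo the abstract (four RBM feature layers, a 35-unit forecaster), but the data, scaling, and scikit-learn estimators are assumed stand-ins rather than the paper's implementation.

```python
# Sketch of an RBM-feature-extractor + ANN-forecaster pipeline. Four stacked
# RBM layers and a 35-unit forecaster echo the abstract; everything else
# (data, lag, layer widths, sklearn estimators) is an illustrative assumption.
import numpy as np
from sklearn.neural_network import BernoulliRBM, MLPRegressor
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 40, 1200)) + 0.1 * rng.normal(size=1200)

# Sliding windows: predict the next value from the previous 24.
lag = 24
X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
y = series[lag:]
X = MinMaxScaler().fit_transform(X)  # BernoulliRBM expects inputs in [0, 1]

# Four stacked RBM feature layers, trained greedily layer by layer.
features = X
for n_units in (64, 48, 32, 16):
    rbm = BernoulliRBM(n_components=n_units, learning_rate=0.05,
                       n_iter=20, random_state=0)
    features = rbm.fit_transform(features)

# Two-layer ANN forecaster with 35 hidden units, as in the abstract.
forecaster = MLPRegressor(hidden_layer_sizes=(35,), max_iter=2000,
                          random_state=0).fit(features, y)
print("train MSE:", np.mean((forecaster.predict(features) - y) ** 2))
```

    The greedy layer-by-layer training is what makes the deep feature extractor tractable: each RBM only ever sees the (bounded) activations of the layer beneath it.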