7 research outputs found

    Exploring Interpretable LSTM Neural Networks over Multi-Variable Data

    Full text link
    For recurrent neural networks trained on time series with target and exogenous variables, it is desirable to provide interpretable insights into the data in addition to accurate prediction. In this paper, we explore the structure of LSTM recurrent neural networks to learn variable-wise hidden states, with the aim of capturing different dynamics in multi-variable time series and distinguishing the contribution of each variable to the prediction. With these variable-wise hidden states, a mixture attention mechanism is proposed to model the generative process of the target. We then develop associated training methods to jointly learn the network parameters and the variable and temporal importance w.r.t. the prediction of the target variable. Extensive experiments on real datasets demonstrate enhanced prediction performance from capturing the dynamics of different variables. We also evaluate the interpretation results both qualitatively and quantitatively; the approach shows promise as an end-to-end framework for both forecasting and knowledge extraction over multi-variable data.
    Comment: Accepted to the International Conference on Machine Learning (ICML), 2019
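
    The variable-wise hidden states and mixture attention described above can be illustrated with a minimal sketch (in PyTorch). This is not the authors' implementation: each input variable gets its own small recurrent encoder, and a softmax-weighted mixture of per-variable predictions serves both as the forecast and as a variable-importance readout. All module names, sizes, and the single-layer setup are assumptions.

```python
# Minimal, illustrative sketch (not the paper's code) of variable-wise hidden
# states combined by a mixture-attention layer over per-variable predictions.
import torch
import torch.nn as nn


class VariableWiseMixtureAttention(nn.Module):
    def __init__(self, n_vars: int, hidden_size: int = 16):
        super().__init__()
        # One small LSTM per variable, so each variable keeps its own hidden dynamics.
        self.encoders = nn.ModuleList(
            [nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
             for _ in range(n_vars)]
        )
        # Per-variable prediction head and attention scorer.
        self.heads = nn.ModuleList([nn.Linear(hidden_size, 1) for _ in range(n_vars)])
        self.scorers = nn.ModuleList([nn.Linear(hidden_size, 1) for _ in range(n_vars)])

    def forward(self, x: torch.Tensor):
        # x: (batch, time, n_vars) multi-variable input series.
        preds, scores = [], []
        for i, (enc, head, scorer) in enumerate(zip(self.encoders, self.heads, self.scorers)):
            _, (h_n, _) = enc(x[:, :, i:i + 1])           # variable-wise hidden state
            h = h_n[-1]                                    # (batch, hidden_size)
            preds.append(head(h))                          # this variable's forecast
            scores.append(scorer(h))                       # this variable's relevance
        preds = torch.cat(preds, dim=-1)                   # (batch, n_vars)
        attn = torch.softmax(torch.cat(scores, dim=-1), dim=-1)
        # Mixture of per-variable predictions; attn doubles as a variable-importance readout.
        return (attn * preds).sum(dim=-1, keepdim=True), attn


if __name__ == "__main__":
    model = VariableWiseMixtureAttention(n_vars=3)
    y_hat, importance = model(torch.randn(4, 20, 3))
    print(y_hat.shape, importance.shape)   # torch.Size([4, 1]) torch.Size([4, 3])
```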

    Focused hierarchical RNNs for conditional sequence processing

    No full text
    Recurrent Neural Networks (RNNs) with attention mechanisms have obtained state-of-the-art results for many sequence processing tasks. Most of these models use a simple form of encoder with attention that looks over the entire sequence and assigns a weight to each token independently. We present a mechanism for focusing RNN encoders for sequence modelling tasks which allows them to attend to key parts of the input as needed. We formulate this using a multi-layer conditional sequence encoder that reads in one token at a time and makes a discrete decision on whether the token is relevant to the context or question being asked. The discrete gating mechanism takes the context embedding and the current hidden state as inputs and controls information flow into the layer above. We train it using policy gradient methods. We evaluate this method on several types of tasks with different attributes. First, we evaluate it on synthetic tasks, which allow us to assess the model's generalization ability and probe the behavior of the gates in more controlled settings. We then evaluate this approach on large-scale Question Answering tasks, including the challenging MS MARCO and SearchQA tasks. Our model shows consistent improvements on both tasks over prior work and our baselines. It has also been shown to generalize significantly better on synthetic tasks than the baselines.
    Comment: To appear at ICML 2018
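
    The gating idea described above can be sketched as follows; this is a hedged illustration, not the paper's code. A lower-level recurrent cell reads one token at a time, a Bernoulli gate conditioned on the context embedding and the current hidden state decides whether information flows into the layer above, and the gate's log-probabilities are returned so the discrete decisions can be trained with a REINFORCE-style policy gradient. Names, sizes, and the use of GRU cells are illustrative assumptions.

```python
# Hedged sketch of a focused hierarchical encoder with a discrete,
# policy-gradient-trained gate between the lower and upper recurrent layers.
import torch
import torch.nn as nn
from torch.distributions import Bernoulli


class FocusedEncoder(nn.Module):
    def __init__(self, emb: int = 32, hidden: int = 64):
        super().__init__()
        self.lower = nn.GRUCell(emb, hidden)          # reads one token at a time
        self.upper = nn.GRUCell(hidden, hidden)       # only updated on gated tokens
        self.gate = nn.Linear(hidden + emb, 1)        # gate policy: p(keep | h_t, context)

    def forward(self, tokens: torch.Tensor, context: torch.Tensor):
        # tokens: (batch, time, emb); context: (batch, emb), e.g. a question embedding.
        B, T, _ = tokens.shape
        h_lo = tokens.new_zeros(B, self.lower.hidden_size)
        h_hi = tokens.new_zeros(B, self.upper.hidden_size)
        log_probs = []
        for t in range(T):
            h_lo = self.lower(tokens[:, t], h_lo)
            p_keep = torch.sigmoid(self.gate(torch.cat([h_lo, context], dim=-1)))
            dist = Bernoulli(probs=p_keep)
            g = dist.sample()                          # discrete keep/skip decision
            log_probs.append(dist.log_prob(g))
            # The upper layer's state changes only where the gate fired.
            h_hi = g * self.upper(h_lo, h_hi) + (1 - g) * h_hi
        return h_hi, torch.stack(log_probs, dim=1)     # encoding + gate log-probs


if __name__ == "__main__":
    enc = FocusedEncoder()
    tokens, context = torch.randn(2, 10, 32), torch.randn(2, 32)
    h, logp = enc(tokens, context)
    reward = torch.randn(2, 1)                          # stand-in for a task reward
    policy_loss = -(reward.unsqueeze(1) * logp).mean()  # REINFORCE surrogate loss
    policy_loss.backward()
    print(h.shape, policy_loss.item())
```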