
    Recognizing Long Grammatical Sequences Using Recurrent Networks Augmented With An External Differentiable Stack

    Recurrent neural networks (RNNs) are a widely used deep architecture for sequence modeling, generation, and prediction. Despite success in applications such as machine translation and voice recognition, these stateful models have several critical shortcomings. Specifically, RNNs generalize poorly over very long sequences, which limits their applicability to many important temporal processing and time series forecasting problems. For example, RNNs struggle to recognize complex context-free languages (CFLs), never reaching 100% accuracy on the training set. One way to address these shortcomings is to couple an RNN with an external, differentiable memory structure, such as a stack. However, differentiable memories in prior work have neither been extensively studied on CFLs nor tested on sequences longer than those seen in training. The few efforts that have studied them have shown that continuous differentiable memory structures yield poor generalization for complex CFLs, making the RNN less interpretable. In this paper, we improve the memory-augmented RNN with important architectural and state-updating mechanisms that ensure the model learns to properly balance the use of its latent states with external memory. Our improved RNN models exhibit better generalization performance and are able to classify long strings generated by complex hierarchical context-free grammars (CFGs). We evaluate our models on CFGs, including the Dyck languages, as well as on the Penn Treebank language modelling task, and achieve stable, robust performance across these benchmarks. Furthermore, we show that only our memory-augmented networks are capable of retaining memory over longer durations, up to strings of length 160.
    Comment: 14 pages, 10 tables
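
    As a rough illustration of the coupling described in this abstract, the sketch below pairs a recurrent controller with a continuous, differentiable stack whose push, pop, and no-op operations are blended by softmax weights. The class name StackRNNCell, the gating scheme, and the sizes used are illustrative assumptions, not the authors' exact architecture.

    # Minimal sketch of a stack-augmented RNN cell (assumed design, PyTorch).
    import torch
    import torch.nn as nn

    class StackRNNCell(nn.Module):
        def __init__(self, input_size, hidden_size, stack_dim, stack_depth=50):
            super().__init__()
            # The controller reads the input, the previous hidden state, and the stack top.
            self.controller = nn.Linear(input_size + hidden_size + stack_dim, hidden_size)
            self.action = nn.Linear(hidden_size, 3)       # push / pop / no-op weights
            self.push_value = nn.Linear(hidden_size, stack_dim)

        def forward(self, x, h, stack):
            # stack: (batch, depth, stack_dim); stack[:, 0] is the top element.
            top = stack[:, 0]
            h = torch.tanh(self.controller(torch.cat([x, h, top], dim=-1)))
            act = torch.softmax(self.action(h), dim=-1)   # soft mixture of stack operations
            v = torch.tanh(self.push_value(h))
            pushed = torch.cat([v.unsqueeze(1), stack[:, :-1]], dim=1)
            popped = torch.cat([stack[:, 1:], torch.zeros_like(stack[:, :1])], dim=1)
            stack = (act[:, 0, None, None] * pushed
                     + act[:, 1, None, None] * popped
                     + act[:, 2, None, None] * stack)
            return h, stack

    # Example: process a batch of 4 sequences of length 10 over 8-dimensional inputs.
    cell = StackRNNCell(input_size=8, hidden_size=32, stack_dim=16)
    h, stack = torch.zeros(4, 32), torch.zeros(4, 50, 16)
    for x in torch.randn(10, 4, 8):
        h, stack = cell(x, h, stack)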

    Second-Order Recurrent Neural Networks Can Learn Regular Grammars From Noisy Strings

    Recent work has shown that second-order recurrent neural networks (2ORNNs) may be used to infer deterministic finite automata (DFA) when trained with positive and negative string examples. This paper shows that 2ORNNs can also learn DFA from samples consisting of pairs (W, W̄), where W is a noisy string of input vectors describing the degree of resemblance of every input to the symbols in the alphabet, and W̄ is the degree of acceptance of the noisy string, computed with a DFA whose behavior has been extended to deal with noisy strings.

    1 Introduction

    A number of recent papers have explored the ability of second-order recurrent neural networks (2ORNNs) to learn simple regular languages [1, 2, 3, 4, 5]. As noted in [1, 2], it is possible to obtain a symbolic representation of the language, i.e., the states and transitions of a deterministic finite automaton (DFA) accepting the language, from the trained network. For this purpose, a complete sample consisting of a collection of stri...
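
    For readers unfamiliar with the second-order recurrence mentioned above, the sketch below shows one way such a state update can combine the current state and a real-valued (noisy) input vector through a three-way weight tensor, with one state unit read out as the degree of acceptance. The function names, shapes, and the choice of a sigmoid nonlinearity are assumptions for illustration, not taken verbatim from the paper.

    # Minimal sketch of a second-order RNN (2ORNN) state update (assumed formulation, NumPy).
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def run_2ornn(W, inputs, s0):
        """W: (n_states, n_states, n_symbols) second-order weight tensor.
        inputs: real-valued symbol vectors of shape (length, n_symbols),
                e.g. degrees of resemblance of each input to the alphabet symbols.
        s0: initial state vector of shape (n_states,).
        Returns the activation of state unit 0, read as the degree of acceptance."""
        s = s0
        for x in inputs:
            # S_j(t+1) = sigmoid( sum_{i,k} W[j, i, k] * S_i(t) * x_k(t) )
            s = sigmoid(np.einsum('jik,i,k->j', W, s, x))
        return s[0]

    # Example: a random 5-state network over a 3-symbol alphabet and one noisy string.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(5, 5, 3))
    noisy_string = rng.uniform(size=(8, 3))          # 8 noisy input vectors
    acceptance = run_2ornn(W, noisy_string, s0=np.eye(5)[0])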