Analysing Errors of Open Information Extraction Systems
We report results on benchmarking Open Information Extraction (OIE) systems
using RelVis, a toolkit for benchmarking OIE systems.
Our comprehensive benchmark contains three data sets from the news domain and
one data set from Wikipedia, with a total of 4,522 labeled sentences and 11,243
binary or n-ary OIE relations. In our analysis of these data sets, we compared
the performance of four popular OIE systems, ClausIE, OpenIE 4.2, Stanford
OpenIE and PredPatt. In addition, we evaluated the impact of five common error
classes on a subset of 749 n-ary tuples. From this in-depth analysis we derive
important research directions for the next generation of OIE systems.
Comment: Accepted at Building Linguistically Generalizable NLP Systems at EMNLP 2017
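
As a rough illustration of the benchmark setting (a sketch only, not the actual RelVis toolkit; its matching criteria are not reproduced here, and all names and tuples below are hypothetical), OIE extractions can be represented as predicate-argument tuples and scored per sentence against gold annotations:

    # Sketch only: hypothetical OIE tuples and a naive per-sentence scoring routine.
    def matches(system_tuple, gold_tuple):
        """Exact-match criterion; real benchmarks typically use softer matching."""
        return system_tuple == gold_tuple

    def precision_recall(system_tuples, gold_tuples):
        """Score one sentence's extractions against its gold annotations."""
        tp = sum(1 for s in system_tuples if any(matches(s, g) for g in gold_tuples))
        precision = tp / len(system_tuples) if system_tuples else 0.0
        recall = tp / len(gold_tuples) if gold_tuples else 0.0
        return precision, recall

    # One gold n-ary relation (predicate plus three arguments) and two system tuples.
    gold = [("gave", "Alice", "a book", "to Bob")]
    system = [("gave", "Alice", "a book", "to Bob"),   # correct n-ary extraction
              ("gave", "Alice", "Bob")]                # spurious binary extraction
    print(precision_recall(system, gold))              # -> (0.5, 1.0)
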
Towards interpreting recurrent neural networks through probabilistic abstraction
National Research Foundation (NRF) Singapore under its AI Singapore Programme
Neural Processing of Complex Continual Input Streams
Long Short-Term Memory (LSTM) can learn algorithms for temporal pattern processing not learnable by alternative recurrent neural networks (RNNs) or other methods such as Hidden Markov Models (HMMs) and symbolic grammar learning (SGL). Here we present tasks involving arithmetic operations on continual input streams that even LSTM cannot solve. But an LSTM variant based on "forget gates," a recent extension, has superior arithmetic capabilities and does solve these tasks.
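
To make the task family concrete, here is a hypothetical continual arithmetic stream (an illustration under assumed conventions, not the paper's exact benchmark): an unbounded input stream with no sequence-end markers, where the target at every step is a running sum that restarts at unmarked reset cues, so the model itself must learn when to release its internal state.

    import random

    # Hypothetical continual-stream task (illustrative only): no sequence ends,
    # target is a running sum that restarts at an unmarked reset cue.
    def continual_addition_stream(steps, reset_prob=0.05, seed=0):
        rng = random.Random(seed)
        total = 0.0
        for _ in range(steps):
            if rng.random() < reset_prob:
                x, total = -1.0, 0.0          # -1.0 acts as an unmarked reset cue
            else:
                x = rng.uniform(0.0, 1.0)
                total += x
            yield x, total                    # (input at this step, target at this step)

    for x, target in continual_addition_stream(10):
        print(f"input={x:+.2f}  target={target:.2f}")
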
Learning to Forget: Continual Prediction with LSTM
Long Short-Term Memory (LSTM [5]) can solve many tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams without explicitly marked sequence ends. Without resets, the internal state values may grow indefinitely and eventually cause the network to break down. Our remedy is an adaptive "forget gate" that enables an LSTM cell to learn to reset itself at appropriate times, thus releasing internal resources. We review an illustrative benchmark problem on which standard LSTM outperforms other RNN algorithms. All algorithms (including LSTM) fail to solve a continual version of that problem. LSTM with forget gates, however, easily solves it in an elegant way.
1 Introduction. Recurrent neural networks (RNNs) constitute a very powerful class of computational models, capable of instantiating almost arbitrary dynamics. The extent to which this potential can be exploited is, however, limited by…
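
The effect described above can be seen from the cell-state update alone: standard LSTM accumulates c_t = c_{t-1} + i_t * g_t over the constant error carousel, which drifts without bound on a continual stream, while the forget-gate variant computes c_t = f_t * c_{t-1} + i_t * g_t, which stays bounded whenever f_t < 1. A minimal numerical sketch with fixed toy gate activations (assumed values, not trained weights) illustrates the contrast:

    import math

    # Toy sketch with fixed gate activations (not trained weights) showing why
    # the forget gate keeps the cell state bounded on continual streams.
    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    steps = 1000
    i_t = sigmoid(0.0)        # input gate, held fixed at 0.5
    g_t = math.tanh(1.0)      # candidate cell input, held fixed
    f_t = sigmoid(-1.0)       # forget gate partly closed (about 0.27)

    c_std, c_fg = 0.0, 0.0
    for _ in range(steps):
        c_std = c_std + i_t * g_t          # standard LSTM: state grows linearly
        c_fg = f_t * c_fg + i_t * g_t      # forget gate: converges to i*g/(1-f)

    print(f"after {steps} steps: c_std = {c_std:.1f}, c_fg = {c_fg:.3f}")
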