
    Automatic Stance Detection Using End-to-End Memory Networks

    We present a novel end-to-end memory network for stance detection, which jointly (i) predicts whether a document agrees, disagrees, discusses or is unrelated with respect to a given target claim, and also (ii) extracts snippets of evidence for that prediction. The network operates at the paragraph level and integrates convolutional and recurrent neural networks, as well as a similarity matrix, as part of the overall architecture. The experimental evaluation on the Fake News Challenge dataset shows state-of-the-art performance.
    Comment: NAACL-2018; Stance detection; Fact-Checking; Veracity; Memory networks; Neural Networks; Distributed Representation
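
    As a rough illustration of the kind of attention step such a memory network performs, the sketch below scores embedded document paragraphs against an embedded claim and uses softmax attention to pick evidence; all names (memory_hop, claim_vec, para_vecs) are illustrative assumptions, not the authors' implementation.

        # Hedged sketch of a single memory-network hop over document paragraphs,
        # loosely in the spirit of the attention the abstract describes.
        import numpy as np

        def memory_hop(claim_vec, para_vecs):
            """claim_vec: (d,) embedded claim; para_vecs: (n_paragraphs, d) embedded paragraphs."""
            scores = para_vecs @ claim_vec        # similarity of each paragraph to the claim
            attn = np.exp(scores - scores.max())
            attn /= attn.sum()                    # softmax attention over paragraphs
            evidence = attn @ para_vecs           # attention-weighted evidence summary
            return attn, evidence

        rng = np.random.default_rng(0)
        claim = rng.normal(size=64)
        paragraphs = rng.normal(size=(10, 64))
        attn, evidence = memory_hop(claim, paragraphs)
        print(attn.argmax())                      # paragraph the model would cite as evidence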

    End-to-End Memory Networks: A Survey

    Constructing a dialog system that can converse naturally with a human is considered a major challenge of artificial intelligence. End-to-end dialog systems are a primary research topic in the area of conversational systems. Since an end-to-end dialog system learns its dialog policy from transactional dialogs within a defined scope, useful datasets are required for evaluating the learning procedures. In this paper, different deep learning techniques are applied to the Dialog bAbI datasets, and their performance is analyzed. The results demonstrate that all the proposed techniques attain decent precision on the Dialog bAbI datasets. The best performance is obtained using an end-to-end memory network with a unified weight tying scheme (UN2N).
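
    For context, here is a minimal sketch of the multi-hop end-to-end memory network attention that such weight tying schemes modify; the sharing shown (reusing the same embedding matrices at every hop) only illustrates the general idea, and the exact UN2N scheme is not reproduced. All names are hypothetical.

        # Minimal sketch of multi-hop memory-network attention with embedding
        # matrices tied (shared) across hops; an assumption-level illustration.
        import numpy as np

        def softmax(x):
            e = np.exp(x - x.max())
            return e / e.sum()

        def memn2n_hops(query, memories, embed_in, embed_out, n_hops=3):
            """query: (d,); memories: (n, d_raw); embed_in/out: (d_raw, d), reused at every hop."""
            u = query
            for _ in range(n_hops):
                m = memories @ embed_in           # input memory representation
                c = memories @ embed_out          # output memory representation
                p = softmax(m @ u)                # attention over memory slots
                u = u + p @ c                     # update the controller state
            return u

        rng = np.random.default_rng(3)
        A, C = rng.normal(size=(40, 32)), rng.normal(size=(40, 32))
        state = memn2n_hops(rng.normal(size=32), rng.normal(size=(15, 40)), A, C)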

    Match memory recurrent networks

    Imbuing neural networks with memory and attention mechanisms allows for better generalisation from fewer data samples. By focusing only on the relevant parts of the data, which are encoded in an internal 'memory' format, the network is able to infer better and more reliable patterns. Most neural attention mechanisms are based on internal network structures that impose a similarity metric (e.g., dot product), followed by some (soft-)max operator. In this paper, we propose a novel attention method based on a function between neuron activities, which we term a 'match function', augmented by a recursive softmax function. We evaluate the algorithm on the bAbI question-answering dataset and show that it achieves stronger performance when only one memory hop is used, both in terms of average score and in terms of the number of solved questions. Furthermore, with three memory hops, our algorithm can solve 12/20 benchmark questions using 1000 training samples per task. This is an improvement on the previous state of the art of 9/20 solved questions, which was held by end-to-end memory networks.
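
    The sketch below illustrates the contrast the abstract draws, between a fixed dot-product similarity and a pluggable 'match function' score; the concrete alternative shown (negative squared distance) and all names are assumptions for illustration, not the paper's match function or its recursive softmax.

        # Attention with a pluggable scoring function: dot product vs. an
        # illustrative alternative "match function".
        import numpy as np

        def softmax(x):
            e = np.exp(x - x.max())
            return e / e.sum()

        def attend(query, memory, match_fn):
            scores = np.array([match_fn(query, m) for m in memory])
            weights = softmax(scores)             # normalise match scores into attention weights
            return weights @ memory               # attention-weighted read from memory

        dot_match = lambda q, m: q @ m                   # classic dot-product similarity
        dist_match = lambda q, m: -np.sum((q - m) ** 2)  # assumed alternative match function

        rng = np.random.default_rng(1)
        q, mem = rng.normal(size=32), rng.normal(size=(20, 32))
        out_dot = attend(q, mem, dot_match)
        out_dist = attend(q, mem, dist_match)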

    DeepStory: Video Story QA by Deep Embedded Memory Networks

    Question answering (QA) on video content is a significant challenge for achieving human-level intelligence, as it involves both vision and language in real-world settings. Here we demonstrate the possibility of an AI agent performing video story QA by learning from a large collection of cartoon videos. We develop a video-story learning model, i.e., Deep Embedded Memory Networks (DEMN), to reconstruct stories from a joint scene-dialogue video stream using a latent embedding space of observed data. The video stories are stored in a long-term memory component. For a given question, an LSTM-based attention model uses the long-term memory to recall the best question-story-answer triplet by focusing on specific words containing key information. We trained the DEMN on a novel QA dataset of the children's cartoon video series Pororo. The dataset contains 16,066 scene-dialogue pairs from 20.5 hours of video, 27,328 fine-grained sentences for scene description, and 8,913 story-related QA pairs. Our experimental results show that the DEMN outperforms other QA models, mainly due to 1) the reconstruction of video stories in a combined scene-dialogue form that utilizes the latent embedding and 2) the attention mechanism. DEMN also achieved state-of-the-art results on the MovieQA benchmark.
    Comment: 7 pages, accepted for IJCAI 201
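
    A hedged sketch of the recall step the abstract describes follows: attend over a long-term story memory for a given question and rank candidate answers against the recalled context. Plain vector encodings and a simple scoring rule stand in for the paper's LSTM-based attention model, and every name is illustrative.

        # Attention over stored story sentences, then scoring of answer candidates;
        # an assumption-level sketch, not the DEMN implementation.
        import numpy as np

        def softmax(x):
            e = np.exp(x - x.max())
            return e / e.sum()

        def answer_question(question_vec, story_vecs, answer_vecs):
            attn = softmax(story_vecs @ question_vec)        # attention over story memory
            context = attn @ story_vecs                      # recalled story context
            scores = answer_vecs @ (question_vec + context)  # rank candidates against question + context
            return int(scores.argmax())

        rng = np.random.default_rng(2)
        q = rng.normal(size=50)
        story = rng.normal(size=(100, 50))    # long-term memory of scene-dialogue sentences
        answers = rng.normal(size=(5, 50))    # candidate answer encodings
        print(answer_question(q, story, answers))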