
    Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking

    Graph neural networks (GNNs) have become a popular approach to integrating structural inductive biases into NLP models. However, there has been little work on interpreting them, and specifically on understanding which parts of the graphs (e.g. syntactic trees or co-reference structures) contribute to a prediction. In this work, we introduce a post-hoc method for interpreting the predictions of GNNs that identifies unnecessary edges. Given a trained GNN model, we learn a simple classifier that, for every edge in every layer, predicts whether that edge can be dropped. We demonstrate that such a classifier can be trained in a fully differentiable fashion, employing stochastic gates and encouraging sparsity through the expected L_0 norm. We use our technique as an attribution method to analyse GNN models for two tasks -- question answering and semantic role labeling -- providing insights into the information flow in these models. We show that we can drop a large proportion of edges without deteriorating the performance of the model, while the remaining edges can be analysed to interpret the model's predictions.
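
    The stochastic-gate mechanism described above can be illustrated with a short sketch: each edge receives a hard-concrete gate whose expected L_0 norm serves as a differentiable sparsity penalty. This is a minimal illustration of the idea, not the authors' released code; the class name, temperature, and stretch interval are assumptions.

```python
import torch
import torch.nn as nn


class HardConcreteGate(nn.Module):
    """One stochastic gate per edge, differentiable via the hard-concrete relaxation."""

    def __init__(self, num_edges: int, temperature: float = 0.33,
                 stretch: tuple = (-0.1, 1.1)):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.zeros(num_edges))  # per-edge gate logits
        self.temperature = temperature
        self.lo, self.hi = stretch  # stretching the interval lets gates reach exactly 0 or 1

    def forward(self) -> torch.Tensor:
        if self.training:
            # Reparameterised sample from the concrete distribution.
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / self.temperature)
        else:
            s = torch.sigmoid(self.log_alpha)
        s = s * (self.hi - self.lo) + self.lo
        return s.clamp(0.0, 1.0)  # gate values in [0, 1]; many end up exactly 0

    def expected_l0(self) -> torch.Tensor:
        # Probability that each gate is non-zero: the differentiable L_0 penalty.
        shift = self.temperature * torch.log(torch.tensor(-self.lo / self.hi))
        return torch.sigmoid(self.log_alpha - shift).sum()


# Usage sketch (hypothetical training loop): scale each edge's message by its gate
# and add the expected L_0 penalty to the objective so that unnecessary edges close.
# gates = HardConcreteGate(num_edges=graph.num_edges)
# edge_weight = gates()                                   # multiplied into message passing
# loss = prediction_fidelity_loss + l0_coeff * gates.expected_l0()
```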

    How do Decisions Emerge across Layers in Neural Models? Interpretation with Differentiable Masking

    Attribution methods assess the contribution of inputs to the model prediction. One way to do so is erasure: a subset of inputs is considered irrelevant if it can be removed without affecting the prediction. Though conceptually simple, erasure's objective is intractable, and approximate search remains expensive with modern deep NLP models. Erasure is also susceptible to hindsight bias: the fact that an input can be dropped does not mean that the model `knows' it can be dropped. The resulting pruning is over-aggressive and does not reflect how the model arrives at the prediction. To deal with these challenges, we introduce Differentiable Masking (DiffMask). DiffMask learns to mask out subsets of the input while maintaining differentiability. The decision to include or disregard an input token is made with a simple model based on intermediate hidden layers of the analyzed model. First, this makes the approach efficient because we predict rather than search. Second, as with probing classifiers, this reveals what the network `knows' at the corresponding layers. This lets us not only plot attribution heatmaps but also analyze how decisions are formed across network layers. We use DiffMask to study BERT models on sentiment classification and question answering. Comment: Accepted at the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Source code available at https://github.com/nicola-decao/diffmask . 18 pages, 15 figures, 4 tables.
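
    The token-level gating idea can be sketched as follows: a small probe reads an intermediate hidden layer of the analysed (frozen) model and predicts, per token, a soft keep/drop gate, with masked-out tokens replaced by a learned baseline vector. This is a hedged sketch of the idea rather than the released DiffMask implementation; the class name, probe architecture, and baseline scheme are assumptions.

```python
import torch
import torch.nn as nn


class MaskPredictor(nn.Module):
    """Predicts a per-token keep/drop gate from an intermediate hidden layer."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, 1),
        )
        # Learned baseline vector that stands in for masked-out tokens.
        self.baseline = nn.Parameter(torch.zeros(hidden_size))

    def forward(self, hidden_states: torch.Tensor):
        # hidden_states: (batch, seq_len, hidden_size) taken from the analysed model.
        gate = torch.sigmoid(self.scorer(hidden_states))            # (batch, seq_len, 1)
        masked = gate * hidden_states + (1.0 - gate) * self.baseline
        return masked, gate.squeeze(-1)


# Training sketch: the analysed model stays frozen; the predictor is trained so that the
# prediction computed from the masked input stays close to the original prediction while
# the total gate mass (number of kept tokens) is penalised.
# masked_hidden, gate = predictor(hidden_states)
# loss = divergence(rerun_model(masked_hidden), original_output) + sparsity_coeff * gate.sum()
```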

    Multimodal Automated Fact-Checking: A Survey

    Misinformation is often conveyed in multiple modalities, e.g. a miscaptioned image. Multimodal misinformation is perceived as more credible by humans and spreads faster than its text-only counterpart. While an increasing body of research investigates automated fact-checking (AFC), previous surveys have mostly focused on text. In this survey, we conceptualise a framework for AFC that includes subtasks unique to multimodal misinformation. Furthermore, we discuss related terms used in different communities and map them to our framework. We focus on four modalities prevalent in real-world fact-checking: text, image, audio, and video. We survey benchmarks and models, and discuss limitations and promising directions for future research. Comment: The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP): Findings.

    Competence of graph convolutional network in anti-money laundering in Bitcoin Blockchain

    Graph networks are extensively used as a framework to analyse the interconnections between transactions and capture illicit behaviour in the Bitcoin blockchain. Due to the complexity of the Bitcoin transaction graph, predicting illicit transactions and unveiling illicit services over the network is a challenging problem. The Graph Convolutional Network (GCN), a spectral graph neural network approach, has recently emerged and gained much attention for graph-structured data. Previous research has highlighted the degraded performance of this approach when predicting illicit transactions on a Bitcoin transaction graph, the so-called Elliptic dataset derived from the Bitcoin blockchain. Motivated by that work, we explore graph convolutions in a novel way. For this purpose, we present an approach that intertwines the existing Graph Convolutional Network with linear layers. Concisely, we concatenate node embeddings obtained from the graph convolutional layers with a hidden layer derived from a linear transformation of the node feature matrix, and feed the result to a multi-layer perceptron. Our approach is evaluated on the Elliptic dataset, where it yields strong accuracy and outperforms the original work on the same dataset.
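
    The architecture described above can be sketched with PyTorch Geometric: node embeddings from graph convolutional layers are concatenated with a linear transformation of the raw node features and passed to a multi-layer perceptron. Layer sizes and names (e.g. GCNLinear) are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class GCNLinear(nn.Module):
    """GCN embeddings concatenated with a linear view of the node features, then an MLP."""

    def __init__(self, in_dim: int, hidden_dim: int = 64, num_classes: int = 2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.skip = nn.Linear(in_dim, hidden_dim)       # linear transformation of node features
        self.mlp = nn.Sequential(                       # classifier over the concatenation
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        h = torch.cat([h, F.relu(self.skip(x))], dim=-1)  # graph + linear embeddings
        return self.mlp(h)                                 # per-node licit/illicit logits


# Usage sketch on Elliptic-style data: model(data.x, data.edge_index) produces one logit
# pair per transaction node; training uses cross-entropy on the labelled nodes only.
```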

    UniK-QA: Unified Representations of Structured and Unstructured Knowledge for Open-Domain Question Answering

    We study open-domain question answering with structured, unstructured and semi-structured knowledge sources, including text, tables, lists and knowledge bases. Departing from prior work, we propose a unifying approach that homogenizes all sources by reducing them to text and applies the retriever-reader model, which has so far been limited to text sources only. Our approach greatly improves the results on knowledge-base QA tasks, by 11 points compared to the latest graph-based methods. More importantly, we demonstrate that our unified knowledge (UniK-QA) model is a simple yet effective way to combine heterogeneous sources of knowledge, advancing the state-of-the-art results on two popular question answering benchmarks, NaturalQuestions and WebQuestions, by 3.5 and 2.6 points, respectively.
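
    The "reduce everything to text" step can be illustrated with two toy verbalisers: table rows and knowledge-base triples are flattened into plain sentences so a standard text retriever-reader can index them alongside ordinary passages. The templates below are assumptions for illustration and may differ from the ones UniK-QA actually uses.

```python
def linearize_table_row(title: str, header: list[str], row: list[str]) -> str:
    """Flatten one table row into a short text passage."""
    cells = "; ".join(f"{col} is {val}" for col, val in zip(header, row))
    return f"{title}. {cells}."


def linearize_triple(subject: str, relation: str, obj: str) -> str:
    """Verbalise one knowledge-base triple as a sentence."""
    return f"{subject.replace('_', ' ')} {relation.replace('_', ' ')} {obj.replace('_', ' ')}."


# Both outputs can be dropped into the same retrieval corpus as raw text:
print(linearize_table_row("2016 Summer Olympics medal table",
                          ["Nation", "Gold"], ["United States", "46"]))
print(linearize_triple("Barack_Obama", "place_of_birth", "Honolulu"))
```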

    NeurIPS 2020 EfficientQA Competition: Systems, Analyses and Lessons Learned

    We review the EfficientQA competition from NeurIPS 2020. The competition focused on open-domain question answering (QA), where systems take natural language questions as input and return natural language answers. The aim of the competition was to build systems that can predict correct answers while also satisfying strict on-disk memory budgets. These memory budgets were designed to encourage contestants to explore the trade-off between storing retrieval corpora and the parameters of learned models. In this report, we describe the motivation and organization of the competition, review the best submissions, and analyze system predictions to inform a discussion of evaluation for open-domain QA.