    Semantics of the Black-Box: Can Knowledge Graphs Help Make Deep Learning Systems More Interpretable and Explainable?

    The recent series of innovations in deep learning (DL) has shown enormous potential to impact individuals and society, both positively and negatively. DL models that exploit massive computing power and enormous datasets have significantly outperformed prior benchmarks on increasingly difficult, well-defined research tasks in domains such as computer vision, natural language processing, signal processing, and human-computer interaction. However, the black-box nature of DL models and their over-reliance on massive amounts of data condensed into labels and dense representations pose challenges for the interpretability and explainability of these systems. Furthermore, DL models have yet to demonstrate that they can effectively use the relevant domain knowledge and experience that are critical to human understanding. This gap in early data-focused approaches motivated knowledge-infused learning and other strategies for incorporating computational knowledge. This article demonstrates how knowledge, provided as a knowledge graph, is incorporated into DL methods using knowledge-infused learning, one such strategy. We then discuss how this makes a fundamental difference to the interpretability and explainability of current approaches, and illustrate it with examples from natural language processing for healthcare and education applications. (Comment: 6 pages + references, 4 figures; accepted to IEEE Internet Computing 202)
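
    The abstract does not spell out the mechanics of knowledge infusion, but a common pattern is to link tokens to knowledge-graph entities and gate the entity embedding into the token representation. The minimal PyTorch sketch below illustrates that idea; the `KnowledgeFusion` module, the dimensions, and the gating scheme are illustrative assumptions, not the paper's architecture.

    ```python
    import torch
    import torch.nn as nn

    class KnowledgeFusion(nn.Module):
        """Gated fusion of a contextual token vector with a knowledge-graph
        entity embedding (a common knowledge-infusion pattern; illustrative,
        not the paper's exact architecture)."""

        def __init__(self, text_dim: int, kg_dim: int):
            super().__init__()
            self.project = nn.Linear(kg_dim, text_dim)     # align KG space with text space
            self.gate = nn.Linear(text_dim * 2, text_dim)  # learn how much knowledge to admit

        def forward(self, token_vec, entity_vec):
            kg = self.project(entity_vec)
            g = torch.sigmoid(self.gate(torch.cat([token_vec, kg], dim=-1)))
            return g * token_vec + (1.0 - g) * kg          # knowledge-infused representation

    # Toy usage: 768-d token vectors (e.g., from BERT), 100-d KG entity embeddings.
    fusion = KnowledgeFusion(text_dim=768, kg_dim=100)
    tokens = torch.randn(2, 768)    # stand-ins for contextual token embeddings
    entities = torch.randn(2, 100)  # stand-ins for linked KG entity embeddings
    print(fusion(tokens, entities).shape)  # torch.Size([2, 768])
    ```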

    Explainable AI Using Knowledge Graphs

    During the last decade, traditional data-driven deep learning (DL) has shown remarkable success on essential natural language processing tasks such as relation extraction. Yet challenges remain in developing artificial intelligence (AI) methods for real-world cases that require explainability through human-interpretable and traceable outcomes. The scarcity of labeled data for downstream supervised tasks, together with the entangled embeddings produced by self-supervised pre-training objectives, further hinders interpretability and explainability. Additionally, data labeling in unstructured domains, particularly healthcare and education, is expensive because it requires a pool of human expertise. Consider education technology, where AI systems fall along a “capability spectrum” depending on how extensively they exploit resources such as academic content, granularity in student engagement, academic domain experts, and knowledge bases to identify the concepts that would help students achieve knowledge mastery. Likewise, assessing human health from online conversations challenges current statistical DL methods with evolving, culture- and context-specific discussions. Hence, we need strategies that merge AI with stratified knowledge to identify concepts that delineate healthcare conversation patterns and help healthcare professionals make decisions. Such technological innovations are imperative, as they provide consistency and explainability in outcomes. This tutorial discusses the notion of explainability and interpretability through the use of knowledge graphs in (1) healthcare on the Web and (2) education technology. It details knowledge-infused learning algorithms and their contribution to explainability in these two applications, an approach that can be applied to any other domain with a knowledge graph.
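
    The tutorial's central claim, that knowledge graphs make outcomes traceable, can be made concrete with a small sketch: ground a text mention in a knowledge graph and return the relation path that justifies a prediction. Everything below (the toy `KG`, the `explain` helper, and the insomnia example) is a hypothetical illustration of that idea, not code from the tutorial.

    ```python
    from collections import deque

    # Toy slice of a health knowledge graph; real systems would query a
    # curated KG (e.g., UMLS or ConceptNet). Edges are (relation, target).
    KG = {
        "insomnia":   [("symptom_of", "depression"), ("treated_by", "melatonin")],
        "depression": [("is_a", "mental_disorder")],
        "melatonin":  [("is_a", "supplement")],
    }

    def explain(mention, target):
        """Breadth-first search for a relation path from a text mention to a
        clinical concept; the path doubles as a human-traceable explanation."""
        queue = deque([(mention, [])])
        seen = {mention}
        while queue:
            node, path = queue.popleft()
            if node == target:
                return path
            for relation, nxt in KG.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [(node, relation, nxt)]))
        return None

    # Why did the system flag a mental-disorder risk for "I can't sleep at night"?
    print(explain("insomnia", "mental_disorder"))
    # [('insomnia', 'symptom_of', 'depression'), ('depression', 'is_a', 'mental_disorder')]
    ```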

    TDLR: Top (Semantic)-Down (Syntactic) Language Representation

    Language understanding involves processing text with both the grammatical and the common-sense context of its fragments. The sentence “I went to the grocery store and brought home a car” requires both the grammatical (syntactic) context and the common-sense (semantic) context to capture its oddity. Contextualized text representations learned by language models (LMs) are expected to capture a variety of syntactic and semantic contexts from large training corpora. Recent work such as ERNIE has shown that infusing knowledge contexts, where available, into LMs yields significant performance gains on General Language Understanding Evaluation (GLUE) benchmark tasks. However, to our knowledge, no knowledge-aware model has attempted to infuse knowledge through top-down, semantics-driven syntactic processing (e.g., from common sense to grammar) while operating directly on the attention mechanism that LMs use to learn the data context. We propose Top-Down Language Representation (TDLR), a learning framework for infusing common-sense semantics into LMs. In our implementation, we build on BERT for its rich syntactic knowledge and use the knowledge graphs ConceptNet and WordNet to infuse semantic knowledge.
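
    The abstract says TDLR operates directly on the attention mechanism but does not give the formula. One common way to realize this is to add a knowledge-derived relatedness bias to the attention logits before the softmax; the PyTorch sketch below assumes that additive-bias reading, and `knowledge_biased_attention`, `relatedness`, and `alpha` are illustrative names rather than the paper's implementation.

    ```python
    import math
    import torch

    def knowledge_biased_attention(q, k, v, relatedness, alpha=1.0):
        """Scaled dot-product attention with an additive bias from a
        token-token semantic relatedness matrix (e.g., derived from
        ConceptNet/WordNet paths): one way to let knowledge steer the
        data-driven attention pattern, in the spirit of TDLR."""
        d = q.size(-1)
        scores = q @ k.transpose(-2, -1) / math.sqrt(d)  # data-driven (syntactic) term
        scores = scores + alpha * relatedness            # knowledge-driven (semantic) term
        return torch.softmax(scores, dim=-1) @ v

    # Toy usage: 5 tokens, 64-d heads; relatedness[i, j] is high when the
    # KG says tokens i and j are semantically related.
    q = k = v = torch.randn(1, 5, 64)
    relatedness = torch.zeros(1, 5, 5)
    relatedness[0, 1, 4] = 2.0  # e.g., a KG-derived link between two token positions
    out = knowledge_biased_attention(q, k, v, relatedness)
    print(out.shape)  # torch.Size([1, 5, 64])
    ```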