
    empathi: An ontology for Emergency Managing and Planning about Hazard Crisis

    In the domain of emergency management during hazard crises, having sufficient situational awareness information is critical. It requires capturing and integrating information from sources such as satellite images, local sensors, and social media content generated by local people. A major obstacle to capturing, representing, and integrating such heterogeneous and diverse information is the lack of an ontology that properly conceptualizes this domain and aggregates and unifies datasets. In this paper, we therefore introduce the empathi ontology, which conceptualizes the core concepts of emergency management and planning for hazard crises. Although empathi takes a coarse-grained view, it covers the concepts and relations essential to this domain. The ontology is available at https://w3id.org/empathi/
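
    Since the ontology is published at a w3id URL, it can be inspected programmatically. Below is a minimal sketch, assuming the URL dereferences to an RDF serialization that rdflib can parse; the exact serialization and the class labels shown are assumptions, not details given in the abstract.

        # Minimal sketch: load the empathi ontology with rdflib and list its declared classes.
        from rdflib import Graph
        from rdflib.namespace import RDF, RDFS, OWL

        g = Graph()
        # rdflib tries to guess the serialization; pass format="xml" or format="ttl" if needed.
        g.parse("https://w3id.org/empathi/")

        # Enumerate OWL classes and their labels to see the core hazard-crisis concepts.
        for cls in g.subjects(RDF.type, OWL.Class):
            for label in g.objects(cls, RDFS.label):
                print(cls, "-", label)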

    Knowledge-Infused Learning

    In DARPA’s view of the three waves of AI, the first wave, symbolic AI, focused on explicit knowledge. The second and current wave of AI is termed statistical AI. Deep learning techniques have been able to exploit large amounts of data and massive computational power to reach human levels of performance in narrowly defined tasks. Separately, knowledge graphs have emerged as a powerful tool to capture and exploit a variety of explicit knowledge to help algorithms better comprehend content and enable the next generation of data processing, such as semantic search. After initial hesitancy about the scalability of the knowledge creation process, the last decade has seen significant growth in developing and applying knowledge, usually in the form of knowledge graphs. Examples range from the use of DBPedia in IBM’s Watson and Google Knowledge Graph in Google Semantic Search to the application of the Protein Data Bank in AlphaFold, recognized by many as the most significant AI breakthrough. Furthermore, numerous domain-specific knowledge graphs and sources have been applied to improve AI methods in diverse domains such as medicine, healthcare, finance, manufacturing, and defense. Now, we move towards the third wave of AI, built on the neuro-symbolic approach that combines the strengths of statistical and symbolic AI. Combining the respective powers and benefits of knowledge graphs and deep learning is particularly attractive, and has led to the development of an approach and practice in computer science termed knowledge-infused (deep) learning (KiL). This dissertation will serve as a primer on methods that use diverse forms of knowledge (linguistic, commonsense, broad-based, and domain-specific) and will provide novel evaluation metrics to assess knowledge-infusion algorithms on a variety of datasets, such as social media, clinical interviews, electronic health records, and information-seeking dialogues. Specifically, this dissertation will provide the necessary grounding in shallow infusion, semi-deep infusion, and a more advanced form called deep infusion to alleviate five bottlenecks in statistical AI: (1) Context Sensitivity, (2) Handling Uncertainty and Risk, (3) Interpretability, (4) User-level Explainability, and (5) Task Transferability. Further, the dissertation will introduce a new theoretical and conceptual approach called Process Knowledge Infusion, which enforces semantic flow in AI algorithms by altering their learning behavior with procedural knowledge. Such knowledge is manifested in questionnaires and guidelines that are usable by AI (or KiL) systems for sensible and safety-constrained response generation. The hurdle in proving the acceptability of KiL to the AI and natural language understanding communities lies in the absence of realistic datasets that demonstrate these five bottlenecks of statistical AI. The dissertation describes the process of constructing a wide variety of gold-standard datasets using expert knowledge, questionnaires, guidelines, and knowledge graphs. These datasets challenge statistical AI on explainability, interpretability, uncertainty, and context sensitivity, and showcase the remarkable performance gains obtained by KiL-based algorithms. This dissertation terms these gold-standard datasets Knowledge-intensive Language Understanding (KILU) tasks and considers them complementary to the well-adopted General Language Understanding Evaluation (GLUE) benchmark.
    On KILU and GLUE datasets, KiL-based algorithms outperformed existing state-of-the-art methods on natural language generation and classification problems. Furthermore, KiL-based algorithms provided user-understandable explanations in sensitive problems such as mental health by highlighting concepts that depict the reason behind the model’s prediction or generation. Mapping these concepts to entities in an external knowledge source can support experts with user-level explanations and reasoning. A cohort-based qualitative evaluation indicated that KiL should support stronger interleaving of a greater variety of knowledge, at different levels of abstraction, with the layers of a deep learning architecture. This would enforce controlled knowledge infusion and prevent the model from extrapolating or overgeneralizing. This dissertation opens up future research questions on neural models within the domain of natural language understanding, for instance: (a) Which layers within deep neural language models (NLMs) require knowledge? (b) NLMs are known to learn by abstraction; how can the inherent abstraction in external knowledge enhance the context of learned statistical representations? (c) Layered knowledge infusion might result in high-energy nodes contributing to the outcome, which runs counter to current softmax-based prediction; how should the most probable outcome be picked? This dissertation provides a first step towards addressing these questions; however, more efficient methods are still needed that provide user-level explanations, are interpretable, and propel safe AI.
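
    As an illustration of the general idea only (not the dissertation's specific implementation), a shallow form of knowledge infusion can be sketched as concatenating a text representation with embeddings of the knowledge-graph concepts matched in the input; the encoder, concept names, and dimensions below are hypothetical stand-ins.

        # Illustrative sketch of shallow knowledge infusion: text features are concatenated
        # with the averaged embedding of matched knowledge-graph concepts, and the joint
        # vector would feed a downstream classifier. All values here are placeholders.
        import numpy as np

        def text_encoder(text):
            # Stand-in for a trained sentence encoder (e.g., a 768-dim transformer output).
            return np.random.rand(768)

        # Stand-in for pretrained knowledge-graph concept embeddings (e.g., 100-dim vectors).
        kg_embeddings = {
            "depression": np.random.rand(100),
            "insomnia": np.random.rand(100),
        }

        def infuse(text, matched_concepts):
            """Concatenate the text vector with the mean of matched KG concept vectors."""
            text_vec = text_encoder(text)
            if matched_concepts:
                kg_vec = np.mean([kg_embeddings[c] for c in matched_concepts], axis=0)
            else:
                kg_vec = np.zeros(100)
            return np.concatenate([text_vec, kg_vec])

        features = infuse("I haven't slept for days", ["insomnia"])
        print(features.shape)  # (868,)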

    Semantics of the Black-Box: Can knowledge graphs help make deep learning systems more interpretable and explainable?

    The recent series of innovations in deep learning (DL) has shown enormous potential to impact individuals and society, both positively and negatively. DL models utilizing massive computing power and enormous datasets have significantly outperformed prior historical benchmarks on increasingly difficult, well-defined research tasks across technology domains such as computer vision, natural language processing, signal processing, and human-computer interaction. However, the black-box nature of DL models and their over-reliance on massive amounts of data condensed into labels and dense representations pose challenges for the interpretability and explainability of these systems. Furthermore, DL models have not yet proven able to effectively utilize relevant domain knowledge and experience critical to human understanding. This aspect is missing from early data-focused approaches and has necessitated knowledge-infused learning and other strategies to incorporate computational knowledge. This article demonstrates how knowledge, provided as a knowledge graph, is incorporated into DL methods using knowledge-infused learning, one such strategy. We then discuss how this makes a fundamental difference to the interpretability and explainability of current approaches, and illustrate it with examples from natural language processing for healthcare and education applications.
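
    As a toy illustration of the explainability argument (hypothetical concepts and relations, not an example from the article), a knowledge graph can turn a model's salient concepts into a human-readable justification for its prediction.

        # Sketch: ground a model's salient concepts in KG relations to explain a prediction.
        # The concepts, relations, and prediction below are made up for illustration.
        KG = {
            "insomnia": {"is_a": "sleep disorder", "symptom_of": "depression"},
            "fatigue": {"is_a": "symptom", "symptom_of": "depression"},
        }

        def explain(prediction, salient_concepts):
            """Build a trace from salient concepts to the KG entities that justify them."""
            lines = [f"Predicted: {prediction}"]
            for concept in salient_concepts:
                for relation, target in KG.get(concept, {}).items():
                    lines.append(f"  {concept} --{relation}--> {target}")
            return "\n".join(lines)

        print(explain("risk of depression", ["insomnia", "fatigue"]))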

    Knowledge Infused Learning (K-IL): Towards Deep Incorporation of Knowledge in Deep Learning

    Learning the underlying patterns in data goes beyond instance-based generalization when external knowledge, represented in structured graphs or networks, is brought to bear. Deep learning, which primarily constitutes the neural computing stream in AI, has shown significant advances in probabilistically learning latent patterns using multi-layered networks of computational nodes (i.e., neurons/hidden units). Structured knowledge, which underlies symbolic computing approaches and often supports reasoning, has also seen significant growth in recent years, in the form of broad-based (e.g., DBPedia, Yago) and domain-, industry-, or application-specific knowledge graphs. A common substrate with careful integration of the two will create opportunities to develop neuro-symbolic learning approaches for AI in which conceptual and probabilistic representations are combined. As the incorporation of external knowledge will aid in supervising the learning of features for the model, deep infusion of representational knowledge from knowledge graphs within hidden layers will further enhance the learning process. Although much work remains, we believe that knowledge graphs will play an increasing role in developing hybrid neuro-symbolic intelligent systems (bottom-up deep learning with top-down symbolic computing) as well as in building explainable AI systems, for which knowledge graphs will provide scaffolding for punctuating neural computing. In this position paper, we describe our motivation for such a neuro-symbolic approach and a framework that combines knowledge graphs and neural networks.
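
    To make the idea of infusing representational knowledge within hidden layers concrete, here is a minimal sketch of one possible gated fusion layer; this is an illustrative construction under assumed dimensions and module names, not the architecture proposed in the paper.

        # Sketch: blend a knowledge-graph vector into a hidden state via a learned gate.
        import torch
        import torch.nn as nn

        class KnowledgeInfusedLayer(nn.Module):
            def __init__(self, hidden_dim, kg_dim):
                super().__init__()
                self.project = nn.Linear(kg_dim, hidden_dim)       # map KG vector into hidden space
                self.gate = nn.Linear(2 * hidden_dim, hidden_dim)  # gate from [hidden; knowledge]

            def forward(self, hidden, kg_vec):
                knowledge = torch.tanh(self.project(kg_vec))
                g = torch.sigmoid(self.gate(torch.cat([hidden, knowledge], dim=-1)))
                # Per-dimension blend of the hidden state and the projected knowledge.
                return g * knowledge + (1 - g) * hidden

        layer = KnowledgeInfusedLayer(hidden_dim=256, kg_dim=100)
        fused = layer(torch.randn(4, 256), torch.randn(4, 100))
        print(fused.shape)  # torch.Size([4, 256])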

    Neurosymbolic AI - Why, What, and How

    Humans interact with the environment using a combination of perception (transforming sensory inputs from the environment into symbols) and cognition (mapping symbols to knowledge about the environment to support abstraction, reasoning by analogy, and long-term planning). Human-perception-inspired machine perception, in the context of AI, refers to large-scale pattern recognition from raw data using neural networks trained with self-supervised learning objectives such as next-word prediction or object recognition. On the other hand, machine cognition encompasses more complex computations, such as using knowledge of the environment to guide reasoning, analogy, and long-term planning. Humans can also control and explain their cognitive functions, which seems to require the retention of symbolic mappings from perception outputs to knowledge about their environment. For example, humans can follow and explain the guidelines and safety constraints driving their decision-making in safety-critical applications such as healthcare, criminal justice, and autonomous driving. While data-driven neural network-based AI algorithms effectively model machine perception, symbolic knowledge-based AI is better suited for modeling machine cognition. This is because symbolic knowledge structures support explicit representations of mappings from perception outputs to knowledge, enabling traceability and auditing of the AI system’s decisions. Such audit trails are useful for enforcing application aspects of safety, such as regulatory compliance and explainability, by tracking the AI system’s inputs, outputs, and intermediate steps. This first article in the Neurosymbolic AI department introduces and provides an overview of the rapidly emerging paradigm of Neurosymbolic AI, which combines neural networks and knowledge-guided symbolic approaches to create more capable and flexible AI systems. These systems have immense potential to advance both the algorithm-level (e.g., abstraction, analogy, reasoning) and application-level (e.g., explainable and safety-constrained decision-making) capabilities of AI systems.
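
    As a purely hypothetical toy illustration of the traceability pattern described above (not a system from the article), a neural perception step can emit a symbol that a symbolic rule base checks against a safety constraint, with every step logged to an audit trail.

        # Sketch of the neurosymbolic pattern: perception -> symbol -> rule check -> audit log.
        def neural_perception(image):
            # Stand-in for a trained network; returns a symbol and a confidence score.
            return "pedestrian_ahead", 0.97

        SAFETY_RULES = {"pedestrian_ahead": "must_brake"}  # symbolic knowledge: symbol -> obligation

        def decide(image, audit_trail):
            symbol, confidence = neural_perception(image)
            action = SAFETY_RULES.get(symbol, "proceed")
            audit_trail.append({"perceived": symbol, "confidence": confidence,
                                "rule": SAFETY_RULES.get(symbol), "action": action})
            return action

        trail = []
        print(decide("camera_frame.png", trail))  # must_brake
        print(trail)                              # traceable record for compliance review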
