173 research outputs found

    Sensor Data and Perception: Can Sensors Play 20 Questions?

    Currently, there are many sensors collecting information about our environment, leading to an overwhelming number of observations that must be analyzed and explained in order to achieve situation awareness. As perceptual beings, we are also constantly inundated with sensory data, yet we are able to make sense of our environment with relative ease. Why is the task of perception so easy for us and so hard for machines, and could this have anything to do with how we play the game 20 Questions?

    Sensor Data Management


    A Benchmark Knowledge Graph of Driving Scenes for Knowledge Completion Tasks

    Knowledge graph completion (KGC) is a problem of significant importance due to the inherent incompleteness in knowledge graphs (KGs). The current approaches for KGC using link prediction (LP) mostly rely on a common set of benchmark datasets that are quite different from real-world industrial KGs. Therefore, the adaptability of current LP methods for real-world KGs and domain-specific applications is questionable. To support the evaluation of current and future LP and KGC methods for industrial KGs, we introduce DSceneKG, a suite of real-world driving scene knowledge graphs that are currently being used across various industrial applications. DSceneKG is publicly available at: https://github.com/ruwantw/DSceneKG

    Causal Neuro-Symbolic AI for Root Cause Analysis in Smart Manufacturing

    Root cause analysis is the process of investigating the cause of a failure and providing measures to prevent future failures. It is an active area of research due to the complexities in manufacturing production lines and the vast amount of data that requires manual inspection. We present a combined approach of causal neuro-symbolic AI for root cause analysis to identify failures in smart manufacturing production lines. We have used data from an industry-grade rocket assembly line and a simulation package to demonstrate the effectiveness and relevance of our approach.

    Sensor Networks Survey


    An Ontology Design Pattern for Representing Causality

    The causal pattern is a proposed ontology design pattern for representing the structure of causal relations in a knowledge graph. This pattern is grounded in the concepts defined and used by the CausalAI community, i.e., Causal Bayesian Networks and do-calculus. Specifically, the pattern models three primary concepts: (1) causal relations, (2) causal event roles, and (3) causal effect weights. Two use cases involving a sprinkler system and asthma patients are provided along with their relevant competency questions.
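    The distinction between observation and do-intervention that grounds this pattern can be sketched on the classic sprinkler example. This is a minimal illustration with hypothetical probability tables, not numbers or code from the paper:

    ```python
    # Classic sprinkler causal Bayesian network: Rain -> Sprinkler, (Rain, Sprinkler) -> WetGrass.
    # All CPT values below are illustrative assumptions.
    P_rain = {1: 0.2, 0: 0.8}                                     # P(R = r)
    P_sprinkler = {1: {1: 0.01, 0: 0.99}, 0: {1: 0.4, 0: 0.6}}    # P(S = s | R = r), keyed [r][s]
    P_wet = {(0, 0): 0.0, (0, 1): 0.9, (1, 0): 0.8, (1, 1): 0.99}  # P(W = 1 | r, s)

    def p_wet_given_obs_s(s):
        """Observational P(W=1 | S=s): conditioning re-weights Rain via Bayes' rule."""
        num = sum(P_rain[r] * P_sprinkler[r][s] * P_wet[(r, s)] for r in (0, 1))
        den = sum(P_rain[r] * P_sprinkler[r][s] for r in (0, 1))
        return num / den

    def p_wet_given_do_s(s):
        """Interventional P(W=1 | do(S=s)): the R -> S edge is cut, so Rain keeps its prior."""
        return sum(P_rain[r] * P_wet[(r, s)] for r in (0, 1))
    ```

    The two quantities differ (here, observing the sprinkler on makes rain less likely, lowering the observational estimate), which is exactly the distinction a causal pattern must be able to represent.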

    Causal Knowledge Graph for Scene Understanding in Autonomous Driving

    The current approaches to autonomous driving focus on learning from observation or simulated data. These approaches are based on correlations rather than causation. For safety-critical applications like autonomous driving, it is important to represent causal dependencies among variables in addition to the domain knowledge expressed in a knowledge graph. This allows for a better understanding of causation during scenarios that have not been observed, such as malfunctions or accidents. The causal knowledge graph, coupled with domain knowledge, demonstrates how autonomous driving scenes can be represented, learned, and explained using counterfactual and intervention reasoning to infer and understand the behavior of entities in the scene.

    An Evaluation of Knowledge Graph Embeddings for Autonomous Driving Data: Experience and Practice

    The autonomous driving (AD) industry is exploring the use of knowledge graphs (KGs) to manage the vast amount of heterogeneous data generated from vehicular sensors. The various types of equipped sensors include video, LIDAR, and RADAR. Scene understanding is an important topic in AD which requires consideration of various aspects of a scene, such as detected objects, events, time, and location. Recent work on knowledge graph embeddings (KGEs) -- an approach that facilitates neuro-symbolic fusion -- has been shown to improve the predictive performance of machine learning models. With the expectation that neuro-symbolic fusion through KGEs will improve scene understanding, this research explores the generation and evaluation of KGEs for autonomous driving data. We also present an investigation of the relationship between the level of informational detail in a KG and the quality of its derivative embeddings. By systematically evaluating KGEs along four dimensions -- i.e., quality metrics, KG informational detail, algorithms, and datasets -- we show that (1) higher levels of informational detail in KGs lead to higher quality embeddings, (2) type and relation semantics are better captured by the translational distance-based TransE algorithm, and (3) some metrics, such as the coherence measure, may not be suitable for intrinsically evaluating KGEs in this domain. Additionally, we present an (early) investigation of the usefulness of KGEs for two use cases in the AD domain. Comment: 11 pages. To appear in the AAAI 2020 Spring Symposium on Combining Machine Learning and Knowledge Engineering in Practice (AAAI-MAKE 2020).
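    The translational intuition behind TransE, which the abstract credits with capturing type and relation semantics, can be sketched in a few lines. Entity and relation names below are hypothetical; the embeddings are random rather than trained, so only the scoring mechanics are shown:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dim = 8

    # Toy scene vocabulary (illustrative names, not the paper's actual KG)
    entities = ["scene1", "pedestrian1", "car1"]
    relations = ["includes"]
    E = {e: rng.normal(size=dim) for e in entities}
    R = {r: rng.normal(size=dim) for r in relations}

    def transe_score(h, r, t):
        """TransE treats a relation as a translation: a true triple (h, r, t)
        should satisfy h + r ≈ t, so a smaller ||h + r - t|| means a more
        plausible link."""
        return float(np.linalg.norm(E[h] + R[r] - E[t]))

    # Link prediction: rank candidate tails for the query (scene1, includes, ?)
    ranked = sorted(entities[1:], key=lambda t: transe_score("scene1", "includes", t))
    ```

    In a trained model the vectors would come from minimizing this distance for observed triples against corrupted negatives; the ranking step above is how KGE link prediction is typically evaluated.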

    Ontology Design Metapattern for RelationType Role Composition

    RelationType is a metapattern that specifies a property in a knowledge graph that directly links the head of a triple with the type of the tail. This metapattern is useful for knowledge graph link prediction tasks, specifically when one wants to predict the type of a linked entity rather than the entity instance itself. The RelationType metapattern serves as a template for future extensions of an ontology with more fine-grained domain information.
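    The metapattern can be illustrated as a triple rewrite. The entity, class, and property names below are hypothetical, chosen only to show the head-to-tail-type shortcut the abstract describes:

    ```python
    # Instance-level triples (hypothetical names).
    instance_triples = [
        ("scene1", "includes", "pedestrian1"),
        ("scene1", "includes", "car1"),
    ]
    entity_type = {"pedestrian1": "Pedestrian", "car1": "Car"}

    def add_relation_type_triples(triples, types):
        """Materialize the RelationType metapattern: alongside each (h, r, t),
        assert (h, rType, type(t)) so link prediction can target the tail's
        class rather than the specific instance."""
        return [(h, rel + "Type", types[t]) for h, rel, t in triples]

    type_triples = add_relation_type_triples(instance_triples, entity_type)
    ```

    A link predictor can then answer "what kind of thing does scene1 include?" over the `includesType` edges without committing to a particular instance.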

    An Efficient Bit Vector Approach to Semantics-Based Machine Perception in Resource-Constrained Devices

    The primary challenge of machine perception is to define efficient computational methods to derive high-level knowledge from low-level sensor observation data. Emerging solutions are using ontologies for expressive representation of concepts in the domain of sensing and perception, which enable advanced integration and interpretation of heterogeneous sensor data. The computational complexity of OWL, however, seriously limits its applicability and use within resource-constrained environments, such as mobile devices. To overcome this issue, we employ OWL to formally define the inference tasks needed for machine perception – explanation and discrimination – and then provide efficient algorithms for these tasks, using bit-vector encodings and operations. The applicability of our approach to machine perception is evaluated on a smart-phone mobile device, demonstrating dramatic improvements in both efficiency and scale.
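    A minimal sketch of the bit-vector idea, with hypothetical property and concept names (the paper's actual encodings and knowledge base differ): each observable property gets one bit, each concept's vector sets the bits of properties it can account for, and the two inference tasks reduce to bitwise operations.

    ```python
    # One bit per observable property (illustrative names).
    PROPERTIES = ["elevated-temp", "cough", "headache", "rash"]
    BIT = {p: 1 << i for i, p in enumerate(PROPERTIES)}

    # Each concept's bit vector marks the properties it can explain.
    CONCEPTS = {
        "flu":     BIT["elevated-temp"] | BIT["cough"] | BIT["headache"],
        "cold":    BIT["cough"] | BIT["headache"],
        "measles": BIT["elevated-temp"] | BIT["rash"],
    }

    def explain(observed):
        """Explanation: concepts whose property set covers every observation.
        One AND plus a comparison per candidate, instead of OWL reasoning."""
        mask = 0
        for p in observed:
            mask |= BIT[p]
        return [c for c, v in CONCEPTS.items() if v & mask == mask]

    def discriminate(observed, candidates):
        """Discrimination: properties that, if observed next, would rule out
        some remaining candidates (set for some candidates but not all)."""
        union, inter = 0, ~0
        for c in candidates:
            union |= CONCEPTS[c]
            inter &= CONCEPTS[c]
        diff = union & ~inter
        return [p for p in PROPERTIES if BIT[p] & diff]
    ```

    Because both tasks are a handful of word-level AND/OR operations per concept, they stay cheap on a resource-constrained device, which is the efficiency claim the evaluation targets.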