3,162 research outputs found

    Semantic Matchmaking of Web Resources with Local Closed-World Reasoning

    Ontology languages like OWL allow for semantically rich annotation of resources (e.g., products advertised at on-line electronic marketplaces). The description logic (DL) formalism underlying OWL provides reasoning techniques that perform matchmaking on such annotations. This paper identifies peculiarities in the use of DL inferences for matchmaking that derive from OWL's open-world semantics, analyzes local closed-world reasoning for its applicability to matchmaking, and investigates the suitability of two nonmonotonic extensions to DL, autoepistemic DLs and DLs with circumscription, for local closed-world reasoning in the matchmaking context. An elaborate example of an electronic marketplace for PC product catalogs from the e-commerce domain demonstrates how these formalisms can be used to realize such scenarios.
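    The open-world peculiarity the abstract refers to can be sketched in a few lines of Python. This is an illustrative toy, not the paper's DL formalism; all attribute names and product data below are made up. The point it shows: an attribute absent from an offer is merely unknown under open-world reasoning, but counts as false once that attribute is locally closed.

    ```python
    def matches(offer, request, closed_attrs=frozenset()):
        """An offer matches a request if no requested attribute is contradicted.

        Open-world: an attribute missing from the offer is *unknown*, so its
        absence cannot cause a mismatch. Locally closing an attribute treats
        its absence as negation (a local closed-world assumption).
        """
        for attr, value in request.items():
            if attr in offer:
                if offer[attr] != value:
                    return False      # explicit contradiction
            elif attr in closed_attrs:
                return False          # closed attribute: absence means "not"
            # open attribute and absent: unknown, no contradiction
        return True

    # Hypothetical PC advertisement that says nothing about a DVD drive.
    offer = {"cpu": "i7"}
    request = {"cpu": "i7", "dvd": "yes"}

    open_world = matches(offer, request)                       # dvd is unknown
    local_cwa = matches(offer, request, closed_attrs={"dvd"})  # dvd is closed
    ```

    Under the open-world reading the offer still matches; closing the "dvd" attribute makes the same offer fail, which is why the choice of what to close matters for marketplace matchmaking.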

    Towards a belief revision based adaptive and context sensitive information retrieval system

    In an adaptive information retrieval (IR) setting, the information seekers' beliefs about which terms are relevant or nonrelevant will naturally fluctuate. This article investigates how the theory of belief revision can be used to model adaptive IR. More specifically, belief revision logic provides a rich representation scheme to formalize retrieval contexts so as to disambiguate vague user queries. In addition, belief revision theory underpins the development of an effective mechanism to revise user profiles in accordance with information seekers' changing information needs. It is argued that information retrieval contexts can be extracted by means of the information-flow text mining method so as to realize a highly autonomous adaptive IR system. An additional benefit of a belief-based IR model is that its retrieval behavior is more predictable and easier to explain. Our initial experiments show that the belief-based adaptive IR system is as effective as a classical adaptive IR system. To the best of our knowledge, this is the first successful implementation and evaluation of a logic-based adaptive IR model that can efficiently process large IR collections.

    Fourteenth Biennial Status Report: March 2017 - February 2019

    No full text

    A methodology for the selection of a paradigm of reasoning under uncertainty in expert system development

    The aim of this thesis is to develop a methodology for the selection of a paradigm of reasoning under uncertainty for the expert system developer. This is important since practical information on how to select a paradigm of reasoning under uncertainty is not generally available. The thesis explores the role of uncertainty in an expert system and considers the process of reasoning under uncertainty. The possible sources of uncertainty are investigated and prove to be crucial to some aspects of the methodology. A variety of Uncertainty Management Techniques (UMTs) are considered, including numeric, symbolic and hybrid methods. Considerably more information is found in the literature on numeric methods than on the latter two. Methods that have been proposed for comparing UMTs are studied, and comparisons reported in the literature are summarised. Again, this concentrates on numeric methods, since there is more literature available. The requirements of a methodology for the selection of a UMT are considered. A manual approach to the selection process is developed. The possibility of extending the boundaries of knowledge stored in the expert system by including meta-data to describe the handling of uncertainty in an expert system is then considered. This is followed by suggestions taken from the literature for automating the process of selection. Finally, consideration is given to whether the objectives of the research have been met, and recommendations are made for the next stage in researching a methodology for the selection of a paradigm of reasoning under uncertainty in expert system development.

    Movement Analytics: Current Status, Application to Manufacturing, and Future Prospects from an AI Perspective

    Data-driven decision making is becoming an integral part of manufacturing companies. Data is collected and commonly used to improve efficiency and produce high-quality items for the customers. IoT-based and other forms of object tracking are an emerging tool for collecting movement data of objects/entities (e.g., human workers, moving vehicles, trolleys) over space and time. Movement data can provide valuable insights, such as process bottlenecks, resource utilization, and effective working time, that can be used for decision making and improving efficiency. Turning movement data into valuable information for industrial management and decision making requires analysis methods. We refer to this process as movement analytics. The purpose of this document is to review the current state of work for movement analytics both in manufacturing and more broadly. We survey relevant work from both a theoretical perspective and an application perspective. From the theoretical perspective, we put an emphasis on useful methods from two research areas: machine learning, and logic-based knowledge representation. We also review their combinations in view of movement analytics, and we discuss promising areas for future development and application. Furthermore, we touch on constraint optimization. From an application perspective, we review applications of these methods to movement analytics in a general sense and across various industries. We also describe currently available commercial off-the-shelf products for tracking in manufacturing, and we give an overview of the main concepts of digital twins and their applications.

    Probabilistic Models for Scalable Knowledge Graph Construction

    In the past decade, systems that extract information from millions of Internet documents have become commonplace. Knowledge graphs -- structured knowledge bases that describe entities, their attributes and the relationships between them -- are a powerful tool for understanding and organizing this vast amount of information. However, a significant obstacle to knowledge graph construction is the unreliability of the extracted information, due to noise and ambiguity in the underlying data, errors made by the extraction system, and the complexity of reasoning about the dependencies between these noisy extractions. My dissertation addresses these challenges by exploiting the interdependencies between facts to improve the quality of the knowledge graph in a scalable framework. I introduce a new approach called knowledge graph identification (KGI), which resolves the entities, attributes and relationships in the knowledge graph by incorporating uncertain extractions from multiple sources, entity co-references, and ontological constraints. I define a probability distribution over possible knowledge graphs and infer the most probable knowledge graph using a combination of probabilistic and logical reasoning. Such probabilistic models are frequently dismissed due to scalability concerns, but my implementation of KGI maintains tractable performance on large problems through the use of hinge-loss Markov random fields, which have a convex inference objective. This allows the inference of large knowledge graphs with 4M facts and 20M ground constraints in 2 hours. To further scale the solution, I develop a distributed approach to the KGI problem which runs in parallel across multiple machines, reducing inference time by 90%. Finally, I extend my model to the streaming setting, where a knowledge graph is continuously updated by incorporating newly extracted facts. I devise a general approach for approximately updating inference in convex probabilistic models, and quantify the approximation error by defining and bounding inference regret for online models. Together, my work retains the attractive features of probabilistic models while providing the scalability necessary for large-scale knowledge graph construction. These models have been applied on a number of real-world knowledge graph projects, including the NELL project at Carnegie Mellon and the Google Knowledge Graph.
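    The convexity argument behind hinge-loss Markov random fields can be illustrated with a deliberately tiny example. This is a toy in the spirit of KGI, not the dissertation's implementation: one continuous truth value in [0, 1] for a hypothetical fact, pulled upward by a noisy extraction and downward by a conflicting constraint; the weights and confidences below are invented.

    ```python
    def hinge(v):
        """Hinge function max(0, v): zero when the ground rule is satisfied."""
        return max(0.0, v)

    def total_loss(x):
        """Sum of weighted hinge-loss potentials for one fact's truth value x."""
        # An extraction with confidence 0.8 wants x >= 0.8 (weight 2.0).
        extraction = 2.0 * hinge(0.8 - x)
        # A mutual-exclusion constraint against a competing fact of value 0.3
        # wants x <= 0.3 (weight 1.0).
        constraint = 1.0 * hinge(x - 0.3)
        return extraction + constraint

    # Each potential is convex in x, so the sum is convex: any local optimum is
    # global, which is what makes MAP inference tractable at scale. Here a
    # simple grid stands in for a real convex solver.
    grid = [i / 1000 for i in range(1001)]
    x_map = min(grid, key=total_loss)
    ```

    With these weights, the heavier extraction potential wins and the MAP value settles at 0.8; at scale, the same convex objective is minimized jointly over millions of ground rules rather than by grid search.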

    Query Answering in Probabilistic Data and Knowledge Bases

    Probabilistic data and knowledge bases are becoming increasingly important in academia and industry. They are continuously extended with new data, powered by modern information extraction tools that associate probabilities with knowledge base facts. The state of the art to store and process such data is founded on probabilistic database systems, which are widely and successfully employed. Beyond all the success stories, however, such systems still lack the fundamental machinery to convey some of the valuable knowledge hidden in them to the end user, which limits their potential applications in practice. In particular, in their classical form, such systems are typically based on strong, unrealistic limitations, such as the closed-world assumption, the closed-domain assumption, the tuple-independence assumption, and the lack of commonsense knowledge. These limitations do not only lead to unwanted consequences, but also put such systems on weak footing in important tasks, query answering being a central one. In this thesis, we enhance probabilistic data and knowledge bases with more realistic data models, thereby allowing for better means for querying them. Building on the long endeavor of unifying logic and probability, we develop different rigorous semantics for probabilistic data and knowledge bases, analyze their computational properties, identify sources of (in)tractability, and design practical, scalable query answering algorithms whenever possible. To achieve this, the current work brings together some recent paradigms from logics, probabilistic inference, and database theory.
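    The tuple-independence assumption the abstract criticizes is easy to state concretely. A minimal sketch, with an invented relation and probabilities: each tuple holds with its own probability, independently of all others, so the probability of a Boolean existential query factorizes over the matching tuples.

    ```python
    # Hypothetical extracted relation: (person, city, marginal probability).
    lives_in = [
        ("alice", "paris", 0.7),
        ("alice", "lyon",  0.4),
        ("bob",   "paris", 0.5),
    ]

    def prob_exists(rows, pred):
        """P(at least one tuple satisfying `pred` is present), assuming the
        tuple-independence model: P = 1 - prod(1 - p) over matching tuples."""
        p_none = 1.0
        for row in rows:
            if pred(row):
                p_none *= 1.0 - row[2]   # all matches independently absent
        return 1.0 - p_none

    # P(someone lives in paris) = 1 - (1 - 0.7) * (1 - 0.5) = 0.85
    p = prob_exists(lives_in, lambda r: r[1] == "paris")
    ```

    The neat product formula is exactly what breaks down once tuples are correlated, e.g. through commonsense rules or an open domain, which is why richer semantics are needed.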

    SOWL QL: Querying Spatio-Temporal Ontologies in OWL

    We introduce SOWL QL, a query language for spatio-temporal information in ontologies. Building upon SOWL (Spatio-Temporal OWL), an ontology for handling spatio-temporal information in OWL, SOWL QL supports querying over qualitative spatio-temporal information (expressed using natural language expressions such as “before”, “after”, “north of”, “south of”) rather than merely quantitative information (exact dates, times, locations). SOWL QL extends SPARQL with a powerful set of temporal and spatial operators, including Allen temporal operators, spatial directional and topological operators, or combinations of the above. SOWL QL maintains simplicity of expression as well as upward and downward compatibility with SPARQL. Query translation in SOWL QL yields SPARQL queries, implying that querying spatio-temporal ontologies using SPARQL is still feasible but suffers from several drawbacks, the most important being that queries in SPARQL become particularly complicated and users must be familiar with the underlying spatio-temporal representation (the “N-ary relations” or the “4D-fluents” approach in this work). Finally, querying in SOWL QL is supported by the SOWL reasoner, which is not part of the standard SPARQL translation. The run-time performance of SOWL QL has been assessed experimentally in a real data setting. A critical analysis of its performance is also presented.
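    The qualitative temporal operators mentioned above evaluate Allen-style relations between intervals. A minimal sketch of two such checks, with made-up interval data (this is the general interval semantics, not SOWL QL's actual operator implementation):

    ```python
    def before(a, b):
        """Allen's BEFORE: interval a ends strictly before interval b starts."""
        return a[1] < b[0]

    def meets(a, b):
        """Allen's MEETS: interval a ends exactly where interval b starts."""
        return a[1] == b[0]

    # Hypothetical event intervals as (start_year, end_year).
    excavation = (1821, 1825)
    restoration = (1825, 1830)

    is_before = before(excavation, restoration)  # shared endpoint, not BEFORE
    is_meets = meets(excavation, restoration)    # endpoints coincide: MEETS
    ```

    The thirteen Allen relations are mutually exclusive in this way, which is what lets a query engine answer a qualitative "before" query without the user writing endpoint comparisons by hand.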