
    A cookbook for temporal conceptual data modelling with description logic

    We design temporal description logics suitable for reasoning about temporal conceptual data models and investigate their computational complexity. Our formalisms are based on DL-Lite logics with three types of concept inclusions (ranging from atomic concept inclusions and disjointness to the full Booleans), as well as cardinality constraints and role inclusions. In the temporal dimension, they capture future and past temporal operators on concepts, flexible and rigid roles, the operators 'always' and 'some time' on roles, data assertions for particular moments of time, and global concept inclusions. The logics are interpreted over the Cartesian products of object domains and the flow of time (Z,<), satisfying the constant domain assumption. We prove that the most expressive of our temporal description logics (which can capture lifespan cardinalities and either qualitative or quantitative evolution constraints) turn out to be undecidable. However, by omitting some of the temporal operators on concepts/roles or by restricting the form of concept inclusions, we obtain logics whose complexity ranges between PSpace and NLogSpace. These positive results were obtained by reduction to various clausal fragments of propositional temporal logic, which opens a way to employ propositional or first-order temporal provers for reasoning about temporal data models.
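
    As a purely illustrative sketch (these axioms are ours, not taken from the abstract above), a qualitative evolution constraint and a cardinality constraint in such a temporal DL-Lite dialect could be written as

    \[
    \mathit{Manager} \sqsubseteq \Diamond_P\, \mathit{Employee}
    \qquad\qquad
    \mathit{Employee} \sqsubseteq\ {\geq}\,1\,\mathit{worksFor}
    \]

    where \Diamond_P reads 'at some time in the past' and both inclusions are global, i.e. required to hold at every time point of (Z,<).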

    Time-Aware Probabilistic Knowledge Graphs

    The emergence of open information extraction as a tool for constructing and expanding knowledge graphs has aided the growth of temporal data in resources such as YAGO, NELL and Wikidata. While YAGO and Wikidata maintain the valid time of facts, NELL records the time point at which a fact is retrieved from Web corpora. Collectively, these knowledge graphs (KGs) store facts extracted from Wikipedia and other sources. Due to the imprecise nature of the extraction tools used to build and expand KGs such as NELL, the facts in these KGs are weighted with a confidence value representing their correctness. Additionally, NELL can be considered a transaction-time KG because every fact is associated with its extraction date. YAGO and Wikidata, on the other hand, use the valid-time model because they maintain facts together with their validity time (temporal scope). In this paper, we propose a bitemporal model (combining the transaction-time and valid-time models) for maintaining and querying bitemporal probabilistic knowledge graphs. We study coalescing and the scalability of marginal and MAP inference. Moreover, we show that the complexity of reasoning tasks in atemporal probabilistic KGs carries over to the bitemporal setting. Finally, we report the evaluation results of the proposed model.
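
    As a minimal sketch of what a bitemporal, weighted fact and a simple coalescing step could look like, the following Python fragment is illustrative only: the field names, the integer time granularity and the max-confidence merge policy are our assumptions, not the model proposed in the paper.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class BitemporalFact:
        """A weighted triple with a transaction time and a valid-time interval.

        `confidence` is the extraction tool's weight, `tt` the transaction
        (extraction) date and `vt` the valid-time interval [start, end].
        All names here are illustrative, not the paper's schema.
        """
        subject: str
        predicate: str
        obj: str
        confidence: float
        tt: int                  # transaction time (e.g. extraction year)
        vt: Tuple[int, int]      # valid-time interval [start, end]

    def coalesce(facts: List[BitemporalFact]) -> List[BitemporalFact]:
        """Merge facts about the same triple whose valid-time intervals meet or
        overlap, keeping the maximum confidence (one possible merge policy)."""
        facts = sorted(facts, key=lambda f: (f.subject, f.predicate, f.obj, f.vt[0]))
        merged: List[BitemporalFact] = []
        for f in facts:
            if (merged
                    and (merged[-1].subject, merged[-1].predicate, merged[-1].obj)
                        == (f.subject, f.predicate, f.obj)
                    and f.vt[0] <= merged[-1].vt[1] + 1):
                last = merged[-1]
                last.vt = (last.vt[0], max(last.vt[1], f.vt[1]))
                last.confidence = max(last.confidence, f.confidence)
                last.tt = max(last.tt, f.tt)
            else:
                merged.append(f)
        return merged

    Keeping the maximum confidence when merging is just one conceivable policy; how coalescing should interact with the probabilistic semantics is exactly the kind of question the paper studies.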

    Tailoring temporal description logics for reasoning over temporal conceptual models

    Temporal data models have been used to describe how data can evolve in the context of temporal databases. Both the Extended Entity-Relationship (EER) model and the Unified Modelling Language (UML) have been temporally extended to design temporal databases. To automatically check quality properties of conceptual schemas, various encodings into Description Logics (DLs) have been proposed in the literature. On the other hand, reasoning over temporally extended DLs turns out to be too complex for effective use, with complexity ranging from 2ExpTime up to undecidable languages. We propose here to temporalize the 'light-weight' DL-Lite logics, obtaining good computational results while still being able to represent various constraints of temporal conceptual models. In particular, we consider temporal extensions of DL-Lite^N_bool, which was shown to be adequate for capturing non-temporal conceptual models without relationship inclusion, and its fragment DL-Lite^N_core with the most primitive concept inclusions, which are nevertheless enough to represent almost all types of atemporal constraints (apart from covering).
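
    To illustrate the 'apart from covering' remark (an example of ours, not drawn from the paper): plain ISA links are core concept inclusions, whereas a covering constraint needs a disjunction on the right-hand side and therefore requires DL-Lite^N_bool:

    \[
    \mathit{Student} \sqsubseteq \mathit{Person},\qquad
    \mathit{Employee} \sqsubseteq \mathit{Person}
    \qquad \text{(expressible in DL-Lite}^{N}_{\mathit{core}}\text{)}
    \]
    \[
    \mathit{Person} \sqsubseteq \mathit{Student} \sqcup \mathit{Employee}
    \qquad \text{(covering: needs } \sqcup\text{, i.e. DL-Lite}^{N}_{\mathit{bool}}\text{)}
    \]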

    Ontology modelling methodology for temporal and interdependent applications

    The increasing adoption of Semantic Web technology by several classes of applications in recent years has made ontology engineering a crucial part of application development. Nowadays, the abundant availability of interdependent information from multiple sources, representing various fields such as health, transport and banking, further evidences the growing need for ontologies in the development of Web applications. While there have been several advances in the adoption of ontologies for application development, less emphasis has been placed on modelling methodologies for representing modern-day applications, which are characterised by the temporal nature of the data they process, captured from multiple sources. Taking into account the benefits of a methodology in system development, we propose a novel methodology for modelling ontologies representing Context-Aware Temporal and Interdependent Systems (CATIS). CATIS is an ontology development methodology for modelling temporal interdependent applications, designed to achieve the desired results when modelling sophisticated applications with temporal and interdependent attributes to suit today's application requirements.

    Enhanced tracking and recognition of moving objects by reasoning about spatio-temporal continuity.

    A framework for the logical and statistical analysis and annotation of dynamic scenes containing occlusion and other uncertainties is presented. This framework consists of three elements: an object tracker module, an object recognition/classification module, and a reasoning engine for logical consistency, ambiguity and error. The principle behind the object tracker and object recognition modules is to reduce error by increasing ambiguity (by merging objects in close proximity and presenting multiple hypotheses). The reasoning engine deals with error, ambiguity and occlusion in a unified framework to produce a hypothesis that satisfies fundamental constraints on the spatio-temporal continuity of objects. Our algorithm finds a globally consistent model of an extended video sequence that is maximally supported by a voting function based on the output of a statistical classifier. The resulting annotation is significantly more accurate than what would be obtained by frame-by-frame evaluation of the classifier output. The framework has been implemented and applied successfully to the analysis of team sports with a single camera.
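
    The following toy Python sketch illustrates the general idea of scoring hypotheses with a voting function and keeping only spatio-temporally continuous ones; the data structures, the simplified continuity rule (no reasoning about occlusion) and the brute-force search are our assumptions, not the authors' algorithm.

    from typing import Dict, List, Optional, Sequence

    Hypothesis = Dict[str, List[Optional[str]]]   # object id -> per-frame blob id (None = absent)
    Scores = List[Dict[str, Dict[str, float]]]    # frame -> blob id -> object id -> classifier score

    def spatio_temporally_continuous(hypothesis: Hypothesis) -> bool:
        # Simplified continuity: once an object has appeared it may not vanish
        # and later reappear, so its presence must form one contiguous run of
        # frames. (The paper's constraints additionally reason about occlusion.)
        for track in hypothesis.values():
            present = [blob is not None for blob in track]
            if True in present:
                first = present.index(True)
                last = len(present) - 1 - present[::-1].index(True)
                if not all(present[first:last + 1]):
                    return False
        return True

    def vote(hypothesis: Hypothesis, scores: Scores) -> float:
        # Total classifier support: sum the per-frame score of every
        # (blob, object) assignment the hypothesis makes.
        total = 0.0
        for obj, track in hypothesis.items():
            for frame_scores, blob in zip(scores, track):
                if blob is not None:
                    total += frame_scores.get(blob, {}).get(obj, 0.0)
        return total

    def best_hypothesis(candidates: Sequence[Hypothesis], scores: Scores) -> Optional[Hypothesis]:
        # Brute force: keep the continuous candidate with the highest total vote.
        admissible = [h for h in candidates if spatio_temporally_continuous(h)]
        return max(admissible, key=lambda h: vote(h, scores), default=None)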