
    Reasoning for the description logic ALC with link keys

    Data interlinking is a critical task for widening and enhancing linked open data. One way to tackle data interlinking is to use link keys, which generalise keys to the case of two RDF datasets described using different ontologies. Link keys specify pairs of properties to compare in order to find same-as links between instances of two classes of two different datasets, so they can be used for finding links. Link keys can also be considered as logical axioms, just like keys, ontologies and ontology alignments. We introduce the logic ALC+LK, which extends the description logic ALC with link keys. It can be used to reason with link keys and to infer entailed link keys that may be better suited to a particular data interlinking task. We show that link key entailment can be reduced to consistency checking without introducing the negation of link keys. For deciding the consistency of an ALC+LK ontology, we introduce a new tableau-based algorithm. Unlike the classical completion rules, the rules concerning link keys apply to pairs of individuals that are not directly related. We show that this algorithm is sound, complete and always terminates.
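
    As an illustration of the abstract's reading of link keys ("pairs of properties to compare for finding same-as links"), here is a minimal, hedged Python sketch. All dataset, class and property names below are hypothetical, and the paper itself is about reasoning over such keys rather than this naive matching loop.

```python
# Minimal sketch (not the paper's algorithm): applying a link key of the form
# ({(p1, q1), ..., (pn, qn)}, (C, D)) to two datasets to propose candidate
# same-as links. All names and records are hypothetical.

def apply_link_key(pairs, instances_c, instances_d):
    """Link two instances when they agree on every compared property pair."""
    links = []
    for a, props_a in instances_c.items():
        for b, props_b in instances_d.items():
            if all(p in props_a and q in props_b and props_a[p] == props_b[q]
                   for p, q in pairs):
                links.append((a, b))  # candidate same-as link
    return links

# Hypothetical data: class Person in dataset 1, class Author in dataset 2.
dataset1 = {"d1:alice": {"foaf:name": "Alice", "ex:birthYear": 1980}}
dataset2 = {"d2:a-smith": {"dc:creator": "Alice", "ex2:born": 1980}}

link_key = [("foaf:name", "dc:creator"), ("ex:birthYear", "ex2:born")]
print(apply_link_key(link_key, dataset1, dataset2))
# [('d1:alice', 'd2:a-smith')]
```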

    A New Rational Algorithm for View Updating in Relational Databases

    The dynamics of belief and knowledge is one of the major components of any autonomous system that should be able to incorporate new pieces of information. In order to apply the rationality results of belief dynamics theory to various practical problems, the theory should be generalized in two respects: first, it should allow a certain part of the beliefs to be declared immutable; and second, the belief state need not be deductively closed. Such a generalization of belief dynamics, referred to as base dynamics, is presented in this paper, along with the concept of a generalized revision algorithm for knowledge bases (Horn or Horn logic with stratified negation). We show that knowledge base dynamics has an interesting connection with kernel change via hitting sets and abduction. We also show how techniques from disjunctive logic programming can be used for efficient (deductive) database updates. The key idea is to transform the given database together with the update request into a disjunctive (datalog) logic program and to apply disjunctive techniques (such as minimal model reasoning) to solve the original update problem. The approach extends and integrates standard techniques for efficient query answering and integrity checking. The generation of a hitting set is carried out through a hyper tableaux calculus and a magic-set transformation focused on minimality.
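
    To make the "kernel change via hitting set" connection concrete, here is a naive, hedged Python sketch of minimal hitting sets over a set of kernels. The paper's actual generation uses a hyper tableaux calculus and magic sets, which this brute-force enumeration does not reproduce; the kernels shown are hypothetical.

```python
# Naive enumeration of minimal hitting sets: subsets of the union that
# intersect every kernel and contain no smaller hitting set. Illustrative
# only; exponential in the size of the universe.
from itertools import chain, combinations

def minimal_hitting_sets(kernels):
    universe = sorted(set(chain.from_iterable(kernels)))

    def hits(s):
        return all(s & set(k) for k in kernels)

    candidates = [set(c) for r in range(1, len(universe) + 1)
                  for c in combinations(universe, r) if hits(set(c))]
    return [s for s in candidates if not any(t < s for t in candidates)]

# Hypothetical kernels (minimal subsets of the base responsible for a conflict).
print(minimal_hitting_sets([{"p", "q"}, {"q", "r"}]))
# e.g. [{'q'}, {'p', 'r'}]
```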

    A Substructural Epistemic Resource Logic: Theory and Modelling Applications

    We present a substructural epistemic logic, based on Boolean BI, in which the epistemic modalities are parametrized on agents' local resources. The new modalities can be seen as generalizations of the usual epistemic modalities. The logic combines Boolean BI's resource semantics (we introduce BI and its resource semantics at some length) with epistemic agency. We illustrate the use of the logic in systems modelling by discussing some examples of access control, including semaphores, using resource tokens. We also give a labelled tableaux calculus and establish soundness and completeness with respect to the resource semantics.
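
    For background, here is a minimal sketch of the kind of resource semantics the abstract builds on: in one standard presentation of Boolean BI over a partial commutative resource monoid, the multiplicative conjunction is read as a decomposition of the current resource. The paper's epistemic modalities, parametrized on agents' local resources, are not reproduced here and their exact clauses may differ.

```latex
% One standard satisfaction clause for BI's multiplicative conjunction over a
% partial commutative resource monoid (R, \circ, e); background only, not the
% paper's epistemic clauses.
\[
  r \vDash \varphi * \psi
  \quad\text{iff}\quad
  \exists r_1, r_2 \in R.\;\;
  r = r_1 \circ r_2 \text{ is defined},\;
  r_1 \vDash \varphi
  \;\text{ and }\;
  r_2 \vDash \psi
\]
```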

    Characterization of XML Functional Dependencies and their Interaction with DTDs

    With the rise of XML as a standard model of data exchange, XML functional dependencies (XFDs) have become important to areas such as key analysis, document normalization, and data integrity. XFDs are more complicated than relational functional dependencies because the set of XFDs satisfied by an XML document depends not only on the document values, but also on the tree structure and the corresponding DTD. In particular, constraints imposed by DTDs may alter the implications that follow from a base set of XFDs, and may even be inconsistent with a set of XFDs. In this paper we examine the interaction between XFDs and DTDs. We present a sound and complete axiomatization for XFDs, both alone and in the presence of certain classes of DTDs. We show that these DTD classes form an axiomatic hierarchy, with the axioms at each level forming a proper superset of those at the previous level. Furthermore, we show that consistency checking with respect to a set of XFDs is feasible for these same classes.
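
    As a toy illustration (not the paper's formalism), the sketch below checks a simple path-based dependency over an XML document in Python; real XFDs are defined over tree tuples and interact with the DTD, which this check ignores. The element and path names are hypothetical.

```python
# Check a toy dependency "code -> title" over hypothetical <course> elements.
import xml.etree.ElementTree as ET

DOC = """<courses>
  <course><code>CS101</code><title>Logic</title></course>
  <course><code>CS101</code><title>Logic</title></course>
</courses>"""

def satisfies_xfd(root, lhs_path, rhs_path):
    """True if equal LHS values always come with equal RHS values."""
    seen = {}
    for node in root.findall("course"):
        lhs = node.findtext(lhs_path)
        rhs = node.findtext(rhs_path)
        if lhs in seen and seen[lhs] != rhs:
            return False
        seen[lhs] = rhs
    return True

root = ET.fromstring(DOC)
print(satisfies_xfd(root, "code", "title"))  # True for this document
```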

    Design of a Graphics Interface for Linear Programming

    Information Systems Working Papers Series

    Extensible Knowledge Representation: the Case of Description Reasoners

    This paper offers an approach to extensible knowledge representation and reasoning for a family of formalisms known as Description Logics. The approach is based on the notion of adding new concept constructors, and includes a heuristic methodology for specifying the desired extensions, as well as a modularized software architecture that supports implementing extensions. The architecture detailed here falls within the normalize-compare paradigm, and supports both intensional reasoning (subsumption) involving concepts and extensional reasoning involving individuals after incremental updates to the knowledge base. The resulting approach can be used to extend the reasoner with specialized notions that are motivated by specific problems or application areas, such as reasoning about dates, plans, etc. In addition, it provides an opportunity to implement constructors that are not yet sufficiently well understood theoretically but are needed in practice. Also, for constructors that are provably hard to reason with (e.g., ones whose presence would lead to undecidability), it allows the implementation of incomplete reasoners where the incompleteness is tailored to be acceptable for the application at hand.
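
    The following hedged Python sketch illustrates only the extensibility idea: per-constructor comparison hooks plugged into a very coarse normalize-compare subsumption test. It is not the paper's architecture, and all constructor and concept names are hypothetical.

```python
# Hypothetical sketch: each constructor registers a comparison hook, so new
# constructors (dates, plans, ...) can be added without rewriting the core
# subsumption loop. Illustrative only.

COMPARE_HOOKS = {}

def hook(tag):
    def register(fn):
        COMPARE_HOOKS[tag] = fn
        return fn
    return register

def type_tag(conjunct):
    return conjunct[0] if isinstance(conjunct, tuple) else "atom"

def normalize(concept):
    """Flatten a concept into a set of conjuncts (very coarse normal form)."""
    if isinstance(concept, tuple) and concept[0] == "and":
        return frozenset().union(*(normalize(c) for c in concept[1:]))
    return frozenset([concept])

def subsumes(general, specific):
    """general subsumes specific if every conjunct of general is matched."""
    spec = normalize(specific)
    return all(
        any(COMPARE_HOOKS.get(type_tag(g), lambda g, s: g == s)(g, s)
            for s in spec)
        for g in normalize(general))

# A plugged-in constructor for date ranges: ("date-range", lo, hi).
@hook("date-range")
def compare_date_range(g, s):
    return type_tag(s) == "date-range" and g[1] <= s[1] and s[2] <= g[2]

person_2020s = ("and", "Person", ("date-range", 2020, 2029))
person_2024 = ("and", "Person", "Employee", ("date-range", 2024, 2024))
print(subsumes(person_2020s, person_2024))   # True
print(subsumes(person_2024, person_2020s))   # False
```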

    Method for the semantic indexing of concept hierarchies, uniform representation, use of relational database systems and generic and case-based reasoning

    This paper presents a method for semantic indexing and describes its application in the field of knowledge representation. The starting point of the semantic indexing is knowledge represented by concept hierarchies. The goal is to assign keys to the nodes (concepts) that are hierarchically ordered and syntactically and semantically correct. With the indexing algorithm, keys are computed such that concepts are partially unifiable with all more specific concepts and only semantically correct concepts are allowed to be added. The keys represent terminological relationships. Correctness and completeness of the underlying indexing algorithm are proven. The use of classical relational databases for the storage of instances is described. Because of the uniform representation, inference can be performed using case-based reasoning and generic problem-solving methods.
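
    One simple way to picture hierarchical keys of this kind (not necessarily the paper's algorithm) is a prefix scheme in which each node's key extends its parent's key, so that a more general concept's key is a prefix of every more specific concept's key. The sketch below, with a hypothetical hierarchy, shows how subsumption then becomes checkable on the keys alone.

```python
# Toy prefix-key assignment over a hypothetical child -> parent hierarchy.
HIERARCHY = {
    "Thing": None,
    "Vehicle": "Thing",
    "Car": "Vehicle",
    "Truck": "Vehicle",
}

def assign_keys(hierarchy):
    keys, counters = {}, {}
    def key_of(node):
        if node in keys:
            return keys[node]
        parent = hierarchy[node]
        prefix = () if parent is None else key_of(parent)
        counters[parent] = counters.get(parent, 0) + 1
        keys[node] = prefix + (counters[parent],)
        return keys[node]
    for node in hierarchy:
        key_of(node)
    return keys

def more_specific(keys, a, b):
    """True if concept a is equal to or more specific than concept b."""
    return keys[a][: len(keys[b])] == keys[b]

keys = assign_keys(HIERARCHY)
print(keys)                                    # e.g. 'Car' -> (1, 1, 1)
print(more_specific(keys, "Car", "Vehicle"))   # True
print(more_specific(keys, "Vehicle", "Car"))   # False
```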

    Dynamic Tableaux for Dynamic Modal Logics

    In this dissertation we present proof systems for several modal logics. These proof systems are based on analytic (or semantic) tableaux. Modal logics are logics for reasoning about possibility, knowledge, beliefs, preferences, and other modalities. Their semantics are almost always based on Saul Kripke’s possible world semantics. In Kripke semantics, models are represented by relational structures or, equivalently, labeled graphs. Syntactic formulas that express statements about knowledge and other modalities are evaluated in terms of such models. This dissertation focuses on modal logics with dynamic operators for public announcements, belief revision, preference upgrades, and so on. These operators are defined in terms of mathematical operations on Kripke models; thus, for example, a belief revision operator in the syntax corresponds to a belief revision operation on models. The ‘dynamic’ semantics of dynamic modal logics are a clever way of extending languages without compromising on intuitiveness. We present ‘dynamic’ tableau proof systems for these dynamic semantics, with the express aim of making them conceptually simple, easy to use, modular, and extensible. We do this by reflecting the semantics as closely as possible in the components of our tableau system; for instance, dynamic operations on Kripke models have counterpart dynamic relations between tableaux. Soundness, completeness, and decidability are three of the most important properties that a proof system may have. A proof system is sound if and only if every formula for which a proof exists is true in every model. A proof system is complete if and only if a proof exists for every formula that is true in all models. A proof system is decidable if and only if it can be established in a finite number of steps whether or not any given formula is a theorem. All proof systems in this dissertation are sound, complete, and decidable. Part of our strategy to create modular tableau systems is to delay concerns over decidability until after soundness and completeness have been established. Decidability is attained through the operations of folding and through operations on ‘tableau cascades’, which are graphs of tableaux. Finally, we provide a proof-of-concept implementation of our dynamic tableau system for public announcement logic in the Clojure programming language.
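
    As a small illustration of the "operations on Kripke models" that dynamic operators correspond to, the hedged Python sketch below implements a public announcement of an atomic proposition as model relativization: announcing p restricts the model to the worlds where p holds. The dissertation's tableau systems, and its Clojure implementation, go well beyond this; the two-world model shown is hypothetical.

```python
# Public announcement as an operation on a Kripke model: keep only the worlds
# satisfying the announced atom, and restrict relations and valuation to them.

def holds(world, valuation, prop):
    return prop in valuation[world]

def announce(worlds, relations, valuation, prop):
    """Relativize the model to the worlds satisfying an atomic announcement."""
    kept = {w for w in worlds if holds(w, valuation, prop)}
    new_relations = {
        agent: {(u, v) for (u, v) in edges if u in kept and v in kept}
        for agent, edges in relations.items()
    }
    new_valuation = {w: valuation[w] for w in kept}
    return kept, new_relations, new_valuation

# Hypothetical two-world model: agent "a" cannot distinguish w1 from w2.
worlds = {"w1", "w2"}
relations = {"a": {("w1", "w1"), ("w1", "w2"), ("w2", "w1"), ("w2", "w2")}}
valuation = {"w1": {"p"}, "w2": set()}

print(announce(worlds, relations, valuation, "p"))
# ({'w1'}, {'a': {('w1', 'w1')}}, {'w1': {'p'}})
```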

    Forensic acquisition of file systems with parallel processing of digital artifacts to generate an early case assessment report

    The way humans interact and carry out routine tasks has changed over recent decades, and a long list of activities is now possible only through information technology, including the purchase of goods and services, business management and operations, and communications. These transformations are also visible in less legitimate activities, allowing crimes to be committed through digital means. Broadly speaking, forensic investigators search for evidence of criminal actions carried out through digital devices in order to identify the perpetrators, the extent of the damage caused, and the story behind the crime. In essence, this activity must follow strict standards to ensure that the evidence is admissible in court; but the greater the number of new artifacts and the larger the volume of available storage devices, the longer the time between the identification of a suspect's device and the moment the investigator can begin to navigate the sea of information stored on it. This research aims to bring forward some stages of the EDRM by using the parallel processing available in current processing units (CPUs) to parse multiple forensic artifacts of the Windows 10 operating system and generate a report with the most crucial information about the acquired device. This enables an early case assessment (ECA) while a full disk acquisition is still in progress, with minimal impact on the overall acquisition time.
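
    A hedged sketch of the general idea (parallel parsing of several artifact sources feeding one early-assessment summary) is given below in Python; the parser names and outputs are placeholders and do not correspond to the thesis's actual Windows 10 artifact handlers or acquisition pipeline.

```python
# Illustrative only: run several hypothetical artifact parsers in parallel
# with a process pool and merge their results into one summary that could
# seed an early case assessment report.
from concurrent.futures import ProcessPoolExecutor

def parse_registry(path):
    return {"artifact": "registry", "source": path, "entries": 0}

def parse_event_logs(path):
    return {"artifact": "event_logs", "source": path, "entries": 0}

def parse_browser_history(path):
    return {"artifact": "browser_history", "source": path, "entries": 0}

PARSERS = [parse_registry, parse_event_logs, parse_browser_history]

def early_case_assessment(image_mount_point):
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(p, image_mount_point) for p in PARSERS]
        return [f.result() for f in futures]

if __name__ == "__main__":
    # Hypothetical mount point of the evidence image being acquired.
    print(early_case_assessment("/mnt/evidence"))
```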