Logic of Non-Monotonic Interactive Proofs (Formal Theory of Temporary Knowledge Transfer)
We propose a monotonic logic of internalised non-monotonic or instant
interactive proofs (LiiP) and reconstruct an existing monotonic logic of
internalised monotonic or persistent interactive proofs (LiP) as a minimal
conservative extension of LiiP. Instant interactive proofs effect a fragile
epistemic impact in their intended communities of peer reviewers that consists
in the impermanent induction of the knowledge of their proof goal by means of
the knowledge of the proof with the interpreting reviewer: If my peer reviewer
knew my proof then she would at least then (in that instant) know that its
proof goal is true. Their impact is fragile and their induction of knowledge
impermanent in the sense of being the case possibly only at the instant of
learning the proof. This accounts for the important possibility of
internalising proofs of statements whose truth value can vary, which, as
opposed to invariant statements, cannot have persistent proofs. So instant
interactive proofs effect a temporary transfer of certain propositional
knowledge (knowable ephemeral facts) via the transmission of certain individual
knowledge (knowable non-monotonic proofs) in distributed systems of multiple
interacting agents.Comment: continuation of arXiv:1201.3667 ; published extended abstract:
DOI:10.1007/978-3-642-36039-8_16 ; related to arXiv:1208.591
Emerging from the MIST: A Connector Tool for Supporting Programming by Non-programmers
Software development is an iterative process. As user requirements emerge, software applications must be extended to support the new requirements. Typically, a programmer will add new code to an existing code base of an application to provide new functionality. Previous research has shown that such extensions are easier when application logic is clearly separated from the user interface logic. Assuming that a programmer is already familiar with the existing code base, the task of writing the new code can be split into two sub-tasks: writing code for the application logic, that is, the actual functionality of the application; and writing code for the user interface that will expose the functionality to the end user.
The goal of this research is to reduce the effort required to create a user interface once the application logic has been created, toward supporting scientists with minimal programming knowledge in creating and modifying programs. Using a Model View Controller based architecture, various model components which contain the application logic can be built and extended. The process of creating and extending the views (user interfaces) on these model components is simplified through the use of our Malleable Interactive Software Toolkit (MIST), a tool set and infrastructure intended to simplify the design and extension of dynamically reconfigurable interfaces.
This paper focuses on one tool in the MIST suite, a connector tool that enables the programmer to evolve the user interface as the application logic evolves by connecting related pieces of code together, either through simple drag-and-drop interactions or through the authoring of Python code. The connector tool exemplifies the types of tools in the MIST suite, which we expect will encourage collaborative development of applications by allowing users to integrate various components and minimizing the cost of developing new user interfaces for the combined components.
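As an illustration only — the abstract does not give the MIST API, so every name below is hypothetical — the connector idea can be sketched as wiring a model method (application logic) to a view callback (user-interface logic), the way a drag-and-drop connection might:

```python
class Model:
    """Application logic: the actual functionality, free of any UI code."""

    def fahrenheit(self, celsius: float) -> float:
        return celsius * 9 / 5 + 32


class Connector:
    """Wires a model method to a view callback, standing in for a
    drag-and-drop connection in a tool like the one described above."""

    def __init__(self):
        self.links = []

    def connect(self, source, sink):
        # source: a callable piece of application logic
        # sink: a callable piece of view logic that displays the result
        self.links.append((source, sink))

    def fire(self, value):
        # Push a value through every connection.
        for source, sink in self.links:
            sink(source(value))


model = Model()
connector = Connector()
output = []                                          # stand-in "view": record values
connector.connect(model.fahrenheit, output.append)
connector.fire(100.0)
print(output)   # [212.0]
```

The point of the separation is visible in the sketch: extending the model with a new method requires only one new `connect` call, not a rewrite of the view.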
On the Complexity of Axiom Pinpointing in Description Logics
We investigate the computational complexity of axiom pinpointing in Description Logics, which is the task of finding minimal subsets of a knowledge base that have a given consequence. We consider the problems of enumerating such subsets with and without order, and show hardness results that already hold for the propositional Horn fragment, or for the Description Logic EL. We show complexity results for several other related decision and enumeration problems for these fragments that extend to more expressive logics. In particular, we show that hardness of these problems depends not only on the expressivity of the fragment but also on the shape of the axioms used.
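As a concrete, deliberately naive illustration of the enumeration task, the sketch below finds all subset-minimal sub-bases of a toy propositional Horn knowledge base that entail a given atom; the clause encoding and function names are our assumptions, not the paper's formalism:

```python
from itertools import combinations


def horn_entails(kb, goal):
    """Forward chaining over definite Horn clauses given as (body, head) pairs."""
    derived = set()
    changed = True
    while changed:
        changed = False
        for body, head in kb:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return goal in derived


def minimal_supports(kb, goal):
    """Enumerate subset-minimal sub-bases entailing goal, by brute force.

    Smaller subsets are tried first, so the subset filter keeps only minima.
    """
    minas = []
    for k in range(len(kb) + 1):
        for subset in combinations(kb, k):
            if horn_entails(subset, goal) and \
               not any(set(m) <= set(subset) for m in minas):
                minas.append(subset)
    return minas


# Hypothetical toy KB: facts are clauses with an empty body.
kb = [((), "a"), (("a",), "b"), ((), "b"), (("b",), "c")]
for m in minimal_supports(kb, "c"):
    print(m)
```

Here "c" has two minimal supports, via the fact "b" directly or via "a" and the rule a -> b; the hardness results in the paper concern exactly how badly such enumeration scales.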
Understanding Inconsistency -- A Contribution to the Field of Non-monotonic Reasoning
Conflicting information in an agent's knowledge base may lead to a semantical defect, that is, a situation where it is impossible to draw any plausible conclusion. Finding out the reasons for the observed inconsistency and restoring consistency in a certain minimal way are frequently occurring issues in the research area of knowledge representation and reasoning. In a seminal paper Raymond Reiter proves a duality between maximal consistent subsets of a propositional knowledge base and minimal hitting sets of each minimal conflict -- the famous hitting set duality. We extend Reiter's result to arbitrary non-monotonic logics. To this end, we develop a refined notion of inconsistency, called strong inconsistency. We show that minimal strongly inconsistent subsets play a similar role as minimal inconsistent subsets in propositional logic. In particular, the duality between hitting sets of minimal inconsistent subsets and maximal consistent subsets generalizes to arbitrary logics if the stronger notion of inconsistency is used. We cover various notions of repairs and characterize them using analogous hitting set dualities. Our analysis also includes an investigation of structural properties of knowledge bases with respect to our notions.
Minimal inconsistent subsets of knowledge bases in monotonic logics play an important role when investigating the reasons for conflicts and trying to handle them, but also for inconsistency measurement. Our notion of strong inconsistency thus allows us to extend existing results to non-monotonic logics. While measuring inconsistency in propositional logic has been investigated for some time now, taking non-monotonicity into account poses new challenges. In order to tackle them, we focus on the structure of minimal strongly inconsistent subsets of a knowledge base. We propose measures based on this notion and investigate their behavior in a non-monotonic setting by revisiting existing rationality postulates, and analyzing the compliance of the proposed measures with these postulates.
We provide a series of first results in the context of inconsistency in abstract argumentation theory regarding the two most important reasoning modes, namely credulous as well as skeptical acceptance. Our analysis includes the following problems regarding minimal repairs: existence, verification, computation of one and characterization of all solutions. The latter will be tackled with our previously obtained duality results.
Finally, we investigate the complexity of various related reasoning problems and compare our results to existing ones for monotonic logics.
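Reiter's hitting set duality can be illustrated on a toy knowledge base of propositional literals, where the minimal inconsistent subsets are exactly the complementary pairs; the encoding below is a hypothetical sketch of the classical (monotonic) case that the paper generalises, not the paper's own framework:

```python
from itertools import combinations


def consistent(lits):
    """A set of literals is consistent iff it contains no complementary pair."""
    return not any(("-" + l if not l.startswith("-") else l[1:]) in lits
                   for l in lits)


def minimal_inconsistent(kb):
    """All subset-minimal inconsistent subsets, by brute force (smallest first)."""
    mis = []
    for k in range(1, len(kb) + 1):
        for s in combinations(kb, k):
            if not consistent(set(s)) and not any(m <= set(s) for m in mis):
                mis.append(set(s))
    return mis


def minimal_hitting_sets(families):
    """Subset-minimal sets meeting every member of a family of sets."""
    elems = set().union(*families)
    hs = []
    for k in range(1, len(elems) + 1):
        for s in combinations(sorted(elems), k):
            if all(set(s) & f for f in families) and \
               not any(h <= set(s) for h in hs):
                hs.append(set(s))
    return hs


kb = {"p", "-p", "q", "-q", "r"}
mis = minimal_inconsistent(kb)           # the two pairs {p, -p} and {q, -q}
for h in minimal_hitting_sets(mis):
    # Reiter's duality: removing a minimal hitting set of the minimal
    # inconsistent subsets leaves a maximal consistent subset.
    print(sorted(kb - h))
```

Each of the four complements printed is consistent and cannot be enlarged within kb, which is the duality the paper lifts to arbitrary non-monotonic logics via strong inconsistency.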
Localising iceberg inconsistencies
In artificial intelligence, it is important to handle and analyse inconsistency in knowledge bases. Inconsistent pieces of information suggest questions like “where is the inconsistency?” and “how severe is it?”. Inconsistency measures have been proposed to tackle the latter issue, but the former seems underdeveloped and is the focus of this paper. Minimal inconsistent sets have been the main tool to localise inconsistency, but we argue that they are like the exposed part of an iceberg, failing to capture contradictions hidden under the water. Using classical propositional logic, we develop methods to characterise when a formula is contributing to the inconsistency in a knowledge base and when a set of formulas can be regarded as a primitive conflict. To achieve this, we employ an abstract consequence operation to “look beneath the water level”, generalising the minimal inconsistent set concept and the related free formula notion. We apply the framework presented to the problem of measuring inconsistency in knowledge bases, putting forward relaxed forms for two debatable postulates for inconsistency measures. Finally, we discuss the computational complexity issues related to the introduced concepts.
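The baseline notions the paper generalises, minimal inconsistent sets and free formulas, can be sketched as follows; the example knowledge base is our assumption, chosen so that the conflict hides inside conjunctions, which is the kind of “underwater” contradiction subset-based tools struggle to localise:

```python
from itertools import combinations, product


def satisfiable(formulas, atoms):
    """Brute-force truth-table satisfiability; formulas are Boolean functions
    over a valuation dict mapping atom names to truth values."""
    return any(all(f(dict(zip(atoms, vals))) for f in formulas)
               for vals in product([False, True], repeat=len(atoms)))


def minimal_inconsistent_sets(kb, atoms):
    """Subset-minimal inconsistent subsets of kb (a name -> formula dict)."""
    names = sorted(kb)
    mis = []
    for k in range(1, len(names) + 1):
        for s in combinations(names, k):
            if not satisfiable([kb[n] for n in s], atoms) and \
               not any(m <= set(s) for m in mis):
                mis.append(set(s))
    return mis


atoms = ["a", "b", "c"]
kb = {  # hypothetical KB: the real conflict is between the a-parts only
    "a_and_b":  lambda v: v["a"] and v["b"],
    "na_and_b": lambda v: (not v["a"]) and v["b"],
    "c":        lambda v: v["c"],
}
mis = minimal_inconsistent_sets(kb, atoms)
free = set(kb) - set().union(*mis)   # free formulas: in no minimal inconsistent set
print(mis, free)
```

The only minimal inconsistent set is the pair of conjunctions as wholes, and "c" is free; yet the contradiction really lives in the a-conjuncts while the shared b-conjunct is innocent, which is the finer-grained localisation the paper's consequence operation is designed to recover.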
Super Logic Programs
The Autoepistemic Logic of Knowledge and Belief (AELB) is a powerful
nonmonotonic formalism introduced by Teodor Przymusinski in 1994. In this paper,
we specialize it to a class of theories called `super logic programs'. We argue
that these programs form a natural generalization of standard logic programs.
In particular, they allow disjunctions and default negation of arbitrary
positive objective formulas.
Our main results are two new and powerful characterizations of the static
semantics of these programs, one syntactic, and one model-theoretic. The
syntactic fixed point characterization is much simpler than the fixed point
construction of the static semantics for arbitrary AELB theories. The
model-theoretic characterization via Kripke models allows one to construct
finite representations of the inherently infinite static expansions.
Both characterizations can be used as the basis of algorithms for query
answering under the static semantics. We describe a query-answering interpreter
for super programs which we developed based on the model-theoretic
characterization and which is available on the web.
Comment: 47 pages, revised version of the paper submitted 10/200
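The static semantics of AELB itself is beyond a short sketch, but the flavour of a syntactic fixed-point characterization can be seen on plain definite logic programs, where the least model is the least fixed point of the immediate-consequence operator; this encoding is an illustrative assumption, not the paper's construction for super programs:

```python
def t_p(program, interp):
    """Immediate-consequence operator T_P: heads of all rules whose
    bodies are satisfied by the current interpretation."""
    return {head for head, body in program if body <= interp}


def least_model(program):
    """Kleene iteration of T_P from the empty interpretation up to
    its least fixed point (guaranteed for definite programs)."""
    interp = set()
    while True:
        nxt = t_p(program, interp)
        if nxt == interp:
            return interp
        interp = nxt


# Hypothetical definite program: rules as (head, frozenset(body)) pairs.
prog = [("p", frozenset()),
        ("q", frozenset({"p"})),
        ("r", frozenset({"p", "q"})),
        ("s", frozenset({"t"}))]
print(least_model(prog))   # {'p', 'q', 'r'}
```

A query-answering interpreter of the kind described can be built on such an iteration; the paper's contribution is that for super programs the analogous fixed point is much simpler than the one for arbitrary AELB theories.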
A Description Logic Framework for Commonsense Conceptual Combination Integrating Typicality, Probabilities and Cognitive Heuristics
We propose a nonmonotonic Description Logic of typicality able to account for
the phenomenon of concept combination of prototypical concepts. The proposed
logic relies on the logic of typicality ALC TR, whose semantics is based on the
notion of rational closure, as well as on the distributed semantics of
probabilistic Description Logics, and is equipped with a cognitive heuristic
used by humans for concept composition. We first extend the logic of typicality
ALC TR by typicality inclusions whose intuitive meaning is that "there is
probability p about the fact that typical Cs are Ds". As in the distributed
semantics, we define different scenarios containing only some typicality
inclusions, each one having a suitable probability. We then focus on those
scenarios whose probabilities belong to a given and fixed range, and we exploit
such scenarios in order to ascribe typical properties to a concept C obtained
as the combination of two prototypical concepts. We also show that reasoning in
the proposed Description Logic is EXPTIME-complete, as for the underlying ALC.
Comment: 39 pages, 3 figures
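The distributed-semantics part of the construction can be sketched directly: each typicality inclusion is independently kept or dropped, a scenario's probability is the product of the corresponding p or 1 - p factors, and reasoning is restricted to scenarios whose probability falls in a fixed range; the inclusion names and the range below are hypothetical:

```python
from itertools import product


def scenarios(inclusions):
    """Enumerate all scenarios of the distributed semantics.

    Each typicality inclusion (name, p) is independently either kept
    (probability p) or dropped (probability 1 - p); a scenario is the
    set of kept inclusions together with its probability.
    """
    out = []
    for choice in product([True, False], repeat=len(inclusions)):
        kept = frozenset(name for (name, _), c in zip(inclusions, choice) if c)
        prob = 1.0
        for (_, p), c in zip(inclusions, choice):
            prob *= p if c else 1 - p
        out.append((kept, prob))
    return out


# Hypothetical typicality inclusions "T(C) is a D with probability p".
incl = [("T(Bird) flies", 0.9), ("T(Penguin) does not fly", 0.8)]
for kept, prob in scenarios(incl):
    if 0.1 <= prob <= 0.5:              # keep only scenarios in a fixed range
        print(sorted(kept), round(prob, 2))
```

The scenario probabilities sum to 1 by construction; typical properties of a combined concept are then ascribed by looking only at the scenarios surviving the probability filter, as described in the abstract.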