Nonmonotonic consequences in default domain theory
Default domain theory is a framework for representing and reasoning about commonsense knowledge. Although this theory is motivated by ideas in Reiter's work on default logic, it is in some sense a dual framework: we make Reiter's default extension operator into a constructive method of building models, not theories. Domain theory, a well-established tool for representing partial information in the semantics of programming languages, is adopted as the basis for constructing partial models. This paper considers some of the laws of nonmonotonic consequence, due to Gabbay and to Kraus, Lehmann, and Magidor, in the light of default domain theory. We remark that in some cases Gabbay's law of cautious monotony is open to question. We consider an axiomatization of the nonmonotonic consequence relation on prime open sets in the Scott topology (the natural logic) of a domain, which omits this law. We prove a representation theorem showing that such relations are in one-to-one correspondence with the consequence relations determined by extensions in Scott domains augmented with default sets. This means that defaults are very expressive: they can, in a sense, represent any reasonable nonmonotonic entailment. Results about which kinds of defaults determine cautious monotony are also discussed. In particular, we show that the property of unique extensions guarantees cautious monotony, and we give several classes of default structures which determine unique extensions.
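Reiter's extension operator, which the abstract above turns into a model-building method, can be illustrated for a drastically simplified setting. The sketch below is not from the paper: it assumes a propositional theory whose facts and default components are all single literals, represents a normal default as a hypothetical (prerequisite, justification, consequent) triple, and enumerates candidate extensions by guess-and-check against Reiter's fixpoint condition.

```python
from itertools import combinations

def neg(lit):
    """Complement of a literal written as 'p' or '-p'."""
    return lit[1:] if lit.startswith('-') else '-' + lit

def closure(W, applied):
    """Facts derivable from W by repeatedly firing the given defaults."""
    E = set(W)
    changed = True
    while changed:
        changed = False
        for pre, just, cons in applied:
            if pre in E and cons not in E:
                E.add(cons)
                changed = True
    return E

def extensions(W, defaults):
    """Enumerate extensions of a literal-only default theory:
    build a candidate E from each subset of defaults, then keep E
    only if it reproduces itself under the defaults that are
    applicable with respect to E (Reiter's fixpoint condition)."""
    found = []
    for r in range(len(defaults) + 1):
        for subset in combinations(defaults, r):
            E = closure(W, subset)
            applicable = [d for d in defaults
                          if d[0] in E and neg(d[1]) not in E]
            if closure(W, applicable) == E and E not in found:
                found.append(E)
    return found

# Nixon diamond: two conflicting normal defaults yield two extensions,
# one containing 'pacifist' and one containing '-pacifist'.
W = {'quaker', 'republican'}
D = [('quaker', 'pacifist', 'pacifist'),
     ('republican', '-pacifist', '-pacifist')]
exts = extensions(W, D)
```

The multiplicity of extensions in the example is exactly the nonmonotonic behavior at stake in the laws of consequence discussed above; the unique-extensions property mentioned at the end of the abstract rules such diamonds out.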
Complexity of Prioritized Default Logics
In default reasoning, usually not all possible ways of resolving conflicts
between default rules are acceptable. Criteria expressing acceptable ways of
resolving the conflicts may be hardwired in the inference mechanism, for
example specificity in inheritance reasoning can be handled this way, or they
may be given abstractly as an ordering on the default rules. In this article we
investigate formalizations of the latter approach in Reiter's default logic.
Our goal is to analyze and compare the computational properties of three such
formalizations in terms of their computational complexity: the prioritized
default logics of Baader and Hollunder, and Brewka, and a prioritized default
logic that is based on lexicographic comparison. The analysis locates the
propositional variants of these logics on the second and third levels of the
polynomial hierarchy, and identifies the boundary between tractable and
intractable inference for restricted classes of prioritized default theories.
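The effect of an ordering on the default rules, as studied in the abstract above, can be sketched in the same simplified literal-only setting. This is an illustrative greedy construction in the spirit of priority-guided extension building, not an implementation of any of the three formalizations the paper analyzes; the triple representation and all names are this sketch's own.

```python
def neg(lit):
    """Complement of a literal written as 'p' or '-p'."""
    return lit[1:] if lit.startswith('-') else '-' + lit

def prioritized_extension(W, defaults):
    """Apply normal, literal-only defaults greedily, with `defaults`
    listed in decreasing priority; after each application we restart
    from the top so a higher-priority default always fires before a
    lower-priority one gets a chance to block it."""
    E = set(W)
    changed = True
    while changed:
        changed = False
        for pre, just, cons in defaults:  # highest priority first
            if pre in E and neg(just) not in E and cons not in E:
                E.add(cons)
                changed = True
                break  # restart the scan at the highest priority
    return E

# Nixon diamond again: the priority ordering resolves the conflict,
# so exactly one of the two classical extensions is selected.
W = {'quaker', 'republican'}
D = [('quaker', 'pacifist', 'pacifist'),
     ('republican', '-pacifist', '-pacifist')]
E = prioritized_extension(W, D)
```

Reversing the order of `D` selects the other extension, which is the sense in which the ordering expresses an acceptable way of resolving the conflict.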
The Complexity of Reasoning for Fragments of Default Logic
Default logic was introduced by Reiter in 1980. In 1992, Gottlob classified
the complexity of the extension existence problem for propositional default
logic as SigmaP2-complete, and the complexity of the credulous and skeptical
reasoning problems as SigmaP2-complete and PiP2-complete, respectively.
Additionally, he investigated restrictions on the default rules, namely
semi-normal default rules. In 1992, Selman took a similar approach with
disjunction-free and unary default rules. In this paper we systematically
restrict the set of allowed propositional connectives. We give a complete
complexity classification, for all sets of Boolean functions in the sense of
Post's lattice, of all three common decision problems for propositional default
logic. We show that the complexity is a hexachotomy (SigmaP2-, DeltaP2-, NP-,
P-, NL-complete, trivial) for the extension existence problem, while for the
credulous and skeptical reasoning problem we obtain similar classifications
without trivial cases.
Comment: Corrected version
KR: An Architecture for Knowledge Representation and Reasoning in Robotics
This paper describes an architecture that combines the complementary
strengths of declarative programming and probabilistic graphical models to
enable robots to represent, reason with, and learn from, qualitative and
quantitative descriptions of uncertainty and knowledge. An action language is
used for the low-level (LL) and high-level (HL) system descriptions in the
architecture, and the definition of recorded histories in the HL is expanded to
allow prioritized defaults. For any given goal, tentative plans created in the
HL using default knowledge and commonsense reasoning are implemented in the LL
using probabilistic algorithms, with the corresponding observations used to
update the HL history. Tight coupling between the two levels enables automatic
selection of relevant variables and generation of suitable action policies in
the LL for each HL action, and supports reasoning with violation of defaults,
noisy observations and unreliable actions in large and complex domains. The
architecture is evaluated in simulation and on physical robots transporting
objects in indoor domains; on physical robots, the benefit is a 39% reduction
in task execution time compared with a purely probabilistic, but still
hierarchical, approach.
Comment: The paper appears in the Proceedings of the 15th International
Workshop on Non-Monotonic Reasoning (NMR 2014).