
    Developments in abstract and assumption-based argumentation and their application in logic programming

    Logic Programming (LP) and Argumentation are two paradigms for knowledge representation and reasoning under incomplete information. Even though the two paradigms share common features, they constitute mostly separate areas of research. In this thesis, we present novel developments in Argumentation, in particular in Assumption-Based Argumentation (ABA) and Abstract Argumentation (AA), and show how they can 1) extend the understanding of the relationship between the two paradigms and 2) provide solutions to problematic reasoning outcomes in LP. More precisely, we introduce assumption labellings as a novel way to express the semantics of ABA and prove a more straightforward relationship with LP semantics than found in previous work. Building upon these correspondence results, we apply methods for argument construction and conflict detection from ABA, and for conflict resolution from AA, to construct justifications of unexpected or unexplained LP solutions under the answer set semantics. We furthermore characterise reasons for the non-existence of stable semantics in AA and apply these findings to characterise different scenarios in which the computation of meaningful solutions in LP under the answer set semantics fails.
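    The failure mode mentioned at the end is easy to reproduce: under the answer set semantics, a single rule in which an atom depends on its own default negation already yields a program with no answer set. The following minimal Python sketch (our illustration, not the thesis's construction) brute-forces candidate interpretations against the Gelfond-Lifschitz reduct:

        # Brute-force stable-model (answer set) check for a propositional
        # normal logic program; illustrative sketch only.
        from itertools import chain, combinations

        # A rule is (head, positive_body, negative_body); this encodes
        # the incoherent program  p :- not p.
        program = [("p", frozenset(), frozenset({"p"}))]
        atoms = {"p"}

        def reduct(rules, interp):
            # Gelfond-Lifschitz reduct: drop rules whose negative body
            # intersects the interpretation, strip 'not' from the rest.
            return [(h, pos) for (h, pos, neg) in rules if not (neg & interp)]

        def least_model(definite_rules):
            # Least model of a definite program by fixpoint iteration.
            model, changed = set(), True
            while changed:
                changed = False
                for h, pos in definite_rules:
                    if pos <= model and h not in model:
                        model.add(h)
                        changed = True
            return model

        def stable_models(rules, atoms):
            candidates = chain.from_iterable(
                combinations(sorted(atoms), r) for r in range(len(atoms) + 1))
            return [set(c) for c in candidates
                    if least_model(reduct(rules, set(c))) == set(c)]

        print(stable_models(program, atoms))  # [] -- no answer set exists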

    On SAT representations of XOR constraints

    We study the representation of systems S of linear equations over the two-element field (aka xor- or parity-constraints) via conjunctive normal forms F (boolean clause-sets). First we consider the problem of finding an "arc-consistent" representation ("AC"), meaning that unit-clause propagation will fix all forced assignments for all possible instantiations of the xor-variables. Our main negative result is that there is no polysize AC-representation in general. On the positive side we show that finding such an AC-representation is fixed-parameter tractable (fpt) in the number of equations. Then we turn to a stronger criterion of representation, namely propagation completeness ("PC"): while AC only covers the variables of S, PC considers all the variables in F (the variables in S plus auxiliary variables). We show that the standard translation actually yields a PC representation for one equation, but fails to do so for two equations (in fact arbitrarily badly). We show that with a more intelligent translation we can also easily compute a translation to PC for two equations. We conjecture that computing a representation in PC is fpt in the number of equations.
    Comment: 39 pages; 2nd v. improved handling of acyclic systems, free-standing proof of the transformation from AC-representations to monotone circuits, improved wording and literature review; 3rd v. updated literature, strengthened treatment of monotonisation, improved discussions; 4th v. update of literature, discussions and formulations, more details and examples; conference v. to appear in LATA 2014
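    For a single equation the direct representation is simple but exponential: one clause per wrong-parity assignment, i.e. 2^(n-1) clauses for n variables. This is why practical translations chain short XORs through auxiliary variables, and why PC, which also constrains those auxiliaries, is the stronger demand. A small Python sketch of the direct (prime-clause) encoding, for illustration only (DIMACS-style signed integers; not the paper's exact "standard translation"):

        # Direct CNF encoding of x1 + ... + xn = rhs over GF(2):
        # forbid every assignment with the wrong parity.
        from itertools import product

        def xor_to_cnf(variables, rhs):
            clauses = []
            for signs in product([0, 1], repeat=len(variables)):
                if sum(signs) % 2 != rhs:  # wrong parity: exclude it
                    clauses.append([v if s == 0 else -v
                                    for v, s in zip(variables, signs)])
            return clauses

        # x1 + x2 + x3 = 0 yields the four clauses ruling out odd parity.
        print(xor_to_cnf([1, 2, 3], 0))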

    Modular Logic Programming: Full Compositionality and Conflict Handling for Practical Reasoning

    With the recent ubiquity of data and the profusion of available knowledge, there is nowadays a need to reason from multiple sources of often incomplete and uncertain knowledge. Our goal was to provide a way to combine declarative knowledge bases – represented as logic programming modules under the answer set semantics – as well as the individual results one has already inferred from them, without having to recalculate the results for their composition and without having to explicitly know the original logic programming encodings that produced such results. This posed many challenges, such as how to deal with fundamental problems of modular frameworks for logic programming, namely how to define a general compositional semantics that allows us to compose unrestricted modules.

    Building upon existing logic programming approaches, we devised a framework capable of composing generic logic programming modules while preserving the crucial property of compositionality, which informally means that the combinations of models of the individual modules are the models of the union of those modules. We are also still able to reason in the presence of knowledge containing incoherences – informally, a logic program that has no answer set due to a cyclic dependency of an atom on its own default negation. In this thesis we also discuss how the same approach can be extended to deal with probabilistic knowledge in a modular and compositional way.

    We start from the Modular Logic Programming approach of Oikarinen & Janhunen (2008) and Janhunen et al. (2009), which achieved a restricted form of compositionality of answer set programming modules. We aim at generalising this framework: we begin by lifting the restrictive conditions that were originally imposed, and use alternative ways of combining what we call Generalised Modular Logic Programs. We then deal with conflicts arising in generalised modular logic programming and provide modular justifications and debugging for this setting, where justification models answer the question "Why is a given interpretation indeed an answer set?" and debugging models answer the question "Why is a given interpretation not an answer set?"

    In summary, our research deals with the problem of formally devising a generic modular logic programming framework. We provide operators for combining arbitrary modular logic programs together with a compositional semantics; we characterise conflicts that occur when composing access control policies, which generalise to our setting of generalised modular logic programming, together with ways of dealing with them syntactically (a unified treatment of justification and debugging of logic programs) and semantically (a new semantics capable of dealing with incoherences); and we provide an extension of modular logic programming to a probabilistic setting. These goals are covered by published work. A prototype tool implementing the unification of justifications and debugging is available for download from http://cptkirk.sourceforge.net
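    The compositionality property can be pictured at the level of model sets: when two modules respect each other's interfaces, the models of their composition are exactly the compatible unions of their individual models. A toy Python sketch of this join (our illustration of the module-theorem flavour; the names and the composition operator below are ours, not the thesis's exact definitions):

        # Compose two modules given as lists of models, joining on the
        # shared interface atoms; illustrative sketch only.
        def compose(models_a, models_b, shared):
            return [ma | mb for ma in models_a for mb in models_b
                    if ma & shared == mb & shared]

        # Module A derives q whenever its input p holds; module B makes p true.
        models_a = [frozenset(), frozenset({"p", "q"})]
        models_b = [frozenset({"p"})]
        print(compose(models_a, models_b, shared={"p"}))  # the single joint model {p, q}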

    On the responsibility for undecisiveness in preferred and stable labellings in abstract argumentation

    Different semantics of abstract Argumentation Frameworks (AFs) provide different levels of decisiveness for reasoning about the acceptability of conflicting arguments. The stable semantics is useful for applications requiring a high level of decisiveness, as it assigns to each argument the label “accepted” or the label “rejected”. Unfortunately, stable labellings are not guaranteed to exist, raising the question as to which parts of an AF are responsible for the non-existence. In this paper, we address this question by investigating a more general question concerning preferred labellings (which may be less decisive than stable labellings but are always guaranteed to exist), namely why a given preferred labelling may fail to be stable and thus be undecided on some arguments. In particular, (1) we give various characterisations of parts of an AF, based on the given preferred labelling, and (2) we show that these parts are indeed responsible for the undecisiveness if the preferred labelling is not stable. We then use these characterisations to explain the non-existence of stable labellings. We present two types of characterisations: based on labellings that are more (or equally) committed than the given preferred labelling on the one hand, and based on the structure of the given AF on the other, and we compare the respective AF parts deemed responsible. To prove that our characterisations indeed yield responsible parts, we use a notion of enforcement of labels through structural revision, by means of which the preferred labelling of the given AF can be turned into a stable labelling of the structurally revised AF. Rather than prescribing how this structural revision is carried out, we focus on the enforcement of labels and leave the engineering of the revision open, so as to fulfil the differing requirements of applications and the information available to users.
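    The contrast between the two semantics is easy to see on a tiny AF: a self-attacking argument can never be legally labelled "in" or "out", so every complete (and hence every preferred) labelling leaves it undecided and no stable labelling exists; the self-attack is the responsible part. A brute-force Python sketch (our illustration, not the paper's characterisation machinery):

        # Enumerate complete labellings of a small AF and test which
        # are stable (i.e. contain no "undec"); illustrative only.
        from itertools import product

        args = ["a", "b"]
        attacks = {("a", "a"), ("a", "b")}  # a attacks itself and b

        def attackers(x):
            return [y for (y, z) in attacks if z == x]

        def legal(lab):
            for x in args:
                ins = [lab[y] for y in attackers(x)]
                all_out = all(l == "out" for l in ins)
                if lab[x] == "in" and not all_out:
                    return False
                if lab[x] == "out" and "in" not in ins:
                    return False
                if lab[x] == "undec" and ("in" in ins or all_out):
                    return False
            return True

        complete = [dict(zip(args, ls))
                    for ls in product(["in", "out", "undec"], repeat=len(args))
                    if legal(dict(zip(args, ls)))]
        stable = [l for l in complete if "undec" not in l.values()]
        print(complete)  # only {'a': 'undec', 'b': 'undec'}
        print(stable)    # [] -- no stable labelling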

    On the Consistent Histories Approach to Quantum Mechanics

    We review the consistent histories formulations of quantum mechanics developed by Griffiths, Omnès, and Gell-Mann and Hartle, and describe the classification of consistent sets. We illustrate some general features of consistent sets by a few simple lemmas and examples. We consider various interpretations of the formalism, and examine the new problems which arise in reconstructing the past and predicting the future. It is shown that Omnès' characterisation of true statements (statements which can be deduced unconditionally in his interpretation) is incorrect. We examine critically Gell-Mann and Hartle's interpretation of the formalism, and in particular their discussions of communication, prediction and retrodiction, and conclude that their explanation of the apparent persistence of quasiclassicality relies on assumptions about an as yet unknown theory of experience. Our overall conclusion is that the consistent histories approach illustrates the need to supplement quantum mechanics by some selection principle in order to produce a fundamental theory capable of unconditional predictions.
    Comment: Published version, to appear in J. Stat. Phys. in early 1996. The main arguments and conclusions remain unaltered, but there are significant revisions from the earlier archive version. These include a new subsection on interpretations of the formalism, other additions clarifying various arguments in response to comments, and some minor corrections. (87 pages, TeX with harvmac.)
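    For orientation, the consistency condition that defines the sets of histories under discussion can be written, in the standard notation (our rendering of the textbook definitions; the review's own conventions may differ, and Griffiths' original, weaker condition requires only the real part of the off-diagonal terms to vanish):

        \[
          C_\alpha = P^{n}_{\alpha_n}(t_n) \cdots P^{1}_{\alpha_1}(t_1), \qquad
          D(\alpha, \alpha') = \mathrm{Tr}\!\left[ C_\alpha \, \rho \, C_{\alpha'}^{\dagger} \right],
        \]
        \[
          D(\alpha, \alpha') = 0 \quad \text{for all } \alpha \neq \alpha',
          \qquad p(\alpha) = D(\alpha, \alpha).
        \]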

    Resurgence of the Endogeneity-Backed Instrumental Variable Methods

    Get PDF
    This paper investigates the nature of the IV method for tackling endogeneity. By tracing the rise and fall of the method in macroeconometrics and its subsequent revival in microeconometrics, it pins the method down to an implicit model respecification device: the circular causality of simultaneous relations is broken by redefining it as an asymmetric relation conditioning on a non-optimal conditional expectation of the assumed endogenous explanatory variable, thus rejecting that variable as a valid conditioning variable. This revealed nature explains why the IV route is popular for models where endogeneity is superfluous whereas measurement errors are the key concern.
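    In the textbook formulation (our rendering, not the paper's notation), endogeneity means the regressor is correlated with the error, and the IV estimator replaces the offending moment condition with one based on the instrument:

        \[
          y = X\beta + u, \qquad \mathbb{E}[X'u] \neq 0
          \quad \text{(endogenous regressor)},
        \]
        \[
          \mathbb{E}[Z'u] = 0, \qquad
          \hat{\beta}_{IV} = (Z'X)^{-1} Z' y
          \quad \text{(just-identified case)}.
        \]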