
    TR-2010014: A Complexity Question in Justification Logic


    NEXP-completeness and Universal Hardness Results for Justification Logic

We provide a lower complexity bound for the satisfiability problem of a multi-agent justification logic, establishing that the general NEXP upper bound from our previous work is tight. We then use a simple modification of the corresponding reduction to prove that, under certain reasonable conditions, satisfiability for all the multi-agent justification logics from that work is hard for $\Sigma_2^p$, the class at the second level of the polynomial hierarchy. Our methods improve on the conditions required for the same lower bound for single-agent justification logics, proven by Buss and Kuznets in 2009, thus answering one of their open questions. Comment: A shorter version has been accepted for publication by CSR 201
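As background for the abstract above, using standard textbook definitions rather than anything taken from the paper itself, the two complexity classes named here are

\[ \Sigma_2^p \;=\; \mathrm{NP}^{\mathrm{NP}} \qquad\text{and}\qquad \mathrm{NEXP} \;=\; \bigcup_{c \ge 1} \mathrm{NTIME}\big(2^{n^c}\big), \]

i.e., nondeterministic polynomial time with access to an NP oracle, and nondeterministic exponential time.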

    Complexity Jumps In Multiagent Justification Logic Under Interacting Justifications

The Logic of Proofs, LP, and its successor, Justification Logic, refine the modal-logic approach to epistemology by taking proofs/justifications into account. In 2000, Kuznets showed that satisfiability for LP lies in the second level of the polynomial hierarchy, a result that has since been replicated for every other single-agent justification logic whose complexity is known. We introduce a family of multi-agent justification logics with interactions between the agents' justifications, extending and generalizing the two-agent versions of the Logic of Proofs introduced by Yavorskaya in 2008. Known concepts and tools from the single-agent justification setting are adjusted for the multi-agent case. We present tableau rules and some preliminary complexity results: in several cases the satisfiability problem for these logics remains in the second level of the polynomial hierarchy, while for others it is PSPACE-hard or EXP-hard. In particular, the problem becomes PSPACE-hard already for certain two-agent logics, and there are EXP-hard logics of three agents.
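For orientation (standard background on the single-agent Logic of Proofs, not the multi-agent syntax of this particular paper), justification terms and formulas are generated by the grammar

\[ t ::= x \mid c \mid t \cdot t \mid t + t \mid\, !t, \qquad F ::= p \mid \neg F \mid F \to F \mid t\!:\!F, \]

where $x$ ranges over proof variables, $c$ over proof constants, $\cdot$ is application, $+$ is sum, $!$ is the proof checker, and $t\!:\!F$ reads "$t$ is a justification for $F$".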

    Complexity of Non-Monotonic Logics

Over the past few decades, non-monotonic reasoning has developed into one of the most important topics in computational logic and artificial intelligence. Different ways to introduce non-monotonic aspects into classical logic have been considered, e.g., extension with default rules, extension with modal belief operators, or modification of the semantics. In this survey we consider a logical formalism from each of these possibilities, namely Reiter's default logic, Moore's autoepistemic logic, and McCarthy's circumscription. Additionally, we consider abduction, where one is not interested in inferences from a given knowledge base but in computing possible explanations for an observation with respect to a given knowledge base. Complexity results for different reasoning tasks for propositional variants of these logics were already studied in the nineties. In recent years, however, a renewed interest in complexity issues can be observed. One current approach is to consider parameterized problems and identify reasonable parameters that allow for FPT algorithms. In another approach, the emphasis lies on identifying fragments, i.e., restrictions of the logical language, that allow more efficient algorithms for the most important reasoning tasks. In this survey we focus on the second aspect. We describe complexity results for fragments of logical languages obtained either by restricting the allowed set of operators (e.g., by forbidding negation one might consider only monotone formulae) or by considering only formulae in conjunctive normal form but with generalized clause types. The algorithmic problems we consider are suitable variants of satisfiability and implication in each of the logics, but also counting problems, where one is not only interested in the existence of certain objects (e.g., models of a formula) but asks for their number. Comment: To appear in the Bulletin of the EATCS.
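As a standard illustration of the first of these formalisms (textbook material, not an example drawn from the survey itself), a Reiter default rule has the form

\[ \frac{\alpha : \beta_1, \ldots, \beta_n}{\gamma}, \]

read: if $\alpha$ is derivable and each justification $\beta_i$ is consistent with what is known, conclude $\gamma$. The classic instance $\frac{\mathrm{bird}(x) : \mathrm{flies}(x)}{\mathrm{flies}(x)}$ lets one conclude that a given bird flies unless there is evidence to the contrary, a conclusion that may later be retracted as knowledge grows, which is precisely the non-monotonic behaviour at issue.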

    Contextuality and Information Systems: how the interplay between paradigms can help

Through this paper, we theorize on the meanings and roles of context in the study of information systems. The literatures of information systems and information science both explicitly conceptualize information systems (and there are multiple overlapping definitions). These literatures also grapple with the situated and generalizable natures of an information system. Given these shared interests and common concerns, we use this paper as a vehicle to explore the roles of context and to suggest how multi-paradigmatic research, another shared feature of both information science and information systems scholarship, provides a means to carry forward more fruitful studies of information systems. We discuss the processes of reconstructed logic and logic-in-use in terms of studying information systems. We argue that what goes on in the practice of researchers, or the logic-in-practice, is typified by what we are calling the contextuality problem. In response, we envision a reconstructed logic, an idealization of academic practices regarding context. The logic-in-use of the field is then further explained based on two different views of context. The paper concludes by proposing a model for improving the logic-in-use for the study of information systems.

    Computational reverse mathematics and foundational analysis

Reverse mathematics studies which subsystems of second-order arithmetic are equivalent to key theorems of ordinary, non-set-theoretic mathematics. The main philosophical application of reverse mathematics proposed thus far is foundational analysis, which explores the limits of different foundations for mathematics in a formally precise manner. This paper gives a detailed account of the motivations and methodology of foundational analysis, which have heretofore been largely left implicit in the practice. It then shows how this account can be fruitfully applied in the evaluation of major foundational approaches by a careful examination of two case studies: a partial realization of Hilbert's program due to Simpson [1988], and predicativism in the extended form due to Feferman and Schütte. Shore [2010, 2013] proposes that equivalences in reverse mathematics be proved in the same way as inequivalences, namely by considering only $\omega$-models of the systems in question. Shore refers to this approach as computational reverse mathematics. This paper shows that despite some attractive features, computational reverse mathematics is inappropriate for foundational analysis, for two major reasons. Firstly, the computable entailment relation employed in computational reverse mathematics does not preserve justification for the foundational programs above. Secondly, computable entailment is a $\Pi^1_1$-complete relation, and hence employing it commits one to theoretical resources which outstrip those available within any foundational approach that is proof-theoretically weaker than $\Pi^1_1\text{-}\mathsf{CA}_0$. Comment: Submitted. 41 pages.
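To make the key relation explicit (a sketch of Shore's notion as the abstract describes it, not a quotation from the paper): a sentence $\varphi$ is computably entailed by a system $T$ when

\[ T \models_\omega \varphi \quad\iff\quad \mathcal{M} \models \varphi \ \text{ for every } \omega\text{-model } \mathcal{M} \text{ of } T, \]

where an $\omega$-model is a model of second-order arithmetic whose first-order part is the standard natural numbers. The abstract's second objection concerns the $\Pi^1_1$-completeness of this relation.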

    Epistemic virtues, metavirtues, and computational complexity

I argue that considerations about computational complexity show that all finite agents need characteristics like those that have been called epistemic virtues. The necessity of these virtues follows in part from the nonexistence of shortcuts, or of efficient ways of finding shortcuts, to cognitively expensive routines. It follows that agents must possess the capacities (metavirtues) of developing in advance the cognitive virtues they will need when time and memory are at a premium.