
    Strategic Argumentation is NP-Complete

    In this paper we study the complexity of strategic argumentation for dialogue games. A dialogue game is a 2-player game where the parties play arguments. We show how to model dialogue games in a skeptical, non-monotonic formalism, and we show that the problem of deciding which move (set of rules) to play at each turn is NP-complete.
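    A minimal sketch of the search problem behind the result, under deliberately simplified assumptions: rules here are monotonic Horn clauses, whereas the paper works in a skeptical non-monotonic formalism, and all names (subsets, provable, winning_move) are invented for illustration. What it shows is the shape of the NP search space: checking one candidate move is cheap, but there are exponentially many subsets of the player's private rules to try.

    ```python
    from itertools import chain, combinations

    def subsets(rules):
        """All subsets of the player's private rules: the candidate moves."""
        return chain.from_iterable(
            combinations(rules, r) for r in range(len(rules) + 1))

    def provable(claim, facts, rules):
        """Naive forward chaining; a rule is a (premises, conclusion) pair."""
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in known and all(p in known for p in premises):
                    known.add(conclusion)
                    changed = True
        return claim in known

    def winning_move(claim, facts, public_rules, private_rules):
        """Brute-force a rule set whose disclosure makes the claim provable:
        verifying one candidate is polynomial, but there are 2^n candidates."""
        for move in subsets(list(private_rules)):
            if provable(claim, facts, list(public_rules) + list(move)):
                return move
        return None

    # Playing both private rules derives 'win' from the fact 'a'.
    print(winning_move('win', {'a'}, [],
                       [(('a',), 'b'), (('b',), 'win')]))
    ```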

    PSPACE Bounds for Rank-1 Modal Logics

    For lack of general algorithmic methods that apply to wide classes of logics, establishing a complexity bound for a given modal logic is often a laborious task. The present work is a step towards a general theory of the complexity of modal logics. Our main result is that all rank-1 logics enjoy a shallow model property and thus are, under mild assumptions on the format of their axiomatisation, in PSPACE. This leads to a unified derivation of tight PSPACE bounds for a number of logics including K, KD, coalition logic, graded modal logic, majority logic, and probabilistic modal logic. Our generic algorithm moreover finds tableau proofs that witness pleasant proof-theoretic properties including a weak subformula property. This generality is made possible by a coalgebraic semantics, which conveniently abstracts from the details of a given model class and thus allows covering a broad range of logics in a uniform way.
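    The shallow model property can be made concrete for plain K, one of the logics covered. The sketch below, with an invented formula representation, decides satisfiability by a recursion whose depth is bounded by the modal nesting depth of the input, so only one branch is ever kept in memory: the signature feature of a PSPACE procedure. It is not the paper's coalgebraic generic algorithm, only the classical K tableau that the paper generalises.

    ```python
    # Formulas: ('p', name) | ('not', f) | ('and', f, g) | ('box', f).

    def sat(formulas):
        """Is a set of K-formulas jointly satisfiable? (naive tableau)"""
        formulas = set(formulas)
        # Saturate the propositional connectives first.
        for f in list(formulas):
            if f[0] == 'and':
                return sat(formulas - {f} | {f[1], f[2]})
            if f[0] == 'not' and f[1][0] == 'not':      # double negation
                return sat(formulas - {f} | {f[1][1]})
            if f[0] == 'not' and f[1][0] == 'and':      # de Morgan split
                return sat(formulas - {f} | {('not', f[1][1])}) or \
                       sat(formulas - {f} | {('not', f[1][2])})
        # Clash: some formula occurs together with its negation.
        if any(('not', f) in formulas for f in formulas):
            return False
        # One successor world per ~[]f; each call strictly lowers the
        # modal depth, which bounds the recursion (the shallow model).
        boxed = {f[1] for f in formulas if f[0] == 'box'}
        return all(sat(boxed | {('not', f[1][1])})
                   for f in formulas if f[0] == 'not' and f[1][0] == 'box')

    q = ('p', 'q')
    print(sat({('box', q), ('not', ('box', q))}))  # False: direct clash
    print(sat({('not', ('box', q))}))              # True: successor with ~q
    ```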

    Strong Normalization for HA + EM1 by Non-Deterministic Choice

    We study the strong normalization of a new Curry-Howard correspondence for HA + EM1, constructive Heyting Arithmetic with the excluded middle on $\Sigma^0_1$-formulas. The proof-term language of HA + EM1 consists of the lambda calculus plus an operator ||_a which represents, from the viewpoint of programming, an exception operator with a delimited scope, and from the viewpoint of logic, a restricted version of the excluded middle. We give a strong normalization proof for the system based on a technique of "non-deterministic immersion". Comment: In Proceedings COS 2013, arXiv:1309.092
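    Read computationally, the ||_a operator behaves like a delimited exception. A hypothetical sketch under that reading (Witness, em1, and body are invented names, not the paper's syntax): run the branch that assumes the universal hypothesis, and if the computation ever meets a counterexample, jump once, within the delimited scope, to the branch that uses the witness.

    ```python
    class Witness(Exception):
        """Carries a counterexample to the assumed universal hypothesis."""
        def __init__(self, n):
            self.n = n

    def em1(assume_all, use_witness):
        """Delimited-exception reading of EM1: run assume_all(); if it
        raises a Witness n, restart the other branch with that witness."""
        try:
            return assume_all()
        except Witness as w:
            return use_witness(w.n)

    def body():
        for n in range(10):
            if n * n > 20:          # a decidable (Sigma^0_1-style) test
                raise Witness(n)    # counterexample found mid-computation
        return "no witness below 10"

    print(em1(body, lambda n: f"witness found: {n}"))  # witness found: 5
    ```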

    Backprop as Functor: A compositional perspective on supervised learning

    A supervised learning algorithm searches over a set of functions $A \to B$ parametrised by a space $P$ to find the best approximation to some ideal function $f \colon A \to B$. It does this by taking examples $(a, f(a)) \in A \times B$, and updating the parameter according to some rule. We define a category where these update rules may be composed, and show that gradient descent, with respect to a fixed step size and an error function satisfying a certain property, defines a monoidal functor from a category of parametrised functions to this category of update rules. This provides a structural perspective on backpropagation, as well as a broad generalisation of neural networks. Comment: 13 pages + 4-page appendix
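    The paper's notion of a learner, and the way composition threads requests backwards, can be sketched directly. In the code below the field names implement/update/request follow the paper's informal reading, while the gradient-descent instance is a hand-rolled scalar example (b = p*a under half-squared error and fixed step eps); treat it as an illustration of the compositional structure, not the paper's general construction.

    ```python
    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class Learner:
        param: Any
        implement: Callable  # (p, a) -> b          : the parametrised map
        update: Callable     # (p, a, b_target) -> p': train on one example
        request: Callable    # (p, a, b_target) -> a': target passed upstream

    def compose(f: Learner, g: Learner) -> Learner:
        """Sequential composite: parameters pair up, and g's request
        supplies the target f trains against (the chain rule of backprop)."""
        def implement(pq, a):
            p, q = pq
            return g.implement(q, f.implement(p, a))
        def update(pq, a, c):
            p, q = pq
            b = f.implement(p, a)
            return (f.update(p, a, g.request(q, b, c)), g.update(q, b, c))
        def request(pq, a, c):
            p, q = pq
            b = f.implement(p, a)
            return f.request(p, a, g.request(q, b, c))
        return Learner((f.param, g.param), implement, update, request)

    def scale(p0, eps=0.1):
        """Gradient-descent learner for b = p*a under half-squared error."""
        return Learner(
            p0,
            implement=lambda p, a: p * a,
            update=lambda p, a, b: p - eps * (p * a - b) * a,
            request=lambda p, a, b: a - (p * a - b) * p,
        )

    # Train the composite of two scalar layers toward f(1.0) = 4.0.
    net = compose(scale(1.0), scale(1.0))
    p = net.param
    for _ in range(100):
        p = net.update(p, 1.0, 4.0)
    print(net.implement(p, 1.0))  # approaches 4.0
    ```

    In this scalar case, composing two scale learners reproduces ordinary backpropagation through a two-layer linear network, which is what the functoriality claim guarantees in general.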