    On the Relative Expressiveness of Argumentation Frameworks, Normal Logic Programs and Abstract Dialectical Frameworks

    We analyse the expressiveness of the two-valued semantics of abstract argumentation frameworks, normal logic programs and abstract dialectical frameworks. By expressiveness we mean the ability to encode a desired set of two-valued interpretations over a given propositional signature using only atoms from that signature. While the computational complexity of the two-valued model existence problem for all these languages is (almost) the same, we show that the languages form a neat hierarchy with respect to their expressiveness.
    Comment: Proceedings of the 15th International Workshop on Non-Monotonic Reasoning (NMR 2014)
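
    The two-valued semantics in question can be made concrete on the simplest of the three languages. Below is a minimal sketch, not taken from the paper, that brute-forces the stable extensions of an abstract argumentation framework; the example framework and all names are illustrative assumptions.

        # Illustrative sketch (not the paper's construction): brute-force
        # enumeration of the stable extensions of an abstract argumentation
        # framework, one of the two-valued semantics the paper compares.
        from itertools import combinations

        def stable_extensions(arguments, attacks):
            """A set E is stable iff it is conflict-free and attacks
            every argument outside of E."""
            result = []
            for r in range(len(arguments) + 1):
                for candidate in combinations(sorted(arguments), r):
                    e = set(candidate)
                    conflict_free = not any((a, b) in attacks for a in e for b in e)
                    covers_rest = all(
                        any((a, b) in attacks for a in e) for b in arguments - e
                    )
                    if conflict_free and covers_rest:
                        result.append(e)
            return result

        # a and b attack each other: the stable extensions are {a} and {b}.
        print(stable_extensions({"a", "b"}, {("a", "b"), ("b", "a")}))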

    Expressiveness and Completeness in Abstraction

    We study two notions of expressiveness, which have appeared in abstraction theory for model checking, and find them incomparable in general. In particular, we show that according to the most widely used notion, the class of Kripke Modal Transition Systems is strictly less expressive than the class of Generalised Kripke Modal Transition Systems (a generalised variant of Kripke Modal Transition Systems equipped with hypertransitions). Furthermore, we investigate the ability of an abstraction framework to prove a formula with a finite abstract model, a property known as completeness. We address the issue of completeness from a general perspective: the way it depends on certain abstraction parameters, as well as its relationship with expressiveness.
    Comment: In Proceedings EXPRESS/SOS 2012, arXiv:1208.244
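
    For readers unfamiliar with the two classes, the following sketch records the structural difference under my own simplifying assumptions rather than the paper's formal definitions: a KMTS distinguishes may- and must-transitions between single states, while a GKMTS generalises must-transitions into hypertransitions targeting sets of states.

        # Minimal data-structure sketch (assumptions, not the paper's formalism).
        from dataclasses import dataclass

        @dataclass
        class KMTS:
            states: set
            may: set          # (source, target) pairs
            must: set         # (source, target) pairs; must is a subset of may

        @dataclass
        class GKMTS:
            states: set
            may: set          # (source, target) pairs
            must_hyper: set   # (source, frozenset_of_targets) hypertransitions

        # Every KMTS embeds into a GKMTS via singleton hypertransitions; the
        # converse direction is where the expressiveness gap reported in the
        # abstract arises.
        def embed(k: KMTS) -> GKMTS:
            return GKMTS(
                states=set(k.states),
                may=set(k.may),
                must_hyper={(s, frozenset({t})) for (s, t) in k.must},
            )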

    Widely Linear Kernels for Complex-Valued Kernel Activation Functions

    Complex-valued neural networks (CVNNs) have been shown to be powerful nonlinear approximators when the input data can be properly modeled in the complex domain. One of the major challenges in scaling up CVNNs in practice is the design of complex activation functions. Recently, we proposed a novel framework for learning these activation functions neuron-wise in a data-dependent fashion, based on a cheap one-dimensional kernel expansion and the idea of kernel activation functions (KAFs). In this paper we argue that, despite its flexibility, this framework is still limited in the class of functions that can be modeled in the complex domain. We leverage the idea of widely linear complex kernels to extend the formulation, allowing for a richer expressiveness without an increase in the number of adaptable parameters. We test the resulting model on a set of complex-valued image classification benchmarks. Experimental results show that the resulting CVNNs can achieve higher accuracy while at the same time converging faster.
    Comment: Accepted at ICASSP 201
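
    A rough sketch of the widely linear idea follows, under my own assumptions (Gaussian kernel, fixed complex-grid dictionary, hypothetical names) rather than the authors' implementation: the expansion is fed both the input and its conjugate, so a single coefficient vector gains expressiveness without extra adaptable parameters.

        # Illustrative sketch only: the dictionary, the Gaussian kernel and all
        # names here are my assumptions, not the authors' code.
        import numpy as np

        class WidelyLinearKAF:
            """One neuron's activation as a widely linear kernel expansion."""

            def __init__(self, grid=5, bandwidth=1.0):
                # Fixed dictionary: a small grid in the complex plane (assumed).
                xs = np.linspace(-2, 2, grid)
                self.d = (xs[:, None] + 1j * xs[None, :]).ravel()
                self.gamma = 1.0 / (2.0 * bandwidth ** 2)
                # A single coefficient vector: the same number of adaptable
                # parameters as a non widely linear expansion.
                self.alpha = np.zeros(self.d.size, dtype=np.complex128)

            def _kernel(self, z):
                # Gaussian kernel of the scalar input z against the dictionary.
                return np.exp(-self.gamma * np.abs(z - self.d) ** 2)

            def __call__(self, z):
                # Widely linear expansion: the features see both z and conj(z),
                # mixed by the same learned coefficients.
                return self.alpha @ (self._kernel(z) + self._kernel(np.conj(z)))

        phi = WidelyLinearKAF()
        phi.alpha[:] = 0.1 + 0.05j    # illustrative values; learned in training
        print(phi(0.3 + 0.7j))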

    A Theory of Sampling for Continuous-time Metric Temporal Logic

    This paper revisits the classical notion of sampling in the setting of real-time temporal logics for the modeling and analysis of systems. The relationship between the satisfiability of Metric Temporal Logic (MTL) formulas over continuous-time models and over discrete-time models is studied. It is shown to what extent discrete-time sequences obtained by sampling continuous-time signals capture the semantics of MTL formulas over the two time domains. The main results apply to "flat" formulas that do not nest temporal operators, and can be used to reduce the verification problem for MTL over continuous-time models to the same problem over discrete time, yielding an automated, partial, and practically efficient discretization technique.
    Comment: Revised version, 43 pages
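
    To make the sampling question concrete, here is a minimal sketch under my own assumptions (the signal, sampling period, and interval bounds are invented for illustration; the paper's construction is more general): a continuous-time predicate is sampled with period delta, and a flat MTL formula, here F_[a,b] p, is evaluated over the resulting sequence. A short-lived event can satisfy the formula in continuous time yet be missed by too coarse a sampling period.

        # Illustrative sketch (not the paper's construction); the signal,
        # sampling period and interval bounds below are assumptions.
        import math

        def sample(signal, delta, horizon):
            """Discretise a continuous-time predicate signal: t -> bool."""
            n = int(math.floor(horizon / delta))
            return [signal(i * delta) for i in range(n + 1)]

        def eventually(samples, delta, a, b, at=0):
            """Discrete-time semantics of the flat formula F_[a,b] p at index `at`."""
            lo = at + math.ceil(a / delta)
            hi = min(at + math.floor(b / delta), len(samples) - 1)
            return any(samples[i] for i in range(lo, hi + 1))

        # p holds only on the continuous interval [1.2, 1.4). Sampling at
        # delta = 0.1 preserves F_[1,2] p; delta = 0.5 misses the event.
        p = lambda t: 1.2 <= t < 1.4
        print(eventually(sample(p, 0.1, 3.0), 0.1, 1.0, 2.0))   # True
        print(eventually(sample(p, 0.5, 3.0), 0.5, 1.0, 2.0))   # False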