
    An Event Structure Model for Probabilistic Concurrent Kleene Algebra

    We give a new true-concurrent model for probabilistic concurrent Kleene algebra. The model is based on probabilistic event structures, combining ideas from Katoen's work on probabilistic concurrency and Varacca's probabilistic prime event structures. The event structures are compared with a true-concurrent version of Segala's probabilistic simulation. Finally, the algebraic properties of the model are summarised to the extent that they can be used to derive techniques such as probabilistic rely/guarantee inference rules. Comment: Submitted and accepted for LPAR19 (2013).
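
    As an illustration of the kind of algebraic property meant here (the abstract itself does not list the laws the probabilistic model validates), the signature interchange law of concurrent Kleene algebra relates sequential composition ; to concurrent composition ∥ and is the usual starting point for deriving rely/guarantee-style inference rules:

        \[
          (a \parallel b) \mathbin{;} (c \parallel d) \;\le\; (a \mathbin{;} c) \parallel (b \mathbin{;} d)
        \]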

    The Interval Domain: A Matchmaker for aCTL and aPCTL

    We present aPCTL, a version of PCTL with an action-based semantics which coincides with ordinary PCTL in the case of a single action type. We point out which aspects of aPCTL may be improved for its application as a probabilistic logic in a tool for modeling large probabilistic systems. We give a non-standard semantics to the action-based temporal logic aCTL, where the propositional clauses are interpreted in a fuzzy way and the modalities in a probabilistic way; the until-construct is evaluated as a least fixed point over these meanings. We view aCTL formulas φ as templates for aPCTL formulas (which still need vectors of thresholds as annotations for all subformulas which are path formulas). Since [φ]s, our non-standard meaning of φ at state s, is an interval [a, b], we may craft aPCTL formulas from φ using the information a and b respectively. This results in two aPCTL formulas φa and φb. This translation defines a critical region of such thresholds for φ in the following sense: if a > 0 then s satisfies the aPCTL formula φa; dually, if b < 1 then s does not satisfy the formula φb. Thus, any interesting probabilistic dynamics of aPCTL formulas with “pattern” φ has to happen within the n-dimensional interval determined by our non-standard aCTL semantics [φ]. We would like to thank Martín Hötzel Escardó for suggesting to look at the interval domain at the LICS'97 meeting in Warsaw. He also pointed to work in his PhD thesis about the universality of I. We also acknowledge Marta Kwiatkowska, Christel Baier, Rance Cleaveland, and Scott Smolka for fruitful discussions on this subject matter.
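
    For concreteness, the critical-region property described above can be restated as follows, writing [φ]s = [a, b] for the interval assigned to φ at state s and φa, φb for the two derived aPCTL formulas (the notation follows the abstract; the formula below is a restatement, not taken verbatim from the paper):

        \[
          [\phi]_s = [a, b] \;\Longrightarrow\;
          \bigl(a > 0 \Rightarrow s \models \phi_a\bigr) \,\wedge\, \bigl(b < 1 \Rightarrow s \not\models \phi_b\bigr)
        \]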

    The Mode of Computing

    The Turing Machine is the paradigmatic case of computing machines, but there are others, such as Artificial Neural Networks, Table Computing, Relational-Indeterminate Computing and diverse forms of analogical computing, each of which is based on a particular underlying intuition of the phenomenon of computing. This variety can be captured in terms of system levels, re-interpreting and generalizing Newell's hierarchy, which includes the knowledge level at the top and the symbol level immediately below it. In this re-interpretation the knowledge level consists of human knowledge and the symbol level is generalized into a new level that is here called The Mode of Computing. Natural computing performed by the brains of humans and of non-human animals with a sufficiently developed neural system should be understood in terms of a hierarchy of system levels too. By analogy with standard computing machinery, there must be a system level above the neural circuitry levels and directly below the knowledge level, named here The Mode of Natural Computing. A central question for Cognition is the characterization of this mode. The Mode of Computing provides a novel perspective on the phenomena of computing, interpreting, the representational and non-representational views of cognition, and consciousness. Comment: 35 pages, 8 figures.

    How we designed winning algorithms for abstract argumentation and which insight we attained

    In this paper we illustrate the design choices that led to the development of ArgSemSAT, the winner of the preferred semantics track at the 2017 International Competition on Computational Models of Argumentation (ICCMA 2017), a biennial contest on problems associated with Dung's model of abstract argumentation frameworks, widely recognised as a fundamental reference in computational argumentation. The algorithms of ArgSemSAT are based on multiple calls to a SAT solver to compute complete labellings, and on encoding constraints to drive the search towards the solution of decision and enumeration problems. In this paper we focus on preferred semantics (and, incidentally, on stable semantics as well), one of the most popular and complex semantics for identifying acceptable arguments. We discuss our design methodology, which includes a systematic exploration and empirical evaluation of labelling encodings, algorithmic variations and SAT solver choices. In designing the successful ArgSemSAT, we discovered that: (1) there is a labelling encoding that appears to be universally better than other, logically equivalent ones; (2) combining different techniques, such as AllSAT and enumerating stable extensions when searching for preferred semantics, brings advantages; (3) injecting domain-specific knowledge into the algorithm design can lead to significant improvements.
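
    A minimal sketch of the iterated-SAT idea described above, not ArgSemSAT's actual encoding or search strategy, using the PySAT library: complete labellings are encoded with one in/out/undec variable triple per argument, and one preferred extension is found by repeatedly asking the solver for a complete labelling whose set of IN arguments strictly grows. The function name and clause layout below are illustrative assumptions.

        from itertools import count
        from pysat.solvers import Glucose3   # pip install python-sat

        def preferred_extension(args, attacks):
            """Return the IN-set of one preferred labelling of the framework
            (args, attacks), where attacks is a set of (attacker, target) pairs."""
            ids = count(1)
            IN = {a: next(ids) for a in args}
            OUT = {a: next(ids) for a in args}
            UND = {a: next(ids) for a in args}
            attackers = {a: [b for (b, t) in attacks if t == a] for a in args}

            clauses = []
            for a in args:
                i, o, u = IN[a], OUT[a], UND[a]
                # exactly one label per argument
                clauses += [[i, o, u], [-i, -o], [-i, -u], [-o, -u]]
                # a is IN  iff  every attacker of a is OUT
                clauses += [[-i, OUT[b]] for b in attackers[a]]
                clauses.append([i] + [-OUT[b] for b in attackers[a]])
                # a is OUT iff  some attacker of a is IN
                clauses.append([-o] + [IN[b] for b in attackers[a]])
                clauses += [[-IN[b], o] for b in attackers[a]]

            with Glucose3(bootstrap_with=clauses) as solver:
                best = set()
                while solver.solve():
                    model = set(solver.get_model())
                    best = {a for a in args if IN[a] in model}
                    grow = [IN[a] for a in args if a not in best]
                    if not grow:             # every argument is IN: cannot grow further
                        break
                    for a in best:           # keep the current IN-set ...
                        solver.add_clause([IN[a]])
                    solver.add_clause(grow)  # ... and demand at least one new IN argument
                return best

        # Example: a and b attack each other, c attacks b; {a, c} is the only preferred extension
        print(preferred_extension({"a", "b", "c"}, {("a", "b"), ("b", "a"), ("c", "b")}))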

    Is logic empirical?

    The philosophical debate about quantum logic between the late 1960s and the early 1980s was generated mainly by Putnam's claims that quantum mechanics empirically motivates introducing a new form of logic, that such an empirically founded quantum logic is the 'true' logic, and that adopting quantum logic would resolve all the paradoxes of quantum mechanics. Most of that debate focussed on the last of these claims, reaching the conclusion that it was mistaken. This chapter will attempt to clarify the possible misunderstandings surrounding the more radical claims about the revision of logic, assessing them both in the context of more general quantum-like theories (in the framework of von Neumann algebras) and against the background of the current state of play in the philosophy and interpretation of quantum mechanics. Characteristically, the conclusions that might be drawn depend crucially on which of the currently proposed solutions to the measurement problem is adopted.

    A new simulation algorithm for multienvironment probabilistic P systems

    Multienvironment P systems are the basis of a general framework for modeling ecosystem dynamics. On one hand, this modeling framework represents the structural and dynamical aspects of real ecosystems in a discrete, modular and compressive way. On the other hand, the inherent randomness and uncertainty of biological systems are captured by using probabilistic strategies. Nowadays, the simulation of these P-system-based models is fundamental for experimentation and validation. In this paper, we introduce a new simulation algorithm, called DNDP, which handles object distribution and maximal consistency in the application of rules, two crucial aspects of these systems. The paper also describes a parallel implementation of the algorithm, and a comparison with the existing algorithm in PLinguaCore is provided. In order to test the performance of the presented algorithm, several experiments (simulations) have been carried out over four simple P systems with the same skeleton and different numbers of environments. Funding: Ministerio de Ciencia e Innovación TIN2009-13192; Junta de Andalucía P08-TIC-0420.
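
    As a toy illustration of the kind of step such simulators have to get right, probabilistic selection of rules applied in a maximally parallel way over a multiset of objects, consider the sketch below. It is not the DNDP algorithm (which additionally handles environments, membranes and consistency between competing rule blocks); the rule format and names are assumptions made only for this example.

        import random
        from collections import Counter

        # One probabilistic, maximally parallel selection step: applicable rules are
        # sampled in proportion to their probabilities and consume their reactants
        # immediately; products are only released once no rule can fire any more.
        # A rule is (lhs, rhs, probability), with lhs and rhs multisets of objects.

        def step(objects: Counter, rules) -> Counter:
            pending = Counter()
            while True:
                applicable = [(l, r, p) for (l, r, p) in rules
                              if all(objects[o] >= n for o, n in l.items())]
                if not applicable:          # maximality: stop only when nothing can fire
                    break
                weights = [p for (_, _, p) in applicable]
                lhs, rhs, _ = random.choices(applicable, weights=weights, k=1)[0]
                objects = objects - lhs     # consume reactants now
                pending = pending + rhs     # release products at the end of the step
            return objects + pending

        # Example: rule1: a, b -> c (prob 0.7);  rule2: a -> d (prob 0.3)
        rules = [(Counter("ab"), Counter("c"), 0.7), (Counter("a"), Counter("d"), 0.3)]
        print(step(Counter("aaabb"), rules))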

    ISIPTA'07: Proceedings of the Fifth International Symposium on Imprecise Probability: Theories and Applications


    Quantities in Games and Modal Transition Systems
