
    Implicit complexity for coinductive data: a characterization of corecurrence

    We propose a framework for reasoning about programs that manipulate coinductive data as well as inductive data. Our approach is based on using equational programs, which support a seamless combination of computation and reasoning, and on using productivity (fairness) as the fundamental assertion, rather than bisimulation; the latter is expressible in terms of the former. As an application of this framework, we give an implicit characterization of corecurrence: a function is definable using corecurrence iff its productivity is provable using coinduction for formulas in which data-predicates do not occur negatively. This is an analog, albeit in weaker form, of a characterization of recurrence (i.e. primitive recursion) in [Leivant, Unipolar induction, TCS 318, 2004].
    Comment: In Proceedings DICE 2011, arXiv:1201.034
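
    To make "productivity" concrete, here is a loose Python sketch (our illustration, not the paper's equational-program formalism): streams defined corecursively as generators, where producing one output per step is exactly what makes every finite prefix computable. All names are ours.

    from itertools import islice

    def nats(n=0):
        # Corecursively defined stream: each element is produced from
        # the previous state, so any finite prefix can be computed.
        while True:
            yield n
            n += 1

    def evens():
        # Defined by corecurrence over nats: one output is emitted per
        # input consumed, hence the definition is productive.
        for k in nats():
            yield 2 * k

    # Productivity in action: every finite prefix is observable.
    print(list(islice(evens(), 5)))  # [0, 2, 4, 6, 8]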

    Equilibria, Fixed Points, and Complexity Classes

    Many models from a variety of areas involve the computation of an equilibrium or fixed point of some kind. Examples include Nash equilibria in games; market equilibria; computing optimal strategies and the values of competitive games (stochastic and other games); stable configurations of neural networks; analysing basic stochastic models for evolution, like branching processes, and for language, like stochastic context-free grammars; and models that incorporate the basic primitives of probability and recursion, like recursive Markov chains. It is not known whether these problems can be solved in polynomial time. There are certain common computational principles underlying different types of equilibria, which are captured by the complexity classes PLS, PPAD, and FIXP. Representative complete problems for these classes are, respectively, pure Nash equilibria in games where they are guaranteed to exist, (mixed) Nash equilibria in 2-player normal form games, and (mixed) Nash equilibria in normal form games with 3 (or more) players. This paper reviews the underlying computational principles and the corresponding classes.
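
    For intuition about the simplest of these complete problems, here is a brute-force search for pure Nash equilibria in a two-player normal form game (a hypothetical coordination game; in general a pure equilibrium need not exist, which is why mixed equilibria and PPAD enter).

    import itertools

    # Hypothetical payoff tables: A[i][j] pays the row player,
    # B[i][j] the column player (a 2x2 coordination game).
    A = [[2, 0],
         [0, 1]]
    B = [[2, 0],
         [0, 1]]

    def pure_nash(A, B):
        # A profile (i, j) is a pure Nash equilibrium when neither
        # player can gain by deviating unilaterally.
        rows, cols = len(A), len(A[0])
        eq = []
        for i, j in itertools.product(range(rows), range(cols)):
            best_row = all(A[i][j] >= A[k][j] for k in range(rows))
            best_col = all(B[i][j] >= B[i][k] for k in range(cols))
            if best_row and best_col:
                eq.append((i, j))
        return eq

    print(pure_nash(A, B))  # [(0, 0), (1, 1)]: both coordination outcomes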

    Automating embedded analysis capabilities and managing software complexity in multiphysics simulation part I: template-based generic programming

    An approach for incorporating embedded simulation and analysis capabilities in complex simulation codes through template-based generic programming is presented. This approach relies on templating and operator overloading within the C++ language to transform a given calculation into one that can compute a variety of additional quantities that are necessary for many state-of-the-art simulation and analysis algorithms. An approach for incorporating these ideas into complex simulation codes through general graph-based assembly is also presented. These ideas have been implemented within a set of packages in the Trilinos framework and are demonstrated on a simple problem from chemical engineering.
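
    The paper's implementation lives in C++ templates within Trilinos; purely as a rough analogue, the Python sketch below shows how operator overloading can transform a given calculation into one that also computes a derivative (forward-mode "dual numbers"; all names are ours, not Trilinos APIs).

    class Dual:
        # A value paired with its derivative; overloaded operators let
        # an unmodified calculation propagate sensitivities.
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.dot + other.dot)

        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val * other.val,
                        self.dot * other.val + self.val * other.dot)

        __rmul__ = __mul__

    def residual(x):
        # The "given calculation": written once, evaluable with plain
        # numbers or with Duals that carry derivatives alongside.
        return x * x + 3 * x + 1

    r = residual(Dual(2.0, 1.0))  # seed dx/dx = 1
    print(r.val, r.dot)           # 11.0 and d/dx (x^2 + 3x + 1) = 7.0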

    Primordial Evolution in the Finitary Process Soup

    A general and basic model of primordial evolution--a soup of reacting finitary and discrete processes--is employed to identify and analyze fundamental mechanisms that generate and maintain complex structures in prebiotic systems. The processes--ε-machines as defined in computational mechanics--and their interaction networks both provide well-defined notions of structure. This enables us to quantitatively demonstrate hierarchical self-organization in the soup in terms of complexity. We find that replicating processes evolve the strategy of successively building higher levels of organization by autocatalysis. Moreover, this is facilitated by local components that have low structural complexity, but high generality. In effect, the finitary process soup spontaneously evolves a selection pressure that favors such components. In light of the finitary process soup's generality, these results suggest a fundamental law of hierarchical systems: global complexity requires local simplicity.
    Comment: 7 pages, 10 figures; http://cse.ucdavis.edu/~cmg/compmech/pubs/pefps.ht
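
    A toy sketch of the ingredients (our illustration, not computational mechanics proper): a finitary process modeled as a finite-state transducer, with interaction as feeding one process's output to another. The machines here are hypothetical stand-ins for ε-machines.

    # transitions[state][symbol] = (output symbol, next state)
    identity = {0: {'0': ('0', 0), '1': ('1', 0)}}
    flip     = {0: {'0': ('1', 0), '1': ('0', 0)}}

    def run(machine, word, state=0):
        # Drive the transducer over a word, collecting its output.
        out = []
        for sym in word:
            sym_out, state = machine[state][sym]
            out.append(sym_out)
        return ''.join(out)

    def interact(f, g, word):
        # Interaction in the soup: one process transforms the output
        # of another, yielding composite behavior.
        return run(g, run(f, word))

    print(run(flip, '0110'))             # 1001
    print(interact(flip, flip, '0110'))  # 0110: flip after flip is the identity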

    The complexity of the list homomorphism problem for graphs

    We completely classify the computational complexity of the list H-colouring problem for graphs (with possible loops) in combinatorial and algebraic terms: for every graph H the problem is either NP-complete, NL-complete, L-complete, or first-order definable; descriptive complexity equivalents are given as well via Datalog and its fragments. Our algebraic characterisations match important conjectures in the study of constraint satisfaction problems.
    Comment: 12 pages, STACS 201
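
    For concreteness, a brute-force list H-colouring checker follows (exponential in general; the classification above says exactly when one can do better). The instance at the bottom is hypothetical.

    import itertools

    def list_homomorphism(G_edges, H_edges, lists):
        # Search for a map f sending each vertex v of G into its list
        # lists[v] so that every edge of G lands on an edge of H.
        H = set(H_edges) | {(b, a) for a, b in H_edges}  # undirected
        verts = sorted(lists)
        for choice in itertools.product(*(lists[v] for v in verts)):
            f = dict(zip(verts, choice))
            if all((f[u], f[v]) in H for u, v in G_edges):
                return f
        return None

    # G is a triangle; H is a triangle on {a, b, c} with vertex lists.
    G = [(0, 1), (1, 2), (0, 2)]
    H = [('a', 'b'), ('b', 'c'), ('a', 'c')]
    L = {0: ['a'], 1: ['b', 'c'], 2: ['b', 'c']}
    print(list_homomorphism(G, H, L))  # {0: 'a', 1: 'b', 2: 'c'}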

    Computation with Advice

    Computation with advice is suggested as a generalization of both computation with discrete advice and Type-2 Nondeterminism. Several embodiments of the generic concept are discussed, and the close connection to Weihrauch reducibility is pointed out. As a novel concept, computability with random advice is studied, which corresponds to correct solutions being guessable with positive probability. In the framework of computation with advice, it is possible to define computational complexity for certain concepts of hypercomputation. Finally, some examples are given which illuminate the interplay of uniform and non-uniform techniques in the investigation of both computability with advice and the Weihrauch lattice.
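
    A toy rendering of the core idea (our illustration, not the paper's Type-2 framework): advice depends only on the input length, not the input itself, so even one non-computable bit per length suffices to decide languages no uniform algorithm can.

    # Stands in for an arbitrary, possibly non-computable set of lengths.
    HARD_SET = {0, 2, 3, 7}

    def advice(n):
        # Non-uniform advice: one bit per input length. In the formal
        # model this table need not be computable.
        return 1 if n in HARD_SET else 0

    def decide(word):
        # Decide the unary language {1^n : n in HARD_SET} by consulting
        # only the advice for |word|.
        return advice(len(word)) == 1

    print(decide('1' * 3), decide('1' * 4))  # True False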

    Complexity of Non-Monotonic Logics

    Over the past few decades, non-monotonic reasoning has developed into one of the most important topics in computational logic and artificial intelligence. Different ways to introduce non-monotonic aspects into classical logic have been considered, e.g., extension with default rules, extension with modal belief operators, or modification of the semantics. In this survey we consider a logical formalism from each of these possibilities, namely Reiter's default logic, Moore's autoepistemic logic, and McCarthy's circumscription. Additionally, we consider abduction, where one is not interested in inferences from a given knowledge base but in computing possible explanations for an observation with respect to a given knowledge base. Complexity results for different reasoning tasks for propositional variants of these logics were already studied in the nineties. In recent years, however, a renewed interest in complexity issues can be observed. One current focal approach is to consider parameterized problems and identify reasonable parameters that allow for FPT algorithms. In another approach, the emphasis lies on identifying fragments, i.e., restrictions of the logical language, that allow more efficient algorithms for the most important reasoning tasks. In this survey we focus on this second aspect. We describe complexity results for fragments of logical languages obtained either by restricting the allowed set of operators (e.g., by forbidding negation one might consider only monotone formulae) or by considering only formulae in conjunctive normal form but with generalized clause types. The algorithmic problems we consider are suitable variants of satisfiability and implication in each of the logics, but also counting problems, where one is not only interested in the existence of certain objects (e.g., models of a formula) but asks for their number.
    Comment: To appear in Bulletin of the EATCS
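
    As a small worked example of the abduction task mentioned above (our sketch, not an algorithm from the survey): brute-force search for sets of hypotheses that, added to a Horn knowledge base, derive an observation.

    import itertools

    def closure(facts, rules):
        # Forward chaining over Horn rules of the form (body, head).
        derived, changed = set(facts), True
        while changed:
            changed = False
            for body, head in rules:
                if head not in derived and set(body) <= derived:
                    derived.add(head)
                    changed = True
        return derived

    def explanations(rules, hypotheses, observation):
        # Every subset of hypotheses that entails the observation.
        # Exponential in general; the survey maps out which fragments
        # admit faster algorithms.
        return [set(E)
                for r in range(len(hypotheses) + 1)
                for E in itertools.combinations(hypotheses, r)
                if observation in closure(E, rules)]

    # Hypothetical knowledge base: rain or a sprinkler wets the grass.
    rules = [(('rain',), 'wet'), (('sprinkler',), 'wet')]
    print(explanations(rules, ['rain', 'sprinkler'], 'wet'))
    # [{'rain'}, {'sprinkler'}, {'rain', 'sprinkler'}]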

    Computational Complexity of Atomic Chemical Reaction Networks

    Informally, a chemical reaction network is "atomic" if each reaction may be interpreted as the rearrangement of indivisible units of matter. There are several reasonable definitions formalizing this idea. We investigate the computational complexity of deciding whether a given network is atomic according to each of these definitions. Our first definition, primitive atomic, which requires each reaction to preserve the total number of atoms, is shown to be equivalent to mass conservation. Since it is known that it can be decided in polynomial time whether a given chemical reaction network is mass-conserving, this equivalence gives an efficient algorithm to decide primitive atomicity. Another definition, subset atomic, further requires that all atoms are species. We show that deciding whether a given network is subset atomic is in NP, and that the problem "is a network subset atomic with respect to a given atom set" is strongly NP-complete. A third definition, reachably atomic, studied by Adleman, Gopalkrishnan et al., further requires that each species has a sequence of reactions splitting it into its constituent atoms. We show that there is a polynomial-time algorithm to decide whether a given network is reachably atomic, improving upon the result of Adleman et al. that the problem is decidable. We show that the reachability problem for reachably atomic networks is PSPACE-complete. Finally, we demonstrate equivalence relationships between our definitions and some special cases of another existing definition of atomicity due to Gnacadja.
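
    To make the first definition concrete, a small sketch (assumptions ours): verifying that a candidate mass assignment is conserved by every reaction. Deciding whether any positive assignment exists at all is the polynomial-time (linear programming) question the abstract refers to.

    from collections import Counter

    def conserves_mass(reactions, mass):
        # Each reaction is (reactants, products) as tuples of species;
        # primitive atomicity demands both sides carry equal total mass.
        def total(side):
            return sum(mass[s] * n for s, n in Counter(side).items())
        return all(total(lhs) == total(rhs) for lhs, rhs in reactions)

    # Hypothetical network: 2 H2 + O2 -> 2 H2O.
    reactions = [(('H2', 'H2', 'O2'), ('H2O', 'H2O'))]
    mass = {'H2': 2, 'O2': 32, 'H2O': 18}
    print(conserves_mass(reactions, mass))  # True: 2*2 + 32 == 2*18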