
    The complexity of the list homomorphism problem for graphs

    We completely classify the computational complexity of the list H-colouring problem for graphs (with possible loops) in combinatorial and algebraic terms: for every graph H the problem is either NP-complete, NL-complete, L-complete, or first-order definable; descriptive complexity equivalents are given as well via Datalog and its fragments. Our algebraic characterisations match important conjectures in the study of constraint satisfaction problems. (Comment: 12 pages, STACS 201)
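    To make the problem concrete: a list homomorphism from G to H maps each vertex of G to a vertex from its allowed list so that every edge of G lands on an edge (or loop) of H. The following is a minimal brute-force sketch of that definition on toy graphs of my own; it is exponential and is not one of the algorithms classified in the paper.

```python
from itertools import product

def list_homomorphism(G_edges, H_edges, lists):
    """Brute-force search for a list homomorphism from G to H.

    G_edges, H_edges: iterables of vertex pairs (loops allowed in H).
    lists: dict mapping each vertex of G to its allowed vertices of H.
    Returns one admissible mapping, or None.  Exponential; illustration only.
    """
    H_adj = {frozenset(e) for e in H_edges}
    vertices = list(lists)
    for choice in product(*(lists[v] for v in vertices)):
        f = dict(zip(vertices, choice))
        # every edge of G must land on an edge (or loop) of H
        if all(frozenset((f[u], f[v])) in H_adj for u, v in G_edges):
            return f
    return None

# Map the path a-b-c into H = a single edge {0, 1}, with b forced to 1.
print(list_homomorphism({("a", "b"), ("b", "c")},
                        {(0, 1)},
                        {"a": [0, 1], "b": [1], "c": [0, 1]}))
```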

    Descriptive Complexity, Computational Tractability, and the Logical and Cognitive Foundations of Mathematics

    In computational complexity theory, decision problems are divided into complexity classes based on the amount of computational resources that algorithms need to solve them. In theoretical computer science, it is commonly accepted that only functions for solving problems in the complexity class P, solvable by a deterministic Turing machine in polynomial time, are tractable. In cognitive science and philosophy, this tractability result has been used to argue that only functions in P can feasibly work as computational models of human cognitive capacities. One interesting area of computational complexity theory is descriptive complexity, which connects the expressive strength of systems of logic with computational complexity classes. In descriptive complexity theory, it is established that only first-order (classical) systems are connected to P, or one of its subclasses. Consequently, second-order systems of logic are considered to be computationally intractable, and may therefore seem unfit to model human cognitive capacities. This would be problematic when we think of the role of logic as the foundations of mathematics. In order to express many important mathematical concepts and systematically prove theorems involving them, we need a system of logic stronger than classical first-order logic. But if such a system is considered to be intractable, it means that the logical foundation of mathematics can be prohibitively complex for human cognition. In this paper I will argue, however, that this problem is the result of an unjustified direct use of computational complexity classes in cognitive modelling. Placing my account in the recent literature on the topic, I argue that the problem can be solved by considering computational complexity for humanly relevant problem-solving algorithms and input sizes.
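    As a rough numeric illustration of that closing point (my own example, not taken from the paper): an exponential-time procedure can remain feasible at the small input sizes humans actually face, even though it is hopeless asymptotically.

```python
# Hypothetical step counts for a polynomial-time versus an exponential-time
# procedure, at small "humanly relevant" input sizes and at larger ones.
for n in (4, 8, 16, 32, 64):
    print(f"n={n:>2}  n^2={n**2:>6}  2^n={2**n:>22}")
# Up to roughly n = 16 both counts are modest; by n = 64 the exponential
# count is astronomical, so worst-case class membership alone says little
# about feasibility at small, humanly relevant input sizes.
```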

    The Almost Equivalence by Asymptotic Probabilities for Regular Languages and Its Computational Complexities

    We introduce p-equivalence by asymptotic probabilities, a weak almost-equivalence based on zero-one laws in finite model theory. In this paper, we consider the computational complexities of p-equivalence problems for regular languages and provide the following details. First, we establish the robustness of p-equivalence and give a logical characterization of it. The characterization is useful for deriving algorithms for p-equivalence problems by coupling it with standard results from descriptive complexity. Second, we determine the computational complexities of the p-equivalence problems via the logical characterization; they are the same as for the corresponding (full) equivalence problems. Finally, we apply the proofs for p-equivalence to some generalized equivalences. (Comment: In Proceedings GandALF 2016, arXiv:1609.0364)
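    For intuition about asymptotic probabilities (a small sketch under my own assumptions, not the paper's construction): the density of a regular language among all words of length n can be computed by enumeration, and two languages are almost equivalent in this sense, roughly, when the density of their symmetric difference vanishes as n grows.

```python
import re
from itertools import product

def density(pattern, n, alphabet="ab"):
    """Fraction of the length-n words over the alphabet matching the regex."""
    words = ("".join(w) for w in product(alphabet, repeat=n))
    r = re.compile(pattern)
    return sum(1 for w in words if r.fullmatch(w)) / len(alphabet) ** n

# L1 = "contains at least one a", L2 = all words: their symmetric difference
# is b*, whose density (1/2)^n tends to 0, so L1 and L2 agree asymptotically
# almost surely even though they are not equal as languages.
for n in (2, 4, 8, 12):
    print(n, density(r"b*", n))
```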

    Technical note: “Bit by bit”: a practical and general approach for evaluating model computational complexity vs. model performance

    One of the main objectives of the scientific enterprise is the development of well-performing yet parsimonious models for all natural phenomena and systems. In the 21st century, scientists usually represent their models, hypotheses, and experimental observations using digital computers. Measuring performance and parsimony of computer models is therefore a key theoretical and practical challenge for 21st century science. “Performance” here refers to a model's ability to reduce predictive uncertainty about an object of interest. “Parsimony” (or complexity) comprises two aspects: descriptive complexity – the size of the model itself which can be measured by the disk space it occupies – and computational complexity – the model's effort to provide output. Descriptive complexity is related to inference quality and generality; computational complexity is often a practical and economic concern for limited computing resources. In this context, this paper has two distinct but related goals. The first is to propose a practical method of measuring computational complexity by the utility software “Strace”, which counts the total number of memory visits while running a model on a computer. The second goal is to propose the “bit by bit” method, which combines measuring computational complexity by “Strace” and measuring model performance by information loss relative to observations, both in bit. For demonstration, we apply the “bit by bit” method to watershed models representing a wide diversity of modelling strategies (artificial neural network, auto-regressive, process-based, and others). We demonstrate that computational complexity as measured by “Strace” is sensitive to all aspects of a model, such as the size of the model itself, the input data it reads, its numerical scheme, and time stepping. We further demonstrate that for each model, the bit counts for computational complexity exceed those for performance by several orders of magnitude and that the differences among the models for both computational complexity and performance can be explained by their setup and are in accordance with expectations. We conclude that measuring computational complexity by “Strace” is practical, and it is also general in the sense that it can be applied to any model that can be run on a digital computer. We further conclude that the “bit by bit” approach is general in the sense that it measures two key aspects of a model in the single unit of bit. We suggest that it can be enhanced by additionally measuring a model's descriptive complexity – also in bit.
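    The performance half of that idea can be sketched in a few lines. The following is my own simplified illustration (binned series, a cross-entropy measured in bits, and a hypothetical information_loss_bits helper), not the authors' published procedure; the “Strace” memory-visit count is not reproduced here.

```python
import numpy as np

def information_loss_bits(observations, predictions, bins=10):
    """Crude information loss of a model's predictions relative to observations.

    Both series are binned on a common grid; the loss is the cross-entropy,
    in bits, of the observed bin frequencies under the predicted frequencies.
    A simplifying sketch, not the published "bit by bit" procedure.
    """
    edges = np.histogram_bin_edges(np.concatenate([observations, predictions]),
                                   bins=bins)
    p_obs, _ = np.histogram(observations, bins=edges)
    p_mod, _ = np.histogram(predictions, bins=edges)
    p_obs = p_obs / p_obs.sum()
    p_mod = (p_mod + 1e-9) / (p_mod + 1e-9).sum()   # avoid log(0)
    return float(-(p_obs * np.log2(p_mod)).sum())

rng = np.random.default_rng(0)
obs = rng.normal(size=1000)
good_model = obs + rng.normal(scale=0.1, size=1000)   # close to the observations
poor_model = rng.normal(loc=2.0, size=1000)           # biased, pays more bits
print(information_loss_bits(obs, good_model), information_loss_bits(obs, poor_model))
```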

    Tractability and the computational mind

    We overview logical and computational explanations of the notion of tractability as applied in cognitive science. We start by introducing the basics of mathematical theories of complexity: computability theory, computational complexity theory, and descriptive complexity theory. Computational philosophy of mind often identifies mental algorithms with computable functions. However, with the development of programming practice it has become apparent that for some computable problems, finding efficient algorithms is hardly possible. Some problems require too many computational resources, e.g., time or memory, to be practically computable. Computational complexity theory is concerned with the amount of resources required for the execution of algorithms and, hence, the inherent difficulty of computational problems. An important goal of computational complexity theory is to categorize computational problems via complexity classes, and in particular, to identify efficiently solvable problems and draw a line between tractability and intractability. We survey how complexity can be used to study the computational plausibility of cognitive theories. We especially emphasize methodological and mathematical assumptions behind applying complexity theory in cognitive science. We pay special attention to examples of applying the logical and computational complexity toolbox in different domains of cognitive science. We focus mostly on theoretical and experimental research in psycholinguistics and social cognition.

    Applications of Finite Model Theory: Optimisation Problems, Hybrid Modal Logics and Games.

    There exists an interesting relationship between two seemingly distinct fields: logic from the field of Model Theory, which deals with the truth of statements about discrete structures; and Computational Complexity, which deals with the classification of problems by how much of a particular computer resource is required in order to compute a solution. This relationship is known as Descriptive Complexity and it is the primary application of the tools from Model Theory when they are restricted to the finite; this restriction is commonly called Finite Model Theory. In this thesis, we investigate the extension of the results of Descriptive Complexity from classes of decision problems to classes of optimisation problems. When dealing with decision problems, the natural mapping from true and false in logic to yes and no instances of a problem is used; when dealing with optimisation problems, other features of a logic are needed. We investigate what these features are and provide results in the form of logical frameworks that can be used for describing optimisation problems in particular classes, building on the existing research into this area. Another application of Finite Model Theory that this thesis investigates is the relative expressiveness of various fragments of an extension of modal logic called hybrid modal logic. This is achieved by taking the Ehrenfeucht-Fraïssé game from Model Theory and modifying it so that it can be applied to hybrid modal logic. Then, by developing winning strategies for the players in the game, results are obtained that show strict hierarchies of expressiveness for fragments of hybrid modal logic that are generated by varying the quantifier depth and the number of proposition and nominal symbols available.
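    As a toy illustration of the game-based method (my own minimal sketch of the ordinary first-order Ehrenfeucht-Fraïssé game on finite graphs, not the hybrid-modal variant developed in the thesis): Duplicator wins the k-round game exactly when every Spoiler pebble can be answered so that the chosen pairs keep forming a partial isomorphism.

```python
def partial_isomorphism(A_edges, B_edges, pairs):
    """Do the chosen pairs form a partial isomorphism between the two graphs?"""
    for a1, b1 in pairs:
        for a2, b2 in pairs:
            if (a1 == a2) != (b1 == b2):
                return False
            if (((a1, a2) in A_edges or (a2, a1) in A_edges)
                    != ((b1, b2) in B_edges or (b2, b1) in B_edges)):
                return False
    return True

def duplicator_wins(A_edges, A_verts, B_edges, B_verts, rounds, pairs=()):
    """Exhaustive check of the `rounds`-round Ehrenfeucht-Fraisse game on two
    finite graphs; exponential, intended only for tiny structures."""
    if not partial_isomorphism(A_edges, B_edges, pairs):
        return False
    if rounds == 0:
        return True
    # Spoiler pebbles a vertex in either graph; Duplicator answers in the other.
    return (all(any(duplicator_wins(A_edges, A_verts, B_edges, B_verts,
                                    rounds - 1, pairs + ((a, b),))
                    for b in B_verts) for a in A_verts)
            and all(any(duplicator_wins(A_edges, A_verts, B_edges, B_verts,
                                        rounds - 1, pairs + ((a, b),))
                        for a in A_verts) for b in B_verts))

# A 3-cycle and a 4-cycle cannot be told apart in one round, but the
# quantifier-rank-3 sentence "there is a triangle" separates them.
C3 = {(0, 1), (1, 2), (2, 0)}
C4 = {(0, 1), (1, 2), (2, 3), (3, 0)}
print(duplicator_wins(C3, range(3), C4, range(4), rounds=1))  # True
print(duplicator_wins(C3, range(3), C4, range(4), rounds=3))  # False
```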

    The computational complexity of boundedly rational choice behavior

    This research examines the computational complexity of two boundedly rational choice models that use multiple rationales to explain observed choice behavior. First, we show that deciding rationalizability by K rationales, as introduced by Kalai, Rubinstein, and Spiegler (2002), is NP-complete for K ≥ 2. Second, we show that deciding sequential rationalizability by K rationales, introduced by Manzini and Mariotti (2007), is NP-complete for K ≥ 3 if choices are single-valued, and NP-complete for K ≥ 1 if we allow for multi-valued choice correspondences. Motivated by these results, we present two binary integer feasibility programs that characterize the two boundedly rational choice models and we compute their power. Finally, by using results from descriptive complexity theory, we explain why it has been so difficult to obtain 'nice' characterizations for these models.
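    To fix ideas on the first model, here is a brute-force sketch of my own on a hypothetical three-alternative data set (not the paper's integer programs): a choice function is rationalizable by K rationales in the Kalai-Rubinstein-Spiegler sense when the choice from every menu maximizes at least one of K linear orderings.

```python
from itertools import combinations_with_replacement, permutations

def rationalizable_by_K(choices, alternatives, K):
    """Brute-force test of rationalizability by K linear orderings.

    choices maps each menu (a frozenset of alternatives) to the observed
    choice.  Every K-tuple of orderings is tried, so this is exponential
    and meant only for tiny illustrative data sets.
    """
    orders = list(permutations(alternatives))

    def top(order, menu):
        # `order` lists alternatives from most to least preferred
        return min(menu, key=order.index)

    return any(all(any(top(order, menu) == choice for order in combo)
                   for menu, choice in choices.items())
               for combo in combinations_with_replacement(orders, K))

# A choice cycle over {x, y, z}: no single ordering works, but two do.
data = {frozenset({"x", "y"}): "x",
        frozenset({"y", "z"}): "y",
        frozenset({"x", "z"}): "z",
        frozenset({"x", "y", "z"}): "x"}
print(rationalizable_by_K(data, ["x", "y", "z"], 1))  # False
print(rationalizable_by_K(data, ["x", "y", "z"], 2))  # True
```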