
    Physical portrayal of computational complexity

    Computational complexity is examined using the principle of increasing entropy. Treating computation as a physical process, from an initial instance to the final acceptance, is motivated by the observation that many natural processes have been recognized to complete in non-polynomial time (NP). An irreversible process with three or more degrees of freedom is found intractable because, in terms of physics, flows of energy are inseparable from their driving forces. In computational terms, when solving problems in the class NP, decisions affect the sets of decisions that become available subsequently. The state space of a non-deterministic finite automaton evolves due to the computation itself, hence it cannot be efficiently contracted by a deterministic finite automaton, which arrives at a solution only in super-polynomial time. The solution of an NP problem is itself verifiable in polynomial time (P) because the corresponding state is stationary. Likewise, the class P set of states does not depend on computational history, hence it can be efficiently contracted to the accepting state by a deterministic sequence of dissipative transformations. Thus it is concluded that the class P set of states is inherently smaller than the class NP set. Since the computational time to contract a given set is proportional to dissipation, the complexity class P is a proper subset of NP. Comment: 16 pages, 7 figures

    Average-energy games

    Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements. Comment: In Proceedings GandALF 2015, arXiv:1509.0685
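
    To make the three objectives concrete, here is a minimal Python sketch (not taken from the paper) that evaluates them on a single finite play, i.e. a sequence of integer edge weights; the weight pattern and function names are illustrative assumptions, and the paper itself is concerned with the limit behaviour on infinite plays.

```python
# Minimal sketch: evaluating the three objectives on a single finite play,
# i.e. a sequence of integer edge weights. All names are illustrative; the
# paper studies the long-run behaviour on infinite plays.

from itertools import accumulate

def mean_payoff(weights):
    """Average gain per action over the play (mean-payoff objective)."""
    return sum(weights) / len(weights)

def energy_levels(weights, initial=0):
    """Accumulated energy after each action (kept >= 0 in energy games)."""
    return list(accumulate(weights, initial=initial))[1:]

def average_energy(weights, initial=0):
    """Average of the accumulated energy levels (average-energy objective,
    here evaluated on a finite play)."""
    levels = energy_levels(weights, initial)
    return sum(levels) / len(levels)

# Example: a play that repeats the weight pattern +2, -1, -1.
play = [2, -1, -1] * 4
print(mean_payoff(play))         # 0.0 -> mean-payoff is zero
print(min(energy_levels(play)))  # 0   -> energy never drops below zero
print(average_energy(play))      # 1.0 -> average accumulated energy
```

    On this sample play the mean-payoff is 0 and the energy never runs out, yet the average accumulated energy is 1, which is exactly the kind of quantity the average-energy objective asks a controller to optimize.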

    Minimizing finite automata is computationally hard

    It is known that deterministic finite automata (DFAs) can be algorithmically minimized, i.e., a DFA M can be converted to an equivalent DFA M' which has a minimal number of states. The minimization can be done efficiently [6]. On the other hand, it is known that unambiguous finite automata (UFAs) and nondeterministic finite automata (NFAs) can be algorithmically minimized too, but their minimization problems turn out to be NP-complete and PSPACE-complete, respectively [8]. In this paper, the time complexity of the minimization problem for two restricted types of finite automata is investigated. These automata are nearly deterministic, since they only allow a small amount of nondeterminism to be used. On the one hand, NFAs with a fixed finite branching are studied, i.e., the number of nondeterministic moves within every accepting computation is bounded by a fixed finite number. On the other hand, finite automata are investigated which are essentially deterministic except that there is a fixed number of different initial states which can be chosen nondeterministically. The main result is that the minimization problems for these models are computationally hard, namely NP-complete. Hence, even the slightest extension of the deterministic model towards a nondeterministic one, e.g., allowing at most one nondeterministic move in every accepting computation or allowing two initial states instead of one, results in computationally intractable minimization problems.
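
    As a reminder of why the deterministic case is easy, the following is a hedged Python sketch of DFA minimization by partition refinement (Moore's algorithm); it is not the construction from the paper, and the dictionary-based DFA encoding and the example automaton are assumptions made for illustration.

```python
# Hedged sketch of efficient DFA minimization via partition refinement
# (Moore's algorithm). The dict-based DFA encoding is an assumption.

def minimize_dfa(states, alphabet, delta, accepting):
    """Return the partition of `states` into equivalence classes.
    delta: dict mapping (state, symbol) -> state (total transition function)."""
    # Start from the accepting / non-accepting split.
    partition = [set(accepting), set(states) - set(accepting)]
    partition = [block for block in partition if block]

    changed = True
    while changed:
        changed = False
        new_partition = []
        for block in partition:
            # Group states in this block by which block each symbol leads to.
            groups = {}
            for q in block:
                key = tuple(
                    next(i for i, b in enumerate(partition) if delta[(q, a)] in b)
                    for a in alphabet
                )
                groups.setdefault(key, set()).add(q)
            new_partition.extend(groups.values())
            if len(groups) > 1:
                changed = True
        partition = new_partition
    return partition

# Example: states 0..3 over {'a'}, accepting {2, 3}; states 2 and 3 merge.
delta = {(0, 'a'): 1, (1, 'a'): 2, (2, 'a'): 3, (3, 'a'): 2}
print(minimize_dfa([0, 1, 2, 3], ['a'], delta, {2, 3}))  # [{2, 3}, {0}, {1}]
```

    Each refinement round can only split blocks, so the loop terminates after at most as many rounds as there are states, which is what makes the deterministic problem tractable in contrast to the NP-complete variants studied in the paper.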

    Complexity, parallel computation and statistical physics

    The intuition that a long history is required for the emergence of complexity in natural systems is formalized using the notion of depth. The depth of a system is defined in terms of the number of parallel computational steps needed to simulate it. Depth provides an objective, irreducible measure of history applicable to systems of the kind studied in statistical physics. It is argued that physical complexity cannot occur in the absence of substantial depth and that depth is a useful proxy for physical complexity. The ideas are illustrated for a variety of systems in statistical physics. Comment: 21 pages, 7 figures
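
    As a rough illustration of depth as a count of parallel steps (a simplified reading, not an example from the paper), the sketch below simulates a one-dimensional cellular automaton in which every synchronous update of all cells is one parallel step, so a configuration produced after t updates has depth at most t; the choice of rule 110 and the lattice size are arbitrary assumptions.

```python
# Rough illustration of depth as "number of parallel computational steps":
# each synchronous update of all cells counts as one parallel step, so a
# configuration reached after t updates has depth at most t under this
# simplified reading. Rule 110 and the lattice size are arbitrary choices.

def step_rule110(cells):
    """One synchronous (parallel) update of all cells, periodic boundary."""
    n = len(cells)
    rule = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
    return [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

def simulate(cells, steps):
    history = [cells]
    for _ in range(steps):
        cells = step_rule110(cells)
        history.append(cells)
    return history

# 32 cells, a single seed; 16 parallel steps -> depth at most 16.
initial = [0] * 31 + [1]
for row in simulate(initial, 16):
    print(''.join('#' if c else '.' for c in row))
```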

    Synthesizing Finite-state Protocols from Scenarios and Requirements

    Scenarios, or Message Sequence Charts, offer an intuitive way of describing the desired behaviors of a distributed protocol. In this paper we propose a new way of specifying finite-state protocols using scenarios: we show that it is possible to automatically derive a distributed implementation from a set of scenarios augmented with a set of safety and liveness requirements, provided the given scenarios adequately cover all the states of the desired implementation. We first derive incomplete state machines from the given scenarios, and then synthesis corresponds to completing the transition relation of individual processes so that the global product meets the specified requirements. This completion problem, in general, has the same complexity, PSPACE, as the verification problem, but unlike the verification problem, is NP-complete for a constant number of processes. We present two algorithms for solving the completion problem, one based on a heuristic search in the space of possible completions and one based on OBDD-based symbolic fixpoint computation. We evaluate the proposed methodology for protocol specification and the effectiveness of the synthesis algorithms using the classical alternating-bit protocol. Comment: This is the working draft of a paper currently in submission. (February 10, 2014)
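
    The completion problem itself can be pictured with a toy brute-force sketch: fill in the missing transitions of an incomplete machine so that a simple safety requirement holds. This stands in for, and is much cruder than, the heuristic-search and OBDD-based algorithms described in the paper; the states, alphabet, and requirement below are illustrative assumptions.

```python
# Toy sketch of the completion idea: fill in the undefined transitions of an
# incomplete state machine so that a simple safety requirement holds. A crude
# brute-force stand-in for the heuristic/symbolic algorithms in the paper;
# all identifiers and the requirement are illustrative assumptions.

from itertools import product

def complete(states, alphabet, partial_delta, is_safe):
    """Try every assignment for the undefined (state, symbol) pairs and return
    the first completed transition function that satisfies is_safe."""
    holes = [(q, a) for q in states for a in alphabet if (q, a) not in partial_delta]
    for choice in product(states, repeat=len(holes)):
        delta = dict(partial_delta)
        delta.update({hole: target for hole, target in zip(holes, choice)})
        if is_safe(delta):
            return delta
    return None

def unreachable_err(delta, init='init', bad='err'):
    """Safety requirement: the state `bad` must be unreachable from `init`."""
    seen, frontier = {init}, [init]
    while frontier:
        q = frontier.pop()
        for (p, _), r in delta.items():
            if p == q and r not in seen:
                seen.add(r)
                frontier.append(r)
    return bad not in seen

states = ['init', 'work', 'err']
alphabet = ['req', 'ack']
partial = {('init', 'req'): 'work', ('work', 'ack'): 'init'}  # other moves undefined
print(complete(states, alphabet, partial, unreachable_err))
```

    The real algorithms avoid this exhaustive enumeration, but the search space of completions and the check against the requirement are the same ingredients.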

    Bounded Rationality

    The observation of the actual behavior of economic decision makers, in the lab and in the field, has made bounded rationality a generally accepted assumption in many socio-economic models. The goal of this paper is to illustrate the difficulties involved in providing a correct definition of what a rational (or irrational) agent is. We describe two frameworks that employ different approaches for analyzing bounded rationality. The first is a spatial segregation set-up that encompasses two optimization methodologies: backward induction and forward induction. The main result is that, even under the same state of knowledge, rational and non-rational agents may match their actions. The second framework elaborates on the relationship between irrationality and informational restrictions. We use the beauty contest (Nagel, 1995) as a device to explain this relationship. Keywords: behavioral economics, bounded rationality, partial information
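
    For readers unfamiliar with the beauty contest, the sketch below illustrates level-k reasoning in Nagel's (1995) game: players choose numbers in [0, 100] and the winner is whoever is closest to p times the average guess. The usual p = 2/3 and a level-0 anchor of 50 are standard assumptions from the experimental literature, not values taken from this paper.

```python
# Minimal sketch of level-k reasoning in the beauty contest (Nagel, 1995):
# players pick numbers in [0, 100]; the winner is closest to p times the
# average guess. Assumes the usual p = 2/3 and a level-0 anchor of 50.

def level_k_guess(k, p=2/3, anchor=50.0):
    """Guess of a level-k reasoner: best response to level-(k-1) opponents,
    i.e. the anchor multiplied by p once per level of reasoning."""
    return anchor * (p ** k)

for k in range(6):
    print(f"level {k}: guess {level_k_guess(k):.2f}")
# level 0: 50.00, level 1: 33.33, level 2: 22.22, ... -> approaches 0
```

    Each extra level of reasoning multiplies the guess by p, so unboundedly iterated best responses converge to 0 while boundedly rational (low-level) play stays well above it; this gap between full and limited reasoning is what makes the game a useful device for studying informational restrictions.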