
    Complexity cores in average-case complexity theory

    In average-case complexity theory, one of the interesting questions is whether the existence of worst-case hard problems in NP implies the existence of problems in NP that are hard on average. In other words, "If P ≠ NP, then NP is not a subset of Average-P." It is not known whether such a worst-case to average-case connection exists for NP. However, such connections are known to exist for complexity classes such as EXP and PSPACE, where they are obtained via random self-reductions. There is evidence that the techniques used to obtain worst-case to average-case connections for EXP and PSPACE do not work for NP. In this thesis, we present an approach that may be helpful in establishing a worst-case to average-case connection for NP. Our approach is based on the notion of complexity cores. The main result is: "If P ≠ NP and there is a language in NP whose complexity core belongs to NP, then NP is not a subset of Average-P." Thus, to exhibit a worst-case to average-case connection for NP, it suffices to show the existence of a language whose core is in NP.
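    Recall the standard definition: a (proper) complexity core for a language L is an infinite set C such that every deterministic machine deciding L exceeds every polynomial running-time bound on all but finitely many inputs in C. With that definition in hand, the main result above can be restated compactly (a sketch in standard notation, not the thesis's exact wording):

    $$\big(\,\mathrm{P} \neq \mathrm{NP} \;\wedge\; \exists\, L \in \mathrm{NP} \text{ with a complexity core } C \in \mathrm{NP}\,\big) \;\Longrightarrow\; \mathrm{NP} \not\subseteq \text{Average-P}.$$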

    Separating Cook Completeness from Karp-Levin Completeness Under a Worst-Case Hardness Hypothesis

    We show that there is a language that is Turing complete for NP but not many-one complete for NP, under a worst-case hardness hypothesis. Our hypothesis asserts the existence of a non-deterministic, double-exponential time machine that accepts Σ^* and runs in time O(2^{2^{n^c}}) (for some c > 1), whose accepting computations cannot be computed by bounded-error, probabilistic machines running in time O(2^{2^{β·2^{n^c}}}) (for some β > 0). This is the first result that separates completeness notions for NP under a worst-case hardness hypothesis.
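    Spelled out (a reconstruction from the abstract's garbled exponents, so the exact bounds are an assumption), the hypothesis reads: there exist a non-deterministic machine N accepting Σ^*, a constant c > 1, and a constant β > 0 such that

    $$N \text{ runs in time } O\big(2^{2^{n^c}}\big), \quad \text{yet no bounded-error probabilistic machine running in time } O\big(2^{2^{\beta \cdot 2^{n^c}}}\big) \text{ computes accepting computations of } N.$$

    To gauge the scale: writing T(n) = 2^{2^{n^c}}, the probabilistic time bound equals 2^{T(n)^β}, exponentially more than the length of the computations being produced, so the hypothesis asserts hardness even against machines with that much slack.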

    Making Models Match: Replicating an Agent-Based Model

    Scientists have increasingly employed computer models in their work. Recent years have seen a proliferation of agent-based models in the natural and social sciences. But with the exception of a few "classic" models, most of these models have never been replicated by anyone but the original developer. As replication is a critical component of the scientific method and a core practice of scientists, we argue herein for an increased practice of replication in the agent-based modeling community, and for widespread discussion of the issues surrounding replication. We begin by clarifying the concept of replication as it applies to ABM. Furthermore, we argue that replication may have even greater benefits when applied to computational models than when applied to physical experiments. Replication of computational models affects model verification and validation and fosters shared understanding about modeling decisions. To facilitate replication, we must create standards both for how to replicate models and for how to evaluate the replication. In this paper, we present a case study of our own attempt to replicate a classic agent-based model. We begin by describing an agent-based model from political science that was developed by Axelrod and Hammond. We then detail our effort to replicate that model and the challenges that arose in recreating the model and in determining whether the replication was successful. We conclude this paper by discussing issues for (1) researchers attempting to replicate models and (2) researchers developing models, in order to facilitate the replication of their results.
    Keywords: Replication, Agent-Based Modeling, Verification, Validation, Scientific Method, Ethnocentrism

    Proceedings of the Workshop Semantic Content Acquisition and Representation (SCAR) 2007

    These are the proceedings of the Workshop on Semantic Content Acquisition and Representation (SCAR 2007), held in conjunction with NODALIDA 2007 on May 24, 2007, in Tartu, Estonia.

    Ethnicity and Conflict: An Empirical Study

    This paper examines the impact of ethnic divisions on conflict. The analysis relies on a theoretical model of conflict (Esteban and Ray, 2010) in which equilibrium conflict is shown to be accurately described by a linear function of just three distributional indices of ethnic diversity: the Gini coefficient, the Hirschman-Herfindahl fractionalization index, and a measure of polarization. Based on a dataset constructed by James Fearon and data from Ethnologue on ethno-linguistic groups and the "linguistic distances" between them, we compute the three distribution indices. Our results show that ethnic polarization is a highly significant correlate of conflict. Fractionalization is also significant in some of the statistical exercises, but the Gini coefficient never is. In particular, inter-group distances computed from language and embodied in polarization measures turn out to be extremely important correlates of ethnic conflict.
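    For reference, with group population shares π_1, …, π_m and inter-group distances d_ij, the three indices take the following standard forms, up to normalization (a sketch; in particular, the Esteban-Ray polarization exponent α = 1 is an assumption here):

    $$G = \sum_{i}\sum_{j} \pi_i \pi_j d_{ij}, \qquad F = \sum_{i} \pi_i (1 - \pi_i), \qquad P = \sum_{i}\sum_{j} \pi_i^{2} \pi_j d_{ij}.$$

    The theoretical model then predicts equilibrium conflict to be well approximated by a linear combination of G, F, and P.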

    Non-Deterministic Communication Complexity of Regular Languages

    In this thesis, we study the place of regular languages within the communication complexity setting. In particular, we are interested in the non-deterministic communication complexity of regular languages. We show that a regular language has either O(1) or Ω(log n) non-deterministic complexity. We obtain several linear lower bound results which cover a wide range of regular languages having linear non-deterministic complexity. These lower bound results also imply a result in semigroup theory: we obtain sufficient conditions for a language not to be in the positive variety Pol(Com). To obtain our results, we use algebraic techniques. In the study of regular languages, the algebraic point of view pioneered by Eilenberg (1974) has led to many interesting results. Viewing a semigroup as a computational device that recognizes languages has proven fruitful from the perspectives of both semigroup theory and formal languages. In this thesis, we provide further instances of such mutualism.
    Comment: Master's thesis, 93 pages
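    The dichotomy can be stated compactly: if N(L_n) denotes the non-deterministic communication complexity of deciding membership in a regular language L for inputs of length n split between two players (a hedged restatement in standard notation), then

    $$N(L_n) = O(1) \quad \text{or} \quad N(L_n) = \Omega(\log n),$$

    with no regular language exhibiting an intermediate growth rate.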

    Unitary Complexity and the Uhlmann Transformation Problem

    State transformation problems such as compressing quantum information or breaking quantum commitments are fundamental quantum tasks. However, their computational difficulty cannot easily be characterized using traditional complexity theory, which focuses on tasks with classical inputs and outputs. To study the complexity of such state transformation tasks, we introduce a framework for unitary synthesis problems, including notions of reductions and unitary complexity classes. We use this framework to study the complexity of transforming one entangled state into another via local operations. We formalize this as the Uhlmann Transformation Problem, an algorithmic version of Uhlmann's theorem. Then, we prove structural results relating the complexity of the Uhlmann Transformation Problem, polynomial space quantum computation, and zero knowledge protocols. The Uhlmann Transformation Problem allows us to characterize the complexity of a variety of tasks in quantum information processing, including decoding noisy quantum channels, breaking falsifiable quantum cryptographic assumptions, implementing optimal prover strategies in quantum interactive proofs, and decoding the Hawking radiation of black holes. Our framework for unitary complexity thus provides new avenues for studying the computational complexity of many natural quantum information processing tasks.
    Comment: 126 pages, comments welcome
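    For orientation, the textbook form of Uhlmann's theorem states that the fidelity between two mixed states equals the best overlap achievable by acting unitarily on the purifying register:

    $$\mathrm{F}(\rho, \sigma) \;=\; \big\| \sqrt{\rho}\,\sqrt{\sigma} \big\|_1 \;=\; \max_{U} \big| \langle \psi_\rho | (I \otimes U) | \psi_\sigma \rangle \big|,$$

    where |ψ_ρ⟩ and |ψ_σ⟩ are any fixed purifications of ρ and σ, and U ranges over unitaries on the purifying register. The Uhlmann Transformation Problem, as formalized above, asks to implement an optimal such U algorithmically.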

    Developments in Multi-Agent Fair Allocation

    Fairness is becoming an increasingly important concern when designing markets, allocation procedures, and computer systems. I survey some recent developments in the field of multi-agent fair allocation.