10 research outputs found

    Speedup for Natural Problems and Noncomputability

    A resource-bounded version of the statement "no algorithm recognizes all non-halting Turing machines" is equivalent to an infinitely often (i.o.) superpolynomial speedup for the time required to accept any coNP-complete language, and also equivalent to a superpolynomial speedup in proof length in propositional proof systems for tautologies; each of these implies P != NP. This suggests a correspondence between the properties 'has no algorithm at all' and 'has no best algorithm', which seems relevant to open problems in computational and proof complexity. Comment: 8 pages
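
    As a reading aid, one standard way to write "the language L admits an infinitely often superpolynomial speedup" is sketched below in LaTeX; this is a generic paraphrase offered for orientation only and may differ from the paper's exact quantifier structure. Here t_M(x) denotes the running time of machine M on input x.

        % Hedged paraphrase (not necessarily the paper's formulation): every machine
        % accepting L can be beaten superpolynomially, on infinitely many inputs,
        % by another machine accepting L.
        \[
          \forall M\,\bigl[L(M)=L\bigr]\;\exists M'\,\bigl[L(M')=L\bigr]\;
          \forall k\in\mathbb{N}\;\exists^{\infty}x:\;
          t_{M}(x) > \bigl(t_{M'}(x)\bigr)^{k}
        \]
        % \exists^{\infty} abbreviates "there exist infinitely many".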

    Average-Case Hardness of Proving Tautologies and Theorems

    We consolidate two widely believed conjectures about tautologies -- no optimal proof system exists, and most require superpolynomial size proofs in any system -- into a p-isomorphism-invariant condition satisfied by all paddable coNP-complete languages or none. The condition is: for any Turing machine (TM) M accepting the language, P-uniform input families requiring superpolynomial time by M exist (equivalent to the first conjecture) and appear with positive upper density in an enumeration of input families (implies the second). In that case, no such language is easy on average (in AvgP) for a distribution applying non-negligible weight to the hard families. The hardness of proving tautologies and theorems is likely related. Motivated by the fact that arithmetic sentences encoding "string x is Kolmogorov random" are true but unprovable with positive density in a finitely axiomatized theory T (Calude and Jürgensen), we conjecture that any propositional proof system requires superpolynomial size proofs for a dense set of P-uniform families of tautologies encoding "there is no T proof of size ≤ t showing that string x is Kolmogorov random". This implies the above condition. The conjecture suggests that there is no optimal proof system because undecidable theories help prove tautologies and do so more efficiently as axioms are added, and that constructing hard tautologies seems difficult because it is impossible to construct Kolmogorov random strings. Similar conjectures that computational blind spots are manifestations of noncomputability would resolve other open problems.
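
    The density notion invoked above is not spelled out in the abstract; the definition below is the standard notion of positive upper density for a set S of indices in a fixed enumeration, offered only as the usual reading rather than the paper's exact variant.

        % Standard (assumed) definition: S has positive upper density in the enumeration.
        \[
          \overline{d}(S) \;=\; \limsup_{n\to\infty}
          \frac{\lvert S \cap \{1,\dots,n\}\rvert}{n} \;>\; 0
        \]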

    Computational complexity of the landscape I

    We study the computational complexity of the physical problem of finding vacua of string theory which agree with data, such as the cosmological constant, and show that such problems are typically NP-hard. In particular, we prove that in the Bousso-Polchinski model, the problem is NP-complete. We discuss the issues this raises and the possibility that, even if we were to find compelling evidence that some vacuum of string theory describes our universe, we might never be able to find that vacuum explicitly. In a companion paper, we apply this point of view to the question of how early cosmology might select a vacuum. Comment: JHEP3 LaTeX, 53 pp, 2 .eps figures
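
    To make the combinatorial character of the result concrete, here is a minimal brute-force sketch in Python of a Bousso-Polchinski-style vacuum search, assuming the usual form Lambda = Lambda_0 + (1/2) * sum_i n_i^2 q_i^2 with integer fluxes n_i. The function name and all numeric values are toy choices, not taken from the paper; the point is only that exhaustive search visits (2*n_max + 1)^J flux vectors for J charges, which grows exponentially, consistent with the NP-completeness claim.

        # Toy brute-force search for a Bousso-Polchinski-style vacuum (illustrative only;
        # assumes Lambda = Lambda_0 + 0.5 * sum(n_i**2 * q_i**2), all values hypothetical).
        from itertools import product

        def find_vacuum(q, lambda0, target, eps, n_max):
            """Return the first integer flux vector whose cosmological constant lies
            within eps of target, or None. Visits (2*n_max + 1)**len(q) candidates."""
            for n in product(range(-n_max, n_max + 1), repeat=len(q)):
                lam = lambda0 + 0.5 * sum((ni * qi) ** 2 for ni, qi in zip(n, q))
                if abs(lam - target) < eps:
                    return n
            return None

        if __name__ == "__main__":
            q = [0.31, 0.44, 0.57, 0.72]   # toy flux charges
            print(find_vacuum(q, lambda0=-10.0, target=0.0, eps=0.05, n_max=8))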

    Simplicial Quantum Gravity

    This is my PhD thesis on four-dimensional simplicial quantum gravity using the dynamical triangulation model. Most of the results, which we have published in separate papers, are collected here for your convenience. Some new results have been added as well. Besides these results, this thesis also contains an introduction to simplicial quantum gravity and a detailed description of my dynamical triangulation program for arbitrary dimension. Some small formal parts are in Dutch. Comment: 160 pages, PostScript (because you don't have all the mf fonts), replaced because of file corruption

    Weak Completeness Notions for Exponential Time

    The standard way of proving a problem to be intractable is to show that the problem is hard or complete for one of the standard complexity classes containing intractable problems. Lutz (1995) proposed a generalization of this approach by introducing more general weak hardness notions which still imply intractability. While a set A is hard for a class C if all problems in C can be reduced to A (by a polynomial-time bounded many-one reduction) and complete if it is hard and a member of C, Lutz proposed to call a set A weakly hard if a nonnegligible part of C can be reduced to A, and to call A weakly complete if in addition A ∈ C. For the exponential-time classes E = DTIME(2^lin) and EXP = DTIME(2^poly), Lutz formalized these ideas by introducing resource-bounded (Lebesgue) measures on these classes and by saying that a subclass of E is negligible if it has measure 0 in E (and similarly for EXP). A variant of these concepts, based on resource-bounded Baire category in place of measure, was introduced by Ambos-Spies (1996), where now a class is declared to be negligible if it is meager in the corresponding resource-bounded sense. In our thesis we introduce and investigate new, more general, weak hardness notions for E and EXP and compare them with the above concepts from the literature. The two main new notions we introduce are nontriviality, which may be viewed as the most general weak hardness notion, and strong nontriviality. In the case of E, a set A is E-nontrivial if, for any k ≥ 1, A has a predecessor in E which is 2^kn complex, i.e., which can only be computed by Turing machines with run times exceeding 2^kn on infinitely many inputs; and A is strongly E-nontrivial if there are predecessors which are almost everywhere 2^kn complex. Besides giving examples and structural properties of the E-(non)trivial and strongly E-(non)trivial sets, we separate all weak hardness concepts for E, compare the corresponding concepts for E and EXP, answer the question whether (strongly) E-nontrivial sets are typical among the sets in E (or among the computable sets, or among all sets), investigate the degrees of the (strongly) E-nontrivial sets, and analyze the strength of these concepts if we replace the underlying p-m-reducibility by some weaker polynomial-time reducibilities.
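
    The two complexity notions in the abstract can be made explicit as follows; this is a hedged reading of the definitions as stated above (the thesis's precise formulations may differ in details), with t_M(x) the running time of machine M on input x.

        % Assumed reading of "2^kn complex" (i.o.) versus "almost everywhere 2^kn complex".
        \[
          A \text{ is i.o. } 2^{kn}\text{-complex}
          \;\iff\;
          \forall M\,\bigl[\,L(M)=A \;\Rightarrow\; t_{M}(x) > 2^{k|x|}
          \text{ for infinitely many } x\,\bigr]
        \]
        \[
          A \text{ is a.e. } 2^{kn}\text{-complex}
          \;\iff\;
          \forall M\,\bigl[\,L(M)=A \;\Rightarrow\; t_{M}(x) > 2^{k|x|}
          \text{ for all but finitely many } x\,\bigr]
        \]
        % A is then E-nontrivial if, for every k >= 1, some predecessor B of A
        % (B in E with B polynomial-time many-one reducible to A) is i.o. 2^{kn}-complex,
        % and strongly E-nontrivial if some such B is a.e. 2^{kn}-complex.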

    A method for system of systems definition and modeling using patterns of collective behavior

    The Department of Defense ship and aircraft acquisition process, with its capability-based assessments and fleet synthesis studies, relies heavily on the assumption that a functional decomposition of higher-level system of systems (SoS) capabilities into lower-level system and subsystem behaviors is both possible and practical. However, SoS typically exhibit “non-decomposable” behaviors (also known as emergent behaviors) for which no widely accepted representation exists. The presence of unforeseen emergent behaviors, particularly undesirable ones, can make systems vulnerable to attacks, hacks, or other exploitation, or can cause delays in acquisition program schedules and cost overruns incurred to mitigate them. The International Council on Systems Engineering has identified the development of methods for predicting and managing emergent behaviors as one of the top research priorities for the Systems Engineering profession. Therefore, this thesis develops a method for rendering quantifiable SoS emergent properties and behaviors traceable to patterns of interaction of their constitutive systems, so that exploitable patterns identified during the early stages of design can be accounted for. This method is designed to fill two gaps in the literature: first, the lack of an approach for mining data to derive a model (i.e. an equation) of the non-decomposable behavior; second, the lack of an approach for qualitatively and quantitatively associating emergent behaviors with the components that cause the behavior. A definition of emergent behavior is synthesized from the literature, as well as necessary conditions for its identification. An ontology of emergence that enables studying the emergent behaviors exhibited by self-organized systems via numerical simulations is adapted for this thesis in order to develop the mathematical approach needed to satisfy the research objective. Within the confines of two carefully qualified assumptions (that the model is valid, and that the model is efficient), it is argued that simulated emergence is bona fide emergence, and that simulations can be used for experimentation without sacrificing rigor. This thesis then puts forward three hypotheses. The first hypothesis is that self-organized structures imply the presence of a form of data compression, and this compression can be used to explicitly calculate an upper bound on the number of emergent behaviors that a system can possess. The second hypothesis is that the set of numerical criteria for detecting emergent behavior derived in this research constitutes sufficient conditions for identifying weak and functional emergent behaviors. The third hypothesis states that affecting the emergent properties of these systems will have a bigger impact on the system’s performance than affecting any single component of that system. Using the method developed in this thesis, exploitable properties are identified and component behaviors are modified to attempt the exploit. Changes in performance are evaluated using problem-specific measures of merit. The experiments find that Hypothesis 2 is false (the numerical criteria are not sufficient conditions) by identifying instances where the numerical criteria produce a false positive; as a result, a set of sufficient conditions for emergent behavior identification remains to be found. Hypothesis 1 was also falsified, based on a worst-case scenario where the largest possible number of obtainable emergent behaviors was compared against the upper bound computed from the smallest possible data compression of a self-organized system. Hypothesis 3, on the other hand, was supported, as it was found that new behavior rules based on component-level properties provided less improvement to performance against an adversary than rules based on system-level properties. Overall, the method is shown to be an effective, systematic approach to non-decomposable behavior exploitation, and an improvement over the modern, largely ad hoc approach. Ph.D.

    Three Dogmas of First-Order Logic and some Evidence-based Consequences for Constructive Mathematics of differentiating between Hilbertian Theism, Brouwerian Atheism and Finitary Agnosticism

    We show how removing faith-based beliefs in current philosophies of classical and constructive mathematics admits formal, evidence-based, definitions of constructive mathematics; of a constructively well-defined logic of a formal mathematical language; and of a constructively well-defined model of such a language. We argue that, from an evidence-based perspective, classical approaches which follow Hilbert's formal definitions of quantification can be labelled `theistic'; whilst constructive approaches based on Brouwer's philosophy of Intuitionism can be labelled `atheistic'. We then adopt what may be labelled a finitary, evidence-based, `agnostic' perspective and argue that Brouwerian atheism is merely a restricted perspective within the finitary agnostic perspective, whilst Hilbertian theism contradicts the finitary agnostic perspective. We then consider the argument that Tarski's classic definitions permit an intelligence---whether human or mechanistic---to admit finitary, evidence-based, definitions of the satisfaction and truth of the atomic formulas of the first-order Peano Arithmetic PA over the domain N of the natural numbers in two, hitherto unsuspected and essentially different, ways. We show that the two definitions correspond to two distinctly different---not necessarily evidence-based but complementary---assignments of satisfaction and truth to the compound formulas of PA over N. We further show that the PA axioms are true over N, and that the PA rules of inference preserve truth over N, under both of the complementary interpretations; and we draw some unsuspected constructive consequences of such complementarity for the foundations of mathematics, logic, philosophy, and the physical sciences.