106 research outputs found

    A Casual Tour Around a Circuit Complexity Bound

    Full text link
    I will discuss the recent proof that the complexity class NEXP (nondeterministic exponential time) lacks nonuniform ACC circuits of polynomial size. The proof will be described from the perspective of someone trying to discover it. Comment: 21 pages, 2 figures. An earlier version appeared in SIGACT News, September 201

    Superlinear lower bounds based on ETH

    Get PDF
    Andras Z. Salamon acknowledges support from EPSRC grants EP/P015638/1 and EP/V027182/1. We introduce techniques for proving superlinear conditional lower bounds for polynomial time problems. In particular, we show that CircuitSAT for circuits with m gates and log(m) inputs (denoted by log-CircuitSAT) is not decidable in essentially-linear time unless the exponential time hypothesis (ETH) is false and k-Clique is decidable in essentially-linear time in terms of the graph's size for all fixed k. Such conditional lower bounds have previously only been demonstrated relative to the strong exponential time hypothesis (SETH). Our results therefore offer significant progress towards proving unconditional superlinear time complexity lower bounds for natural problems in polynomial time. Postprint
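    To make the problem concrete: a log-CircuitSAT instance with m gates has only about log(m) inputs, so all 2^{log m} = m assignments can be enumerated and the circuit evaluated on each, giving a roughly quadratic-time algorithm; the question the paper targets is whether this can be brought down to essentially-linear time. The sketch below illustrates the brute-force baseline, assuming a simple gate-list encoding (the encoding, gate format, and function names are illustrative choices, not taken from the paper).

```python
from itertools import product

def eval_circuit(gates, assignment):
    # gates: topologically ordered list of (op, a, b) tuples, where op is one of
    # 'INPUT', 'NOT', 'AND', 'OR'; a and b refer to earlier gates (for 'INPUT',
    # a is the index of an input variable and b is unused).
    values = []
    for op, a, b in gates:
        if op == 'INPUT':
            values.append(assignment[a])
        elif op == 'NOT':
            values.append(not values[a])
        elif op == 'AND':
            values.append(values[a] and values[b])
        else:  # 'OR'
            values.append(values[a] or values[b])
    return values[-1]  # the last gate is the output

def log_circuit_sat(gates, n_inputs):
    # With n_inputs ~ log(m) there are only 2^{log m} = m assignments, so this
    # exhaustive search takes roughly O(m^2) time: polynomial, but far from
    # the essentially-linear running time the lower bound concerns.
    return any(eval_circuit(gates, a)
               for a in product([False, True], repeat=n_inputs))
```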

    Computation with narrow CTCs

    Full text link
    We examine some variants of computation with closed timelike curves (CTCs), where various restrictions are imposed on the memory of the computer, and the information carrying capacity and range of the CTC. We give full characterizations of the classes of languages recognized by polynomial time probabilistic and quantum computers that can send a single classical bit to their own past. Such narrow CTCs are demonstrated to add the power of limited nondeterminism to deterministic computers, and lead to exponential speedup in constant-space probabilistic and quantum computation. We show that, given a time machine with constant negative delay, one can implement CTC-based computations without the need to know about the runtime beforehand. Comment: 16 pages. A few typos were corrected.
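    For context, one common way to give semantics to a single classical bit sent to its own past is Deutsch-style causal consistency: the computation induces a stochastic map on the CTC bit, and the bit must be distributed according to a fixed point of that map. The sketch below computes such a fixed point for one bit; it illustrates that general idea under my assumptions and is not claimed to be the exact model analyzed in the paper.

```python
def ctc_bit_fixed_point(p_out1_given_in0, p_out1_given_in1):
    # The machine reads the incoming CTC bit and sends a bit back to the past:
    #   a = Pr[outgoing bit = 1 | incoming bit = 0]
    #   b = Pr[outgoing bit = 1 | incoming bit = 1]
    # Causal consistency requires Pr[bit = 1] = p to satisfy p = a*(1-p) + b*p,
    # i.e. p = a / (1 + a - b) whenever the induced map is not the identity.
    a, b = p_out1_given_in0, p_out1_given_in1
    denom = 1.0 + a - b
    if denom == 0.0:        # a = 0 and b = 1: identity map, every p is consistent
        return 0.5          # pick one fixed point arbitrarily
    return a / denom

# Example: a machine that flips the incoming bit with probability 0.9
# forces the CTC bit toward the balanced distribution.
print(ctc_bit_fixed_point(0.9, 0.1))  # 0.5
```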

    The RAM equivalent of P vs. RP

    Full text link
    One of the fundamental open questions in computational complexity is whether the class of problems solvable by use of stochasticity under the Random Polynomial time (RP) model is larger than the class of those solvable in deterministic polynomial time (P). However, this question is only open for Turing Machines, not for Random Access Machines (RAMs). Simon (1981) was able to show that for a sufficiently equipped Random Access Machine, the ability to switch states nondeterministically does not entail any computational advantage. However, in the same paper, Simon describes a different (and arguably more natural) scenario for stochasticity under the RAM model. According to Simon's proposal, instead of receiving a new random bit at each execution step, the RAM program is able to execute the pseudofunction RAND(y), which returns a uniformly distributed random integer in the range [0,y). Whether the ability to allot a random integer in this fashion is more powerful than the ability to allot a random bit remained an open question for the last 30 years. In this paper, we close Simon's open problem, by fully characterising the class of languages recognisable in polynomial time by each of the RAMs regarding which the question was posed. We show that for some of these, stochasticity entails no advantage, but, more interestingly, we show that for others it does. Comment: 23 pages.
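    For intuition about the gap in question: a RAM equipped only with a random-bit instruction can simulate RAND(y) by rejection sampling, drawing about log2(y) bits per attempt and needing fewer than two attempts in expectation, whereas Simon's RAND(y) costs a single step even when y is astronomically large. The sketch below shows that simulation (the function names and the use of Python's random module are illustrative, not taken from the paper).

```python
import random

def rand_bit():
    # The weaker primitive: one uniformly random bit per call.
    return random.getrandbits(1)

def rand_y(y):
    # Simulate RAND(y), a uniform integer in [0, y), from single random bits
    # by rejection sampling: draw ceil(log2(y)) bits and retry if the result
    # falls outside the range.  The acceptance probability exceeds 1/2, so
    # fewer than two attempts are needed in expectation.
    k = max(1, (y - 1).bit_length())
    while True:
        r = 0
        for _ in range(k):
            r = (r << 1) | rand_bit()
        if r < y:
            return r

print(rand_y(10))  # some integer in 0..9
```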

    Effective guessing has unlikely consequences

    Get PDF
    Funding: EPSRC Grant number EP/P015638/1. A classic result of Paul, Pippenger, Szemerédi and Trotter states that DTIME(n) ⊊ NTIME(n). The natural question then arises: could the inclusion DTIME(t(n)) ⊆ NTIME(n) hold for some superlinear time-constructible function t(n)? If such a function t(n) does exist, then there also exist effective nondeterministic guessing strategies to speed up deterministic computations. In this work, we prove limitations on the effectiveness of nondeterministic guessing to speed up deterministic computations by showing that the existence of effective nondeterministic guessing strategies would have unlikely consequences. In particular, we show that if a subpolynomial amount of nondeterministic guessing could be used to speed up deterministic computation by a polynomial factor, then P ⊊ NTIME(n). Furthermore, even achieving a logarithmic speedup at the cost of making every step nondeterministic would show that SAT ∈ NTIME(n) under appropriate encodings. Of possibly independent interest, under such encodings we also show that SAT can be decided in O(n log n) steps on a nondeterministic multitape Turing machine, improving on the well-known O(n (log n)^c) bound for some constant but undetermined exponent c ≥ 1. Publisher PDF. Peer reviewed.

    A uniform approach to the complexity and analysis of succinct systems

    Get PDF
    This thesis provides a unifying view on the succinctness of systems: the capability of a modeling formalism to describe the behavior of a system of exponential size using a polynomial syntax. The key theoretical contribution is the introduction of sequential circuit machines as a new universal computation model that focuses on succinctness as the central aspect. The thesis demonstrates that many well-known modeling formalisms, such as communicating state machines, linear-time temporal logic, or timed automata, exhibit an immediate connection to this machine model. Once a (syntactic) connection is established, many complexity bounds for structurally restricted sequential circuit machines can be transferred to a given formalism in a uniform manner. As a consequence, besides a far-reaching unification of independent lines of research, we are also able to provide matching complexity bounds for various analysis problems whose complexities were not known so far. For example, we establish matching lower and upper bounds for the small witness problem and several variants of the bounded synthesis problem for timed automata, a particularly important succinct modeling formalism. Also for timed automata, our complexity-theoretic analysis leads to the identification of tractable fragments of the timed synthesis problem under partial observability. Specifically, we show that timed controller synthesis based on discrete or template-based controllers is equivalent to model checking. Based on this discovery, we develop a new model checking-based algorithm to efficiently find feasible template instantiations. From a more practical perspective, this thesis also studies the preservation of succinctness in analysis algorithms using symbolic data structures. While efficient techniques exist for specific forms of succinctness considered in isolation, we present a general approach based on abstraction refinement to combine off-the-shelf symbolic data structures. In particular, for handling the combination of concurrency and quantitative timing behavior in networks of timed automata, we report on the tool Synthia, which combines binary decision diagrams with difference bound matrices. In a comparison with the timed model checker Uppaal and the timed game solver Tiga, running on standard benchmarks from the timed model checking and synthesis domains respectively, the experimental results clearly demonstrate the effectiveness of our new approach.
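    The symbolic combination mentioned at the end pairs binary decision diagrams (for the discrete state) with difference bound matrices (DBMs, for clock constraints). As a small illustration of the DBM side only, and a textbook operation rather than code from the thesis or from Synthia, the sketch below brings a DBM into canonical form with Floyd-Warshall and uses the result for an emptiness check.

```python
INF = float('inf')  # unconstrained entries

def dbm_canonicalize(D):
    # D is an (n+1) x (n+1) matrix where D[i][j] = c encodes the clock
    # constraint x_i - x_j <= c and index 0 plays the role of the constant 0.
    # All-pairs shortest paths tightens every bound, yielding the canonical
    # form on which inclusion and emptiness checks become trivial.
    n = len(D)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

def dbm_is_empty(D):
    # The represented zone is empty iff canonicalization exposes a negative
    # cycle, i.e. some diagonal entry drops below zero.
    return any(D[i][i] < 0 for i in range(len(D)))

# Example zone over one clock x: 1 <= x <= 3, encoded as 0 - x <= -1 and x - 0 <= 3.
D = [[0, -1],
     [3,  0]]
print(dbm_is_empty(dbm_canonicalize(D)))  # False: the zone is nonempty
```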