    Algorithms and lower bounds in finite automata size complexity

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 97-99). In this thesis we investigate the relative succinctness of several types of finite automata, focusing mainly on the following four basic models: one-way deterministic (1DFAs), one-way nondeterministic (1NFAs), two-way deterministic (2DFAs), and two-way nondeterministic (2NFAs). First, we establish the exact values of the trade-offs for all conversions from two-way to one-way automata. Specifically, we prove that the functions ... return the exact values of the trade-offs from 2DFAs to 1DFAs, from 2NFAs to 1DFAs, and from 2DFAs or 2NFAs to 1NFAs, respectively. Second, we examine the question whether the trade-offs from 1NFAs or 2NFAs to 2DFAs are polynomial or not. We prove two theorems for liveness, the complete problem for the conversion from 1NFAs to 2DFAs. We first focus on moles, a restricted class of 2NFAs that includes the polynomially large 1NFAs which solve liveness. We prove that, in contrast, 2DFA moles cannot solve liveness, irrespective of size. We then focus on sweeping 2NFAs, which can change the direction of their input head only on the end-markers. We prove that all sweeping 2NFAs solving the complement of liveness are of exponential size. A simple modification of this argument also proves that the trade-off from 2DFAs to sweeping 2NFAs is exponential. Finally, we examine conversions between two-way automata with more than one head-like device (e.g., heads, linearly bounded counters, pebbles). We prove that, if the automata of some type A have enough resources to (i) solve problems that no automaton of some other type B can solve, and (ii) simulate any unary 2DFA that has additional access to a linearly bounded counter, then the trade-off from automata of type A to automata of type B admits no recursive upper bound. By Christos Kapoutsis. Ph.D.
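
    As a concrete illustration of this kind of conversion, the following Python sketch implements the standard subset construction for turning a 1NFA into an equivalent 1DFA. It is not one of the thesis's constructions (those pin down the exact trade-off functions); the dictionary encoding of automata is an assumption made here for illustration, and the construction only exhibits the generic 2^n upper bound.

    # Standard subset construction: 1NFA -> equivalent 1DFA (illustrative sketch).
    from itertools import chain

    def nfa_to_dfa(alphabet, delta, start, accepting):
        """delta: dict mapping (state, symbol) -> set of successor states."""
        start_set = frozenset([start])
        dfa_states = {start_set}
        dfa_delta = {}
        worklist = [start_set]
        while worklist:
            current = worklist.pop()
            for symbol in alphabet:
                successor = frozenset(chain.from_iterable(
                    delta.get((q, symbol), ()) for q in current))
                dfa_delta[(current, symbol)] = successor
                if successor not in dfa_states:
                    dfa_states.add(successor)
                    worklist.append(successor)
        dfa_accepting = {s for s in dfa_states if s & accepting}
        return dfa_states, dfa_delta, start_set, dfa_accepting

    # Toy 2-state NFA over {a, b}: at most 2^2 subsets can appear as DFA states.
    delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'b'): {1}}
    states, _, _, _ = nfa_to_dfa({'a', 'b'}, delta, 0, {1})
    print(len(states))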

    Parameterized Complexity of Binary CSP: Vertex Cover, Treedepth, and Related Parameters

    We investigate the parameterized complexity of Binary CSP parameterized by the vertex cover number and the treedepth of the constraint graph, as well as by a selection of related modulator-based parameters. The main findings are as follows:
    - Binary CSP parameterized by the vertex cover number is W[3]-complete. More generally, for every positive integer d, Binary CSP parameterized by the size of a modulator to a treedepth-d graph is W[2d + 1]-complete. This provides a new family of natural problems that are complete for odd levels of the W-hierarchy.
    - We introduce a new complexity class XSLP, defined so that Binary CSP parameterized by treedepth is complete for this class. We provide two equivalent characterizations of XSLP: the first relates XSLP to a model of an alternating Turing machine with certain restrictions on co-nondeterminism and space complexity, while the second links XSLP to the problem of model-checking first-order logic with suitably restricted universal quantification. Interestingly, the proof of the machine characterization of XSLP uses the concept of universal trees, which are prominently featured in recent work on parity games.
    - We describe a new complexity hierarchy sandwiched between the W-hierarchy and the A-hierarchy: for every odd t, we introduce a parameterized complexity class S[t] with W[t] ⊆ S[t] ⊆ A[t], defined using a parameter that interpolates between the vertex cover number and the treedepth.
    We expect that many of the studied classes will be useful in the future for pinpointing the complexity of various structural parameterizations of graph problems.
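
    To make the vertex-cover parameterization concrete, here is a minimal Python sketch (not the paper's algorithm, and unrelated to its completeness proofs) that decides a Binary CSP instance by enumerating the |D|^|S| assignments of a given vertex cover S of the constraint graph: every constraint has an endpoint in S, so once S is fixed, each remaining variable can be checked independently. The instance encoding, with constraints given as a dictionary of allowed value pairs, is assumed for illustration.

    # Brute force over a vertex cover of the constraint graph (illustrative sketch).
    from itertools import product

    def binary_csp_with_cover(variables, domain, constraints, cover):
        """constraints: dict (u, v) -> set of allowed value pairs (val_u, val_v)."""
        rest = [x for x in variables if x not in cover]
        cover = list(cover)

        def violated(assignment):
            # A constraint can only be checked once both endpoints are assigned.
            return any(u in assignment and v in assignment
                       and (assignment[u], assignment[v]) not in allowed
                       for (u, v), allowed in constraints.items())

        for values in product(domain, repeat=len(cover)):      # |D|^|S| candidates
            assignment = dict(zip(cover, values))
            if violated(assignment):
                continue
            # Remaining variables form an independent set: each one is constrained
            # only towards the cover, so it can be checked in isolation.
            if all(any(not violated({**assignment, x: d}) for d in domain)
                   for x in rest):
                return True
        return False

    # Toy instance: properly 2-colour a triangle; {a, b} is a vertex cover.
    diff = {(0, 1), (1, 0)}
    cons = {('a', 'b'): diff, ('b', 'c'): diff, ('a', 'c'): diff}
    print(binary_csp_with_cover(['a', 'b', 'c'], [0, 1], cons, {'a', 'b'}))  # False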

    Block Rigidity: Strong Multiplayer Parallel Repetition Implies Super-Linear Lower Bounds for Turing Machines

    We prove that a sufficiently strong parallel repetition theorem for a special case of multiplayer (multiprover) games implies super-linear lower bounds for multi-tape Turing machines with advice. To the best of our knowledge, this is the first connection between parallel repetition and lower bounds for time complexity and the first major potential implication of a parallel repetition theorem with more than two players. Along the way to proving this result, we define and initiate a study of block rigidity, a weakening of Valiant's notion of rigidity. While rigidity was originally defined for matrices, or, equivalently, for (multi-output) linear functions, we extend and study both rigidity and block rigidity for general (multi-output) functions. Using techniques of Paul, Pippenger, Szemerédi and Trotter, we show that a block-rigid function cannot be computed by multi-tape Turing machines that run in linear (or slightly super-linear) time, even in the non-uniform setting, where the machine gets an arbitrary advice tape. We then describe a class of multiplayer games such that a sufficiently strong parallel repetition theorem for that class of games implies an explicit block-rigid function. The games in that class have the following property that may be of independent interest: for every random string for the verifier (which, in particular, determines the vector of queries to the players), there is a unique correct answer for each of the players, and the verifier accepts if and only if all answers are correct. We refer to such games as independent games. The theorem that we need is that parallel repetition reduces the value of games in this class from v to v^{Ω(n)}, where n is the number of repetitions. As another application of block rigidity, we show conditional size-depth tradeoffs for Boolean circuits, where the gates compute arbitrary functions over large sets.
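
    For context, the classical notion that block rigidity weakens can be stated as follows; this is Valiant's standard matrix-rigidity definition, not the paper's block-rigidity definition, which is given in the paper itself.

    % Valiant's matrix rigidity: the minimum number of entries of A that must be
    % changed to bring its rank down to at most r (wt counts nonzero entries).
    R_A(r) \;=\; \min\{\, \mathrm{wt}(A - B) \;:\; \operatorname{rank}(B) \le r \,\}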

    Towards faster numerical solution of Continuous Time Markov Chains stored by symbolic data structures

    This work considers different aspects of model-based performance and dependability analysis. This research area analyses systems (e.g. computer, telecommunication, or production systems) in order to quantify their performance and reliability. Such an analysis can be carried out already in the planning phase, without a physically existing system. All aspects treated in this work are based on finite state spaces (i.e. the models only have finitely many states) and a representation of the state graphs by Multi-Terminal Binary Decision Diagrams (MTBDDs). Currently, there are many tools that transform high-level model specifications (e.g. process algebra or Petri net) into low-level models (e.g. Markov chains). Markov chains can be represented by sparse matrices. For complex models, very large state spaces may occur (this phenomenon is called state space explosion in the literature) and accordingly very large matrices representing the state graphs. The problem of building the model from the specification and storing the state graph can be regarded as solved: there are heuristics for compactly storing the state graph by MTBDD or Kronecker data structures, and there are efficient algorithms for model generation and functional analysis. For the quantitative analysis there are still problems due to the size of the underlying state space. This work provides methods to alleviate these problems in the case of MTBDD-based storage of the state graph. The contribution is threefold: 1. For the generation of smaller state graphs in the model generation phase (which usually are easier to solve), a symbolic elimination algorithm is developed. 2. For the calculation of steady-state probabilities of Markov chains, a multilevel algorithm is developed which allows for faster solutions. 3. For calculating the most probable paths in a state graph, the mean time to the first failure of a system, and related measures, a path-based solver is developed.
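
    For orientation, the quantitative problem behind point 2 is computing the steady-state distribution of a CTMC, i.e. solving pi Q = 0 subject to sum(pi) = 1 for the generator matrix Q. The following Python/numpy sketch solves this directly on a small dense matrix; it only illustrates the underlying linear-algebra problem and is not the thesis's MTBDD-based multilevel solver.

    # Steady-state of a small CTMC: solve pi Q = 0 with sum(pi) = 1 (illustrative).
    import numpy as np

    # Generator of a 3-state CTMC: off-diagonal entries are transition rates,
    # each diagonal entry makes its row sum to zero.
    Q = np.array([[-3.0,  2.0,  1.0],
                  [ 1.0, -1.5,  0.5],
                  [ 0.5,  0.5, -1.0]])

    # pi Q = 0 is equivalent to Q^T pi = 0; one balance equation is redundant,
    # so replace it by the normalisation constraint sum(pi) = 1.
    A = Q.T.copy()
    A[-1, :] = 1.0
    b = np.zeros(Q.shape[0])
    b[-1] = 1.0
    pi = np.linalg.solve(A, b)
    print(pi, pi @ Q)   # pi @ Q is numerically zero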

    Note on the succinctness of deterministic, nondeterministic, probabilistic and quantum finite automata

    We investigate the succinctness of several kinds of unary automata by studying their state complexity in accepting the family {L_m} of cyclic languages, where L_m = {a^(km) | k ∈ ℕ}. In particular, we show that, for any m, the number of states necessary and sufficient for accepting the unary language L_m with isolated cut point on one-way probabilistic finite automata is p1^α1 + p2^α2 + ⋯ + ps^αs, with p1^α1 · p2^α2 ⋯ ps^αs being the prime factorization of m. To prove this result, we give a general state lower bound for accepting unary languages with isolated cut point on the one-way probabilistic model. Moreover, we exhibit one-way quantum finite automata that, for any m, accept L_m with isolated cut point and only two states. These results are settled within a survey on unary automata aiming to compare the descriptional power of deterministic, nondeterministic, probabilistic and quantum paradigms.
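
    The following small Python sketch merely tabulates the gap stated above: the minimal 1DFA for L_m is a counter modulo m with m states, whereas the probabilistic bound is the sum of the maximal prime powers dividing m (the helper function and the sample values of m are chosen here for illustration).

    # DFA vs. probabilistic state bound for the cyclic language L_m (illustrative).
    def prime_power_sum(m):
        """Sum of the maximal prime powers dividing m, via trial division."""
        total, p, rest = 0, 2, m
        while p * p <= rest:
            if rest % p == 0:
                q = 1
                while rest % p == 0:
                    rest //= p
                    q *= p
                total += q
            p += 1
        if rest > 1:
            total += rest
        return total

    for m in (6, 30, 1024, 510510):       # 510510 = 2 * 3 * 5 * 7 * 11 * 13 * 17
        print(m, prime_power_sum(m))      # minimal 1DFA: m states; 1PFA bound: the sum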

    Computer Science Logic 2018: CSL 2018, September 4-8, 2018, Birmingham, United Kingdom

    Module checking of pushdown multi-agent systems

    In this paper, we investigate the module-checking problem of pushdown multi-agent systems (PMS) against ATL and ATL* specifications. We establish that for ATL, module checking of PMS is 2EXPTIME-complete, which is the same complexity as pushdown module checking for CTL. On the other hand, we show that ATL* module checking of PMS turns out to be 4EXPTIME-complete, hence exponentially harder than both CTL* pushdown module checking and ATL* model checking of PMS. Our result for ATL* provides a rare example of a natural decision problem that is elementary but whose complexity is higher than triply exponential time.

    Logic and Automata

    Mathematical logic and automata theory are two scientific disciplines with a fundamentally close relationship. The authors of Logic and Automata take the occasion of the sixtieth birthday of Wolfgang Thomas to present a tour d'horizon of automata theory and logic. The twenty papers in this volume cover many different facets of logic and automata theory, emphasizing the connections to other disciplines such as games, algorithms, and semigroup theory, as well as discussing current challenges in the field

    A temporal logic approach to information-flow control

    Information leaks and other violations of information security pose a severe threat to individuals, companies, and even countries. The mechanisms by which attackers threaten information security are diverse, and showing their absence has thus proved to be a challenging problem. Information-flow control is a principled approach to preventing security incidents in programs and other technical systems. In information-flow control we define information-flow properties, which are sufficient conditions for the system to be secure in a particular attack scenario. By defining the information-flow property only based on what parts of the executions of the system a potential attacker can observe or control, we obtain security guarantees that are independent of implementation details and thus easy to understand. There are several methods available to enforce (or verify) information-flow properties once defined. We focus on static enforcement methods, which automatically determine whether a given system satisfies a given information-flow property for all possible inputs to the system. Most enforcement approaches that are available today have one problem in common: they each only work for one particular programming language or information-flow property. In this thesis, we propose a temporal logic approach to information-flow control to provide a simple formal basis for the specification and enforcement of information-flow properties. We show that the approach can be used to enforce a wide range of information-flow properties with a single algorithm. The main challenge is that the standard temporal logics are unable to express information-flow properties: they lack the ability to relate multiple executions of a system, which is essential for information-flow properties. We thus extend the temporal logics LTL and CTL* by the ability to quantify over multiple executions and to relate them using Boolean and temporal operators. The resulting temporal logics HyperLTL and HyperCTL* can express many information-flow properties of interest. The extension of the temporal logics compels us to revisit the algorithmic problem of checking whether a given system (model) satisfies a given specification in HyperLTL or HyperCTL*, also called the model checking problem. On the technical side, the main contribution is a model checking algorithm for HyperLTL and HyperCTL* and a detailed complexity analysis of the model checking problem: we give nonelementary lower and upper bounds for its computational complexity, both in the size of the system and the size of the specification. The complexity analysis also reveals a class of specifications, which includes many of the commonly considered information-flow properties and for which the algorithm is efficient (NLOGSPACE in the size of the system). For this class of efficiently checkable properties, we provide an approach to reuse existing technology in hardware model checking for information-flow control. We demonstrate in a case study that the temporal logic approach to information-flow control is flexible and effective. We further provide two case studies that demonstrate the use of HyperLTL and HyperCTL* for proving properties of error-resistant codes and distributed protocols that have so far only been considered in manual proofs.
    Information security poses an ever greater threat to individuals, companies, and even entire countries. A fundamental approach to preventing security problems in technical systems, such as programs, is information-flow control. In information-flow control we first define so-called information-flow properties, which are sufficient conditions for the security of the given system in a security scenario. By defining information-flow properties only in terms of an attacker's possible observations of the system, we obtain easy-to-understand security guarantees that are independent of implementation details. Once properties are defined, it must be ensured that a given system satisfies its information-flow property, for which various methods already exist. In this thesis we focus on static methods, which, for a given system and a given information-flow property, automatically decide whether the system satisfies the property for all possible inputs; we also call this the model checking problem. Most available methods for enforcing information-flow properties, however, share one weakness: they only work for a single programming language or a single information-flow property. In this thesis we pursue an approach based on temporal logics in order to obtain a simple theoretical basis for the specification and enforcement of information-flow properties. We analyze the relationship between the expressiveness of specification languages and the algorithmic problem of checking specifications for a system. Using a case study in hardware security, we show that the approach is suitable for proving a wide range of known and new information-flow properties with a single model checking algorithm. The core problem is that information-flow properties cannot be expressed in the usual temporal logics: they lack the ability to compare multiple executions of a system with each other, which is the common denominator of information-flow properties. We therefore extend temporal logics with the ability to quantify over multiple executions and to compare them with each other. The main technical contribution is a model checking algorithm and a detailed analysis of the complexity of the model checking problem. We present a model checking algorithm and prove that it is asymptotically optimal. The complexity analysis also reveals a class of properties that includes many of the common information-flow properties and for which the given algorithm is efficient (NLOGSPACE in the size of the system). For this class of efficiently checkable properties, we discuss an approach for reusing existing hardware model checking technology for information-flow control. In a case study we show that the approach can be applied flexibly and effectively. Furthermore, we discuss two further case studies which demonstrate that the proposed extension of temporal logics can also be used to prove properties of error-resistant codes and distributed protocols that could previously only be considered abstractly.
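
    As an illustration of the kind of specification HyperLTL can express, the following formula states a standard noninterference-style property (an example from the HyperLTL literature rather than the thesis's exact formulation): any two executions that agree at every step on a low-observable input i must also agree at every step on a low-observable output o.

    % A noninterference-style property in HyperLTL; i and o are placeholder
    % atomic propositions for a low-observable input and output.
    \forall \pi.\, \forall \pi'.\;
      \mathbf{G}\,(i_{\pi} \leftrightarrow i_{\pi'}) \;\rightarrow\;
      \mathbf{G}\,(o_{\pi} \leftrightarrow o_{\pi'})

    Universally quantified formulas of this shape are typical of the efficiently checkable class mentioned above.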