
    Parallelism with limited nondeterminism

    Computational complexity theory studies which computational problems can be solved with limited access to resources. The past fifty years have seen a focus on the relationship between intractable problems and efficient algorithms. However, the relationship between inherently sequential problems and highly parallel algorithms has not been as well studied. Are there efficient but inherently sequential problems that admit some relaxed form of highly parallel algorithm? In this dissertation, we develop the theory of structural complexity around this relationship for three common types of computational problems. Specifically, we show tradeoffs between time, nondeterminism, and parallelizability. By clearly defining the notions and complexity classes that capture our intuition for parallelizable and sequential problems, we create a comprehensive framework for rigorously proving parallelizability and non-parallelizability of computational problems. This framework provides the means to prove whether otherwise tractable problems can be effectively parallelized, a need highlighted by the current growth of multiprocessor systems. The approaches adopted in this dissertation, which treat approximation, limited nondeterminism, and parameterization as alternative ways to attack sequential problems, can be applied practically throughout computer science.
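
    As background (this is the standard framing in structural complexity, not text from the abstract itself), "highly parallel" is usually formalized by the class NC and "inherently sequential" by P-completeness, roughly as follows:

```latex
% NC: languages decidable by uniform circuit families of polylogarithmic depth
% and polynomial size, i.e. the problems with highly parallel algorithms.
\mathrm{NC} \;=\; \bigcup_{k \ge 1} \mathrm{NC}^k, \qquad
\mathrm{NC}^k \;=\; \{\, L \;:\; L \text{ has uniform circuits of depth } O(\log^k n)
\text{ and size } n^{O(1)} \,\}.

% A problem is regarded as inherently sequential when it is P-complete under
% NC (or logspace) reductions, since a highly parallel algorithm for it would
% collapse P to NC:
L \text{ P-complete and } L \in \mathrm{NC} \;\Longrightarrow\; \mathrm{P} = \mathrm{NC}.
```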

    Nonapproximability Results for Partially Observable Markov Decision Processes

    We show that for several variations of partially observable Markov decision processes, polynomial-time algorithms for finding control policies are unlikely to have, or provably do not have, guarantees of finding policies within a constant factor or a constant summand of optimal. Here "unlikely" means "unless some complexity classes collapse," where the collapses considered are P = NP, P = PSPACE, or P = EXP. Until or unless these collapses are shown to hold, any control-policy designer must choose between such performance guarantees and efficient computation.
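
    Stated a little more formally (a paraphrase of the claim above, not a quotation from the paper), the constant-factor version of the result has the following shape, where V(π) denotes the expected value of policy π on a POMDP M and π*_M an optimal policy:

```latex
% If some polynomial-time algorithm A had a constant-factor guarantee, a
% complexity-class collapse would follow; which collapse depends on the variant.
\exists\, c \ge 1:\;
V\!\big(A(M)\big) \;\ge\; \tfrac{1}{c}\, V\!\big(\pi^*_M\big) \;\;\text{for all POMDPs } M
\;\Longrightarrow\;
\mathrm{P} = \mathrm{NP} \;\text{ (or } \mathrm{P} = \mathrm{PSPACE},\ \mathrm{P} = \mathrm{EXP}\text{)}.

% The constant-summand version is analogous, with V(A(M)) \ge V(\pi^*_M) - c.
```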

    Parameterized complexity and polynomial-time approximation schemes

    According to the theory of NP-completeness, many problems that have important real-world applications are NP-hard. This excludes the possibility of solving them in polynomial time unless P = NP. A number of approaches have been proposed for dealing with NP-hard problems, among them approximation algorithms and parameterized algorithms. The study of approximation algorithms tries to find good enough solutions instead of optimal solutions in polynomial time, while parameterized algorithms try to give exact solutions when a natural parameter is small. In this thesis, we study the structural properties of parameterized computation and approximation algorithms for NP optimization problems. In particular, we investigate the relationship between parameterized complexity and polynomial-time approximation schemes (PTAS) for NP optimization problems. We give nice characterizations for two important subclasses of PTAS, Fully Polynomial Time Approximation Schemes (FPTAS) and Efficient Polynomial Time Approximation Schemes (EPTAS), using the theory of parameterized complexity. Our characterization of the class FPTAS has advantages over former characterizations, and our characterization of EPTAS is the first systematic investigation of this new but important approximation class. We develop new techniques to derive strong computational lower bounds for certain parameterized problems based on the theory of parameterized complexity. For example, we prove that unless an unlikely collapse occurs in parameterized complexity theory, the clique problem cannot be solved in time O(f(k) · n^o(k)) for any function f. This lower bound matches the upper bound of the trivial algorithm that simply enumerates and checks all subsets of k vertices in the given graph of n vertices. We then extend our techniques to derive computational lower bounds for PTAS and EPTAS algorithms for NP optimization problems. We prove that certain NP optimization problems with known PTAS algorithms have no PTAS algorithms with running time O(f(1/ε) · n^o(1/ε)) for any function f. Therefore, for these NP optimization problems, although theoretically they can be approximated in polynomial time to an arbitrarily small error bound ε, they have no practically effective approximation algorithms for small error bound ε. To our knowledge, this is the first time such lower bound results have been derived for PTAS algorithms. This seems to open a new direction for the study of computational lower bounds on the approximability of NP optimization problems.
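
    The "trivial algorithm" referred to above is plain exhaustive enumeration of k-subsets. A minimal sketch (illustrative only; the function name and graph representation are ours, not the thesis's):

```python
from itertools import combinations

def has_k_clique(n, edges, k):
    """Brute-force test for a k-clique in a graph on vertices 0..n-1.

    Enumerates all k-subsets of vertices and checks every pair inside each
    subset, so it runs in roughly O(n^k * k^2) time.  The lower bound quoted
    above says that, under standard parameterized-complexity assumptions,
    the exponent cannot be improved to o(k), i.e. to f(k) * n^{o(k)} for any f.
    """
    adjacent = set()
    for u, v in edges:
        adjacent.add((u, v))
        adjacent.add((v, u))
    for subset in combinations(range(n), k):
        if all((u, v) in adjacent for u, v in combinations(subset, 2)):
            return True
    return False

# A 4-cycle contains a triangle only after a chord is added.
print(has_k_clique(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 3))          # False
print(has_k_clique(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)], 3))  # True
```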

    Computer Science Logic 2018: CSL 2018, September 4-8, 2018, Birmingham, United Kingdom


    Strengths and Limitations of Linear Programming Relaxations

    Many of the currently best-known approximation algorithms for NP-hard optimization problems are based on Linear Programming (LP) and Semi-definite Programming (SDP) relaxations. Given its power, this class of algorithms seems to contain the most favourable candidates for outperforming the current state-of-the-art approximation guarantees for NP-hard problems, for which there still exists a gap between the inapproximability results and the approximation guarantees that we know how to achieve in polynomial time. In this thesis, we address both the power and the limitations of these relaxations, as well as the connection between the shortcomings of these relaxations and the inapproximability of the underlying problem. In the first part, we study the limitations of LP relaxations of well-known graph problems such as the Vertex Cover problem and the Independent Set problem. We prove that any small LP relaxation for the aforementioned problems cannot have an integrality gap strictly better than 2 and ω(1), respectively. Furthermore, our lower bound for the Independent Set problem also holds for any SDP relaxation. Prior to our work, it was only known that such LP relaxations cannot have an integrality gap better than 1.5 for the Vertex Cover problem, and better than 2 for the Independent Set problem. In the second part, we study the so-called knapsack cover inequalities that are used in the current best relaxations for numerous combinatorial optimization problems of covering type. In spite of their widespread use, these inequalities yield LP relaxations of exponential size, over which it is not known how to optimize exactly in polynomial time. We address this issue and obtain LP relaxations of quasi-polynomial size that are at least as strong as those given by the knapsack cover inequalities. In the last part, we show a close connection between structural hardness for k-partite graphs and tight inapproximability results for scheduling problems with precedence constraints. This connection is inspired by a family of integrality gap instances of a certain LP relaxation. Assuming the hardness of an optimization problem on k-partite graphs, we obtain a hardness of 2 − ε for the problem of minimizing the makespan for scheduling with preemption on identical parallel machines, and a super-constant inapproximability for the problem of scheduling on related parallel machines. Prior to this result, it was only known that the first problem does not admit a PTAS, and the second problem is NP-hard to approximate within a factor strictly better than 2, assuming the Unique Games Conjecture.
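
    For readers unfamiliar with integrality gaps, the standard LP relaxation of Vertex Cover and the textbook instance showing a gap approaching 2 (standard background, not a construction from the thesis) look like this:

```latex
% LP relaxation of Vertex Cover on a graph G = (V, E):
\min \sum_{v \in V} x_v
\quad \text{s.t.} \quad x_u + x_v \ge 1 \;\;\forall \{u,v\} \in E,
\qquad 0 \le x_v \le 1.

% On the complete graph K_n, setting every x_v = 1/2 is feasible with value n/2,
% while any integral vertex cover must take at least n - 1 vertices, so
\text{integrality gap} \;\ge\; \frac{n-1}{n/2} \;=\; 2 - \frac{2}{n} \;\xrightarrow{\;n \to \infty\;}\; 2.
```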

    Towards Revealing the Mystery behind Chain of Thought: a Theoretical Perspective

    Recent studies have discovered that Chain-of-Thought prompting (CoT) can dramatically improve the performance of Large Language Models (LLMs), particularly when dealing with complex tasks involving mathematics or reasoning. Despite the enormous empirical success, the underlying mechanisms behind CoT and how it unlocks the potential of LLMs remain elusive. In this paper, we take a first step towards theoretically answering these questions. Specifically, we examine the capacity of LLMs with CoT in solving fundamental mathematical and decision-making problems. We start by giving an impossibility result showing that any bounded-depth Transformer cannot directly output correct answers for basic arithmetic/equation tasks unless the model size grows super-polynomially with respect to the input length. In contrast, we then prove by construction that autoregressive Transformers of a constant size suffice to solve both tasks by generating CoT derivations using a commonly used math language format. Moreover, we show LLMs with CoT are capable of solving a general class of decision-making problems known as Dynamic Programming, thus justifying CoT's power in tackling complex real-world tasks. Finally, extensive experiments on four tasks show that, while Transformers always fail to predict the answers directly, they can consistently learn to generate correct solutions step by step given sufficient CoT demonstrations.
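
    To make the Dynamic Programming claim concrete, the toy sketch below (purely illustrative; it is not the paper's construction, prompts, or experiments) contrasts answering a simple DP task directly with emitting a CoT-style derivation that writes out every intermediate DP state:

```python
def max_subarray_direct(xs):
    """Direct answer: only the final number, no intermediate reasoning."""
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best


def max_subarray_cot(xs):
    """CoT-style answer: the same DP, but each intermediate state is written
    out, mirroring how a chain of thought exposes the recurrence step by step."""
    steps = []
    best = cur = xs[0]
    steps.append(f"dp[0] = {cur}")
    for i, x in enumerate(xs[1:], start=1):
        cur = max(x, cur + x)
        best = max(best, cur)
        steps.append(f"dp[{i}] = max(x[{i}], dp[{i-1}] + x[{i}]) = {cur}")
    steps.append(f"answer = max_i dp[i] = {best}")
    return steps


print(max_subarray_direct([2, -5, 3, 4, -1]))            # 7
print("\n".join(max_subarray_cot([2, -5, 3, 4, -1])))    # dp[0] = 2 ... answer = 7
```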

    Noise in Quantum and Classical Computation & Non-locality

    Quantum computers seem to have capabilities which go beyond those of classical computers. A particular example which is important for cryptography is that quantum computers are able to factor numbers much faster than what seems possible on classical machines. In order to actually build quantum computers it is necessary to build sufficiently accurate hardware, which is a big challenge. In Part 1 of this thesis we prove lower bounds on the accuracy of the hardware needed to do quantum computation. We also present a similar result for classical computers. One resource that quantum computers have but classical computers do not have is entanglement. In Part 2 of this thesis we study certain general aspects of entanglement in terms of quantum XOR games and non-locality.
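
    A standard concrete example of the kind of quantum XOR game studied in Part 2 is the CHSH game (textbook background, not a result specific to this thesis): two non-communicating players receive uniformly random bits x and y and answer with bits a and b, winning when a ⊕ b = x ∧ y.

```latex
% With classical strategies (shared randomness only), the players win with
% probability at most 3/4.
\omega_{\mathrm{classical}}(\mathrm{CHSH}) \;=\; \tfrac{3}{4}.

% Sharing an entangled qubit pair lets them do strictly better; this separation
% is the non-locality phenomenon the thesis refers to.
\omega_{\mathrm{quantum}}(\mathrm{CHSH}) \;=\; \cos^2\!\left(\tfrac{\pi}{8}\right) \;\approx\; 0.854.
```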