    Commutative Algorithms Approximate the LLL-distribution

    Following the groundbreaking Moser-Tardos algorithm for the Lovász Local Lemma (LLL), a series of works have exploited a key ingredient of the original analysis, the witness tree lemma, in order to derive deterministic, parallel and distributed algorithms for the LLL, to estimate the entropy of the output distribution, to partially avoid bad events, to deal with super-polynomially many bad events, and even to devise new algorithmic frameworks. Meanwhile, a parallel line of work has established tools for analyzing stochastic local search algorithms motivated by the LLL that do not fall within the Moser-Tardos framework. Unfortunately, the aforementioned results do not transfer to these more general settings, mainly because the witness tree lemma provably no longer holds. Here we prove that for commutative algorithms, a class recently introduced by Kolmogorov which captures the vast majority of LLL applications, the witness tree lemma does hold. Armed with this fact, we extend the main result of Haeupler, Saha, and Srinivasan to commutative algorithms, establishing that the output of such algorithms well-approximates the LLL-distribution, i.e., the distribution obtained by conditioning on all bad events being avoided, and give several new applications. For example, we show that the recent algorithm of Molloy for list coloring sparse, triangle-free graphs can output exponentially many list colorings of the input graph.
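
    The resampling scheme at the heart of the Moser-Tardos algorithm is simple enough to sketch. Below is a minimal toy version for k-SAT, where each violated clause is a "bad event" and resampling re-randomizes exactly the variables of one violated clause; the clause encoding and the tiny example instance are choices made here for illustration, not part of the paper above.

        import random

        def moser_tardos_ksat(n_vars, clauses, rng=random.Random(0)):
            # Minimal Moser-Tardos sketch for k-SAT (illustration only).
            # clauses: lists of signed literals; literal v > 0 means
            # variable v must be true, v < 0 means it must be false.
            assignment = [rng.random() < 0.5 for _ in range(n_vars + 1)]

            def violated(clause):
                # A clause is violated iff every literal evaluates to False.
                return all(assignment[abs(l)] != (l > 0) for l in clause)

            while True:
                bad = [c for c in clauses if violated(c)]
                if not bad:
                    return assignment[1:]
                # Resample exactly the variables of one violated clause.
                for lit in bad[0]:
                    assignment[abs(lit)] = rng.random() < 0.5

        # (x1 or x2) and (not x1 or x2): any assignment with x2 = True works.
        print(moser_tardos_ksat(2, [[1, 2], [-1, 2]]))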

    LIPIcs

    The Lovász Local Lemma (LLL) is a powerful tool in probabilistic combinatorics which can be used to establish the existence of objects that satisfy certain properties. The breakthrough paper of Moser and Tardos and follow-up works revealed that the LLL has intimate connections with a class of stochastic local search algorithms for finding such desirable objects. In particular, it can be seen as a sufficient condition for this type of algorithm to converge fast. Beyond conditions for the existence of and fast convergence to desirable objects, one may naturally ask further questions about these algorithms: for instance, "are they parallelizable?", "how many solutions can they output?", "what is the expected 'weight' of a solution?", and so on. These questions and more have been answered for a class of LLL-inspired algorithms called commutative. In this paper we introduce a new, very natural and more general notion of commutativity (essentially matrix commutativity) which allows us to prove a number of new refined properties of LLL-inspired local search algorithms with significantly simpler proofs.
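
    Since the new notion is described as essentially matrix commutativity, a toy numerical check of that algebraic condition may help fix ideas. The matrices below are invented for illustration; they are not the paper's resampling operators or its actual definition.

        import numpy as np

        def commute(A, B, tol=1e-9):
            # Matrices commute when AB equals BA (up to numerical tolerance).
            return np.allclose(A @ B, B @ A, atol=tol)

        # Two stochastic matrices on a 2-state space (toy example).
        A = np.array([[0.5, 0.5],
                      [0.5, 0.5]])
        B = np.array([[0.25, 0.75],
                      [0.75, 0.25]])
        print(commute(A, B))  # True: here A @ B == B @ A == A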

    Quantum Computation, Markov Chains and Combinatorial Optimisation

    This thesis addresses two questions related to its title, Quantum Computation, Markov Chains and Combinatorial Optimisation. The first question involves an algorithmic primitive of quantum computation, quantum walks on graphs, and its relation to Markov chains. Quantum walks have been shown in certain cases to mix faster than their classical counterparts. Lifted Markov chains, consisting of a Markov chain on an extended state space which is projected back down to the original state space, also show considerable speedups in mixing time. We design a lifted Markov chain that in some sense simulates any quantum walk. Concretely, we construct a lifted Markov chain on a connected graph G with n vertices that mixes exactly to the average mixing distribution of a quantum walk on G. Moreover, the mixing time of this chain is the diameter of G. We then consider practical consequences of this result. In the second part of this thesis we address a classic unsolved problem in combinatorial optimisation, graph isomorphism. A theorem of Kozen states that two graphs on n vertices are isomorphic if and only if there is a clique of size n in the weak modular product of the two graphs. Furthermore, a straightforward corollary of this theorem and Lovász’s sandwich theorem is that if the weak modular product of two graphs is perfect, then checking whether the graphs are isomorphic is polynomial in n. We enumerate the necessary and sufficient conditions for the weak modular product of two simple graphs to be perfect. Interesting cases include complete multipartite graphs and disjoint unions of cliques. We find that all perfect weak modular products have factors that fall into classes of graphs for which testing isomorphism is already known to be polynomial in the number of vertices. Open questions and further research directions are discussed.
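
    Kozen's characterization can be stated operationally: build the weak modular product and search for an n-clique. The sketch below does exactly that for tiny graphs; the (vertex list, edge set) representation and the brute-force clique search are choices made here for illustration and are only feasible for very small inputs.

        from itertools import combinations

        def weak_modular_product(G, H):
            # Vertices are pairs (u, v); distinct pairs (u1, v1), (u2, v2)
            # with u1 != u2 and v1 != v2 are adjacent iff u1u2 is an edge
            # of G exactly when v1v2 is an edge of H.
            (VG, EG), (VH, EH) = G, H
            V = [(u, v) for u in VG for v in VH]
            E = set()
            for (u1, v1), (u2, v2) in combinations(V, 2):
                if u1 != u2 and v1 != v2:
                    if (frozenset((u1, u2)) in EG) == (frozenset((v1, v2)) in EH):
                        E.add(frozenset(((u1, v1), (u2, v2))))
            return V, E

        def has_n_clique(V, E, n):
            # Brute force: check every n-subset for pairwise adjacency.
            return any(
                all(frozenset((a, b)) in E for a, b in combinations(S, 2))
                for S in combinations(V, n)
            )

        # Two copies of the path 1-2-3 are isomorphic, so by Kozen's
        # theorem their weak modular product contains a clique of size 3.
        P3 = ([1, 2, 3], {frozenset((1, 2)), frozenset((2, 3))})
        V, E = weak_modular_product(P3, P3)
        print(has_n_clique(V, E, 3))  # True

    The n-clique found this way reads off an isomorphism directly: each product vertex (u, v) in the clique maps u to v.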

    Acta Cybernetica: Volume 20, Number 4.

    Interior point methods and simulated annealing for nonsymmetric conic optimization

    This thesis explores four methods for convex optimization. The first two are an interior point method and a simulated annealing algorithm that share a theoretical foundation. This connection is due to the interior point method’s use of the so-called entropic barrier, whose derivatives can be approximated through sampling. Here, the sampling is carried out with a technique known as hit-and-run. By carefully analyzing the properties of hit-and-run sampling, it is shown that both the interior point method and the simulated annealing algorithm can solve a convex optimization problem in the membership oracle setting. The number of oracle calls made by these methods is bounded by a polynomial in the input size. The third method is an analytic center cutting plane method that shows promising performance for copositive optimization. It outperforms the first two methods by a significant margin on the problem of separating a matrix from the completely positive cone. The final method is based on Mosek’s algorithm for nonsymmetric conic optimization. Using its scaling matrix, search direction, and neighborhood, we define a method that converges to a near-optimal solution in polynomial time.
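
    Hit-and-run needs nothing beyond a membership oracle, which is what makes it fit the oracle setting above. The sketch below is a minimal generic version (random direction, then a doubling-and-bisection chord search), not the variant analyzed in the thesis; the step count and bisection depth are arbitrary choices.

        import numpy as np

        def hit_and_run(member, x0, n_steps, rng=np.random.default_rng(0)):
            # member(x) -> bool is the membership oracle of a bounded
            # convex body; x0 must be an interior point.
            x = np.asarray(x0, dtype=float)

            def chord_end(x, d):
                # Largest step t keeping x + t*d inside the body, found
                # by doubling followed by bisection on the oracle.
                t = 1.0
                while member(x + t * d):
                    t *= 2.0
                lo, hi = 0.0, t
                for _ in range(50):
                    mid = 0.5 * (lo + hi)
                    lo, hi = (mid, hi) if member(x + mid * d) else (lo, mid)
                return lo

            for _ in range(n_steps):
                d = rng.standard_normal(x.size)
                d /= np.linalg.norm(d)
                # Move to a uniform point on the chord through x along d.
                x = x + rng.uniform(-chord_end(x, -d), chord_end(x, d)) * d
            return x

        # Sample a point from the unit ball using only membership queries.
        print(hit_and_run(lambda x: float(np.dot(x, x)) <= 1.0, np.zeros(3), 200))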

    Learning with Structured Sparsity: From Discrete to Convex and Back.

    In modern data-analysis applications, the abundance of data makes extracting meaningful information from it challenging in terms of computation, storage, and interpretability. In this setting, exploiting sparsity in data has been essential to the development of scalable methods for problems in machine learning, statistics and signal processing. However, in various applications, the input variables exhibit structure beyond simple sparsity. This motivated the introduction of structured sparsity models, which capture such sophisticated structures, leading to significant performance gains and better interpretability. Structured sparse approaches have been successfully applied in a variety of domains including computer vision, text processing, medical imaging, and bioinformatics. The goal of this thesis is to improve on these methods and expand their success to a wider range of applications. We thus develop novel methods to incorporate general structure a priori in learning problems, which balance computational and statistical efficiency trade-offs. To achieve this, our results bring together tools from the rich areas of discrete and convex optimization. Applying structured sparsity approaches in general is challenging because structures encountered in practice are naturally combinatorial. An effective approach to circumvent this computational challenge is to employ continuous convex relaxations. We thus start by introducing a new class of structured sparsity models, able to capture a large range of structures, which admit tight convex relaxations amenable to efficient optimization. We then present an in-depth study of the geometric and statistical properties of convex relaxations of general combinatorial structures. In particular, we characterize which structure is lost by imposing convexity and which is preserved. We then focus on the optimization of the convex composite problems that result from the convex relaxations of structured sparsity models. We develop efficient algorithmic tools to solve these problems in a non-Euclidean setting, leading to faster convergence in some cases. Finally, to handle structures that do not admit meaningful convex relaxations, we propose to use, as a heuristic, a non-convex proximal gradient method that is efficient for several classes of structured sparsity models. We further extend this method to address a probabilistic structured sparsity model, which we introduce to model approximately sparse signals.
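
    The convex composite problems mentioned here are the natural territory of proximal gradient methods. As a concrete baseline, the sketch below runs the textbook proximal gradient iteration (ISTA) on the lasso, whose prox step is soft-thresholding; a structured sparsity model would swap in the prox of its own regularizer. This is a standard illustration, not the thesis's non-Euclidean or non-convex variants.

        import numpy as np

        def ista(A, b, lam, n_iters=500):
            # Proximal gradient for: minimize 0.5*||Ax - b||^2 + lam*||x||_1.
            L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iters):
                g = A.T @ (A @ x - b)          # gradient of the smooth part
                z = x - g / L                  # gradient step
                # Prox step: soft-thresholding, the prox of (lam/L)*||.||_1.
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
            return x

        # Approximately recover a sparse vector from a few linear measurements.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((30, 100))
        x_true = np.zeros(100)
        x_true[:3] = [2.0, -1.5, 1.0]
        print(np.round(ista(A, A @ x_true, lam=0.5), 2)[:5])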
