
    Runtime Analysis of the (1+(λ,λ)) Genetic Algorithm on Random Satisfiable 3-CNF Formulas

    The (1+(λ,λ)) genetic algorithm, first proposed at GECCO 2013, showed surprisingly good performance on some optimization problems. The theoretical analysis so far was restricted to the OneMax test function, where this GA profited from the perfect fitness-distance correlation. In this work, we conduct a rigorous runtime analysis of this GA on random 3-SAT instances in the planted solution model with at least logarithmic average degree, which are known to have a weaker fitness-distance correlation. We prove that this GA with a fixed, not too large population size again obtains runtimes better than Θ(n log n), which is a lower bound for most evolutionary algorithms on pseudo-Boolean problems with a unique optimum. However, the self-adjusting version of the GA risks reaching population sizes at which the intermediate selection of the GA, due to the weaker fitness-distance correlation, is not able to distinguish a profitable offspring from the others. We show that this problem can be overcome by equipping the self-adjusting GA with an upper limit for the population size. Apart from sparse instances, this limit can be chosen so that the asymptotic performance does not worsen compared to the idealistic OneMax case. Overall, this work shows that the (1+(λ,λ)) GA can provably perform well on combinatorial search and optimization problems even in the presence of a weaker fitness-distance correlation. Comment: An extended abstract of this report will appear in the proceedings of the 2017 Genetic and Evolutionary Computation Conference (GECCO 2017).
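
    For readers who have not seen the algorithm before, the following is a minimal sketch of the (1+(λ,λ)) GA with a fixed population size λ, maximizing a pseudo-Boolean function. The parameter coupling (mutation rate λ/n, crossover bias 1/λ) follows the standard formulation; all names and the stopping criterion are illustrative, not taken from the paper.

        import random

        def one_plus_lambda_lambda_ga(f, n, lam, target, budget=10**6):
            # Sketch of the (1+(lambda,lambda)) GA maximizing f over bit strings.
            # Standard parameter coupling: mutation rate lam/n, crossover bias 1/lam.
            x = [random.randint(0, 1) for _ in range(n)]
            fx = f(x)
            evals = 1
            while evals < budget and fx < target:
                # Mutation phase: sample ell ~ Bin(n, lam/n) once, then create
                # lam offspring by flipping exactly ell random bits each.
                ell = sum(random.random() < lam / n for _ in range(n))
                xp, fxp = None, float("-inf")
                for _ in range(lam):
                    y = x[:]
                    for i in random.sample(range(n), ell):
                        y[i] ^= 1
                    fy = f(y); evals += 1
                    if fy > fxp:
                        xp, fxp = y, fy
                # Crossover phase: take each bit from the best mutant xp with
                # probability 1/lam, otherwise from the parent x.
                yb, fyb = xp, fxp
                for _ in range(lam):
                    y = [b if random.random() < 1 / lam else a for a, b in zip(x, xp)]
                    fy = f(y); evals += 1
                    if fy > fyb:
                        yb, fyb = y, fy
                # Elitist selection: the parent is replaced on ties or improvements.
                if fyb >= fx:
                    x, fx = yb, fyb
            return x, fx, evals

    On OneMax (f = sum, target = n) this is the setting in which the algorithm was first analysed and shown to beat the Θ(n log n) barrier for suitable λ.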

    The 1/5-th Rule with Rollbacks: On Self-Adjustment of the Population Size in the (1+(λ,λ)) GA

    Self-adjustment of parameters can significantly improve the performance of evolutionary algorithms. A notable example is the (1+(λ,λ)) genetic algorithm, where the adaptation of the population size helps to achieve linear runtime on the OneMax problem. However, on problems which violate the assumptions behind the self-adjustment procedure, its usage can lead to performance degradation compared to static parameter choices. In particular, the one-fifth rule, which guides the adaptation in the example above, can raise the population size too fast on problems which are too far away from the perfect fitness-distance correlation. We propose a modification of the one-fifth rule that has a smaller negative impact on performance in the scenarios where the original rule degrades it. Our modification, while still performing well on OneMax both theoretically and in practice, also shows better results on linear functions with random weights and on random satisfiable MAX-SAT instances. Comment: 17 pages, 2 figures, 1 table. An extended two-page abstract of this work will appear in the proceedings of the Genetic and Evolutionary Computation Conference, GECCO'1
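
    The abstract does not spell out the rollback modification, but the baseline it modifies, the one-fifth success rule for λ, is easy to state. The sketch below uses an illustrative update strength F and the population-size cap discussed in the previous abstract; both concrete values are assumptions.

        F = 1.5  # illustrative update strength; the classic rule keeps the success
                 # rate near 1/5 because one success must offset four failures

        def update_lambda(lam, success, n, cap=None):
            # One-fifth success rule for the population size of the
            # (1+(lambda,lambda)) GA: shrink lambda after an improving
            # iteration, grow it slowly otherwise.
            if success:
                lam = max(lam / F, 1.0)
            else:
                lam = lam * F ** 0.25
            if cap is not None:
                lam = min(lam, cap)  # upper limit guarding against runaway growth
            return min(lam, n)       # lambda never exceeds the problem size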

    The (1+(λ,λ)) Genetic Algorithm for Permutations

    The (1+(λ,λ)) genetic algorithm is a striking example of an evolutionary algorithm developed based on insights from theoretical findings. This algorithm uses crossover, and it was shown to asymptotically outperform all mutation-based evolutionary algorithms even on simple problems like OneMax. Subsequently it was studied on a number of other problems, but all of these were pseudo-Boolean. We aim to improve this situation by proposing an adaptation of the (1+(λ,λ)) genetic algorithm to permutation-based problems. Such an adaptation is required, because permutations differ noticeably from bit strings in some key aspects, such as the number of possible mutations and their mutual dependence. We also present the first runtime analysis of this algorithm on a permutation-based problem called Ham whose properties resemble those of OneMax. On this problem, where simple mutation-based algorithms have a running time of Θ(n² log n) for problem size n, the (1+(λ,λ)) genetic algorithm finds the optimum in O(n²) fitness queries. We augment this analysis with experiments, which show that this algorithm is also fast in practice. Comment: This contribution is a slightly extended version of the paper accepted to the GECCO 2020 workshop on permutation-based problems.
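
    A sketch of the ingredients named in the abstract, under the (assumed) reading that Ham counts the positions in which a permutation agrees with a fixed target, making it a permutation analogue of OneMax. The swap mutation shown is one common permutation operator and illustrates why permutation moves are mutually dependent, unlike independent bit flips; names are illustrative.

        import random

        def ham_fitness(perm, target):
            # Number of positions where perm agrees with the target permutation,
            # i.e. n minus their Hamming distance.
            return sum(p == t for p, t in zip(perm, target))

        def swap_mutation(perm, k=1):
            # Apply k random transpositions.  Unlike bit flips, two swaps can
            # touch the same positions, so their effects are not independent.
            perm = perm[:]
            for _ in range(k):
                i, j = random.sample(range(len(perm)), 2)
                perm[i], perm[j] = perm[j], perm[i]
            return perm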

    Engineering SAT Applications

    The satisfiability problem of propositional logic (SAT) is a fundamental problem in theoretical computer science, since all NP-complete problems can be reduced to SAT. Thanks to the development of highly efficient SAT solvers, a multitude of practical applications has also emerged over the past 15 years; among the best known is the verification of hardware and software components. When solving unsatisfiable SAT problems, developers and users are often interested in an explanation of the unsatisfiability. One way to obtain such an explanation is to compute minimal unsatisfiable subformulas. Three fundamentally different strategies for computing these subformulas are known: inserting clauses into a satisfiable subproblem, removing clauses from an unsatisfiable subproblem, and a combination of the two former methods. In this thesis we first develop an interactive variant of the deletion-based strategy. It allows users to explore interesting regions of the search space manually and to obtain meaningful explanations of unsatisfiability. The theoretical groundwork developed for the interactive computation of minimal unsatisfiable subformulas, which spares the user of the prototype unnecessary computation steps, is subsequently applied to the automatic enumeration of several minimal unsatisfiable subformulas, further improving the currently fastest algorithms. The idea is to group several clauses into a block; we show how these blocks can positively influence the computation of minimal unsatisfiable subformulas. By implementing a prototype based on the current state-of-the-art methods, we were able to demonstrate the effectiveness of our ideas.

    Having improved fundamental algorithms for unsatisfiable SAT problems in the first part of the thesis, in the second part we turn to new applications of SAT. The first is a problem from bioinformatics: we solve the so-called compatibility problem for evolutionary trees via an encoding as a satisfiability problem and then show how this new encoding lets us solve a closely related optimization problem. We compare our new approach with the hitherto most effective approaches to solving the optimization problem and show that we achieve new best computation times for the vast majority of the tested instances. The second new application of SAT is a problem from graph theory, or more precisely graph drawing. A simple, intuitive, yet effective formulation allowed us to obtain new results for the book embedding problem. First, we establish a non-trivial lower bound of four on the number of pages required for 1-planar graphs. Second, we show that it is not possible for every planar graph to compute a three-page embedding via a so-called Schnyder partition into three distinct trees.
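
    The deletion-based strategy that the interactive variant builds on fits in a few lines. The sketch below assumes an is_sat oracle (any off-the-shelf SAT solver) and omits the interactive and block-based refinements developed in the thesis.

        def deletion_based_mus(clauses, is_sat):
            # Shrink an unsatisfiable clause set to a minimal unsatisfiable
            # subformula: drop each clause in turn and keep it dropped only
            # if the remainder is still unsatisfiable.
            assert not is_sat(clauses)
            mus = list(clauses)
            i = 0
            while i < len(mus):
                rest = mus[:i] + mus[i + 1:]
                if is_sat(rest):
                    i += 1        # clause is necessary for unsatisfiability
                else:
                    mus = rest    # clause was redundant; keep it removed
            return mus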

    Parallel black-box complexity with tail bounds

    We propose a new black-box complexity model for search algorithms evaluating λ search points in parallel. The parallel unary unbiased black-box complexity gives lower bounds on the number of function evaluations every parallel unary unbiased black-box algorithm needs to optimise a given problem. It captures the inertia caused by offspring populations in evolutionary algorithms and the total computational effort in parallel metaheuristics. We present complexity results for LeadingOnes and OneMax. Our main result is a general performance limit: we prove that on every function, every λ-parallel unary unbiased algorithm needs at least a certain number of evaluations (a function of the problem size and λ) to find any desired target set of up to exponential size, with overwhelming probability. This yields lower bounds for the typical optimisation time on unimodal and multimodal problems, for the time to find any local optimum, and for the time to even get close to any optimum. The power and versatility of this approach are shown for a wide range of illustrative problems from combinatorial optimisation. Our performance limits can guide parameter choice and algorithm design; we demonstrate the latter by presenting an optimal λ-parallel algorithm for OneMax that uses parallelism most effectively.
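
    As a concrete instance of the model, the sketch below runs a (1+λ) EA on OneMax and counts every function evaluation, which is exactly the cost measure that the λ-parallel black-box complexity bounds from below. The algorithm is illustrative, not one from the paper.

        import random

        def one_plus_lambda_onemax(n, lam, seed=0):
            # (1+lambda) EA on OneMax: one unary unbiased variation operator
            # (standard bit mutation), lambda offspring per generation, so
            # each generation is charged lambda evaluations in the model.
            rng = random.Random(seed)
            x = [rng.randint(0, 1) for _ in range(n)]
            evals = 0
            while sum(x) < n:
                best = None
                for _ in range(lam):
                    y = [b ^ (rng.random() < 1 / n) for b in x]  # flip each bit w.p. 1/n
                    evals += 1
                    if best is None or sum(y) > sum(best):
                        best = y
                if sum(best) >= sum(x):
                    x = best
            return evals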

    Efficient local search for Pseudo-Boolean Optimization

    Algorithms and the Foundations of Software Technology

    Searching for patterns in Conway's Game of Life

    Conway’s Game of Life (Life) is a simple cellular automaton, discovered by John Conway in 1970, that exhibits complex emergent behavior. Life enthusiasts have been looking for building blocks with specific properties (patterns) to answer unsolved problems in Life for the past five decades. Finding patterns in Life is difficult due to the large search space. Current search algorithms use an explorative approach based on the rules of the game, but this can only sample a small fraction of the search space. More recently, people have used SAT solvers to search for patterns. These solvers are not specifically tuned to this problem and thus waste a lot of time processing Life’s rules in an engine that does not understand them. We propose a novel SAT-based approach that replaces the binary tree used by traditional SAT solvers with a grid-based approach, complemented by an injection of Game of Life specific knowledge. This leads to a significant speedup in searching. As a fortunate side effect, our solver can be generalized to solve general SAT problems. Because it is grid-based, all manipulations are embarrassingly parallel, allowing implementation on massively parallel hardware.
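
    To make the contrast with a rule-agnostic engine concrete, here is what a naive clause-level encoding of a single cell transition looks like: one clause per assignment of the 3x3 neighbourhood, 512 clauses per cell per time step. This is a generic textbook encoding, not the solver proposed in the paper.

        from itertools import product

        def life_rule(alive, live_neighbours):
            # Conway's rule: survive with 2 or 3 live neighbours, be born with 3.
            return live_neighbours == 3 or (alive == 1 and live_neighbours == 2)

        def transition_clauses(cell_vars, next_var):
            # cell_vars: DIMACS variables for the centre cell and its 8
            # neighbours at time t; next_var: the centre cell at time t+1.
            # For each of the 2^9 neighbourhood assignments, emit one clause
            # forcing next_var to the value the rule dictates.
            clauses = []
            for bits in product((0, 1), repeat=9):
                forced = life_rule(bits[0], sum(bits[1:]))
                guard = [-v if b else v for v, b in zip(cell_vars, bits)]
                clauses.append(guard + [next_var if forced else -next_var])
            return clauses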

    Generalising weighted model counting

    Given a formula in propositional or (finite-domain) first-order logic and some non-negative weights, weighted model counting (WMC) is a function problem that asks to compute the sum of the weights of the models of the formula. Originally used as a flexible way of performing probabilistic inference on graphical models, WMC has found many applications across artificial intelligence (AI), machine learning, and other domains. Areas of AI that rely on WMC include explainable AI, neural-symbolic AI, probabilistic programming, and statistical relational AI. WMC also has applications in bioinformatics, data mining, natural language processing, prognostics, and robotics. In this work, we are interested in revisiting the foundations of WMC and considering generalisations of some of the key definitions in the interest of conceptual clarity and practical efficiency. We begin by developing a measure-theoretic perspective on WMC, which suggests a new and more general way of defining the weights of an instance. This new representation can be as succinct as standard WMC but can also expand as needed to represent less-structured probability distributions. We demonstrate the performance benefits of the new format by developing a novel WMC encoding for Bayesian networks. We then show how existing WMC encodings for Bayesian networks can be transformed into this more general format and what conditions ensure that the transformation is correct (i.e., preserves the answer). Combining the strengths of the more flexible representation with the tricks used in existing encodings yields further efficiency improvements in Bayesian network probabilistic inference. Next, we turn our attention to the first-order setting. Here, we argue that the capabilities of practical model counting algorithms are severely limited by their inability to perform arbitrary recursive computations. To enable arbitrary recursion, we relax the restrictions that typically accompany domain recursion and generalise circuits (used to express a solution to a model counting problem) to graphs that are allowed to have cycles. These improvements enable us to find efficient solutions to counting fundamental structures such as injections and bijections that were previously unsolvable by any available algorithm. The second strand of this work is concerned with synthetic data generation. Testing algorithms across a wide range of problem instances is crucial to ensure the validity of any claim about one algorithm’s superiority over another. However, benchmarks are often limited and fail to reveal differences among the algorithms. First, we show how random instances of probabilistic logic programs (that typically use WMC algorithms for inference) can be generated using constraint programming. We also introduce a new constraint to control the independence structure of the underlying probability distribution and provide a combinatorial argument for the correctness of the constraint model. This model allows us to, for the first time, experimentally investigate inference algorithms on more than just a handful of instances. Second, we introduce a random model for WMC instances with a parameter that influences primal treewidth—the parameter most commonly used to characterise the difficulty of an instance. We show that the easy-hard-easy pattern with respect to clause density is different for algorithms based on dynamic programming and algebraic decision diagrams than for all other solvers. 
We also demonstrate that all WMC algorithms scale exponentially with respect to primal treewidth, although at differing rates.
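
    For reference, the function problem itself can be stated as a brute-force enumeration in a few lines. Real WMC solvers avoid this exponential loop via knowledge compilation or dynamic programming, but the specification below, with assumed per-literal weights keyed by DIMACS literals, is what they all compute.

        from itertools import product

        def weighted_model_count(clauses, weight):
            # Sum, over all models of the CNF, of the product of the weights
            # of the literals set to true.  clauses use DIMACS literals;
            # weight maps each literal (v or -v) to a non-negative number.
            n = max(abs(l) for c in clauses for l in c)
            total = 0.0
            for bits in product((False, True), repeat=n):
                assign = {v: bits[v - 1] for v in range(1, n + 1)}
                if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
                    m = 1.0
                    for v, b in assign.items():
                        m *= weight[v if b else -v]
                    total += m
            return total

    With weight[v] = p and weight[-v] = 1 - p for every variable v, the count equals the probability that the formula holds under independent variables, which is the probabilistic-inference reading the abstract starts from.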

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access two-volume set constitutes the proceedings of the 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2021, which was held during March 27 – April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg and changed to an online format due to the COVID-19 pandemic. The 41 full papers presented in the proceedings were carefully reviewed and selected from 141 submissions. The volumes also contain 7 tool papers, 6 tool demo papers, and 9 SV-COMP competition papers. The papers are organized in topical sections as follows: Part I: Game Theory; SMT Verification; Probabilities; Timed Systems; Neural Networks; Analysis of Network Communication. Part II: Verification Techniques (not SMT); Case Studies; Proof Generation/Validation; Tool Papers; Tool Demo Papers; SV-Comp Tool Competition Papers.