    Uniform and scalable SAT-sampling for configurable systems

    Several relevant analyses of configurable software systems remain intractable because they require examining vast and highly constrained configuration spaces. Those analyses could be addressed through statistical inference, i.e., working with a much more tractable sample that later supports generalizing the results to the entire configuration space. To make this possible, the laws of statistical inference impose an indispensable requirement: each member of the population must be equally likely to be included in the sample, i.e., the sampling process needs to be "uniform". Various SAT-samplers have been developed for generating uniform random samples at a reasonable computational cost. Unfortunately, there is a lack of experimental validation on large configuration models showing whether these samplers indeed produce genuinely uniform samples. This paper (i) presents a new statistical test to verify to what extent samplers accomplish uniformity and (ii) reports the evaluation of four state-of-the-art samplers: Spur, QuickSampler, Unigen2, and Smarch. According to our experimental results, only Spur satisfies both scalability and uniformity.
    Funding: Ministerio de Ciencia, Innovación y Universidades (VITAL-3D DPI2016-77677-P; OPHELIA RTI2018-101204-B-C22); Comunidad Autónoma de Madrid (CAM RoboCity2030 S2013/MIT-2748); Agencia Estatal de Investigación (TIN2017-90644-RED).
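
    For intuition only, the sketch below is not the paper's test but a plain chi-squared goodness-of-fit check of sampler output against uniformity, feasible only when the solution space is small enough to enumerate fully (which is exactly why the paper needs a test that scales beyond this). The helper name uniformity_pvalue and the stand-in sampler are hypothetical.

        # Toy illustration: chi-squared goodness-of-fit of sample counts
        # against the uniform distribution over an enumerable solution space.
        import random
        from collections import Counter
        from scipy.stats import chisquare

        def uniformity_pvalue(samples, num_solutions):
            counts = Counter(samples)
            observed = [counts.get(s, 0) for s in range(num_solutions)]
            # Null hypothesis: every solution is equally likely to be drawn.
            return chisquare(observed).pvalue

        # A perfectly uniform stand-in "sampler" over 8 solutions:
        draws = [random.randrange(8) for _ in range(8000)]
        print(uniformity_pvalue(draws, 8))  # high p-value: uniformity not rejected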

    Complications for Computational Experiments from Modern Processors

    In this paper, we revisit the approach to empirical experiments for combinatorial solvers. We provide a brief survey of tools that can make empirical work easier. We illustrate the origins of uncertainty in modern hardware and show how strongly certain aspects of modern hardware and its experimental setup can influence an actual experimental evaluation. More specifically, there can be situations where (i) two different researchers run a reasonable-looking experiment comparing the same solvers and come to different conclusions and (ii) one researcher runs the same experiment twice on the same hardware and reaches different conclusions depending on how the hardware is configured and used. We investigate these situations from a hardware perspective. Furthermore, we provide an overview of standard measures, detailed explanations of effects, potential errors, and biased suggestions for useful tools. Alongside the tools, we discuss their feasibility, as experiments often run on clusters to which the experimentalist has only limited access. Our work sheds light on a number of benchmarking-related issues that could be considered folklore or even myths.
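
    As a concrete taste of the kind of controls the paper discusses, the sketch below (Linux-only; ./solver and bench.cnf are placeholder names) pins repeated solver runs to one CPU core and reports the run-to-run spread. A real setup would additionally fix the frequency governor, isolate the core, and control memory placement.

        # Minimal sketch: pin to a fixed core and measure wall-clock spread
        # across repeated runs of the same solver on the same instance.
        import os, statistics, subprocess, time

        def timed_run(cmd, core=2):
            os.sched_setaffinity(0, {core})  # affinity is inherited by the child
            start = time.perf_counter()
            subprocess.run(cmd, stdout=subprocess.DEVNULL)
            return time.perf_counter() - start

        times = [timed_run(["./solver", "bench.cnf"]) for _ in range(10)]
        print(f"median {statistics.median(times):.3f}s, "
              f"stdev {statistics.stdev(times):.3f}s")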

    Logic Synthesis for Established and Emerging Computing

    Logic synthesis is an enabling technology for realizing integrated computing systems, and it entails solving computationally intractable problems through a plurality of heuristic techniques. A recent push toward further formalization of synthesis problems has proven very useful both for attempting to solve some logic problems exactly (which is computationally feasible today for instances of limited size) and for creating new and more powerful heuristics based on problem decomposition. Moreover, technological advances including nanodevices, optical computing, and quantum and quantum cellular computing require new and specific synthesis flows to assess feasibility and scalability. This review highlights recent progress in logic synthesis and optimization, describing models, data structures, and algorithms, with specific emphasis on both design quality and emerging technologies. Example applications and results of novel techniques applied to established and emerging technologies are reported.
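
    One data structure at the heart of much of the surveyed work is the AND-inverter graph (AIG). The toy sketch below (illustrative only; production tools use far more compact, carefully engineered representations) shows its two defining tricks: complemented edges and structural hashing.

        # Toy AIG: a literal is 2*node for a plain edge, 2*node+1 for a
        # complemented edge; structural hashing reuses identical AND nodes.
        class AIG:
            def __init__(self):
                self.nodes = []   # fanin literal pairs; None for primary inputs
                self.strash = {}  # (left, right) -> existing output literal

            def new_input(self):
                self.nodes.append(None)
                return 2 * len(self.nodes)

            def AND(self, a, b):
                key = (min(a, b), max(a, b))
                if key not in self.strash:
                    self.nodes.append(key)
                    self.strash[key] = 2 * len(self.nodes)
                return self.strash[key]

            @staticmethod
            def NOT(a):
                return a ^ 1  # inversion costs nothing: flip the low bit

        g = AIG()
        x, y = g.new_input(), g.new_input()
        xor = g.AND(g.NOT(g.AND(x, y)), g.NOT(g.AND(g.NOT(x), g.NOT(y))))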

    Exploiting Satisfiability Solvers for Efficient Logic Synthesis

    Logic synthesis is an important part of electronic design automation (EDA) flows, which enable the implementation of digital systems. As design size and complexity increase, the data structures and algorithms for logic synthesis must adapt and improve in order to keep pace and to maintain acceptable runtime and high-quality results. Large circuits have often been represented using binary decision diagrams (BDDs), which were rapidly adopted by logic synthesis tools beginning in the 1980s. BDD-based algorithms are still being enhanced, but after some 35 years of research the possibilities for improvement are somewhat saturated. Alternatively, the first EDA applications that exploit Boolean satisfiability (SAT) were developed in the 1990s. Despite the worst-case exponential runtime of SAT solvers, rapid progress in their performance enabled the creation of efficient SAT-based algorithms. Yet, logic synthesis started using SAT solvers more widely only in the last decade. Therefore, thorough research is still required both for exploiting SAT solvers and for encoding logic synthesis problems into SAT. Our main goal in this thesis is to facilitate and promote the further integration of SAT solvers into EDA by proposing and evaluating novel SAT-based algorithms that can be used as building blocks in logic synthesis tools. First, we propose a rapid algorithm for LEXSAT, which generates satisfying assignments in lexicographic order. We show that LEXSAT can bring canonicity, which guarantees the generation of unique results, when using SAT solvers in EDA applications. Next, we present a new SAT-based algorithm that progressively generates irredundant sums of products (SOPs), which still play a crucial role in many logic synthesis tools. Using LEXSAT, for the first time, we can generate canonical SAT-based SOPs that, much like BDD-based SOPs, are unique for a given function and variable order but can relax canonicity in order to improve speed and scalability. Unlike BDDs, due to its progressive nature, our algorithm can generate partial SOPs for applications that can work with incomplete circuit functionality. It is noteworthy that both LEXSAT and the SAT-based SOPs are applicable beyond logic synthesis and EDA. Finally, we focus on resubstitution, which reimplements a given Boolean function as a new function that depends on a set of existing functions called divisors. We propose the carving interpolation algorithm that, unlike traditional Craig interpolation, forces the use of a specific divisor as an input of the new function. This is particularly useful for global circuit restructuring and for some synthesis-based engineering change order (ECO) algorithms. Furthermore, we compare two existing SAT-based methodologies for resubstitution, which are used for post-mapping logic optimisation. The first methodology combines SAT-based functional dependency checking and Craig interpolation, which are also used for our carving interpolation; the second methodology is based on cube enumeration and is similar to the SAT-based SOP generation. The initial implementations of our novel SAT-based algorithms offer either better performance or new features, or both, compared to their state-of-the-art versions. As the results indicate, a further thorough development of SAT-based algorithms for logic synthesis, like the one performed for BDDs in the past, can help overcome existing limitations and keep pace with growing design size and complexity.
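
    The LEXSAT idea can be sketched compactly: decide variables in index order under assumptions, keeping each variable at its smaller value whenever the formula stays satisfiable. The sketch below uses the (assumed installed) python-sat package and is the naive one-solve-per-variable variant, not the thesis's accelerated algorithm.

        # Naive LEXSAT sketch: lexicographically smallest satisfying
        # assignment, preferring 0 over 1 for each variable in index order.
        from pysat.solvers import Glucose3

        def lexsat(clauses, num_vars):
            with Glucose3(bootstrap_with=clauses) as solver:
                if not solver.solve():
                    return None  # unsatisfiable
                fixed = []
                for v in range(1, num_vars + 1):
                    # Try v = 0 first (the negative literal keeps the order small).
                    if solver.solve(assumptions=fixed + [-v]):
                        fixed.append(-v)
                    else:
                        fixed.append(v)  # v is forced to 1 under the prefix
                return fixed

        # (x1 or x2) and (not x1 or x3)  ->  x1=0, x2=1, x3=0
        print(lexsat([[1, 2], [-1, 3]], 3))  # [-1, 2, -3]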

    Design, implementation and evaluation of a distributed CDCL framework

    The primary subject of this dissertation is practically solving instances of the Boolean satisfiability problem (SAT) that arise from industrial applications. The invention of the conflict-driven clause-learning (CDCL) algorithm led to enormous progress in this field. CDCL has been augmented with effective pre- and inprocessing techniques that boost its effectiveness. While a considerable amount of work has been done on applying shared-memory parallelism to enhance the performance of CDCL, solving SAT on distributed architectures has been studied less thoroughly. In this work, we develop a distributed, CDCL-based framework for SAT solving. This framework consists of three main components: (1) an implementation of the CDCL algorithm written from scratch, (2) a novel, parallel SAT algorithm that builds upon this CDCL implementation, and (3) a collection of parallel simplification techniques for SAT instances. We call the resulting framework satUZK; our parallel solving algorithm is called the distributed divide-and-conquer (DDC) algorithm. The DDC algorithm employs a parallel lookahead procedure to dynamically partition the search space. Load balancing is used to ensure that all computational resources are utilized during lookahead. This procedure results in a divide-and-conquer tree that is distributed over all processors. Individual threads are routed through this tree until they arrive at unsolved leaf vertices. Upon arrival, the lookahead procedure is invoked again or the leaf vertex is solved via CDCL. Several extensions to the DDC algorithm are proposed, including clause sharing and a scheme that locally adjusts the LBD score relative to the current search-tree vertex; LBD is a measure of the usefulness of clauses that participate in a CDCL search. We evaluate our DDC algorithm empirically and benchmark it against the best distributed SAT algorithms. In this experiment, our DDC algorithm is faster than other distributed, state-of-the-art solvers and solves at least as many instances. In addition to running a parallel algorithm for SAT solving, we also consider parallel simplification. Here, we first develop a theoretical foundation that allows us to prove the correctness of parallel simplification techniques. Using this as a basis, we examine established simplification algorithms for their parallelizability. It turns out that several well-known simplification techniques can be parallelized efficiently. We provide parallel implementations of these techniques and test their effectiveness in empirical experiments. This evaluation finds several combinations of simplification techniques that can solve instances which could not be solved by the DDC algorithm alone.
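
    To make the divide-and-conquer structure concrete, the sequential sketch below (using the assumed python-sat package; function names are ours) splits the search space into cubes and solves each leaf with a CDCL solver. The actual DDC algorithm chooses splitting variables by parallel lookahead, balances load across processors, and shares clauses; none of that is modeled here.

        # Sequential toy of the divide-and-conquer idea: branch on one
        # variable per level, then hand each cube (partial assignment) to CDCL.
        from pysat.solvers import Glucose3

        def solve_dc(clauses, cube=(), depth=2):
            if depth == 0:  # leaf: run CDCL under the cube's assumptions
                with Glucose3(bootstrap_with=clauses) as solver:
                    return solver.solve(assumptions=list(cube))
            v = len(cube) + 1  # naive splitting choice; DDC uses lookahead
            return (solve_dc(clauses, cube + (v,), depth - 1)
                    or solve_dc(clauses, cube + (-v,), depth - 1))

        print(solve_dc([[1, 2], [-1, 3], [-2, -3]]))  # True (satisfiable)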

    Using an Algorithm Portfolio to Solve Sokoban
