Balancing Scalability and Uniformity in SAT Witness Generator
Constrained-random simulation is the predominant approach used in the
industry for functional verification of complex digital designs. The
effectiveness of this approach depends on two key factors: the quality of
constraints used to generate test vectors, and the randomness of solutions
generated from a given set of constraints. In this paper, we focus on the
second problem, and present an algorithm that significantly improves the
state-of-the-art of (almost-)uniform generation of solutions of large Boolean
constraints. Our algorithm provides strong theoretical guarantees on the
uniformity of generated solutions and scales to problems involving hundreds of
thousands of variables.
Comment: This is a full version of the DAC 2014 paper.
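The hashing idea behind such (almost-)uniform generators can be illustrated in miniature. The sketch below is a toy stand-in, not the paper's algorithm: random XOR (parity) constraints partition the solution space into cells, and a solution is drawn from one randomly chosen cell. A real implementation would replace the brute-force enumeration with calls to a SAT solver and tune the number of XORs automatically.

```python
import itertools
import random

def satisfies(assignment, cnf):
    """True iff the 0/1 assignment satisfies the CNF
    (a list of clauses; a clause is a list of signed 1-based literals)."""
    return all(any((lit > 0) == bool(assignment[abs(lit) - 1]) for lit in clause)
               for clause in cnf)

def solutions(cnf, n):
    """Brute-force enumeration; stands in for the SAT-solver oracle."""
    return [a for a in itertools.product((0, 1), repeat=n) if satisfies(a, cnf)]

def near_uniform_sample(cnf, n, num_xors, rng):
    """Partition solutions into 2**num_xors cells via random parity
    constraints, then draw uniformly from one randomly chosen cell."""
    xors = [[v for v in range(n) if rng.random() < 0.5] for _ in range(num_xors)]
    target = [rng.randrange(2) for _ in range(num_xors)]
    cell = [s for s in solutions(cnf, n)
            if all(sum(s[v] for v in x) % 2 == t for x, t in zip(xors, target))]
    return rng.choice(cell) if cell else None  # empty cell: retry with fresh XORs
```

When num_xors is chosen so that cells hold only a handful of solutions each, the returned samples are provably close to uniform; selecting that number robustly is exactly what the paper's algorithm addresses.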
Distribution-Aware Sampling and Weighted Model Counting for SAT
Given a CNF formula and a weight for each assignment of values to variables,
two natural problems are weighted model counting and distribution-aware
sampling of satisfying assignments. Both problems have a wide variety of
important applications. Due to the inherent complexity of the exact versions of
the problems, interest has focused on solving them approximately. Prior work in
this area scaled only to small problems in practice, or failed to provide
strong theoretical guarantees, or employed a computationally-expensive maximum
a posteriori probability (MAP) oracle that assumes prior knowledge of a
factored representation of the weight distribution. We present a novel approach
that works with a black-box oracle for weights of assignments and requires only
an {\NP}-oracle (in practice, a SAT-solver) to solve both the counting and
sampling problems. Our approach works under mild assumptions on the
distribution of weights of satisfying assignments, provides strong theoretical
guarantees, and scales to problems involving several thousand variables. We
also show that the assumptions can be significantly relaxed while improving
computational efficiency if a factored representation of the weights is known.Comment: This is a full version of AAAI 2014 pape
Sampling Techniques for Boolean Satisfiability
Boolean satisfiability ({\SAT}) has played a key role in diverse areas
spanning testing, formal verification, planning, optimization, inferencing and
the like. Apart from the classical problem of checking Boolean satisfiability,
the problems of generating satisfying assignments uniformly at random, and of
counting the
total number of satisfying assignments have also attracted significant
theoretical and practical interest over the years. Prior work offered heuristic
approaches with very weak or no guarantee of performance, and theoretical
approaches with proven guarantees, but poor performance in practice.
We propose a novel approach based on limited-independence hashing that allows
us to design algorithms for both problems, with strong theoretical guarantees
and scalability extending to thousands of variables. Based on this approach, we
present two practical algorithms: {\UniformWitness}, a near-uniform generator
of satisfying assignments, and {\approxMC}, the first scalable approximate
model counter, along with reference implementations. Our algorithms work by
issuing a polynomial number of calls to a {\SAT} solver. We demonstrate the
scalability of our algorithms over a large set of benchmarks arising from
different application domains.
Comment: MS Thesis submitted to Rice University.
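The limited-independence-hashing idea behind the approximate counter can be sketched as follows. This is illustrative only: real implementations use a SAT solver with native XOR support rather than brute-force enumeration, and set the pivot and trial count to obtain rigorous (epsilon, delta) guarantees.

```python
import itertools
import random
import statistics

def models(cnf, n):
    """All satisfying 0/1 assignments, by brute force (SAT-solver stand-in)."""
    return [a for a in itertools.product((0, 1), repeat=n)
            if all(any((lit > 0) == bool(a[abs(lit) - 1]) for lit in clause)
                   for clause in cnf)]

def surviving(ms, n, num_xors, rng):
    """Count models that also satisfy num_xors random parity constraints."""
    xors = [([v for v in range(n) if rng.random() < 0.5], rng.randrange(2))
            for _ in range(num_xors)]
    return sum(1 for a in ms
               if all(sum(a[v] for v in vs) % 2 == b for vs, b in xors))

def approx_count(cnf, n, pivot=4, trials=9, rng=None):
    """Add parity constraints until at most `pivot` models survive, scale the
    survivor count back up, and take the median over independent trials."""
    rng = rng or random.Random()
    ms = models(cnf, n)
    estimates = []
    for _ in range(trials):
        for m in range(n + 1):
            c = surviving(ms, n, m, rng)
            if c <= pivot:
                estimates.append(c * 2 ** m)
                break
    return statistics.median(estimates)
```

Each parity constraint halves the solution space in expectation, so a cell of size c behind m constraints suggests roughly c * 2^m models overall; the median over trials damps the variance of any single hash choice.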
Probabilistic Model Counting with Short XORs
The idea of counting the number of satisfying truth assignments (models) of a
formula by adding random parity constraints can be traced back to the seminal
work of Valiant and Vazirani, showing that NP is as easy as detecting unique
solutions. While theoretically sound, the random parity constraints in that
construction have the following drawback: each constraint, on average, involves
half of all variables. As a result, the branching factor associated with
searching for models that also satisfy the parity constraints quickly gets out
of hand. In this work we prove that one can work with much shorter parity
constraints and still get rigorous mathematical guarantees, especially when the
number of models is large so that many constraints need to be added. Our work
is based on the realization that the essential feature for random systems of
parity constraints to be useful in probabilistic model counting is that the
geometry of their set of solutions resembles an error-correcting code.
Comment: To appear in SAT 1
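The "half of all variables" observation is easy to check numerically. In the sketch below (parameters chosen only for illustration), each of n variables enters a parity constraint independently with probability p, so the expected constraint length is n*p: p = 1/2 reproduces the classical construction, while the short-XOR regime uses p much smaller than 1/2.

```python
import random

def xor_length(n, p, rng):
    """Length of one random parity constraint: each of the n variables is
    included independently with probability p."""
    return sum(1 for _ in range(n) if rng.random() < p)

rng = random.Random(0)
n, trials = 1000, 200
avg_dense = sum(xor_length(n, 0.5, rng) for _ in range(trials)) / trials
avg_short = sum(xor_length(n, 0.05, rng) for _ in range(trials)) / trials
# avg_dense clusters around n/2 = 500; avg_short around n * 0.05 = 50
```

The tenfold drop in constraint length is what tames the solver's branching factor; the paper's analysis identifies when such short constraints still yield valid counts.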
Circuit Testing Based on Fuzzy Sampling with BDD Bases
Fuzzy testing of integrated circuits is an established technique. Current approaches generate an approximately uniform random sample from a translation of the circuit to Boolean logic. These approaches have serious scalability issues, which become more pressing with the ever-increasing size of circuits. We propose using a base of binary decision diagrams to sample the translations as a soft computing approach. Uniformity is guaranteed by design and scalability is greatly improved. We test our approach against five other state-of-the-art tools and find our tool to outperform all of them, both in terms of performance and scalability.
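The reason uniformity comes "by design" from decision diagrams can be shown in a few lines. The sketch below uses a toy ordered decision diagram encoded as nested (name, low, high) tuples rather than a real BDD library, and assumes every level tests one variable (no reduction) to keep the counting arithmetic trivial: models are counted bottom-up, and a sample is drawn top-down with branch probabilities proportional to the subcounts.

```python
import random

# Node encoding: True/False leaves, or a (name, low, high) tuple testing the
# variable at its depth.

def count(node, depth, n):
    """Number of models of the subfunction rooted at `node`."""
    if node is True:
        return 2 ** (n - depth)   # remaining variables are free
    if node is False:
        return 0
    _, lo, hi = node
    return count(lo, depth + 1, n) + count(hi, depth + 1, n)

def sample(node, depth, n, rng):
    """Draw one model uniformly: branch with probability proportional
    to each child's model count; free variables are fair coin flips."""
    if node is True:
        return [rng.randrange(2) for _ in range(n - depth)]
    _, lo, hi = node
    c_lo = count(lo, depth + 1, n)
    if rng.randrange(c_lo + count(hi, depth + 1, n)) < c_lo:
        return [0] + sample(lo, depth + 1, n, rng)
    return [1] + sample(hi, depth + 1, n, rng)
```

For x1 OR x2, the diagram `('x1', ('x2', False, True), True)` has three models, and `sample` draws each with probability exactly 1/3: no rejection loop and no SAT calls, which is the scalability argument made above.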
Uniform and scalable sampling of highly configurable systems
Many analyses on configurable software systems are intractable when confronted with
colossal and highly-constrained configuration spaces. These analyses could instead use
statistical inference, where a tractable sample accurately predicts results for the entire
space. To do so, the laws of statistical inference require each member of the population
to be equally likely to be included in the sample, i.e., the sampling process needs to be
“uniform”. SAT-samplers have been developed to generate uniform random samples at a
reasonable computational cost. However, there is a lack of experimental validation over
colossal spaces to show whether the samplers indeed produce uniform samples or not. This
paper (i) proposes a new sampler named BDDSampler, (ii) presents a new statistical test
to verify sampler uniformity, and (iii) reports the evaluation of BDDSampler and five
other state-of-the-art samplers: KUS, QuickSampler, Smarch, Spur, and Unigen2. Our
experimental results show only BDDSampler satisfies both scalability and uniformity.
Funding: Universidad Nacional de Educación a Distancia (UNED) OPTIVAC 096-034091, 2021V/PUNED/008; Ministerio de Ciencia, Innovación y Universidades RTI2018-101204-B-C22 (OPHELIA); Comunidad Autónoma de Madrid ROBOCITY2030-DIH-CM S2018/NMT-4331; Agencia Estatal de Investigación TIN2017-90644-RED.
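A goodness-of-fit check of the kind described above can be sketched with a plain chi-square statistic. This is a simplified illustration, not the paper's own test; models are identified with integers 0..num_models-1 for brevity.

```python
import random
from collections import Counter

def chi_square_uniform(samples, num_models):
    """Chi-square statistic of observed sample counts against the uniform
    distribution over num_models equally likely models."""
    expected = len(samples) / num_models
    counts = Counter(samples)
    return sum((counts.get(m, 0) - expected) ** 2 / expected
               for m in range(num_models))
```

Under true uniformity the statistic follows a chi-square distribution with num_models - 1 degrees of freedom, so a sampler is rejected when the statistic exceeds the corresponding critical value; a biased sampler typically inflates it by orders of magnitude.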
Uniform and scalable SAT-sampling for configurable systems
Several relevant analyses on configurable software systems remain intractable because they require examining vast and highly-constrained configuration spaces. Those analyses could be addressed through statistical inference, i.e., working with a much more tractable sample that later supports generalizing the results obtained to the entire configuration space. To make this possible, the laws of statistical inference impose an indispensable requirement: each member of the population must be equally likely to be included in the sample, i.e., the sampling process needs to be "uniform". Various SAT-samplers have been developed for generating uniform random samples at a reasonable computational cost. Unfortunately, there is a lack of experimental validation over large configuration models to show whether the samplers indeed produce genuine uniform samples or not. This paper (i) presents a new statistical test to verify to what extent samplers accomplish uniformity and (ii) reports the evaluation of four state-of-the-art samplers: Spur, QuickSampler, Unigen2, and Smarch. According to our experimental results, only Spur satisfies both scalability and uniformity.
Funding: Ministerio de Ciencia, Innovación y Universidades VITAL-3D DPI2016-77677-P; Ministerio de Ciencia, Innovación y Universidades OPHELIA RTI2018-101204-B-C22; Comunidad Autónoma de Madrid CAM RoboCity2030 S2013/MIT-2748; Agencia Estatal de Investigación TIN2017-90644-RED.