Bit-Vector Model Counting using Statistical Estimation
Approximate model counting for bit-vector SMT formulas (generalizing #SAT)
has many applications such as probabilistic inference and quantitative
information-flow security, but it is computationally difficult. Adding random
parity constraints (XOR streamlining) and then checking satisfiability is an
effective approximation technique, but it requires a prior hypothesis about the
model count to produce useful results. We propose an approach inspired by
statistical estimation to continually refine a probabilistic estimate of the
model count for a formula, so that each XOR-streamlined query yields as much
information as possible. We implement this approach, with an approximate
probability model, as a wrapper around an off-the-shelf SMT solver or SAT
solver. Experimental results show that the implementation is faster than the
most similar previous approaches, which used simpler refinement strategies. The
technique also lets us model count formulas over floating-point constraints,
which we demonstrate with an application to a vulnerability in differential
privacy mechanisms.
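The XOR-streamlining idea described above can be sketched in a few lines. This is a toy illustration only: the function names are mine, a brute-force enumeration stands in for the SMT/SAT solver, and a plain average over trials stands in for the paper's refined probabilistic estimate. Each random parity constraint halves the solution set in expectation, so the number of constraints a formula survives before becoming unsatisfiable estimates log2 of its model count.

```python
import random

def satisfiable(f, n, parities):
    # Brute-force satisfiability: scan all 2^n assignments for one that
    # satisfies both the formula and every parity (XOR) constraint.
    for m in range(2 ** n):
        bits = [(m >> i) & 1 for i in range(n)]
        if f(bits) and all(sum(bits[i] for i in subset) % 2 == b
                           for subset, b in parities):
            return True
    return False

def xor_streamline_estimate(f, n, trials=20, seed=0):
    """Rough estimate of log2 of the model count of f over n variables.

    Toy sketch: real tools replace the enumeration with a solver query
    and use a refined probability model rather than a plain average.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        parities = []
        while True:
            # Draw a random parity constraint over a nonempty variable subset.
            subset = [i for i in range(n) if rng.random() < 0.5]
            if not subset:
                continue
            parities.append((subset, rng.randrange(2)))
            if not satisfiable(f, n, parities):
                break
        total += len(parities) - 1  # constraints survived before UNSAT
    return total / trials

# Example: formula over 8 variables with exactly 16 models
# (the first four bits must all be 1), so log2(#models) = 4.
f = lambda bits: all(bits[:4])
estimate = xor_streamline_estimate(f, 8)
```

The estimate is noisy (each trial's answer is a small integer), which is exactly why the abstract's approach of accumulating information across many streamlined queries into one probabilistic estimate pays off.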
Model-counting approaches for nonlinear numerical constraints
Model counting is of central importance in quantitative reasoning about systems. Examples include computing the probability that a system successfully accomplishes its task without errors, and measuring, in terms of Shannon entropy, the number of bits a system leaks to an adversary. Most previous work in those areas demonstrated its analysis on programs with linear constraints, in which case model counting is polynomial time. Model counting for nonlinear constraints is notoriously hard, and thus programs with nonlinear constraints are not well studied. This paper surveys state-of-the-art techniques and tools for model counting with respect to SMT constraints, modulo the bitvector theory, since this theory is decidable and can express nonlinear constraints that arise from the analysis of computer programs. We integrate these techniques within the Symbolic Pathfinder platform and evaluate them on difficult nonlinear constraints generated from the analysis of cryptographic functions.
A Unified Framework for Probabilistic Verification of AI Systems via Weighted Model Integration
The probabilistic formal verification (PFV) of AI systems is in its infancy.
So far, approaches have been limited to ad-hoc algorithms for specific classes
of models and/or properties.
We propose a unifying framework for the PFV of AI systems based on Weighted
Model Integration (WMI), which allows the problem to be framed in very
general terms.
Crucially, this reduction enables the verification of many properties of
interest, such as fairness, robustness, and monotonicity, over a wide range
of machine learning models, without making strong distributional assumptions.
We support the generality of the approach by solving multiple verification
tasks with a single, off-the-shelf WMI solver, then discuss the scalability
challenges and research directions related to this promising framework.