Incremental QBF Solving
We consider the problem of incrementally solving a sequence of quantified
Boolean formulae (QBF). Incremental solving aims to reuse information learned
from one formula when solving subsequent formulae in the sequence.
Based on a general overview of the problem and related challenges, we present
an approach to incremental QBF solving which is application-independent and
hence applicable to QBF encodings of arbitrary problems. We implemented this
approach in our incremental search-based QBF solver DepQBF and report on
implementation details. Experimental results illustrate the potential benefits
of incremental solving in QBF-based workflows.
Comment: revision (camera-ready), to appear in the proceedings of CP 2014,
LNCS, Springer
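
To make the push/pop style of incremental solving concrete, here is a minimal,
self-contained Python sketch. The IncrementalQbf class is a toy stand-in, not
DepQBF's actual API: shared clauses persist across queries in frames, but a
naive recursive evaluator replaces the solver and, unlike a real incremental
solver, nothing is learned between calls.

    def eval_qbf(prefix, clauses, assign):
        # Naive recursive evaluation over the quantifier prefix.
        if not prefix:
            return all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)
        (q, v), rest = prefix[0], prefix[1:]
        branches = (eval_qbf(rest, clauses, {**assign, v: b}) for b in (False, True))
        return any(branches) if q == 'e' else all(branches)

    class IncrementalQbf:
        """Toy push/pop interface; a real incremental solver such as DepQBF
        additionally keeps learned information across solve() calls."""
        def __init__(self, prefix):
            self.prefix, self.frames = prefix, [[]]
        def add_clause(self, clause):
            self.frames[-1].append(clause)
        def push(self):
            self.frames.append([])
        def pop(self):
            self.frames.pop()
        def solve(self):
            return eval_qbf(self.prefix, [c for f in self.frames for c in f], {})

    # forall x1 . exists x2, with one shared clause and one clause per query
    s = IncrementalQbf([('a', 1), ('e', 2)])
    s.add_clause([1, 2])              # shared clause, added once
    for extra in ([-1, 2], [-2]):
        s.push()
        s.add_clause(extra)
        print(s.solve())              # True, then False
        s.pop()                       # retract only this query's clauses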
Optimization in Knowledge-Intensive Crowdsourcing
We present SmartCrowd, a framework for optimizing collaborative
knowledge-intensive crowdsourcing. SmartCrowd distinguishes itself by
accounting for human factors in the process of assigning tasks to workers.
Human factors comprise workers' expertise in different skills, their expected
minimum wage, and their availability. In SmartCrowd, we formulate task
assignment as an optimization problem, and rely on pre-indexing workers and
maintaining the indexes adaptively, so that task assignment is optimized both
in quality and in computation time. We
present rigorous theoretical analyses of the optimization problem and propose
optimal and approximation algorithms. Finally, we perform extensive
performance and quality experiments using real and synthetic data to
demonstrate that adaptive indexing in SmartCrowd is necessary to achieve
efficient, high-quality task assignment.
Comment: 12 pages
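
As a rough illustration of task assignment under human factors, the sketch
below greedily gives each task to the available worker with the best
skill-to-wage ratio. This is a toy, not SmartCrowd's algorithm: the scoring
rule, the field names, and the linear scan (where SmartCrowd maintains
adaptive worker indexes) are all assumptions made for the example.

    from dataclasses import dataclass

    @dataclass
    class Worker:
        name: str
        skills: dict          # skill -> proficiency in [0, 1]
        min_wage: float
        available: bool = True

    def assign(tasks, workers):
        """tasks: list of (task_id, required_skill, budget) triples."""
        plan = {}
        for task_id, skill, budget in tasks:
            candidates = [w for w in workers if w.available
                          and w.min_wage <= budget and skill in w.skills]
            if not candidates:
                continue              # no affordable, available expert
            best = max(candidates, key=lambda w: w.skills[skill] / w.min_wage)
            best.available = False    # worker is now busy
            plan[task_id] = best.name
        return plan

    workers = [Worker("ann", {"nlp": 0.9}, 12.0), Worker("bob", {"nlp": 0.6}, 5.0)]
    print(assign([("t1", "nlp", 10.0)], workers))   # {'t1': 'bob'}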
Enhancing Reuse of Constraint Solutions to Improve Symbolic Execution
Constraint solution reuse is an effective approach to save the time of
constraint solving in symbolic execution. Most of the existing reuse approaches
are based on syntactic or semantic equivalence of constraints; for example,
the Green framework can reuse constraints that have different representations
but are semantically equivalent, by canonizing constraints into syntactically
equivalent normal forms. However, syntactic or semantic equivalence is not a
necessary condition for reuse: some constraints are neither syntactically nor
semantically equivalent, yet their solutions still have potential for reuse.
Existing approaches are unable to recognize and reuse such constraints.
In this paper, we present GreenTrie, an extension to the Green framework,
which supports constraint reuse based on the logical implication relations
among constraints. GreenTrie provides a component, called L-Trie, which stores
constraints and solutions into tries, indexed by an implication partial order
graph of constraints. L-Trie is able to carry out logical reduction and logical
subset and superset querying for given constraints, to check for reuse of
previously solved constraints. We report the results of an experimental
assessment of GreenTrie against the original Green framework, which shows that
our extension achieves better reuse of constraint solving results and saves
significant symbolic execution time.
Comment: this paper has been submitted to conference ISSTA 201
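
The reuse criterion can be demonstrated with a toy store. This sketch is not
the paper's L-Trie: constraints are flattened to sets of atomic conjuncts and
implication is approximated by set inclusion. The logic, however, mirrors the
abstract: a stored satisfiable constraint whose conjuncts are a superset of
the query's is logically stronger, so its model also satisfies the query; a
stored unsatisfiable constraint whose conjuncts are a subset of the query's
is logically weaker, so the query is unsatisfiable as well.

    class ReuseStore:
        def __init__(self):
            self.sat = []      # (conjunct set, model) pairs
            self.unsat = []    # conjunct sets proven unsatisfiable

        def lookup(self, query):
            for conjuncts, model in self.sat:
                if query <= conjuncts:     # stored implies query: reuse model
                    return ("SAT", model)
            for conjuncts in self.unsat:
                if conjuncts <= query:     # query implies stored UNSAT core
                    return ("UNSAT", None)
            return None                    # miss: fall back to the solver

        def record(self, conjuncts, model):
            if model is not None:
                self.sat.append((conjuncts, model))
            else:
                self.unsat.append(conjuncts)

    store = ReuseStore()
    store.record(frozenset({"x>3", "y<2", "x<9"}), {"x": 4, "y": 0})
    print(store.lookup(frozenset({"x>3", "x<9"})))   # reuses the stored model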
Synthesizing Short-Circuiting Validation of Data Structure Invariants
This paper presents incremental verification-validation, a novel approach for
checking rich data structure invariants expressed as separation logic
assertions. Incremental verification-validation combines static verification of
separation properties with efficient, short-circuiting dynamic validation of
arbitrarily rich data constraints. A data structure invariant checker is an
inductive predicate in separation logic with an executable interpretation; a
short-circuiting checker is an invariant checker that stops checking whenever
it detects at run time that an assertion for some sub-structure has been fully
proven statically. At a high level, our approach does two things: it statically
proves the separation properties of data structure invariants using a static
shape analysis in a standard way but then leverages this proof in a novel
manner to synthesize short-circuiting dynamic validation of the data
properties. As a consequence, we enable dynamic validation to make up for
imprecision in sound static analysis while simultaneously leveraging the static
verification to make the remaining dynamic validation efficient. We show
empirically that short-circuiting can yield asymptotic improvements in dynamic
validation, with low overhead compared to no validation, even in cases where
static verification is incomplete.
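
A toy version of a short-circuiting checker conveys the idea (illustrative
only; the paper synthesizes such checkers from separation logic proofs, and
the tail_verified flag here is an invented stand-in for that static
information): a sortedness checker over a linked list that stops as soon as
it reaches a node whose tail was already verified statically.

    class Node:
        def __init__(self, value, nxt=None, tail_verified=False):
            self.value, self.next = value, nxt
            # True when static analysis proved sortedness from here onward
            self.tail_verified = tail_verified

    def check_sorted(node):
        while node is not None and node.next is not None:
            if node.tail_verified:      # short-circuit: rest proven statically
                return True
            if node.value > node.next.value:
                return False
            node = node.next
        return True

    tail = Node(7, Node(9), tail_verified=True)   # statically verified suffix
    print(check_sorted(Node(3, Node(5, tail))))   # checks 3<=5, 5<=7, then stops

In this toy, if the statically verified suffix covers all but k nodes of an
n-node list, the dynamic check costs O(k) instead of O(n), which is where
asymptotic improvements can come from.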
Avoiding Unnecessary Information Loss: Correct and Efficient Model Synchronization Based on Triple Graph Grammars
Model synchronization, i.e., the task of restoring consistency between two
interrelated models after a model change, is challenging. Triple Graph
Grammars (TGGs) specify model consistency by means of rules that describe how
to create consistent pairs of models. These rules can be used to automatically
derive further rules, which describe how to propagate changes from one model to
the other or how to change one model in such a way that propagation is
guaranteed to be possible. Restricting model synchronization to these derived
rules, however, may lead to unnecessary deletion and recreation of model
elements during change propagation. This is inefficient and may cause
unnecessary information loss, i.e., when deleted elements contain information
that is not represented in the second model, this information cannot be
recovered easily. Short-cut rules have recently been developed to avoid
unnecessary information loss by reusing existing model elements. In this paper,
we show how to automatically derive (short-cut) repair rules from short-cut
rules to propagate changes such that information loss is avoided and model
synchronization is accelerated. The key ingredients of our rule-based model
synchronization process are these repair rules and an incremental pattern
matcher that reports suitable applications of them. We prove the termination
and the correctness of this synchronization process and discuss its
completeness. As a proof of concept, we have implemented this synchronization
process in eMoflon, a state-of-the-art model transformation tool with inherent
support of bidirectionality. Our evaluation shows that repair processes based
on (short-cut) repair rules considerably decrease information loss and improve
performance compared to earlier TGG-based model synchronization processes.
Comment: 33 pages, 20 figures, 3 tables
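
The kind of information loss at stake can be illustrated with a toy example
(not eMoflon's implementation; the model and attribute names are invented):
propagating a moved source element by deletion and recreation destroys
target-only data, whereas a short-cut-style repair reuses the existing
element.

    class Doc:
        """Target-model element with target-only data in `notes`."""
        def __init__(self, name, parent, notes=""):
            self.name, self.parent, self.notes = name, parent, notes

    def naive_propagate_move(doc, new_parent):
        # delete + recreate: target-only information is lost
        return Doc(doc.name, new_parent)

    def repair_propagate_move(doc, new_parent):
        # short-cut-style repair: relink the existing element, keep its data
        doc.parent = new_parent
        return doc

    d = Doc("spec", "folderA", notes="reviewed by QA")
    print(naive_propagate_move(d, "folderB").notes)    # '' : notes lost
    d = Doc("spec", "folderA", notes="reviewed by QA")
    print(repair_propagate_move(d, "folderB").notes)   # 'reviewed by QA'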
Incremental Cardinality Constraints for MaxSAT
Maximum Satisfiability (MaxSAT) is an optimization variant of the Boolean
Satisfiability (SAT) problem. In general, MaxSAT algorithms perform a
succession of SAT solver calls to reach an optimum solution, making extensive
use of cardinality constraints. Many of these algorithms are non-incremental in
nature, i.e. at each iteration the formula is rebuilt and no knowledge is
reused from one iteration to another. In this paper, we exploit the knowledge
acquired across iterations through novel schemes that use cardinality
constraints in an incremental fashion. We integrate these schemes with several
MaxSAT
algorithms. Our experimental results show a significant performance boost for
these algorithms compared to their non-incremental counterparts. These
results suggest that incremental cardinality constraints could be beneficial
for other constraint solving domains.
Comment: 18 pages, 4 figures, 1 table. Final version published in Principles
and Practice of Constraint Programming (CP) 201
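
The incremental scheme can be sketched in self-contained Python. This is a
toy, not the paper's encodings: a brute-force enumerator stands in for an
incremental SAT solver, and the cardinality constraint is a naive guarded
encoding whose bound is selected via assumption literals, so that all clauses
persist across iterations instead of being rebuilt.

    from itertools import combinations, product

    class ToySolver:
        """Brute-force stand-in for an incremental SAT solver: clauses persist
        across solve() calls and assumptions act as temporary unit clauses."""
        def __init__(self, n_vars):
            self.n, self.clauses = n_vars, []
        def add_clause(self, clause):
            self.clauses.append(clause)
        def solve(self, assumptions=()):
            cls = self.clauses + [[l] for l in assumptions]
            for bits in product([False, True], repeat=self.n):
                a = dict(enumerate(bits, start=1))
                if all(any(a[abs(l)] == (l > 0) for l in c) for c in cls):
                    return a
            return None

    def maxsat(n_vars, soft):
        m = len(soft)
        relax = list(range(n_vars + 1, n_vars + m + 1))        # r_i per soft clause
        sel = list(range(n_vars + m + 1, n_vars + 2 * m + 1))  # selector per bound
        solver = ToySolver(n_vars + 2 * m)
        for clause, r in zip(soft, relax):
            solver.add_clause(clause + [r])                    # relaxed soft clause
        model = solver.solve()                                 # no bound yet
        for k in range(m - 1, -1, -1):                         # tighten: <= k relaxed
            for subset in combinations(relax, k + 1):          # naive "at most k",
                solver.add_clause([-sel[k]] + [-r for r in subset])  # guarded
            better = solver.solve(assumptions=[sel[k]])        # clauses reused
            if better is None:
                return k + 1, model        # optimum: k + 1 violated soft clauses
            model = better
        return 0, model

    print(maxsat(1, [[1], [-1], [1]]))   # optimum violates exactly one soft clause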