Taming Numbers and Durations in the Model Checking Integrated Planning System
The Model Checking Integrated Planning System (MIPS) is a temporal least
commitment heuristic search planner based on a flexible object-oriented
workbench architecture. Its design clearly separates explicit and symbolic
directed exploration algorithms from the set of on-line and off-line computed
estimates and associated data structures. MIPS has shown distinguished
performance in the last two international planning competitions. In the last
event the description language was extended from pure propositional planning to
include numerical state variables, action durations, and plan quality objective
functions. Plans were no longer sequences of actions but time-stamped
schedules. As a participant in the fully automated track of the competition,
MIPS has proven to be a general system; in each track and every benchmark
domain it efficiently computed plans of remarkable quality. This article
introduces and analyzes the most important algorithmic novelties that were
necessary to tackle the new layers of expressiveness in the benchmark problems
and to achieve a high level of performance. The extensions include critical
path analysis of sequentially generated plans to generate corresponding optimal
parallel plans. The linear-time algorithm to compute the parallel plan bypasses
known NP-hardness results for partial ordering by scheduling plans with respect
to the set of actions and the imposed precedence relations. The efficiency of
this algorithm also allows us to improve the exploration guidance: for each
encountered planning state the corresponding approximate sequential plan is
scheduled. One major strength of MIPS is its static analysis phase that grounds
and simplifies parameterized predicates, functions and operators, that infers
knowledge to minimize the state description length, and that detects domain
object symmetries. The latter aspect is analyzed in detail. MIPS has been
developed to serve as a complete and optimal state space planner, with
admissible estimates, exploration engines and branching cuts. In the
competition version, however, certain performance compromises had to be made,
including floating-point arithmetic, weighted heuristic search exploration
according to an inadmissible estimate, and parameterized optimization.
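To make the critical-path step concrete, the following is a minimal sketch of how a sequential plan with durations and precedence constraints can be scheduled into a parallel plan in a single linear pass. It illustrates the general technique only, not the MIPS implementation; the action names, durations, and precedence relation are invented.

```python
# Sketch: schedule a sequential plan as a parallel plan via critical-path
# analysis. Each action starts as soon as all of its predecessors finish;
# one pass suffices because predecessors appear earlier in the sequence.

def schedule(plan, duration, preceded_by):
    """plan: actions in sequential order; duration: action -> float;
    preceded_by: action -> set of actions that must finish first (all of
    which occur earlier in plan). Returns action -> earliest start time."""
    start = {}
    for a in plan:
        start[a] = max((start[p] + duration[p] for p in preceded_by.get(a, ())),
                       default=0.0)
    return start

# Invented example: a chain of dependent actions.
plan = ["move", "load", "drive", "unload"]
duration = {"move": 2.0, "load": 1.0, "drive": 3.0, "unload": 1.0}
preceded_by = {"load": {"move"}, "drive": {"load"}, "unload": {"drive"}}
starts = schedule(plan, duration, preceded_by)
print(starts)  # {'move': 0.0, 'load': 2.0, 'drive': 3.0, 'unload': 6.0}
print(max(starts[a] + duration[a] for a in plan))  # makespan: 7.0
```

Actions that share no precedence constraints receive the same start time and therefore run in parallel; the resulting makespan equals the length of the critical path.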
Using BDD-based decomposition for automatic error correction of combinatorial circuits
Boolean equivalence checking has turned out to be a powerful
method for verifying combinatorial circuits and has been widely
accepted in both academia and industry.
In this paper, we present a method for localizing and correcting
errors in combinatorial circuits for which equivalence checking
has failed. Our approach is general and does not assume any
error model. Working directly on BDDs, the approach is well
suited for integration into commonly used equivalence checkers.
Since circuits can be corrected fully automatically, our
approach can save considerable debugging time and therefore
will speed up the whole design cycle.
We have implemented a prototype verification tool and evaluated
our method with the Berkeley benchmark circuits. In addition, we
have applied it successfully to a real-life example taken from
[DrFe96].
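As a rough illustration of the setting, the sketch below builds reduced ordered BDDs for two small circuits and decides equivalence by canonicity. This shows only the equivalence-checking backdrop, not the paper's decomposition-based correction algorithm; the circuits and operator encodings are invented.

```python
# Minimal ROBDD with a unique table and a memoized apply. Two combinational
# circuits are equivalent iff their BDDs, built under the same variable
# order, are the very same node (canonicity of reduced ordered BDDs).

class BDD:
    def __init__(self):
        self.nodes = {0: None, 1: None}   # terminal nodes 0 and 1
        self.unique = {}                  # (var, low, high) -> node id

    def mk(self, var, low, high):
        if low == high:                   # redundant test: skip the node
            return low
        key = (var, low, high)
        if key not in self.unique:
            self.unique[key] = len(self.nodes)
            self.nodes[self.unique[key]] = key
        return self.unique[key]

    def var(self, v):                     # BDD for input variable v
        return self.mk(v, 0, 1)

    def top(self, n):                     # top variable (inf for terminals)
        return self.nodes[n][0] if n > 1 else float("inf")

    def cof(self, n, v):                  # cofactors of n w.r.t. variable v
        if n <= 1 or self.nodes[n][0] != v:
            return n, n
        _, low, high = self.nodes[n]
        return low, high

    def apply(self, op, u, w, memo=None): # combine two BDDs with a gate op
        memo = {} if memo is None else memo
        if u <= 1 and w <= 1:
            return op(u, w)
        if (u, w) not in memo:
            v = min(self.top(u), self.top(w))
            u0, u1 = self.cof(u, v)
            w0, w1 = self.cof(w, v)
            memo[(u, w)] = self.mk(v, self.apply(op, u0, w0, memo),
                                      self.apply(op, u1, w1, memo))
        return memo[(u, w)]

bdd = BDD()
a, b = bdd.var(0), bdd.var(1)
AND = lambda x, y: x & y
OR = lambda x, y: x | y
NOT = lambda n: bdd.apply(lambda x, y: x ^ y, n, 1)   # negate via XOR 1

# "Circuit" 1: a AND b; "circuit" 2: NOT(NOT a OR NOT b) (De Morgan).
f = bdd.apply(AND, a, b)
g = NOT(bdd.apply(OR, NOT(a), NOT(b)))
print(f == g)  # True: identical canonical nodes, hence equivalent circuits
```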
Exploiting the Temporal Logic Hierarchy and the Non-Confluence Property for Efficient LTL Synthesis
The classic approaches to synthesize a reactive system from a linear temporal
logic (LTL) specification first translate the given LTL formula to an
equivalent omega-automaton and then compute a winning strategy for the
corresponding omega-regular game. To this end, the obtained omega-automata have
to be (pseudo)-determinized where typically a variant of Safra's
determinization procedure is used. In this paper, we show that this
determinization step can be significantly improved for tool implementations by
replacing Safra's determinization by simpler determinization procedures. In
particular, we exploit (1) the temporal logic hierarchy that corresponds to the
well-known automata hierarchy consisting of safety, liveness, Buechi, and
co-Buechi automata as well as their Boolean closures, (2) the non-confluence
property of omega-automata that result from certain translations of LTL
formulas, and (3) symbolic implementations of determinization procedures for
the Rabin-Scott subset construction and the Miyano-Hayashi breakpoint
construction. Finally, we present convincing experimental results that
demonstrate the practical applicability of our new synthesis procedure.
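To give a flavor of the simpler determinization procedures involved, the following is an explicit (enumerative) sketch of the Rabin-Scott subset construction; the paper itself works with symbolic implementations, and the NFA below is an invented example.

```python
# Rabin-Scott subset construction: determinize an NFA by tracking, for each
# input letter, the set of NFA states that are simultaneously reachable.
from itertools import chain

def determinize(alphabet, delta, init):
    """delta: dict (state, letter) -> set of successors; init: set of
    initial states. Returns the reachable deterministic transition map
    (frozenset, letter) -> frozenset over subsets of NFA states."""
    start = frozenset(init)
    dfa, todo, seen = {}, [start], {start}
    while todo:
        s = todo.pop()
        for a in alphabet:
            t = frozenset(chain.from_iterable(delta.get((q, a), ()) for q in s))
            dfa[(s, a)] = t
            if t not in seen:
                seen.add(t)
                todo.append(t)
    return dfa

# Tiny invented NFA: on 'a', state 0 may stay or advance to state 1.
delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'a'): set(), (1, 'b'): {1}}
dfa = determinize('ab', delta, {0})
print(len({s for s, _ in dfa}))  # number of reachable subset states: 2
```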
A Test Vector Minimization Algorithm Based on Delta Debugging for Post-Silicon Validation of PCIe Root Port
In silicon hardware design, such as the design of PCIe devices, design verification is an essential part of the design process, whereby the devices are subjected to a series of tests that verify their functionality. However, manual debugging is still widely used in post-silicon validation and is a major bottleneck in the validation process, because a large number of test vectors has to be analyzed, which slows the process down. To solve this problem, a test vector minimizer algorithm is proposed that eliminates redundant test vectors not contributing to the reproduction of a test failure, thereby improving debug throughput. The proposed methodology is inspired by the Delta Debugging algorithm, which has been used in automated software debugging but not in post-silicon hardware debugging. The minimizer operates on the principle of binary partitioning of the test vectors, iteratively running each subset (or the complement of a subset) on a post-silicon System-Under-Test (SUT) to identify and eliminate redundant test vectors. Test results using test vector sets containing deliberately introduced erroneous test vectors show that the minimizer is able to isolate the erroneous vectors. In test cases containing up to 10,000 test vectors, the minimizer requires about 16 ns per test vector when only one erroneous test vector is present. In a test case with 1,000 vectors that includes erroneous vectors, the minimizer requires about 140 μs per injected erroneous test vector. Thus, the minimizer's CPU consumption is significantly smaller than the typical time a test takes to run on the SUT. The factors that most significantly affect the performance of the algorithm are the number of erroneous test vectors and their distribution (spacing); the effects of the total number of test vectors and of the positions of the erroneous vectors are comparatively minor. The minimization algorithm is therefore most effective when only a few erroneous test vectors are present in a large test vector set.
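The binary-partitioning loop the abstract describes follows the shape of Zeller's ddmin algorithm. Below is a minimal sketch of that idea applied to a list of test vectors; the `fails` oracle stands in for running a subset on the SUT and is invented for illustration.

```python
# Sketch of delta-debugging minimization (ddmin): repeatedly split the test
# vector list into chunks and keep whichever subset or complement still
# reproduces the failure, refining the partition when neither does.

def ddmin(vectors, fails, n=2):
    """Shrink vectors to a smaller list that still makes fails() true."""
    while len(vectors) >= 2:
        chunk = max(len(vectors) // n, 1)
        subsets = [vectors[i:i + chunk] for i in range(0, len(vectors), chunk)]
        for i, subset in enumerate(subsets):
            complement = [v for j, s in enumerate(subsets) if j != i for v in s]
            if fails(subset):                    # failure isolated in one chunk
                vectors, n = subset, 2
                break
            if len(subsets) > 2 and fails(complement):   # chunk was redundant
                vectors, n = complement, max(n - 1, 2)
                break
        else:
            if n >= len(vectors):                # finest granularity reached
                break
            n = min(n * 2, len(vectors))         # refine the partition
    return vectors

# Hypothetical oracle: the failure reproduces whenever vector 7 is present.
fails = lambda vs: 7 in vs
print(ddmin(list(range(20)), fails))  # -> [7]
```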