Lex-Partitioning: A New Option for BDD Search
For the exploration of large state spaces, symbolic search using binary
decision diagrams (BDDs) can save huge amounts of memory and computation time.
State sets are represented and modified by accessing and manipulating their
characteristic functions. BDD partitioning is used to compute the image as the
disjunction of smaller subimages.
In this paper, we propose a novel BDD partitioning option. The partitioning
is lexicographical in the binary representation of the states contained in the
set that is represented by a BDD and uniform with respect to the number of
states represented. The motivation for controlling the state set sizes in the
partitioning is to eventually bridge the gap between explicit and symbolic
search.
Let n be the size of the binary state vector. We propose an O(n) ranking and
unranking scheme that supports negated edges and operates on top of precomputed
satcount values. For the uniform split of a BDD, we then use unranking to
provide paths along which we partition the BDDs. In a shared BDD representation
the effort remains O(n). The algorithms are fully integrated into the CUDD
library and evaluated by strongly solving general game playing benchmarks.

Comment: In Proceedings GRAPHITE 2012, arXiv:1210.611
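The ranking/unranking idea above can be illustrated in explicit form. The following is a hypothetical sketch (not the paper's CUDD integration): it replaces BDD satcount values with precomputed per-prefix counts over an explicit set of n-bit states, and then ranks and unranks in O(n) lookups exactly as the abstract describes.

```python
# Hypothetical sketch of O(n) ranking/unranking over a set of n-bit
# states, using precomputed per-prefix counts -- the explicit analogue
# of the precomputed BDD satcount values mentioned above.

def build_counts(states, n):
    """For every bit-string prefix, count how many states extend it."""
    counts = {}
    for s in states:
        for i in range(n + 1):
            p = s[:i]
            counts[p] = counts.get(p, 0) + 1
    return counts

def unrank(k, counts, n):
    """Return the k-th state (0-based, lexicographic) in O(n) lookups."""
    prefix = ""
    for _ in range(n):
        zeros = counts.get(prefix + "0", 0)
        if k < zeros:
            prefix += "0"
        else:
            k -= zeros
            prefix += "1"
    return prefix

def rank(state, counts):
    """Inverse of unrank: lexicographic position of `state` in the set."""
    r = 0
    prefix = ""
    for bit in state:
        if bit == "1":
            r += counts.get(prefix + "0", 0)
        prefix += bit
    return r

states = sorted(["0011", "0101", "1001", "1100"])
counts = build_counts(states, 4)
assert [unrank(i, counts, 4) for i in range(4)] == states
assert all(rank(s, counts) == i for i, s in enumerate(states))
```

A uniform split of the set is then a matter of unranking the path at index |S|/2 and partitioning the set (or BDD) along it.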
Hashing and its applications
This tutorial discusses one of the oldest problems in computing: how to search and retrieve keyed information from a list in the least amount of time. Hashing -- a technique that mathematically converts a key into a storage address -- is one of the best methods of finding and retrieving information associated with a unique identifying key. We briefly survey techniques which have evolved over the past 25 years and then introduce more recent research results for extremely compact and fast methods based on perfect and minimal perfect hashing. Perfect and minimal perfect hashing is useful for rapid lookup of keywords in a compiler, spelling checkers, and database management systems. The results presented here show techniques for constructing long lists which can be searched in one memory reference.

KEYWORDS AND PHRASES: Key-to-address transformation, hash coding, hash table, scatter table, associative retrieval, associative memory, bucket hashing, perfect hashing, minimal perfect hashing, lookup, indexed retrieval
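The key-to-address transformation the abstract surveys can be sketched in a few lines. This is an illustrative toy (all names hypothetical), using chaining to resolve collisions; a perfect hash function, as discussed above, would guarantee at most one key per address, making lookup a single memory reference.

```python
# Toy sketch of key-to-address transformation with bucket chaining.
# A perfect hash would leave every bucket with at most one entry.

class ChainedHashTable:
    def __init__(self, size=11):
        self.buckets = [[] for _ in range(size)]

    def _address(self, key):
        # Simple key-to-address transform: polynomial hash mod table size.
        h = 0
        for ch in key:
            h = (h * 31 + ord(ch)) % len(self.buckets)
        return h

    def insert(self, key, value):
        bucket = self.buckets[self._address(key)]
        for entry in bucket:
            if entry[0] == key:
                entry[1] = value          # update existing key
                return
        bucket.append([key, value])       # collisions chain in the bucket

    def lookup(self, key):
        for k, v in self.buckets[self._address(key)]:
            if k == key:
                return v
        return None

t = ChainedHashTable()
t.insert("begin", 1)
t.insert("end", 2)
assert t.lookup("begin") == 1
assert t.lookup("missing") is None
```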
Internal hashing for dynamic and static tables
This tutorial discusses one of the oldest problems in computing: how to search and retrieve keyed information from a list in the least amount of time. Hashing -- a technique that mathematically converts a key into a storage address -- is one of the best methods of finding and retrieving information associated with a unique identifying key. We briefly survey techniques which have evolved over the past 25 years and then introduce more recent research results for extremely compact and fast methods based on perfect and minimal perfect hashing. Perfect and minimal perfect hashing is useful for rapid lookup in a static table such as keywords in a compiler, spelling checkers, and database management systems. The results presented here show techniques for constructing long lists which can be searched in one memory reference.

KEYWORDS AND PHRASES: Key-to-address transformation, hash coding, hash table, scatter table, bucket hashing, perfect hashing, minimal perfect hashing
FLECS: Planning with a Flexible Commitment Strategy
There has been evidence that least-commitment planners can efficiently handle
planning problems that involve difficult goal interactions. This evidence has
led to the common belief that delayed-commitment is the "best" possible
planning strategy. However, we recently found evidence that eager-commitment
planners can handle a variety of planning problems more efficiently, in
particular those with difficult operator choices. Resigned to the futility of
trying to find a universally successful planning strategy, we devised a planner
that can be used to study which domains and problems are best for which
planning strategies. In this article we introduce this new planning algorithm,
FLECS, which uses a FLExible Commitment Strategy with respect to plan-step
orderings. It is able to use any strategy from delayed-commitment to
eager-commitment. The combination of delayed and eager operator-ordering
commitments allows FLECS to take advantage of the benefits of explicitly using
a simulated execution state and reasoning about planning constraints. FLECS can
vary its commitment strategy across different problems and domains, and also
during the course of a single planning problem. FLECS represents a novel
contribution to planning in that it explicitly provides the choice of which
commitment strategy to use while planning. FLECS provides a framework to
investigate the mapping from planning domains and problems to efficient
planning strategies.

Comment: See http://www.jair.org/ for an online appendix and other files
accompanying this article.
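The flexible-commitment knob described above can be caricatured in a few lines. The sketch below is purely illustrative and not the FLECS implementation: an `eagerness` parameter in [0, 1] decides how readily step-ordering decisions are committed, with 1.0 behaving like an eager (total-order) planner and 0.0 like a delayed-commitment one.

```python
# Toy illustration (NOT the FLECS algorithm) of a commitment-strategy
# knob over plan-step ordering decisions.

def plan_orderings(conflicts, eagerness):
    """Partition conflicting step pairs into committed vs deferred.

    `conflicts` lists (a, b) step pairs whose relative order matters.
    With high eagerness we commit early; with low eagerness we defer,
    keeping the partial order open as long as possible."""
    committed, deferred = [], []
    for i, pair in enumerate(conflicts):
        # Deterministic stand-in for the strategy decision: commit to
        # the first `eagerness` fraction of conflicts, defer the rest.
        if i < eagerness * len(conflicts):
            committed.append(pair)
        else:
            deferred.append(pair)
    return committed, deferred

conflicts = [("pickup", "move"), ("move", "drop"), ("drop", "recharge")]
assert plan_orderings(conflicts, 1.0) == (conflicts, [])   # fully eager
assert plan_orderings(conflicts, 0.0) == ([], conflicts)   # fully delayed
```

The point of the sketch is only the interface: the same planner loop serves the whole spectrum from eager to delayed commitment by varying one parameter, per problem or even mid-search.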
Commitment Schemes from Supersingular Elliptic Curve Isogeny Graphs
In this work we present two commitment schemes based on hardness assumptions arising from supersingular elliptic curve isogeny graphs, which possess strong security properties. The first is based on the CGL hash function while the second is based on the SIDH framework, both of which require a trusted third party for the setup phase. The proofs of security of these protocols depend on properties of non-backtracking random walks on regular graphs. The optimal efficiency of these protocols depends on the size of a certain constant, defined in the paper, related to the relevant isogeny graphs, for which we give conjectural upper bounds.
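For readers unfamiliar with commitment schemes, the interface the paper instantiates can be sketched with a generic hash-based construction. SHA-256 stands in here for the isogeny-based CGL hash; this is only a sketch of the commit/verify contract (hiding via a random nonce, binding via collision resistance), not the paper's construction.

```python
# Generic commitment-scheme sketch. SHA-256 is a stand-in for the
# isogeny-based hash; this illustrates the interface, not the paper.

import hashlib
import os

def commit(message: bytes):
    """Return (commitment, opening). The random nonce provides hiding;
    collision resistance of the hash provides binding."""
    nonce = os.urandom(32)
    c = hashlib.sha256(nonce + message).digest()
    return c, (nonce, message)

def verify(commitment: bytes, opening) -> bool:
    nonce, message = opening
    return hashlib.sha256(nonce + message).digest() == commitment

c, opening = commit(b"bid: 42")
assert verify(c, opening)                       # honest opening accepted
assert not verify(c, (opening[0], b"bid: 43"))  # altered message rejected
```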
Minimal perfect hash tables environments
Cichelli gave a simple machine independent minimal perfect hashing function for small static sets. The hash function value for a word is computed as the sum of the length of the word and the values associated with the first and last letters of the word. Cichelli's algorithm (word-oriented) to find the letter value assignments considered the words one at a time. Cook and Oldehoeft developed a letter-oriented algorithm that considered groups of words in finding the letter value assignments. It outperformed Cichelli's algorithm.
In this paper we present some characteristics of sets of words that cause problems for the letter-oriented algorithm. The major result is a new algorithm that finds separate letter value assignments for the first and last letter sets. It outperformed the letter-oriented algorithm for all data sets. Because it is faster and less restrictive, the new algorithm facilitates partitioning large data sets into smaller sets.
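Cichelli's hash form described above, h(w) = length(w) + g(first letter) + g(last letter), can be demonstrated with a naive brute-force search over letter values for a tiny static set. The algorithms discussed in the abstract order this search far more cleverly; this sketch only shows what a minimal perfect assignment looks like.

```python
# Sketch of Cichelli's hash: h(w) = len(w) + g(w[0]) + g(w[-1]).
# Naive exhaustive search for letter values over a tiny keyword set.

from itertools import product

def find_assignment(words, max_val=5):
    letters = sorted({w[0] for w in words} | {w[-1] for w in words})
    n = len(words)
    for values in product(range(max_val + 1), repeat=len(letters)):
        g = dict(zip(letters, values))
        hashes = {len(w) + g[w[0]] + g[w[-1]] for w in words}
        # Perfect: all n hashes distinct. Minimal: they form a
        # contiguous range of exactly n addresses.
        if len(hashes) == n and max(hashes) - min(hashes) == n - 1:
            return g
    return None

words = ["if", "for", "while", "do"]
g = find_assignment(words)
assert g is not None
addresses = sorted(len(w) + g[w[0]] + g[w[-1]] for w in words)
assert addresses == list(range(min(addresses), min(addresses) + len(words)))
```

The exponential blow-up of this search on larger sets is exactly why the word-oriented, letter-oriented, and split first/last-letter algorithms compared in the paper matter.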
A Survey of Binary Covering Arrays
Binary covering arrays of strength t are 0-1 matrices having the property that for each t columns and each of the possible 2^t sequences of t 0's and 1's, there exists a row having that sequence in that set of t columns. Covering arrays are an important tool in certain applications, for example, in software testing. In these applications, the number of columns of the matrix is dictated by the application, and it is desirable to have a covering array with a small number of rows. Here we survey some of what is known about the existence of binary covering arrays and methods of producing them, including both explicit constructions and search techniques.
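The defining property above translates directly into a checker. The following sketch verifies that every choice of t columns exhibits all 2^t bit patterns, using a small strength-2 example (names and example data are illustrative, not from the survey).

```python
# Checker for the covering-array definition: every t columns must
# exhibit all 2^t possible bit patterns across the rows.

from itertools import combinations

def is_covering_array(rows, t):
    n = len(rows[0])
    for cols in combinations(range(n), t):
        seen = {tuple(row[c] for c in cols) for row in rows}
        if len(seen) < 2 ** t:   # some t-bit pattern never appears
            return False
    return True

# A strength-2 covering array on 3 columns with only 4 rows: every
# pair of columns shows 00, 01, 10, and 11.
rows = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]
assert is_covering_array(rows, 2)
assert not is_covering_array(rows, 3)  # 2^3 = 8 patterns need >= 8 rows
```

Note the economy the survey is after: 2 columns taken exhaustively would need all 4 rows anyway, but 3 columns exhaustively would need 8 rows, while this array covers all pairs with only 4.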