
    09491 Abstracts Collection -- Graph Search Engineering

    From 29 November to 4 December 2009, the Dagstuhl Seminar 09491 "Graph Search Engineering" was held in Schloss Dagstuhl -- Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    Best-first heuristic search for multicore machines

    To harness modern multicore processors, it is imperative to develop parallel versions of fundamental algorithms. In this paper, we compare different approaches to parallel best-first search in a shared-memory setting. We present a new method, PBNF, that uses abstraction to partition the state space and to detect duplicate states without requiring frequent locking. PBNF allows speculative expansions when necessary to keep threads busy. We identify and fix potential livelock conditions in our approach, proving its correctness using temporal logic. Our approach is general, allowing it to extend easily to suboptimal and anytime heuristic search. In an empirical comparison on STRIPS planning, grid pathfinding, and sliding tile puzzle problems using 8-core machines, we show that A*, weighted A*, and Anytime weighted A* implemented using PBNF yield faster search than improved versions of previous parallel search proposals.
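
    The core idea of the abstract, partitioning the state space with an abstraction so that duplicate detection stays local to one partition, can be sketched in a few lines. The following is a toy, single-threaded illustration of that partitioning idea only, not the paper's multithreaded algorithm; all names and the interface (successors, h, abstraction) are placeholders chosen for the sketch.

```python
import heapq
from collections import defaultdict
from itertools import count

def pbnf_style_search(start, goal_test, successors, h, abstraction):
    """Toy sketch of abstraction-partitioned best-first search: an
    abstraction function groups states into 'nblocks', each nblock keeps
    its own open heap and closed set, and duplicate detection only ever
    touches one nblock at a time (which is what lets the real,
    multithreaded PBNF avoid frequent locking)."""
    open_by_block = defaultdict(list)    # nblock -> heap of (f, g, tie, state)
    closed_by_block = defaultdict(dict)  # nblock -> {state: best g found}
    tie = count()                        # tie-breaker so states are never compared

    heapq.heappush(open_by_block[abstraction(start)], (h(start), 0, next(tie), start))

    while any(open_by_block.values()):
        # A real PBNF worker thread acquires a free nblock; here we simply
        # expand from the nblock whose best node has the lowest f value.
        block = min((b for b, heap in open_by_block.items() if heap),
                    key=lambda b: open_by_block[b][0][0])
        f, g, _, state = heapq.heappop(open_by_block[block])
        closed = closed_by_block[block]
        if state in closed and closed[state] <= g:
            continue                     # duplicate detected locally, no global lock
        closed[state] = g
        if goal_test(state):
            return g                     # cost of the solution found
        for nxt, cost in successors(state):
            ng = g + cost
            nb = abstraction(nxt)        # successors may fall into a neighbouring nblock
            best = closed_by_block[nb].get(nxt)
            if best is None or ng < best:
                heapq.heappush(open_by_block[nb], (ng + h(nxt), ng, next(tie), nxt))
    return None
```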

    On the Scalability of the GPUexplore Explicit-State Model Checker

    The use of graphics processors (GPUs) is a promising approach to speed up model checking to such an extent that it becomes feasible to instantly verify software systems during development. GPUexplore is an explicit-state model checker that runs all its computations on the GPU. Over the years it has been extended with various techniques, and the possibilities to further improve its performance have been continuously investigated. In this paper, we discuss how the hash table of the tool works, which is at the heart of its functionality. We propose an alteration of the hash table that in isolated experiments seems promising, and analyse its effect when integrated in the tool. Furthermore, we investigate the current scalability of GPUexplore by experimenting with input models of varying sizes and by running the tool on one of the latest NVIDIA GPUs. Comment: In Proceedings GaM 2017, arXiv:1712.0834
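
    The hash table the abstract refers to exists to answer one question per generated state: has this state been seen before, and if not, record it. The sketch below shows that find-or-put semantics with a plain open-addressing table on the CPU; it is only an illustration of the operation's contract, not GPUexplore's actual GPU data structure (which uses bucketed storage and atomic compare-and-swap inserts).

```python
class FixedHashTable:
    """CPU-side sketch of the find-or-put operation an explicit-state model
    checker needs: insert a state if it is new and report whether it was
    already present. Capacity is fixed, as is typical for GPU tables."""

    def __init__(self, capacity):
        self.slots = [None] * capacity   # None marks an empty slot
        self.capacity = capacity

    def find_or_put(self, state):
        """Return True if `state` was newly inserted, False if it was
        already present. Raises RuntimeError when the table is full."""
        start = hash(state) % self.capacity
        for probe in range(self.capacity):
            idx = (start + probe) % self.capacity   # linear probing
            if self.slots[idx] is None:
                # On a GPU this claim would be an atomic compare-and-swap,
                # so that exactly one thread wins the slot.
                self.slots[idx] = state
                return True
            if self.slots[idx] == state:
                return False                        # duplicate state
        raise RuntimeError("hash table is full")
```

    A search loop would call find_or_put on every generated successor and only enqueue those for which it returns True, so the quality of this single operation largely determines the checker's throughput.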

    Hashing-based delayed duplicate detection as an approach to improve the scalability of optimal Bayesian network structure learning with external memory frontier breadth-first branch and bound search

    Bayesian networks are graphical models used to represent the joint probability distribution for all variables in a data set. A Bayesian network can be constructed by an expert, but learning the network from the data is another option. This thesis is centered around exact score-based Bayesian network structure learning. The idea is to optimize a scoring function measuring the fit of the network to the data, and the solution represents an optimal network structure. The thesis adds to the earlier literature by extending an external memory frontier breadth-first branch and bound algorithm by Malone et al. [MYHB11], which searches the space of candidate networks using dynamic programming in a layered fashion. To detect duplicates during candidate solution generation, the algorithm makes efficient use of both semiconductor memory and magnetic disk. In-memory duplicate detection is performed using a hash table, while a delayed duplicate detection strategy is employed when resorting to the disk. Delayed duplicate detection is designed to cope with long disk latencies, because hash tables are currently still infeasible on disk. Delayed duplicate detection allows the algorithm to scale beyond search spaces of candidate solutions fitting only in the comparatively expensive and limited semiconductor memory at its disposal. The sorting-based delayed duplicate detection strategy employed by the original algorithm has been found to be inferior to a hashing-based delayed duplicate detection strategy in other application domains [Kor08]. This thesis presents an approach to use hashing-based delayed duplicate detection in Bayesian network structure learning and compares it to the sorting-based method. The problem faced in hashing candidate solutions to disk is dividing the candidate solutions in a certain stage of the search in an efficient and scalable manner into files on the disk. The division presented in this thesis distributes the candidate solutions into files unevenly, but takes into account the maximum number of candidate solutions that can be held in semiconductor memory. The method works in theory as long as the operating system has free space and inodes to allocate for new files. Although the hashing-based method should in principle be faster than the sorting-based one, the benchmarks presented in this thesis for the two methods show mixed results. However, the analysis also highlights that by making, for example, the distribution of hashed candidate solutions across files more efficient, the hashing-based method could be improved further.
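
    The contrast drawn in the abstract between sorting-based and hashing-based delayed duplicate detection comes down to how duplicates on disk are brought together: hashing routes every copy of a record to the same bucket file, so each file can then be deduplicated independently in memory and no external sort is needed. The sketch below illustrates that two-phase scheme in general terms; the record format, file layout, and bucket count are illustrative assumptions, not the thesis's actual implementation.

```python
import os
import tempfile

def hashing_based_ddd(candidates, num_files):
    """Sketch of hashing-based delayed duplicate detection:
    phase 1 streams candidate records (here: strings) into one of
    `num_files` bucket files chosen by a hash of the record;
    phase 2 deduplicates each bucket independently with an in-memory set.
    Duplicates always hash to the same bucket, so no cross-file
    comparison (and no external sort) is required."""
    tmpdir = tempfile.mkdtemp()
    paths = [os.path.join(tmpdir, f"bucket_{i}.txt") for i in range(num_files)]
    files = [open(p, "w") for p in paths]

    # Phase 1: distribute candidates to bucket files by hash value.
    for cand in candidates:
        files[hash(cand) % num_files].write(cand + "\n")
    for f in files:
        f.close()

    # Phase 2: deduplicate each bucket with a hash set that only needs to
    # hold one bucket's worth of records at a time.
    unique = []
    for path in paths:
        seen = set()
        with open(path) as f:
            for line in f:
                rec = line.rstrip("\n")
                if rec not in seen:
                    seen.add(rec)
                    unique.append(rec)
        os.remove(path)
    os.rmdir(tmpdir)
    return unique
```

    The memory requirement is governed by the largest bucket, which is why the uneven division of candidate solutions across files discussed in the abstract matters for scalability.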

    Survey on Directed Model Checking

    This article surveys and gives historical accounts of the algorithmic essentials of directed model checking, a promising bug-hunting technique to mitigate the state explosion problem. In the enumeration process, successor selection is prioritized. We discuss existing guidance and methods to automatically generate it by exploiting system abstractions. We extend the algorithms to feature partial-order reduction and show how liveness problems can be handled by lifting the search space. For deterministic, finite domains we instantiate the algorithms to directed symbolic, external, and distributed search. For real-time domains we discuss the adaptation of the algorithms to timed automata, and for probabilistic domains we show the application to counterexample generation. Last but not least, we explain how directed model checking helps to accelerate finding solutions to scheduling problems.
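
    The prioritized successor selection the survey describes reduces, at its core, to exploring the state space best-first under a heuristic that estimates the distance to an error state, so that counterexamples surface early. The following minimal sketch shows only that core loop under assumed interfaces (transitions, is_error, h are placeholders); it omits the partial-order reduction, symbolic, external, and distributed variants the survey covers.

```python
import heapq
from itertools import count

def directed_search(init, transitions, is_error, h):
    """Minimal sketch of directed (guided) state-space search: expand states
    in order of h(s), an estimate of the distance to an error state, and
    return the path to the first error state reached."""
    tie = count()                         # tie-breaker so states are never compared
    frontier = [(h(init), next(tie), init, [init])]
    visited = {init}
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        if is_error(state):
            return path                   # counterexample trace
        for nxt in transitions(state):
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (h(nxt), next(tie), nxt, path + [nxt]))
    return None                           # state space exhausted, property holds
```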

    Planning under time pressure

    Heuristic search is a technique used pervasively in artificial intelligence and automated planning. Often an agent is given a task that it would like to solve as quickly as possible. It must allocate its time between planning the actions to achieve the task and actually executing them. We call this problem planning under time pressure. Most popular heuristic search algorithms are ill-suited for this setting, as they either search a lot to find short plans or search a little and find long plans. The thesis of this dissertation is: when under time pressure, an automated agent should explicitly attempt to minimize the sum of planning and execution times, not just one or just the other. This dissertation makes four contributions. First, we present new algorithms that use modern multi-core CPUs to decrease planning time without increasing execution. Second, we introduce a new model for predicting the performance of iterative-deepening search. The model is as accurate as previous offline techniques while using less training data, but can also be used online to reduce the overhead of iterative-deepening search, resulting in faster planning. Third, we show offline planning algorithms that directly attempt to minimize the sum of planning and execution times. Fourth, we consider algorithms that plan online in parallel with execution. Both offline and online algorithms account for a user-specified preference between search and execution, and can greatly outperform the standard utility-oblivious techniques. By addressing the problem of planning under time pressure, these contributions demonstrate that heuristic search is no longer restricted to optimizing solution cost, obviating the need to choose between slow search times and expensive solutions.
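
    The objective the dissertation argues for, minimizing planning time plus execution time rather than execution time alone, can be illustrated with a simple stopping rule for an anytime planner: keep refining the plan only while the expected saving in execution time pays for the planning time it costs. The sketch below is one possible reading of that idea under assumed interfaces (anytime_planner and improvement_estimate are hypothetical), not the dissertation's actual algorithms.

```python
import time

def plan_under_time_pressure(anytime_planner, improvement_estimate):
    """Utility-aware stopping rule: `anytime_planner` yields
    (plan, estimated_execution_time) pairs of increasing quality, and
    `improvement_estimate(plan)` predicts how much execution time one more
    planning round would save. Planning stops as soon as another round is
    expected to cost more than it saves, i.e. we minimize the sum of
    planning and execution times rather than execution time alone."""
    best_plan, best_exec = None, float("inf")
    planner = iter(anytime_planner)
    while True:
        t0 = time.monotonic()
        try:
            plan, exec_estimate = next(planner)   # one more round of planning
        except StopIteration:
            break                                 # planner has converged
        round_cost = time.monotonic() - t0
        if exec_estimate < best_exec:
            best_plan, best_exec = plan, exec_estimate
        # Stop when the predicted saving no longer covers the planning cost.
        if improvement_estimate(best_plan) <= round_cost:
            break
    return best_plan
```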