    On the Power of Manifold Samples in Exploring Configuration Spaces and the Dimensionality of Narrow Passages

    We extend our study of Motion Planning via Manifold Samples (MMS), a general algorithmic framework that combines geometric methods for the exact and complete analysis of low-dimensional configuration spaces with sampling-based approaches that are appropriate for higher dimensions. The framework explores the configuration space by taking samples that are entire low-dimensional manifolds of the configuration space, capturing its connectivity much better than isolated point samples. The contributions of this paper are as follows: (i) We present a recursive application of MMS in a six-dimensional configuration space, enabling the coordination of two polygonal robots translating and rotating amidst polygonal obstacles. In the reported experiments, MMS clearly outperforms PRM on the more demanding test cases, with over a 20-fold speedup in a coordination-tight setting. (ii) We give a probabilistic completeness proof for the most prevalent case, namely MMS with samples that are affine subspaces. (iii) A closer examination of the test cases reveals that MMS has a significant advantage over standard sampling-based algorithms in scenarios containing high-dimensional narrow passages. This motivates a novel characterization of narrow passages that attempts to capture their dimensionality, an attribute largely unaddressed by previous definitions.
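    To make the contrast with point sampling concrete, here is a minimal toy sketch (not the authors' implementation) of what an affine-subspace sample looks like: instead of drawing a single configuration, it fixes all but k coordinates of a d-dimensional configuration space and keeps the entire k-dimensional slice as the sample. The function names and the unit-cube parameterization are assumptions for illustration only.

```python
import random

# Toy sketch of the core MMS idea (not the authors' implementation):
# sample an entire low-dimensional affine subspace of the configuration
# space instead of an isolated point. Here a "manifold sample" fixes all
# but k coordinates of a d-dimensional unit-cube configuration space.

def point_sample(d):
    """A classical PRM-style sample: a single configuration."""
    return tuple(random.uniform(0.0, 1.0) for _ in range(d))

def manifold_sample(d, k):
    """An affine-subspace sample: fix d - k coordinates, leave k free.

    Returns (fixed, free): `fixed` maps coordinate index -> value,
    and `free` lists the indices spanning the k-dimensional slice.
    """
    free = sorted(random.sample(range(d), k))
    fixed = {i: random.uniform(0.0, 1.0) for i in range(d) if i not in free}
    return fixed, free

def on_slice(fixed, q, tol=1e-9):
    """q lies on the slice iff it matches every fixed coordinate."""
    return all(abs(q[i] - v) <= tol for i, v in fixed.items())

# A single slice intersects any region that extends along its free
# coordinates, which is why manifold samples can capture connectivity
# far better than isolated points in narrow-passage scenarios.
fixed, free = manifold_sample(d=6, k=2)
print(free, on_slice(fixed, point_sample(6)))
```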

    Fast Approximate Max-n Monte Carlo Tree Search for Ms Pac-Man

    We present an application of Monte Carlo tree search (MCTS) to the game of Ms Pac-Man. Contrary to most applications of MCTS to date, Ms Pac-Man requires almost real-time decision making and does not have a natural end state. We approached the problem by performing Monte Carlo tree searches on a five-player max-n tree representation of the game with limited tree search depth. We performed a number of experiments using both the MCTS game agents (for Pac-Man and the ghosts) and agents from previous work (for the ghosts). Our approach achieves excellent scores, outperforming previous non-MCTS approaches to the game by up to two orders of magnitude.
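    As a rough illustration of the limited-depth search described above, the following sketch implements a generic UCT-style MCTS loop with a hard depth cap and a heuristic evaluation in place of terminal rewards (since the game has no natural end state). The Game interface (legal_moves, apply, evaluate) and the parameter values are hypothetical stand-ins, not the paper's max-n implementation.

```python
import math
import random

# Generic UCT-style MCTS sketch with a hard depth cap; the Game
# interface (legal_moves / apply / evaluate) is a hypothetical stand-in.

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def uct_child(node, c=1.4):
    def score(child):
        if child.visits == 0:
            return float("inf")  # explore unvisited children first
        exploit = child.value / child.visits
        explore = c * math.sqrt(math.log(node.visits) / child.visits)
        return exploit + explore
    return max(node.children, key=score)

def mcts(root_state, game, iterations=1000, max_depth=20):
    root = Node(root_state)
    for _ in range(iterations):
        # Selection: descend while expanded and under the depth cap.
        node, depth = root, 0
        while node.children and depth < max_depth:
            node, depth = uct_child(node), depth + 1
        # Expansion: add one child per legal move.
        if depth < max_depth and not node.children:
            for move in game.legal_moves(node.state):
                node.children.append(Node(game.apply(node.state, move), node))
            if node.children:
                node = random.choice(node.children)
        # Evaluation: a heuristic score stands in for a rollout to a
        # terminal state, since the game has no natural end state.
        reward = game.evaluate(node.state)
        # Backpropagation.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda child: child.visits)
```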

    LERC: Coordinated Cache Management for Data-Parallel Systems

    Memory caches are aggressively used in today's data-parallel frameworks such as Spark, Tez, and Storm. By caching input and intermediate data in memory, compute tasks can be sped up by orders of magnitude. To maximize the chance of in-memory data access, existing cache algorithms, be they recency- or frequency-based, settle on cache hit ratio as the optimization objective. However, contrary to conventional belief, we show in this paper that simply pursuing a higher cache hit ratio for individual data blocks does not necessarily translate into faster task completion in data-parallel environments. A data-parallel task typically depends on multiple input data blocks; unless all of these blocks are cached in memory, no speedup results. To capture this all-or-nothing property, we propose a more relevant metric, called the effective cache hit ratio: a cache hit on a data block is effective if it speeds up a compute task. To optimize the effective cache hit ratio, we propose the Least Effective Reference Count (LERC) policy, which keeps the blocks a compute task depends on in memory as a whole. We have implemented LERC as a memory manager in Spark and evaluated its performance in an Amazon EC2 deployment. Evaluation results demonstrate that LERC speeds up data-parallel jobs by up to 37% compared with the widely employed least-recently-used (LRU) policy.
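    The following toy sketch illustrates the effective-reference-count idea (it is not the authors' Spark memory manager): a cache hit helps a task only if all of the task's input blocks are resident, so each block's effective count is the number of pending tasks whose full dependency set it belongs to, and eviction targets the block with the least such count. The data structures and the example workload are assumptions for illustration.

```python
# Toy model of the effective-reference-count idea (not the authors'
# Spark memory manager). A cache hit helps a task only if *all* of the
# task's input blocks are resident, so a block's effective count is the
# number of pending tasks whose full dependency set is cached.

def effective_counts(cached, pending_tasks):
    """cached: set of block ids; pending_tasks: iterable of dependency sets."""
    counts = {block: 0 for block in cached}
    for deps in pending_tasks:
        if deps <= cached:          # task is fully served from memory
            for block in deps:
                counts[block] += 1  # each dependent block is "effective"
    return counts

def evict_one(cached, pending_tasks):
    """Pick the block with the least effective reference count for eviction."""
    counts = effective_counts(cached, pending_tasks)
    return min(counts, key=counts.get)

cached = {"a", "b", "c", "d"}
tasks = [{"a", "b"}, {"a", "c"}, {"c", "e"}]  # the last task misses "e"
print(evict_one(cached, tasks))  # "d": it serves no fully cached task
```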

    New hardware support for transactional memory and parallel debugging in multicore processors

    This thesis contributes to the area of hardware support for parallel programming by introducing new hardware elements into multicore processors, with the aim of improving performance and enabling new tools, abstractions, and applications related to parallel programming, such as transactional memory and data race detectors. Specifically, we configure a hardware transactional memory system with signatures as part of the hardware support, and we develop a new hardware filter for reducing the signature size. We also develop the first hardware asymmetric data race detector (which can also tolerate such races), likewise based on hardware signatures. Finally, we propose a new hardware signature module that addresses the lack of flexibility we found in the signatures used by the previous tools.
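    Hardware signatures of the kind the thesis builds on are commonly modeled as Bloom-filter-like bit vectors that summarize a transaction's read and write sets. The sketch below is a software model of that general idea, showing insertion, membership testing with possible false positives, and conflict detection by signature intersection; it is illustrative only and does not reflect the specific filter or signature module proposed in the thesis.

```python
# Software model of a generic hardware signature: a Bloom-filter-style
# bit vector summarizing the addresses a transaction has read or written.
# Real designs use parallel hash circuits; this sketch only illustrates
# insert/test behavior, false positives, and conflict detection by
# intersection. It is not the filter proposed in the thesis.

class Signature:
    def __init__(self, bits=1024, hashes=4):
        self.bits, self.hashes = bits, hashes
        self.vector = 0  # bit vector kept as a Python int

    def _positions(self, addr):
        # Derive `hashes` bit positions from the address (a stand-in for
        # the fixed hash functions a hardware signature would implement).
        return [hash((addr, i)) % self.bits for i in range(self.hashes)]

    def insert(self, addr):
        for p in self._positions(addr):
            self.vector |= 1 << p

    def might_contain(self, addr):
        # May return True for an address never inserted (false positive);
        # never returns False for one that was (no false negatives).
        return all(self.vector & (1 << p) for p in self._positions(addr))

    def conflicts_with(self, other):
        # Two transactions may conflict when their signatures intersect.
        return (self.vector & other.vector) != 0

write_set, read_set = Signature(), Signature()
write_set.insert(0xDEADBEEF)
read_set.insert(0xDEADBEEF)
print(write_set.conflicts_with(read_set))  # True: conflict detected
```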

    Affective Pacman: A Frustrating Game for Brain-Computer Interface Experiments


    AI Education Matters: Teaching Hidden Markov Models

    In this column, we share resources for learning about and teaching Hidden Markov Models (HMMs). HMMs find many important applications in temporal pattern recognition tasks such as speech, handwriting, and gesture recognition and robot localization. In such domains, we may have a finite state machine model with known state transition probabilities, state output probabilities, and state outputs, but lack knowledge of the states generating such outputs. HMMs are useful in framing problems where external sequential evidence is used to derive underlying state information (e.g., intended words and gestures). [excerpt]
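    The inference task framed above, recovering hidden states from observed outputs given known transition and emission probabilities, is exactly what the classic Viterbi algorithm solves. Below is a small self-contained example; the toy weather/activity model is a standard illustration and is not taken from the column's resources.

```python
# Viterbi algorithm: recover the most likely hidden state sequence from
# known transition and emission probabilities. The toy weather/activity
# model is illustrative, not from the column's resources.

def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = probability of the best path ending in state s at time t.
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev, p = max(((r, V[t - 1][r] * trans_p[r][s]) for r in states),
                          key=lambda x: x[1])
            V[t][s] = p * emit_p[s][obs[t]]
            back[t][s] = prev
    # Trace the best path backwards from the most probable final state.
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

states = ("Rainy", "Sunny")
start = {"Rainy": 0.6, "Sunny": 0.4}
trans = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
         "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
        "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
print(viterbi(("walk", "shop", "clean"), states, start, trans, emit))
```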