    Accelerating board games through Hardware/Software Codesign

    Board game applications usually offer a great user experience when running on desktop computers. Powerful high-performance processors working without energy restrictions successfully deal with the exploration of large game trees, delivering strong play that satisfies demanding users. However, more and more players now run these games on smartphones and tablets, where the lower computational power and limited power budget yield much weaker play. Recent systems-on-chip include programmable logic tightly coupled with general-purpose processors, enabling the inclusion of custom accelerators that improve both the performance and the energy efficiency of any application. In this paper, we analyze the benefits of partitioning the artificial intelligence of board games between software and hardware. We have chosen three popular and complex board games as case studies: Reversi, Blokus, and Connect6. The analyzed designs include hardware accelerators for board processing, which improve performance and energy efficiency by an order of magnitude, leading to much stronger and battery-aware applications. The results demonstrate that using hardware/software codesign to develop board games sustains or even improves the user experience across platforms while keeping power and energy consumption low.
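
    As an illustration only (the interface, names, and simplified rules below are assumptions, not taken from the paper), the following C++ sketch shows the kind of partitioning the abstract describes: the game-tree search stays in software, while all board processing goes through one narrow interface that a codesigned build could back with an accelerator in the SoC's programmable logic. A deliberately stubbed software implementation stands in for the accelerator so the example runs.

    // Illustrative-only sketch (not the paper's design): software game-tree
    // search with board processing funneled through one interface, the part
    // a hardware/software codesign would map onto programmable logic.
    #include <algorithm>
    #include <bit>
    #include <cstdint>
    #include <iostream>

    struct Board {
        uint64_t own;  // bitboard of the side to move
        uint64_t opp;  // bitboard of the opponent
    };

    // Board-processing interface: a hardware build would implement these
    // calls with reads/writes to a memory-mapped accelerator.
    struct BoardProcessor {
        virtual uint64_t legal_moves(const Board& b) const = 0;
        virtual Board    apply_move(const Board& b, int sq) const = 0;
        virtual int      evaluate(const Board& b) const = 0;
        virtual ~BoardProcessor() = default;
    };

    // Minimal software stand-in. Real Reversi legality checks and disc
    // flipping (the bit-parallel work that suits hardware) are omitted:
    // any empty square counts as "legal" and no discs are flipped.
    struct SoftwareBoardProcessor : BoardProcessor {
        uint64_t legal_moves(const Board& b) const override {
            return ~(b.own | b.opp);
        }
        Board apply_move(const Board& b, int sq) const override {
            return Board{b.opp, b.own | (1ULL << sq)};  // place disc, swap sides
        }
        int evaluate(const Board& b) const override {
            return std::popcount(b.own) - std::popcount(b.opp);  // disc difference
        }
    };

    // Plain software negamax; only the BoardProcessor implementation changes
    // when the board processing is moved into hardware.
    int negamax(const Board& b, int depth, const BoardProcessor& bp) {
        uint64_t moves = bp.legal_moves(b);
        if (depth == 0 || moves == 0) return bp.evaluate(b);
        int best = -1000000;
        for (int sq = 0; sq < 64; ++sq)
            if (moves & (1ULL << sq))
                best = std::max(best, -negamax(bp.apply_move(b, sq), depth - 1, bp));
        return best;
    }

    int main() {
        Board start{0x0000000810000000ULL, 0x0000001008000000ULL};  // Reversi start
        SoftwareBoardProcessor bp;
        std::cout << "negamax value at depth 2: " << negamax(start, 2, bp) << "\n";
    }

    Keeping the board processing behind a single interface is what would allow the software loops to be swapped for an accelerator without touching the search code itself.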

    Deep df-pn and Its Efficient Implementations

    Depth-first proof-number search (df-pn) is a powerful variant of proof-number search algorithms, widely used for AND/OR tree search and for solving games. However, df-pn suffers from the seesaw effect, which strongly hampers its efficiency in some situations. This paper proposes a new proof-number algorithm called Deep depth-first proof-number search (Deep df-pn) to reduce the seesaw effect in df-pn. The difference between Deep df-pn and df-pn lies in the proof number or disproof number of unsolved nodes: it is 1 in df-pn, while in Deep df-pn it is a function of depth with two parameters. By adjusting the values of these parameters, Deep df-pn changes its behavior from searching broadly to searching deeply. The paper shows that this adjustment convincingly reduces the seesaw effect. To evaluate the performance of Deep df-pn in the domain of Connect6, we implemented a relevance-zone-oriented Deep df-pn that worked quite efficiently. The experimental results indicate that improving efficiency by the same adjustment technique is also possible in other domains.
    15th International Conferences, ACG 2017, Leiden, The Netherlands, July 3–5, 2017, Revised Selected Papers
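
    A minimal sketch of the initialization difference described above; the concrete depth function and the parameter names D and E used here are assumptions for illustration (the paper gives the actual definition). In df-pn an unsolved node starts with proof number = disproof number = 1, whereas in Deep df-pn the starting value depends on the node's depth and is steered by two parameters, which is what lets the search trade breadth for depth.

    // Illustrative comparison of unsolved-node initialization.
    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    // Plain df-pn: every unsolved (not yet expanded) node starts at 1.
    uint64_t init_dfpn(int /*depth*/) {
        return 1;
    }

    // Deep df-pn: initialization as a function of depth with two parameters.
    // The exponential form D^(depth/E) is one plausible choice, used here
    // only to illustrate "a function of depth with two parameters".
    uint64_t init_deep_dfpn(int depth, double D, double E) {
        return static_cast<uint64_t>(std::pow(D, static_cast<double>(depth) / E));
    }

    int main() {
        for (int depth = 0; depth <= 20; depth += 5) {
            std::printf("depth %2d  df-pn: %llu   Deep df-pn (D=2, E=4): %llu\n",
                        depth,
                        static_cast<unsigned long long>(init_dfpn(depth)),
                        static_cast<unsigned long long>(init_deep_dfpn(depth, 2.0, 4.0)));
        }
    }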