4 research outputs found

    Modelling Human-like Behavior through Reward-based Approach in a First-Person Shooter Game

    We present two examples of how human-like behavior can be implemented in a model of a computer player to improve its characteristics and decision-making patterns in a video game. First, we describe a reinforcement learning model that helps to choose the best weapon depending on the reward values obtained from shooting combat situations. Second, we consider obstacle-avoiding path planning adapted to a tactical visibility measure. We describe an implementation of a path-smoothing model that allows the use of penalties (negative rewards) for walking through "bad" tactical positions. We also study path-finding algorithms such as the improved I-ARA* search algorithm for dynamic graphs, which copies the human discrete decision-making pattern of reconsidering goals, similar to the PageRank algorithm. All of these approaches demonstrate how human behavior can be modeled in applications where the actions of the intelligent agent are readily perceived.
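As a rough illustration of the reward-based weapon selection described above, the following Python sketch keeps a running average reward per (combat situation, weapon) pair and chooses weapons epsilon-greedily; the situation labels, weapon names, and reward scale are illustrative assumptions, not details taken from the paper.

# A minimal sketch of reward-based weapon selection, assuming a tabular
# average-reward model indexed by (combat situation, weapon).
import random
from collections import defaultdict

class WeaponSelector:
    def __init__(self, weapons, epsilon=0.1):
        self.weapons = weapons
        self.epsilon = epsilon              # exploration rate
        self.value = defaultdict(float)     # running reward estimate
        self.count = defaultdict(int)       # number of updates per pair

    def choose(self, situation):
        # Epsilon-greedy: usually pick the weapon with the best estimated
        # reward for this situation, occasionally explore at random.
        if random.random() < self.epsilon:
            return random.choice(self.weapons)
        return max(self.weapons, key=lambda w: self.value[(situation, w)])

    def update(self, situation, weapon, reward):
        # Incremental average of observed combat rewards.
        key = (situation, weapon)
        self.count[key] += 1
        self.value[key] += (reward - self.value[key]) / self.count[key]

# Hypothetical usage: the situation label and reward are placeholders.
selector = WeaponSelector(["pistol", "shotgun", "rifle"])
weapon = selector.choose("close_range")
selector.update("close_range", weapon, reward=1.0)   # e.g. damage dealt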

    A new approach for continual planning

    Devising intelligent robots or agents that interact with humans is a major challenge for artificial intelligence. In such contexts, agents must constantly adapt their decisions to human activities and modify their goals. In this extended abstract, we present a novel continual planning approach, called Moving Goal Planning (MGP), to adapt plans to goal evolutions. This approach draws inspiration from Moving Target Search (MTS) algorithms. In order to limit the number of search iterations and to improve its efficiency, MGP delays as much as possible the start of new searches when the goal changes over time. For this purpose, MGP uses two strategies: Open Check (OC), which checks whether the new goal is still in the current search tree, and Plan Follow (PF), which estimates whether executing the actions of the current plan brings MGP closer to the new goal.
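A minimal sketch of the two delay strategies is given below, assuming the plan is represented as a list of intermediate states and h(s, g) is a heuristic distance; these data structures are illustrative stand-ins, not the paper's actual implementation.

# A minimal sketch of MGP's two delay strategies (OC and PF), under the
# assumptions stated above.
def should_replan(search_tree, plan, current_state, new_goal, h):
    # Open Check (OC): if the new goal is already in the current search tree,
    # the existing search effort can be reused and no new search is started.
    if new_goal in search_tree:
        return False
    # Plan Follow (PF): keep executing the current plan as long as its next
    # step is estimated to bring the agent closer to the new goal.
    if plan and h(plan[0], new_goal) < h(current_state, new_goal):
        return False
    # Otherwise the goal has moved too far: trigger a new search.
    return True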

    Incremental ara*: An incremental anytime search algorithm for moving-target search

    Moving-target search, where a hunter has to catch a moving target, is an important problem for video game developers. In our case, the hunter repeatedly moves towards the target and thus has to solve a sequence of similar search problems.
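The repeated-search structure can be pictured with the sketch below, where a plain breadth-first search stands in for the incremental anytime search (I-ARA*) proposed in the paper; the graph interface (a neighbors function) and the caught-when-colocated rule are simplifying assumptions.

# A minimal sketch of a moving-target search loop: the hunter replans toward
# the target's latest position and moves one step per turn.
from collections import deque

def bfs_next_step(neighbors, start, goal):
    # Return the first move on a shortest path from start to goal, or None.
    parent, frontier = {start: None}, deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            while parent[node] is not None and parent[node] != start:
                node = parent[node]
            return node if node != start else None
        for nxt in neighbors(node):
            if nxt not in parent:
                parent[nxt] = node
                frontier.append(nxt)
    return None

def chase(neighbors, hunter, target_moves):
    for target in target_moves:          # target position at each turn
        if hunter == target:
            return hunter                # target caught
        step = bfs_next_step(neighbors, hunter, target)
        if step is not None:
            hunter = step                # hunter takes one step along the plan
    return hunter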

    Probabilistic motion planning and optimization incorporating chance constraints

    Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2018. Cataloged from PDF version of thesis. Includes bibliographical references (pages 201-208). For high-dimensional robots, motion planning is still a challenging problem, especially for manipulators mounted on underwater vehicles or human support robots, where uncertainties and risks of plan failure can have a severe impact. However, existing risk-aware planners mostly focus on low-dimensional planning tasks, while planners that can account for uncertainties and react fast in high degree-of-freedom (DOF) robot planning tasks are lacking. In this thesis, a risk-aware motion planning and execution system called Probabilistic Chekov (p-Chekov) is introduced, which includes a deterministic stage and a risk-aware stage. A systematic set of experiments on existing motion planners as well as on p-Chekov is also presented. The deterministic stage of p-Chekov leverages recent advances in obstacle-aware trajectory optimization to improve the original tube-based roadmap Chekov planner. Through experiments in 4 common application scenarios with 5000 test cases each, we show that using sampling-based planners alone on high-DOF robots cannot achieve a high enough reaction speed, whereas the popular trajectory optimizer TrajOpt with naive straight-line seed trajectories has a very high collision rate despite its high planning speed. To the best of our knowledge, this is the first work to present such a systematic and comprehensive evaluation of state-of-the-art motion planners, based on a large number of experiments. We then combine different stand-alone planners with trajectory optimization. The results show that the deterministic planning part of p-Chekov, which combines a roadmap approach that caches the all-pairs shortest-path solutions with an online obstacle-aware trajectory optimizer, provides superior performance over combinations of other standard sampling-based planners. Simulation results show that, in typical real-life applications, this "roadmap + TrajOpt" approach takes about 1 s to plan and the failure rate of its solutions is under 1%. The risk-aware stage of p-Chekov accounts for chance constraints through state probability distribution and collision probability estimation. Based on the deterministic Chekov planner, p-Chekov incorporates a linear-quadratic Gaussian motion planning (LQG-MP) approach into robot state probability distribution estimation, applies quadrature-sampling theories to collision risk estimation, and adapts risk allocation approaches for chance constraint satisfaction. It overcomes existing risk-aware planners' limitations in real-time motion planning tasks with high-DOF robots in 3-dimensional non-convex environments. The experimental results in this thesis show that this new risk-aware motion planning and execution system can effectively reduce collision risk and satisfy chance constraints in typical real-world planning scenarios for high-DOF robots.
    This thesis makes the following three main contributions: (1) a systematic evaluation of several state-of-the-art motion planners in realistic planning scenarios, including popular sampling-based motion planners and trajectory-optimization motion planners; (2) the establishment of a "roadmap + TrajOpt" deterministic motion planning system that shows superior performance in many practical planning tasks in terms of solution feasibility, optimality, and reaction time; and (3) the development of a risk-aware motion planning and execution system that can handle high-DOF robotic planning tasks in 3-dimensional non-convex environments. By Siyu Dai, S.M.
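As a rough sketch of the kind of chance-constraint check performed in a risk-aware stage like p-Chekov's, the snippet below allocates the joint risk bound uniformly across waypoints and compares each waypoint's estimated collision probability against its share; the uniform allocation, the belief representation, and the collision_probability estimator are illustrative assumptions rather than the thesis's LQG-MP and quadrature-sampling machinery.

# A minimal sketch of a per-waypoint chance-constraint check with uniform
# risk allocation, under the assumptions stated above.
from typing import Callable, Sequence

def satisfies_chance_constraint(
    waypoint_beliefs: Sequence,                        # per-waypoint state beliefs
    collision_probability: Callable[[object], float],  # risk estimator per waypoint
    joint_risk_bound: float = 0.01,                    # e.g. at most 1% plan failure
) -> bool:
    # Uniform risk allocation: each waypoint may spend an equal slice of the
    # total allowed collision probability (Boole's inequality bounds the sum).
    per_waypoint_bound = joint_risk_bound / max(len(waypoint_beliefs), 1)
    return all(
        collision_probability(belief) <= per_waypoint_bound
        for belief in waypoint_beliefs
    )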