
    Generations of Game Analytics, Achievements and High Scores

    This paper poses the question: how has game recording evolved over the generations, and how will it affect future generations of players and game developers who have access to the digital past? High scores have a history behind them, and as generations of games have moved forward they have begun to record much more than just scores. The present player generation can record complete replays of entire gameplay performances and even play against other players’ recorded ghosts. As future generations of gamers and developers take over, they will have unprecedented access to the digital history archive as it becomes easier and easier to record and store the past. Deciding what to do with that past will be the next generation’s task as games move into the future.
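
    The replay-and-ghost mechanic mentioned above can be illustrated with a minimal sketch: a replay is stored as the inputs pressed on each fixed-step frame, and a ghost simply looks those inputs up during later playthroughs. The class and field names below are hypothetical, not taken from any particular game.

        from dataclasses import dataclass, field
        from typing import List, Tuple

        @dataclass
        class Replay:
            """Hypothetical replay format: the inputs pressed on each fixed-step frame."""
            frames: List[Tuple[int, frozenset]] = field(default_factory=list)

            def record(self, frame: int, inputs: frozenset) -> None:
                self.frames.append((frame, inputs))

        class Ghost:
            """Plays a stored performance back alongside the live player."""
            def __init__(self, replay: Replay):
                self._frames = dict(replay.frames)

            def inputs_at(self, frame: int) -> frozenset:
                # A missing frame means no input was pressed on that frame.
                return self._frames.get(frame, frozenset())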

    Feature Representation for Online Signature Verification

    Biometric systems have been used in a wide range of applications and have improved the authentication of people. Signature verification is one of the most common biometric methods, with techniques that employ various characteristics of a signature. Recently, deep learning has achieved great success in many fields, such as image, sound, and text processing. In this paper, a deep learning method is used for feature extraction and feature selection. Comment: 10 pages, 10 figures. Submitted to IEEE Transactions on Information Forensics and Security.
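
    As a rough illustration of using a deep network as a feature extractor for online signatures (this is not the paper's architecture; the layer sizes and the cosine-similarity comparison are assumptions), a small 1-D CNN can map a pen trajectory of (x, y, pressure) samples to a fixed-length embedding that is then compared against a reference signature:

        # Minimal sketch, assuming PyTorch is available; all layer sizes are illustrative.
        import torch
        import torch.nn as nn

        class SignatureEncoder(nn.Module):
            """Maps a (batch, 3, time) pen trajectory to a fixed-length feature vector."""
            def __init__(self, in_channels: int = 3, feat_dim: int = 64):
                super().__init__()
                self.conv = nn.Sequential(
                    nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
                    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),   # collapse the time axis to one value per channel
                )
                self.fc = nn.Linear(64, feat_dim)

            def forward(self, traj: torch.Tensor) -> torch.Tensor:
                return self.fc(self.conv(traj).squeeze(-1))

        def same_writer(encoder: SignatureEncoder, a: torch.Tensor, b: torch.Tensor,
                        threshold: float = 0.9) -> bool:
            """Accept the claimed identity if the embeddings of two single signatures
            (each shaped (1, 3, time)) are similar enough; the threshold is illustrative."""
            with torch.no_grad():
                fa, fb = encoder(a), encoder(b)
            return torch.cosine_similarity(fa, fb).item() > threshold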

    Evolving Game Skill-Depth using General Video Game AI agents

    Most games have, or can be generalised to have, a number of parameters that may be varied in order to provide instances of games that lead to very different player experiences. The space of possible parameter settings can be seen as a search space, and we can therefore use a Random Mutation Hill Climbing algorithm or other search methods to find the parameter settings that induce the best games. One of the hardest parts of this approach is defining a suitable fitness function. In this paper, we explore the possibility of using one of a growing set of General Video Game AI agents to perform automatic play-testing. This enables a very general approach to game evaluation based on estimating the skill-depth of a game. Agent-based play-testing is computationally expensive, so we compare two simple but efficient optimisation algorithms: the Random Mutation Hill-Climber and the Multi-Armed Bandit Random Mutation Hill-Climber. For the test game we use a space-battle game in order to provide a suitable balance between simulation speed and potential skill-depth. Results show that both algorithms are able to rapidly evolve game versions with significant skill-depth, but that choosing a suitable resampling number is essential in order to combat the effects of noise.
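
    A minimal sketch of the Random Mutation Hill-Climber loop over a parameter vector with a noisy fitness follows; the fitness function here is a stand-in for agent-based play-testing, and the parameter range, step size, and resampling count are illustrative assumptions rather than the paper's settings.

        # Minimal sketch, not the paper's code: a Random Mutation Hill-Climber over a
        # vector of game parameters in [0, 1], with a noisy fitness averaged over
        # several resamples. noisy_fitness() stands in for agent-based play-testing.
        import random

        def noisy_fitness(params):
            """Stand-in for play-testing with a GVGAI agent: a noisy skill-depth estimate."""
            return -sum((p - 0.5) ** 2 for p in params) + random.gauss(0.0, 0.1)

        def evaluate(params, resamples):
            return sum(noisy_fitness(params) for _ in range(resamples)) / resamples

        def rmhc(dim=10, iters=500, resamples=5, step=0.1):
            best = [random.random() for _ in range(dim)]
            best_fit = evaluate(best, resamples)
            for _ in range(iters):
                cand = list(best)
                i = random.randrange(dim)                        # mutate one parameter
                cand[i] = min(1.0, max(0.0, cand[i] + random.uniform(-step, step)))
                fit = evaluate(cand, resamples)
                if fit >= best_fit:                              # accept ties to keep exploring
                    best, best_fit = cand, fit
            return best, best_fit

        if __name__ == "__main__":
            params, fit = rmhc()
            print(f"best fitness estimate: {fit:.3f}")

    Raising the resampling count reduces the chance that a lucky noisy evaluation is mistaken for a genuinely better game, at the cost of more play-tests per candidate.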

    On Efficient Reinforcement Learning for Full-length Game of StarCraft II

    StarCraft II (SC2) poses a grand challenge for reinforcement learning (RL), whose main difficulties include a huge state space, a varying action space, and a long time horizon. In this work, we investigate a set of RL techniques for the full-length game of StarCraft II. We investigate a hierarchical RL approach involving extracted macro-actions and a hierarchical architecture of neural networks. We investigate a curriculum transfer training procedure and train the agent on a single machine with 4 GPUs and 48 CPU threads. On a 64x64 map and using restrictive units, we achieve a win rate of 99% against the level-1 built-in AI. Through the curriculum transfer learning algorithm and a mixture of combat models, we achieve a 93% win rate against the most difficult non-cheating built-in AI (level-7). In this extended version of the paper, we improve our architecture to train the agent against the cheating-level AIs and achieve win rates of 96%, 97%, and 94% against the level-8, level-9, and level-10 AIs, respectively. Our code is at https://github.com/liuruoze/HierNet-SC2. To provide a baseline based on AlphaStar for our work as well as for the research and open-source community, we reproduce a scaled-down version of it, mini-AlphaStar (mAS). The latest version of mAS is 1.07; it can be trained on the raw action space, which has 564 actions, and is designed to run training on a single common machine by making the hyper-parameters adjustable. We then compare our work with mAS using the same resources and show that our method is more effective. The code for mini-AlphaStar is at https://github.com/liuruoze/mini-AlphaStar. We hope our study can shed some light on future research into efficient reinforcement learning on SC2 and other large-scale games. Comment: 48 pages, 21 figures.
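
    As a rough sketch of the hierarchical, macro-action-based control pattern the abstract describes (not the authors' implementation; the macro-action names, the environment interface, and the fixed execution horizon are all assumptions), a high-level policy selects a macro-action, which a low-level controller then executes for a bounded number of environment steps before control returns to the top level:

        # Minimal sketch of a two-level controller, not the authors' code.
        # MACRO_ACTIONS, the env interface, and the 32-step horizon are assumptions.
        import random

        MACRO_ACTIONS = ["build_workers", "build_army", "expand", "attack"]

        def high_level_policy(obs):
            """Stand-in for a learned network that picks a macro-action from abstracted state."""
            return random.choice(MACRO_ACTIONS)

        def low_level_controller(macro, obs):
            """Stand-in for the policy/script that turns a macro-action into unit-level commands."""
            return {"macro": macro, "noop": True}

        def run_macro(env, macro, horizon=32):
            """Execute one macro-action for at most `horizon` low-level steps."""
            total, done = 0.0, False
            obs = env.observe()
            for _ in range(horizon):
                obs, reward, done = env.step(low_level_controller(macro, obs))
                total += reward
                if done:
                    break
            return total, done

        def episode(env):
            obs, done, ret = env.reset(), False, 0.0
            while not done:
                reward, done = run_macro(env, high_level_policy(obs))
                obs = env.observe()
                ret += reward
            return ret

    Keeping each macro-action in control for a bounded horizon is what shortens the effective decision horizon seen by the high-level policy, which is the main point of the hierarchical decomposition.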