2,891 research outputs found

    Implementing Adaptive Game Difficulty Balancing in Serious Games

    Emerging Artificial Societies Through Learning

    The NewTies project is implementing a simulation in which societies of agents are expected to develop autonomously as a result of individual, population and social learning. These societies are expected to be able to solve environmental challenges by acting collectively. The challenges are intended to be analogous to those faced by early, simple, small-scale human societies. This report on work in progress outlines the major features of the system as it is currently conceived within the project, including the design of the agents, the environment, the mechanism for the evolution of language and the peer-to-peer infrastructure on which the simulation runs.
    Keywords: Artificial Societies, Evolution of Language, Decision Trees, Peer-To-Peer Networks, Social Learning
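
    As a loose illustration of two ingredients the abstract names, decision-tree agent controllers combined with individual and social learning, here is a minimal Python sketch. The `Node` and `Agent` classes, their methods, and the action names are hypothetical stand-ins invented for the example, not NewTies code.

```python
import random

# Hypothetical decision-tree agent: internal nodes test one boolean
# observation key; leaves hold an action. Illustrative, not NewTies code.

class Node:
    def __init__(self, test=None, yes=None, no=None, action=None):
        self.test, self.yes, self.no, self.action = test, yes, no, action

    def decide(self, obs):
        if self.action is not None:               # leaf: emit the action
            return self.action
        branch = self.yes if obs.get(self.test) else self.no
        return branch.decide(obs)

class Agent:
    ACTIONS = ["move", "eat", "talk"]             # invented action set

    def __init__(self, tree):
        self.tree = tree

    def act(self, obs):
        return self.tree.decide(obs)

    def mutate(self):
        """Individual learning: randomly reassign one leaf's action."""
        self._random_leaf(self.tree).action = random.choice(self.ACTIONS)

    def imitate(self, other):
        """Social learning, crudely: adopt a peer's whole policy."""
        self.tree = other.tree

    def _random_leaf(self, node):
        if node.action is not None:
            return node
        return self._random_leaf(random.choice([node.yes, node.no]))

# Usage: eat when food is visible, otherwise move.
agent = Agent(Node(test="food_visible",
                   yes=Node(action="eat"),
                   no=Node(action="move")))
print(agent.act({"food_visible": True}))   # -> "eat"
```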

    "It's Unwieldy and It Takes a Lot of Time." Challenges and Opportunities for Creating Agents in Commercial Games

    Game agents such as opponents, non-player characters, and teammates are central to player experiences in many modern games. As the landscape of AI techniques used in the games industry evolves to adopt machine learning (ML) more widely, it is vital that the research community learn from the best practices cultivated within the industry over decades of creating agents. However, although commercial game agent creation pipelines are more mature than those based on ML, opportunities for improvement still abound. As a foundation for shared progress in identifying research opportunities between researchers and practitioners, we interviewed seventeen game agent creators from AAA studios, indie studios, and industrial research labs about the challenges they experienced in their professional workflows. Our study revealed several open challenges ranging from design to implementation and evaluation. We compare our findings with literature from the research community that addresses the identified challenges, and conclude by highlighting promising directions for future research supporting agent creation in the games industry.
    Comment: 7 pages, 3 figures, to be published in the 16th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE-20)

    Do Artificial Reinforcement-Learning Agents Matter Morally?

    Artificial reinforcement learning (RL) is a widely used technique in artificial intelligence that provides a general method for training agents to perform a wide variety of behaviours. RL as used in computer science has striking parallels to reward and punishment learning in animal and human brains. I argue that present-day artificial RL agents have a very small but nonzero degree of ethical importance. This is particularly plausible for views according to which sentience comes in degrees based on the abilities and complexities of minds, but even binary views on consciousness should assign nonzero probability to RL programs having morally relevant experiences. While RL programs are not a top ethical priority today, they may become more significant in the coming decades as RL is increasingly applied to industry, robotics, video games, and other areas. I encourage scientists, philosophers, and citizens to begin a conversation about our ethical duties to reduce the harm that we inflict on powerless, voiceless RL agents.
    Comment: 37 pages
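
    To make the reward-and-punishment parallel concrete, here is a minimal tabular Q-learning loop, one textbook form of the RL the abstract discusses. The two-state toy environment and the constants are illustrative assumptions, not from the paper.

```python
import random

# Minimal tabular Q-learning sketch. The toy dynamics below (action 1
# rewarded, action 0 punished) are invented for illustration.

ACTIONS = [0, 1]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def step(state, action):
    reward = 1.0 if action == 1 else -1.0
    return (state + action) % 2, reward

q = {(s, a): 0.0 for s in (0, 1) for a in ACTIONS}
state = 0
for _ in range(1000):
    if random.random() < EPSILON:                     # explore
        action = random.choice(ACTIONS)
    else:                                             # exploit current estimate
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    next_state, reward = step(state, action)
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    # Temporal-difference update: nudge Q toward reward + discounted future value.
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
    state = next_state

print(q)   # the rewarded action ends up with the higher Q-value in each state
```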

    General self-motivation and strategy identification: Case studies based on Sokoban and Pac-Man

    (c) 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
    In this paper, we use empowerment, a recently introduced biologically inspired measure, to allow an AI player to assign utility values to potential future states within a previously unencountered game without requiring explicit specification of goal states. We further introduce strategic affinity, a method of grouping action sequences together to form "strategies," by examining the overlap in the sets of potential future states following each such action sequence. We also demonstrate an information-theoretic method of predicting future utility. Combining these methods, we extend empowerment to soft-horizon empowerment, which enables the player to select a repertoire of action sequences that aim to maintain anticipated utility. We show how this method provides a proto-heuristic for nonterminal states prior to specifying concrete game goals, and propose it as a principled candidate model for "intuitive" strategy selection, in line with other recent work on "self-motivated agent behavior." We demonstrate that the technique, despite being generically defined independently of scenario, performs quite well in relatively disparate scenarios, such as a Sokoban-inspired box-pushing scenario and a Pac-Man-inspired predator game, suggesting novel and principle-based candidate routes toward more general game-playing algorithms.
    Peer reviewed. Final Accepted Version.
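
    For a deterministic environment, n-step empowerment reduces to the base-2 logarithm of the number of distinct states reachable by n-step action sequences. The sketch below computes that quantity on an assumed 5x5 grid world; the grid and transition function are illustrative, not the paper's Sokoban or Pac-Man testbeds.

```python
from itertools import product
from math import log2

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # four compass moves
SIZE = 5                                       # assumed 5x5 open grid

def transition(state, action):
    """Deterministic dynamics: moves off the grid leave the state unchanged."""
    nx, ny = state[0] + action[0], state[1] + action[1]
    return (nx, ny) if 0 <= nx < SIZE and 0 <= ny < SIZE else state

def empowerment(state, n):
    """log2 of the number of distinct end states over all n-step sequences."""
    reachable = set()
    for seq in product(ACTIONS, repeat=n):
        s = state
        for a in seq:
            s = transition(s, a)
        reachable.add(s)
    return log2(len(reachable))

# A corner position constrains the agent more than the center does:
print(empowerment((0, 0), 2))   # ~2.58 bits (6 reachable states)
print(empowerment((2, 2), 2))   # ~3.17 bits (9 reachable states)
```

    Strategic affinity, in the same spirit, would compare the reachable-state sets of different action sequences and group together sequences whose sets overlap heavily into one "strategy".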