
    Neural Categorical Priors for Physics-Based Character Control

    Recent advances in learning reusable motion priors have demonstrated their effectiveness in generating naturalistic behaviors. In this paper, we propose a new learning framework in this paradigm for controlling physics-based characters with significantly improved motion quality and diversity over existing state-of-the-art methods. The proposed method uses reinforcement learning (RL) to initially track and imitate life-like movements from unstructured motion clips using a discrete information bottleneck, as adopted in the Vector Quantized Variational AutoEncoder (VQ-VAE). This structure compresses the most relevant information from the motion clips into a compact yet informative latent space, i.e., a discrete space over vector-quantized codes. By sampling codes from this space according to a trained categorical prior distribution, high-quality life-like behaviors can be generated, similar to the usage of VQ-VAE in computer vision. Although this prior distribution can be trained with the supervision of the encoder's output, it follows the original motion-clip distribution in the dataset and could lead to imbalanced behaviors in our setting. To address this issue, we further propose a technique named prior shifting, which adjusts the prior distribution using curiosity-driven RL. The resulting distribution is demonstrated to offer sufficient behavioral diversity and to significantly facilitate upper-level policy learning for downstream tasks. We conduct comprehensive experiments using humanoid characters on two challenging downstream tasks: sword-and-shield striking and a two-player boxing game. Our results demonstrate that the proposed framework is capable of controlling the character to perform movements of considerably high quality in terms of behavioral strategies, diversity, and realism. Videos, code, and data are available at https://tencent-roboticsx.github.io/NCP/
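    The pipeline sketched in this abstract — quantizing encoder outputs against a codebook, then generating behaviors by sampling code indices from a categorical prior — can be illustrated as follows. This is a minimal sketch, not the authors' implementation: the codebook size, dimensionality, and the uniform stand-in prior are all assumptions for illustration.

```python
import numpy as np

# Illustrative sketch of the discrete bottleneck described above:
# a continuous encoder output is snapped to its nearest codebook
# vector (the vector-quantization step), and new behaviors come from
# sampling code indices from a categorical prior. Codebook size (512),
# dimensionality (64), and the uniform prior are assumed values.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))   # 512 vector-quantized codes, 64-dim

def quantize(z):
    """Snap encoder output z to its nearest codebook entry (the VQ step)."""
    dists = np.linalg.norm(codebook - z, axis=1)
    idx = int(np.argmin(dists))
    return idx, codebook[idx]

z = rng.normal(size=64)                 # stand-in for an encoder output
idx, z_q = quantize(z)

# Generation: sample a code index from the (here uniform) categorical
# prior; "prior shifting" as described above would instead reweight
# these probabilities via curiosity-driven RL to balance behaviors.
prior = np.full(512, 1.0 / 512)
sampled_idx = rng.choice(512, p=prior)
```

    In this picture, prior shifting amounts to replacing the uniform `prior` with a learned, reweighted distribution, so that rarely visited but valuable codes are sampled more often.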

    Multiplayer Game Development Approaches for Student Integration in Universities

    Master's thesis. Multimedia. Faculdade de Engenharia, Universidade do Porto. 201

    Multi-agent reinforcement learning for character control


    Deus Ex Machinima: A Rhetorical Analysis of User-Generated Machinima

    Beginning with corporate demonstrations and continuing to evolve today, machinima has become a major expressive art form for the gamer generation. Machinima is the user-centered production of video presentations using pre-rendered animated content generated from video games. The term 'machinima' is a combination of 'machine' (from which the video content is derived) and 'cinema' (the ultimate end product). According to Paul Marino and other members of the machinima community, Hugh Hancock, the creator of Machinima.com, first coined the term in 2000. Video productions of this kind have been used in various capacities for the past several years, including instruction and marketing, as well as rapid prototyping of large-scale cinema projects (Marino). In this thesis, I will briefly outline the current research on machinima. I will then build a methodology for my own rhetorical analysis of machinima as they formulate the promotion of their arguments. This methodology will include examples from major rhetorical theorists, including Lloyd Bitzer, Kenneth Burke, and Gunther Kress and Theo van Leeuwen, among others. I will then apply my analytical tools to modern user-generated machinima from a variety of sources as a series of case studies. These cases include non-profit and for-profit examples, as well as educational and entertainment examples. Finally, I will explain how this framework may be used as a guideline for rhetorically sound and effective machinima.

    Artificial Intelligence in the Creative Industries: A Review

    This paper reviews the current state of the art in Artificial Intelligence (AI) technologies and applications in the context of the creative industries. A brief background of AI, and specifically Machine Learning (ML) algorithms, is provided, including Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), Recurrent Neural Networks (RNNs) and Deep Reinforcement Learning (DRL). We categorise creative applications into five groups related to how AI technologies are used: i) content creation, ii) information analysis, iii) content enhancement and post-production workflows, iv) information extraction and enhancement, and v) data compression. We critically examine the successes and limitations of this rapidly advancing technology in each of these areas. We further differentiate between the use of AI as a creative tool and its potential as a creator in its own right. We foresee that, in the near future, machine learning-based AI will be adopted widely as a tool or collaborative assistant for creativity. In contrast, we observe that the successes of machine learning in domains with fewer constraints, where AI is the 'creator', remain modest. The potential of AI (or its developers) to win awards for its original creations in competition with human creatives is also limited, based on contemporary technologies. We therefore conclude that, in the context of the creative industries, maximum benefit from AI will be derived where its focus is human centric: where it is designed to augment, rather than replace, human creativity.

    ACE: Adversarial Correspondence Embedding for Cross Morphology Motion Retargeting from Human to Nonhuman Characters

    Motion retargeting is a promising approach for generating natural and compelling animations for nonhuman characters. However, it is challenging to translate human movements into semantically equivalent motions for target characters with different morphologies due to the ambiguous nature of the problem. This work presents a novel learning-based motion retargeting framework, Adversarial Correspondence Embedding (ACE), to retarget human motions onto target characters with different body dimensions and structures. Our framework is designed to produce natural and feasible robot motions by leveraging generative adversarial networks (GANs) while preserving high-level motion semantics through an additional feature loss. In addition, we pretrain a robot motion prior that can be controlled in a latent embedding space and seek to establish a compact correspondence. We demonstrate that the proposed framework can produce retargeted motions for three different characters: a quadrupedal robot with a manipulator, a crab character, and a wheeled manipulator. We further validate the design choices of our framework through baseline comparisons and a user study. We also showcase sim-to-real transfer of the retargeted motions by transferring them to a real Spot robot.
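    The two training signals this abstract names — an adversarial term for motion realism and a feature loss for preserving motion semantics — can be sketched as a combined objective. The function name, the exact loss forms, and the weight `w_feat` below are assumptions for illustration, not the paper's actual objective.

```python
import numpy as np

# Hypothetical sketch of the two signals described above: an adversarial
# term that rewards retargeted motion the discriminator scores as "real",
# plus a feature loss pulling retargeted-motion features toward the
# source human motion's features. All names and forms are illustrative.
def retarget_loss(d_fake, feat_human, feat_robot, w_feat=1.0):
    adv = -np.mean(np.log(d_fake + 1e-8))           # generator-side GAN loss
    feat = np.mean((feat_human - feat_robot) ** 2)  # semantic feature loss
    return adv + w_feat * feat

# Dummy usage: a perfectly fooled discriminator and matching features
# drive both terms toward zero.
d_fake = np.full(4, 1.0)      # discriminator outputs "real" for all clips
feat_h = np.ones((4, 16))     # stand-in motion features
loss = retarget_loss(d_fake, feat_h, feat_h.copy())
```

    The weight `w_feat` trades off realism against semantic fidelity: a larger value keeps the retargeted motion closer to the human source, while a smaller one lets the adversarial term dominate.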