Learning recurrent representations for hierarchical behavior modeling
We propose a framework for detecting action patterns from motion sequences
and modeling the sensory-motor relationship of animals, using a generative
recurrent neural network. The network has a discriminative part (classifying
actions) and a generative part (predicting motion), whose recurrent cells are
laterally connected, allowing higher levels of the network to represent
high-level phenomena. We test our framework on two types of data: fruit fly
behavior and online handwriting. Our results show that 1) taking advantage of unlabeled
sequences, by predicting future motion, significantly improves action detection
performance when training labels are scarce, 2) the network learns to represent
high-level phenomena such as writer identity and fly gender without
supervision, and 3) simulated motion trajectories, generated by feeding the
network's motion predictions back in as input, look realistic and can be used to
qualitatively evaluate whether the model has learnt generative control rules.
Robust Motion In-betweening
In this work we present a novel, robust transition generation technique based
on adversarial recurrent neural networks that can serve as a new tool for 3D
animators. The system synthesizes high-quality motions that use temporally
sparse keyframes as animation constraints. This is reminiscent of
the job of in-betweening in traditional animation pipelines, in which an
animator draws motion frames between provided keyframes. We first show that a
state-of-the-art motion prediction model cannot be easily converted into a
robust transition generator simply by adding conditioning information about
future keyframes. To solve this problem, we then propose two novel additive
embedding modifiers that are applied at each timestep to latent representations
encoded inside the network's architecture. One modifier is a time-to-arrival
embedding that allows variations of the transition length with a single model.
The other is a scheduled target noise vector that allows the system to be
robust to target distortions and to sample different transitions given fixed
keyframes. To qualitatively evaluate our method, we present a custom
MotionBuilder plugin that uses our trained model to perform in-betweening in
production scenarios. To quantitatively evaluate performance on transitions and
generalizations to longer time horizons, we present well-defined in-betweening
benchmarks on a subset of the widely used Human3.6M dataset and on LaFAN1, a
novel high quality motion capture dataset that is more appropriate for
transition generation. We are releasing this new dataset along with this work,
with accompanying code for reproducing our baseline results.
Comment: Published at SIGGRAPH 2020
AdaptNet: Policy Adaptation for Physics-Based Character Control
Motivated by humans' ability to adapt existing skills when learning new ones,
this paper presents AdaptNet, an approach for modifying the latent space of
existing policies so that new behaviors can be learned from similar tasks far
more quickly than learning from scratch. Building on top of a given
reinforcement learning controller, AdaptNet uses a two-tier hierarchy that
augments the original state embedding to support modest changes in a behavior
and further modifies the policy network layers to make more substantive
changes. The technique is shown to be effective for adapting existing
physics-based controllers to a wide range of new styles for locomotion, new
task targets, changes in character morphology and extensive changes in
environment. Furthermore, it exhibits a significant increase in learning
efficiency, as indicated by greatly reduced training times when compared to
training from scratch or using other approaches that modify existing policies.
Code is available at https://motion-lab.github.io/AdaptNet.
Comment: SIGGRAPH Asia 2023. Video: https://youtu.be/WxmJSCNFb28. Website:
https://motion-lab.github.io/AdaptNet, https://pei-xu.github.io/AdaptNet