Modiff: Action-Conditioned 3D Motion Generation with Denoising Diffusion Probabilistic Models
Diffusion-based generative models have recently emerged as powerful solutions
for high-quality synthesis in multiple domains. Built on a bidirectional Markov
chain, diffusion probabilistic models generate samples by learning to invert
the forward diffusion process and then running the reverse chain from noise.
In this work, we propose Modiff, a conditional paradigm that leverages the
denoising diffusion probabilistic model (DDPM) to tackle realistic and diverse
action-conditioned 3D skeleton-based motion generation. To our knowledge, this
is a pioneering attempt to use DDPM to synthesize a variable number of motion
sequences conditioned on a categorical action. We evaluate our approach on the
large-scale NTU RGB+D dataset and show improvements over state-of-the-art
motion generation methods.
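The forward/reverse diffusion mechanism mentioned in the abstract can be sketched as follows. This is a minimal, generic DDPM loop, not the Modiff architecture: the noise predictor is a hypothetical placeholder standing in for the trained conditional network, and the schedule and shapes are illustrative assumptions.

```python
import numpy as np

T = 100                                   # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)        # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def q_sample(x0, t, rng):
    """Forward process: noise a clean sample x0 to step t in closed form."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

def p_sample_loop(predict_eps, shape, rng):
    """Reverse process: start from Gaussian noise and denoise step by step."""
    x = rng.standard_normal(shape)
    for t in reversed(range(T)):
        eps_hat = predict_eps(x, t)       # network's noise estimate
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps_hat) / np.sqrt(alphas[t])
        if t > 0:
            x = mean + np.sqrt(betas[t]) * rng.standard_normal(shape)
        else:
            x = mean                      # no noise added at the final step
    return x

rng = np.random.default_rng(0)
# A dummy predictor returning zeros stands in for the trained (conditional) net.
sample = p_sample_loop(lambda x, t: np.zeros_like(x), shape=(25, 3), rng=rng)
print(sample.shape)  # e.g. 25 skeleton joints x 3D coordinates for one pose
```

In a conditional setup like the one the abstract describes, `predict_eps` would additionally receive the categorical action label, so each action class steers the reverse chain toward its own motion distribution.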
AnimGAN: A Spatiotemporally-Conditioned Generative Adversarial Network for Character Animation
Producing realistic character animations is one of the essential tasks in
human-AI interaction. Viewing an animation as a sequence of humanoid poses,
the task becomes a sequence generation problem with spatiotemporal smoothness
and realism constraints. Additionally, we wish to control the behavior of AI
agents by specifying what to do and, more specifically, how to do it. We
propose a spatiotemporally-conditioned GAN that generates a sequence similar
to a given sequence in terms of semantics and spatiotemporal dynamics. Using
an LSTM-based generator and a graph ConvNet discriminator, the system is
trained end-to-end on a large gathered dataset of gestures, expressions, and
actions. Experiments showed that, compared to a traditional conditional GAN,
our method creates plausible, realistic, and semantically relevant humanoid
animation sequences that match user expectations.
Comment: Submitted to ICIP 202
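The generator/discriminator split described above can be sketched as a single forward pass. This is an illustrative skeleton only: the simple recurrent cell stands in for the paper's LSTM generator, the linear critic stands in for its graph-ConvNet discriminator, and all layer sizes and names are hypothetical assumptions, not taken from AnimGAN.

```python
import numpy as np

rng = np.random.default_rng(1)
SEQ_LEN, N_JOINTS, DIM, N_CLASSES, HID, Z = 16, 15, 3, 10, 32, 8  # assumed sizes

def one_hot(label, n=N_CLASSES):
    v = np.zeros(n)
    v[label] = 1.0
    return v

# Generator: one recurrent cell unrolled over time (stand-in for an LSTM),
# conditioned on an action label concatenated with per-step noise.
Wg = rng.standard_normal((HID, HID + N_CLASSES + Z)) * 0.1
Wo = rng.standard_normal((N_JOINTS * DIM, HID)) * 0.1

def generate(label):
    h = np.zeros(HID)
    cond = one_hot(label)
    poses = []
    for _ in range(SEQ_LEN):
        z = rng.standard_normal(Z)                       # per-step noise
        h = np.tanh(Wg @ np.concatenate([h, cond, z]))   # recurrent update
        poses.append((Wo @ h).reshape(N_JOINTS, DIM))    # pose at this frame
    return np.stack(poses)                               # (SEQ_LEN, N_JOINTS, DIM)

# Discriminator: pool pose features over joints and time, then score with a
# sigmoid (stand-in for the paper's graph-ConvNet critic).
Wd = rng.standard_normal(DIM) * 0.1

def discriminate(seq):
    return float(1.0 / (1.0 + np.exp(-(seq.mean(axis=(0, 1)) @ Wd))))

seq = generate(label=3)
score = discriminate(seq)
print(seq.shape)        # one generated pose sequence
```

Training would alternate the usual adversarial updates, with the condition fed to both networks so the discriminator can penalize sequences that mismatch the requested semantics.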