Hierarchical Planning and Control for Box Loco-Manipulation
Humans perform everyday tasks using a combination of locomotion and
manipulation skills. Building a system that can handle both skills is essential
to creating virtual humans. We present a physically-simulated human capable of
solving box rearrangement tasks, which requires a combination of both skills.
We propose a hierarchical control architecture, where each level solves the
task at a different level of abstraction, and the result is a physics-based
simulated virtual human capable of rearranging boxes in a cluttered
environment. The control architecture integrates a planner, diffusion models,
and physics-based motion imitation of sparse motion clips using deep
reinforcement learning. Boxes can vary in size, weight, shape, and placement
height. Code and trained control policies are provided.
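To make the three-level decomposition concrete, the following is a minimal Python sketch of one control step: a planner picks a subgoal, a diffusion-style model produces a sparse reference clip, and an imitation policy tracks it. All names, signatures, and placeholder bodies are illustrative assumptions, not the paper's actual interfaces.

```python
# Hypothetical sketch of one step of the three-level architecture described
# above: planner -> diffusion-generated reference clip -> RL imitation policy.
# All names, signatures, and placeholder bodies are illustrative assumptions.
import random


def plan_next_subgoal(state, task):
    """High level: choose the next box and its target placement (placeholder)."""
    return {"box": task["boxes"][0], "target": task["targets"][0]}


def sample_reference_clip(state, subgoal, horizon=30):
    """Mid level: stand-in for a diffusion model producing a sparse motion clip."""
    return [{"frame": t, "subgoal": subgoal} for t in range(horizon)]


def imitation_action(state, reference_frame, num_joints=28):
    """Low level: stand-in for the physics-based RL policy tracking the clip."""
    return [random.uniform(-1.0, 1.0) for _ in range(num_joints)]


def control_step(state, task):
    subgoal = plan_next_subgoal(state, task)      # which box, where to place it
    clip = sample_reference_clip(state, subgoal)  # kinematic reference motion
    return imitation_action(state, clip[0])       # joint targets for the simulator


action = control_step(state={}, task={"boxes": ["A"], "targets": [(1.0, 0.5)]})
```

The random placeholders only mark where the learned components would plug in; in the described system the low level is a deep-RL policy driving the physics simulation.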
Toward a computational theory for motion understanding: The expert animators model
Artificial intelligence researchers claim to understand some aspect of human intelligence when their model is able to emulate it. In the context of computer graphics, the ability to go from a motion representation to convincing animation should accordingly be treated not simply as a trick for computer graphics programmers but as an important epistemological and methodological goal. In this paper we investigate a unifying model for animating a group of articulated bodies, such as humans and robots, in a three-dimensional environment. The proposed model is considered in the framework of knowledge representation and processing, with special reference to motion knowledge. The model is meant to help lay the foundations for a computational theory of motion understanding applied to articulated bodies.
Programmatic and Direct Manipulation, Together at Last
Direct manipulation interfaces and programmatic systems have distinct and
complementary strengths. The former provide intuitive, immediate visual
feedback and enable rapid prototyping, whereas the latter enable complex,
reusable abstractions. Unfortunately, existing systems typically force users
into just one of these two interaction modes.
We present a system called Sketch-n-Sketch that integrates programmatic and
direct manipulation for the particular domain of Scalable Vector Graphics
(SVG). In Sketch-n-Sketch, the user writes a program to generate an output SVG
canvas. Then the user may directly manipulate the canvas while the system
immediately infers a program update in order to match the changes to the
output, a workflow we call live synchronization. To achieve this, we propose
(i) a technique called trace-based program synthesis that takes program
execution history into account in order to constrain the search space and (ii)
heuristics for dealing with ambiguities. Based on our experience with examples
spanning 2,000 lines of code and from the results of a preliminary user study,
we believe that Sketch-n-Sketch provides a novel workflow that can augment
traditional programming systems. Our approach may serve as the basis for live
synchronization in other application domains, as well as a starting point for
yet more ambitious ways of combining programmatic and direct manipulation. Comment: PLDI 2016 paper + supplementary appendices.
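As a rough illustration of how execution traces can constrain program updates during live synchronization, consider the toy Python sketch below. The dict-of-constants "program", the per-attribute trace bookkeeping, and the push-back rule are simplifying assumptions for exposition, not Sketch-n-Sketch's actual representation or algorithm.

```python
# Toy illustration of trace-based live synchronization: each output attribute
# records which program constants it was computed from, and a direct edit to an
# attribute is pushed back to one of those constants. Purely a conceptual sketch.

program = {"x0": 100, "w": 40}          # program constants (source of truth)


def run(prog):
    """Produce output attributes, each tagged with the trace of constants used."""
    return {
        "rect.x":     {"value": prog["x0"],             "trace": ["x0"]},
        "rect.width": {"value": prog["w"],              "trace": ["w"]},
        "rect.right": {"value": prog["x0"] + prog["w"], "trace": ["x0", "w"]},
    }


def synchronize(prog, attr, new_value, output):
    """Infer candidate program updates so the edited attribute matches new_value."""
    old = output[attr]
    delta = new_value - old["value"]
    candidates = []
    for const in old["trace"]:          # the trace constrains the search space
        candidates.append({**prog, const: prog[const] + delta})
    return candidates                   # several repairs: an ambiguity to resolve


# User drags the rectangle's right edge from 140 to 160 on the canvas:
out = run(program)
for updated in synchronize(program, "rect.right", 160, out):
    print(updated)   # {'x0': 120, 'w': 40} or {'x0': 100, 'w': 60}
```

The two candidate updates for rect.right show the kind of ambiguity the paper's heuristics are meant to resolve.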
Emotion Transfer for Hand Animation
We propose a new data-driven framework for synthesizing hand motion at different emotion levels. Specifically, we first capture high-quality hand motion using VR gloves. The hand motion data is then annotated with the emotion type, and a latent space is constructed from the motions to facilitate the motion synthesis process. By interpolating the latent representation of the hand motion, new hand animation with different levels of emotion strength can be generated. Experimental results show that our framework can produce smooth and consistent hand motions at an interactive rate.
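A minimal sketch of the interpolation idea follows, assuming a learned encoder/decoder over hand poses; the identity stubs and the linear blend are placeholders standing in for the paper's latent space, not the authors' code.

```python
# Hypothetical sketch: blend a neutral and an emotional hand motion in latent
# space to obtain an intermediate emotion strength. encode/decode are identity
# stubs standing in for a learned model.
import numpy as np


def encode(motion):
    return np.asarray(motion, dtype=float)      # stand-in for the learned encoder


def decode(z):
    return z                                    # stand-in for the learned decoder


def blend_emotion(neutral_motion, emotional_motion, strength):
    """strength in [0, 1]: 0 = neutral, 1 = full emotion."""
    z_neutral = encode(neutral_motion)
    z_emotion = encode(emotional_motion)
    z = (1.0 - strength) * z_neutral + strength * z_emotion
    return decode(z)


# Half-strength version of an (illustrative) 3-DoF hand pose:
print(blend_emotion([0.0, 0.1, 0.2], [0.4, 0.5, 0.6], strength=0.5))
```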