19 research outputs found
Synthesis of variable dancing styles based on a compact spatiotemporal representation of dance
Dance, as a complex expressive form of motion, conveys emotion, meaning and social idiosyncrasies, opening channels for non-verbal communication and promoting rich cross-modal interactions with music and the environment. As such, realistic dancing characters may incorporate cross-modal information and the variability of dance forms through compact representations that describe the movement structure in terms of its spatial and temporal organization. In this paper, we propose a novel method for synthesizing beat-synchronous dancing motions based on a compact topological model of dance styles, previously captured with a motion capture system. The model is based on Topological Gesture Analysis (TGA), which conveys a discrete three-dimensional point-cloud representation of the dance by describing the spatiotemporal variability of its gestural trajectories as uniform spherical distributions, according to classes of the musical meter. The methodology for synthesizing the modeled dance traces the topological representations, constrained by definable metrical and spatial parameters, back into complete dance instances whose variability is controlled by stochastic processes that consider both the TGA distributions and the kinematic constraints of the body morphology. To assess the relevance and flexibility of each parameter in feasibly reproducing the style of the captured dance, we correlated captured and synthesized trajectories of samba dancing sequences in relation to the level of compression of the model used, and report on a subjective evaluation over a set of six tests. The achieved results validate our approach, suggesting that a periodic dancing style, and its musical synchrony, can be feasibly reproduced from a suitably parametrized discrete spatiotemporal representation of the gestural motion trajectories, with a notable degree of compression.
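The sampling step described above can be sketched as follows, assuming a hypothetical TGA model in which each metric class carries a single spherical distribution (center and radius); the actual method also enforces kinematic constraints of the body morphology, which are omitted here:

```python
import math
import random

def sample_point(center, radius, rng):
    """Draw a point uniformly from the sphere with the given center and radius."""
    while True:  # rejection sampling inside the unit ball
        v = [rng.uniform(-1.0, 1.0) for _ in range(3)]
        if sum(c * c for c in v) <= 1.0:
            return tuple(c0 + radius * c for c0, c in zip(center, v))

def synthesize_trajectory(tga_model, meter_classes, rng=None):
    """For each metric class in sequence, sample one key point from its
    spherical distribution; interpolation between keys would follow."""
    rng = rng or random.Random(0)
    return [sample_point(*tga_model[m], rng) for m in meter_classes]

# Hypothetical model: one (center, radius) sphere per class of a 4-beat meter.
tga_model = {
    1: ((0.0, 1.0, 0.3), 0.05),
    2: ((0.2, 1.1, 0.3), 0.08),
    3: ((0.0, 1.2, 0.3), 0.05),
    4: ((-0.2, 1.1, 0.3), 0.08),
}
keys = synthesize_trajectory(tga_model, [1, 2, 3, 4, 1])
```

Shrinking the radii trades variability for fidelity to the captured style, which is the compression knob the evaluation above varies.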
Motion Editing And Reuse Techniques And Their Role In Studying Events Between A Machine And Its Operator
Motion capture involves recording the position and global orientation of joint sensors of a real object, in most cases a real person performing some human activity. This information is usually recorded at uniformly spaced instants of time, or, as it is often called, frame by frame. The recorded motion data sets are then processed and mapped onto a skeleton hierarchy of a virtual, computer-simulated human figure to control the motion of the virtual human in the computer simulation. In the first part of the paper we review several new techniques developed to facilitate the manipulation, noise reduction, storage and reuse of captured data, which have the potential to reduce the overall cost of motion simulation and improve its realism. In the second part of the paper we consider the real-life problem of reducing a worker's risk of being hit by underground mining machinery in a confined space. We formulate a set of requirements for motion editing for this particular task and analyze the limitations of existing techniques.
Populating 3D Scenes by Learning Human-Scene Interaction
Humans live within a 3D space and constantly interact with it to perform tasks. Such interactions involve physical contact between surfaces that is semantically meaningful. Our goal is to learn how humans interact with scenes and leverage this to enable virtual characters to do the same. To that end, we introduce a novel Human-Scene Interaction (HSI) model that encodes proximal relationships, called POSA for "Pose with prOximitieS and contActs". The representation of interaction is body-centric, which enables it to generalize to new scenes. Specifically, POSA augments the SMPL-X parametric human body model such that, for every mesh vertex, it encodes (a) the contact probability with the scene surface and (b) the corresponding semantic scene label. We learn POSA with a VAE conditioned on the SMPL-X vertices, and train on the PROX dataset, which contains SMPL-X meshes of people interacting with 3D scenes, and the corresponding scene semantics from the PROX-E dataset. We demonstrate the value of POSA with two applications. First, we automatically place 3D scans of people in scenes. We use a SMPL-X model fit to the scan as a proxy and then find its most likely placement in 3D. POSA provides an effective representation to search for "affordances" in the scene that match the likely contact relationships for that pose. We perform a perceptual study that shows significant improvement over the state of the art on this task. Second, we show that POSA's learned representation of body-scene interaction supports monocular human pose estimation that is consistent with a 3D scene, improving on the state of the art. Our model and code are available for research purposes at https://posa.is.tue.mpg.de
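The per-vertex encoding lends itself to a simple placement score; the toy example below is an illustrative sketch only (array sizes, the random data, and the scoring formula are assumptions, not POSA's actual learned objective): each vertex's predicted contact probability and semantic label are compared against the nearest labelled scene point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical body mesh with V vertices and a scene with S labelled points.
V, S, num_labels = 100, 500, 4
contact_prob = rng.random(V)                                     # (a) per-vertex contact probability
semantics = np.eye(num_labels)[rng.integers(0, num_labels, V)]   # (b) one-hot semantic label

body_pts = rng.random((V, 3))
scene_pts = rng.random((S, 3))
scene_labels = rng.integers(0, num_labels, S)

def placement_score(body_pts, contact_prob, semantics, scene_pts, scene_labels):
    """Contact-weighted agreement between each vertex's predicted semantic
    label and the label of its nearest scene point, attenuated by the
    distance to that point. Higher is a more plausible placement."""
    d = np.linalg.norm(body_pts[:, None, :] - scene_pts[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)
    label_match = semantics[np.arange(len(body_pts)), scene_labels[nearest]]
    return float(np.sum(contact_prob * label_match / (1.0 + d.min(axis=1))))

score = placement_score(body_pts, contact_prob, semantics, scene_pts, scene_labels)
```

Searching over candidate rigid placements of `body_pts` and keeping the highest-scoring one mirrors, in spirit, the affordance search described above.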
A Motion Control Scheme for Animating Expressive Arm Movements
Current methods for figure animation involve a tradeoff between the level of realism captured in the movements and the ease of generating the animations. We introduce a motion control paradigm that circumvents this tradeoff: it provides the ability to generate a wide range of natural-looking movements with minimal user labor.
Effort, which is one part of Rudolf Laban's system for observing and analyzing movement, describes the qualitative aspects of movement. Our motion control paradigm simplifies the generation of expressive movements by proceduralizing these qualitative aspects to hide the non-intuitive, quantitative aspects of movement. We build a model of Effort using a set of kinematic movement parameters that defines how a figure moves between goal keypoints. Our motion control scheme provides control through Effort's four-dimensional system of textual descriptors, providing a level of control thus far missing from behavioral animation systems and offering novel specification and editing capabilities on top of traditional keyframing and inverse kinematics methods. Since our Effort model is computationally inexpensive, Effort-based motion control systems can work in real time.
We demonstrate our motion control scheme by implementing EMOTE (Expressive MOTion Engine), a character animation module for expressive arm movements. EMOTE works with inverse kinematics to control the qualitative aspects of end-effector-specified movements. The user specifies general movements by entering a sequence of goal positions for each hand. The user then expresses the essence of the movement by adjusting sliders for the Effort motion factors: Space, Weight, Time, and Flow. EMOTE produces a wide range of expressive movements, provides an easy-to-use interface (more intuitive than joint-angle interpolation curves or physical parameters), features interactive editing, and generates motion in real time.
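A minimal sketch of the qualitative-to-quantitative idea, under loud assumptions: only one Effort factor (Time) is modeled, and the mapping from slider value to timing curve is invented for illustration; the real EMOTE model maps all four factors to a richer set of kinematic parameters.

```python
def effort_timing(t, time_factor):
    """Warp normalized time t in [0, 1]. A 'sudden' setting (+1) front-loads
    the motion, 'sustained' (-1) back-loads it, 0 is linear.
    The exponent mapping is a hypothetical choice, not EMOTE's."""
    k = 2.0 ** (-time_factor)  # slider in [-1, 1] -> curve exponent
    return t ** k

def interpolate(p0, p1, t, time_factor=0.0):
    """Keypoint interpolation whose velocity profile is shaped by Effort Time."""
    s = effort_timing(t, time_factor)
    return tuple(a + s * (b - a) for a, b in zip(p0, p1))
```

The point of the paradigm is exactly this separation: the user touches only the qualitative slider, never the exponent or the interpolation arithmetic.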
Interactive human locomotion using motion graphs and mobility maps
Graph-based approaches for sequencing motion capture data have produced some of the most realistic and controllable character motion to date. Most previous graph-based approaches have employed a run-time global search to find paths through the motion graph that meet user-defined constraints such as a desired locomotion path. Such searches do not scale well to large numbers of characters. In this thesis, we describe a locomotion approach that benefits from the realism of graph-based approaches while maintaining basic user control and scaling well to large numbers of characters. Our approach is based on precomputing multiple least-cost sequences from every state in a state-action graph. We store these precomputed sequences in a data structure called a mobility map and perform a local search of this map at run time to generate motion sequences in real time that achieve user constraints in a natural manner. We demonstrate the quality of the motion through various example locomotion tasks, including target tracking and collision avoidance. We demonstrate scalability by animating crowds of up to one hundred and fifty rendered articulated walking characters at real-time rates.
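The precomputation described above can be sketched as an all-pairs shortest-path pass over a toy state-action graph (Dijkstra from every state); the data layout, graph, and cost model below are assumptions for illustration, not the thesis's actual mobility-map structure:

```python
import heapq

def precompute_mobility_map(graph, costs):
    """For every state, precompute least-cost distances to all reachable
    states plus the first action of each optimal sequence. At run time the
    map is only looked up locally, so no global search is needed."""
    def dijkstra(src):
        dist, first = {src: 0.0}, {}
        pq = [(0.0, src, None)]  # (cost so far, state, first action taken)
        while pq:
            d, u, f = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue  # stale queue entry
            if f is not None:
                first.setdefault(u, f)
            for action, v in graph.get(u, []):
                nd = d + costs[(u, action)]
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v, f if f is not None else action))
        return dist, first
    return {s: dijkstra(s) for s in graph}

# Hypothetical state-action graph: states are motion-graph nodes,
# actions are clip transitions, costs could encode transition smoothness.
graph = {
    "A": [("walk", "B")],
    "B": [("turn", "C"), ("walk", "A")],
    "C": [],
}
costs = {("A", "walk"): 1.0, ("B", "turn"): 2.0, ("B", "walk"): 1.0}
mobility = precompute_mobility_map(graph, costs)
```

At run time a character in state `"A"` that must reach `"C"` just reads off the stored first action, which is why the scheme scales to large crowds.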
Sketch-based skeleton-driven 2D animation and motion capture.
This research is concerned with the development of a set of novel sketch-based, skeleton-driven 2D animation techniques, which allow the user to produce realistic 2D character animation efficiently. The technique consists of three parts: sketch-based skeleton-driven 2D animation production, 2D motion capture, and a cartoon animation filter. For 2D animation production, the traditional way is for experienced animators to draw the key-frames manually, a laborious and time-consuming process. With the proposed techniques, the user only inputs one image of a character and sketches a skeleton for each subsequent key-frame. The system then deforms the character according to the sketches and produces the animation automatically. To perform 2D shape deformation, a variable-length needle model is developed, which divides the deformation into two stages: skeleton-driven deformation and nonlinear deformation in joint areas. This approach preserves the local geometric features and global area during animation. Compared with existing 2D shape deformation algorithms, it reduces the computational complexity while still yielding plausible deformation results. To capture the motion of a character from existing 2D image sequences, a 2D motion capture technique is presented. Since this technique is skeleton-driven, the motion of a 2D character is captured by tracking the joint positions. Using both geometric and visual features, this problem can be solved by optimization, which prevents self-occlusion and feature disappearance. After tracking, the motion data are retargeted to a new character using the deformation algorithm proposed in the first part. This facilitates the reuse of the characteristics of motion contained in existing moving images, making the process of cartoon generation easy for artists and novices alike. Subsequent to the 2D animation production and motion capture, a "Cartoon Animation Filter" is implemented and applied. Following the animation principles, this filter processes two types of cartoon input: a single frame of a cartoon character and motion capture data from an image sequence. It adds anticipation and follow-through to the motion, with related squash-and-stretch effects.
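One common formulation of such a filter, sketched here under stated assumptions (the parameters and the exact discretization are illustrative, not the thesis's implementation), subtracts a smoothed second difference from a 1-D motion signal: the dip before a move reads as anticipation, the overshoot after it as follow-through.

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian weights over [-radius, radius]."""
    ks = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(ks)
    return [k / s for k in ks]

def smooth(signal, kernel):
    """Convolve with edge clamping so the output has the input's length."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def cartoon_filter(signal, sigma=2.0, amount=5.0):
    """Subtract a Gaussian-smoothed second difference: anticipation appears
    before a move, follow-through (overshoot) after it."""
    second = [0.0] + [signal[i - 1] - 2 * signal[i] + signal[i + 1]
                      for i in range(1, len(signal) - 1)] + [0.0]
    log = smooth(second, gaussian_kernel(sigma, int(3 * sigma)))
    return [x - amount * g for x, g in zip(signal, log)]

# A step from 0 to 1, as if a joint coordinate jumped between two poses.
step = [0.0] * 10 + [1.0] * 10
exaggerated = cartoon_filter(step)
```

Applied per joint coordinate, the same 1-D operation yields the squash-and-stretch flavor described above.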
The design and engineering of variable character morphology
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, September 2001. "August 2001." Includes bibliographical references (p. 79-82). This thesis explores the technical challenges and the creative possibilities afforded by a computational system that allows behavioral control over the appearance of a character's morphology. Working within the framework of the Synthetic Characters behavior architecture, a system has been implemented that allows a character's internal state to drive changes in its morphology. The system allows for real-time, multi-target blending between body geometries, skeletons, and animations. The results reflect qualitative changes in the character's appearance and state. Throughout the thesis, character sketches are used to demonstrate the potential of this integrated approach to behavior and character morphology. Scott Michael Eaton. S.M.
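Multi-target blending of body geometries can be sketched as a normalized weighted average of corresponding vertex positions; the morph targets and weights below are hypothetical, and joint transforms would need quaternion blending rather than this linear average.

```python
import numpy as np

def blend_targets(targets, weights):
    """Blend N morph targets of identical topology: a weighted average of
    corresponding vertices, with weights normalized so the blend is affine.
    Internal state would drive the weights at run time."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, np.asarray(targets, dtype=float), axes=1)

# Hypothetical morph targets: two body geometries with three shared vertices.
lean = [[0, 0, 0], [0, 1, 0], [0, 2, 0]]
bulk = [[1, 0, 0], [1, 1, 0], [1, 2, 0]]
halfway = blend_targets([lean, bulk], [0.5, 0.5])
```

Animating the weight vector over time produces the continuous, state-driven morphology changes the thesis describes.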