
    Splicing of concurrent upper-body motion spaces with locomotion

    In this paper, we present a motion splicing technique for generating concurrent upper-body actions that occur simultaneously with the evolution of a lower-body locomotion sequence. Specifically, we show that a layered interpolation motion model generates upper-body poses while assigning different actions to each upper-body part. Hence, in the proposed motion splicing approach, it is possible to increase both the number of generated motions and the number of desired actions that virtual characters can perform. Additionally, we propose an iterative motion blending solution, inverse pseudo-blending, to maintain smooth and natural interaction between the virtual character and the virtual environment; inverse pseudo-blending is a constraint-based motion editing technique that blends the motions enclosed in a tetrahedron by minimising the distances between the end-effector positions of the actual and blended motions. To evaluate the proposed solution, we implemented an example-based application for interactive motion splicing based on specified constraints. Finally, the generated results show that the proposed solution can be beneficially applied to interactive applications where concurrent upper-body actions are desired.
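
    The abstract's description of inverse pseudo-blending, blending the four motions enclosed in a tetrahedron while minimising end-effector distance, suggests a small constrained fit that is iterated because the blended end effector is not linear in the weights. The sketch below is a guess at the shape of such a scheme, not the paper's exact algorithm; the names blend and end_effector, and the correction loop itself, are assumptions.

        import numpy as np

        def barycentric_weights(verts, target):
            # verts: (4, 3) end-effector positions of the four example motions
            # target: (3,) desired end-effector position inside their tetrahedron
            A = np.vstack([verts.T, np.ones(4)])   # rows encode: sum w*v = target, sum w = 1
            b = np.append(target, 1.0)
            return np.linalg.solve(A, b)

        def inverse_pseudo_blend(blend, end_effector, verts, target, iters=10):
            # blend(w) -> blended motion for convex weights w over the four examples
            # end_effector(m) -> (3,) end-effector position of motion m
            target = np.asarray(target, dtype=float)
            goal = target.copy()
            w = np.full(4, 0.25)
            for _ in range(iters):
                w = barycentric_weights(verts, goal)
                w = np.clip(w, 0.0, None)
                w /= w.sum()                              # keep a convex combination
                error = target - end_effector(blend(w))   # residual of the nonlinear blend
                goal = goal + error                       # shift the linear goal to cancel it
            return w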

    Real Time Animation of Virtual Humans: A Trade-off Between Naturalness and Control

    Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or ‘natural’) and allow interaction with the surroundings and other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the control that can be exerted over the motion. We show mechanisms to parametrize, combine (on different body parts) and concatenate motions generated by different animation techniques. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combinations of different animation paradigms to enhance both naturalness and control.
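
    A standard mechanism for combining motions on different body parts, in the spirit of this abstract, is per-joint layering: take the lower body from a locomotion clip and slerp the upper-body joints toward a second clip. A minimal sketch, assuming poses are dictionaries mapping joint names to unit quaternions; the joint lists and the single upper_weight parameter are illustrative, not from the paper.

        import numpy as np

        UPPER = ["spine", "neck", "l_shoulder", "l_elbow", "r_shoulder", "r_elbow"]

        def slerp(q0, q1, t):
            # spherical interpolation between unit quaternions [w, x, y, z]
            d = np.dot(q0, q1)
            if d < 0.0:
                q1, d = -q1, -d                      # take the short arc
            if d > 0.9995:                           # nearly parallel: lerp and renormalise
                q = q0 + t * (q1 - q0)
                return q / np.linalg.norm(q)
            th = np.arccos(d)
            return (np.sin((1 - t) * th) * q0 + np.sin(t * th) * q1) / np.sin(th)

        def layered_pose(locomotion_pose, gesture_pose, upper_weight=1.0):
            # lower body follows locomotion; upper body is blended toward the gesture
            out = dict(locomotion_pose)
            for j in UPPER:
                out[j] = slerp(locomotion_pose[j], gesture_pose[j], upper_weight)
            return out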

    Dynamic obstacle avoidance for real-time character animation

    This paper proposes a novel method to control virtual characters in dynamic environments. A virtual character is animated by a locomotion and jumping engine, enabling production of continuous parameterized motions. At any time during runtime, flat obstacles (e.g. a puddle of water) can be created and placed in front of a character. The method first decides whether the character is able to get around or jump over the obstacle. Then the motion parameters are accordingly modified. The transition from locomotion to jump is performed with an improved motion blending technique. While traditional blending approaches let the user choose the transition time and duration manually, our approach automatically controls transitions between motion patterns whose parameters are not known in advance. In addition, according to the animation context, blending operations are executed during a precise period of time to preserve specific physical properties. This ensures coherent movements over the parameter space of the original input motions. The initial locomotion type and speed are smoothly varied with respect to the required jump type and length. This variation is carefully computed in order to place the take-off foot as close to the created obstacle as possible.
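
    The runtime decision, whether to get around or jump over a flat obstacle, and the placement of the take-off foot can be pictured with a toy planner. The thresholds, capability limit and stride adjustment below are invented for illustration; the actual method modifies continuous motion parameters rather than applying discrete rules.

        MAX_JUMP_LENGTH = 1.6   # assumed capability limit of the jump engine, in metres

        def plan_over_obstacle(obstacle_depth, lateral_clearance):
            # decide between steering around the flat obstacle and jumping over it
            if lateral_clearance > 0.5:            # assumed: enough free space to steer around
                return "detour"
            if obstacle_depth <= MAX_JUMP_LENGTH:
                return "jump"
            return "stop"

        def takeoff_stride(dist_to_obstacle, nominal_stride):
            # shrink or stretch the remaining strides so the take-off foot lands
            # just before the obstacle edge (an integer number of strides away)
            n = max(1, round(dist_to_obstacle / nominal_stride))
            return dist_to_obstacle / n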

    Uniqueness of human running coordination: The integration of modern and ancient evolutionary innovations

    Running is a pervasive activity across human cultures and a cornerstone of contemporary health, fitness and sporting activities. Yet for the overwhelming majority of human existence, running was an essential prerequisite for survival: a means to hunt, and a means to escape when hunted. In a very real sense, humans have evolved to run. Yet curiously, perhaps due to running’s cultural ubiquity and the natural ease with which we learn to run, we rarely consider the uniqueness of human bipedal running within the animal kingdom. Our upright, single-stance, bouncing running gait imposes a unique set of coordinative difficulties: challenges demanding that we precariously balance our fragile brains in the very position where they are most vulnerable to falling injury, while simultaneously retaining stability, steering direction of travel, and powering the upcoming stride, all within the abbreviated time-frames afforded by short, violent ground contacts separated by long flight times. These running coordination challenges are solved through the tightly integrated blending of primitive evolutionary legacies, conserved from reptilian and vertebrate lineages, and comparatively modern, more exclusively human, innovations. The integrated unification of these top-down and bottom-up control processes bestows humans with an agile control system, enabling us to readily modulate speeds, change direction, negotiate varied terrains and instantaneously adapt to changing surface conditions. The seamless integration of these evolutionary processes is facilitated by pervasive, neural and biological, activity-dependent adaptive plasticity. Over time, and with progressive exposure, this adaptive plasticity shapes neural and biological structures to best cope with regularly imposed movement challenges. This pervasive plasticity enables the gradual construction of a robust system of distributed coordinated control, composed of processes so deeply entwined that describing their functionality in isolation obscures their true, irrevocably entangled nature. Although other species rely on a similar set of coordinated processes to run, the bouncing bipedal nature of human running presents a specific set of coordination challenges, solved using a customized blend of evolved solutions. A deeper appreciation of the foundations of the running coordination phenomenon promotes conceptual clarity, potentially informing future advances in running training and running-injury rehabilitation interventions.

    An automatic tool to facilitate authoring animation blending in game engines

    Achieving realistic virtual humans is crucial in virtual reality applications and video games. Nowadays there are software and game development tools that are of great help for generating and simulating characters. They offer easy-to-use GUIs to create characters by dragging and dropping features and making small modifications. Similarly, there are tools to create animation graphs and set blending parameters, among other tasks. Unfortunately, even though these tools are relatively user friendly, achieving natural animation transitions is not straightforward, and thus non-expert users tend to spend a large amount of time generating animations that are not completely free of artefacts. In this paper we present a method to automatically generate animation blend spaces in the Unreal Engine, which offers two advantages: the first is that it provides a tool to evaluate the quality of an animation set, and the second is that the resulting graph does not depend on user skills and is thus not prone to user errors.
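
    Unreal Engine's blend-space assets are authored through its editor and C++/Blueprint API, which is not reproduced here; the sketch below only illustrates the underlying idea in plain Python: characterise each clip by a measured parameter (here root speed), sort the clips onto a one-dimensional blend space, and interpolate between the two bracketing clips at runtime. The function names and data layout are assumptions.

        import numpy as np

        def root_speed(root_positions, fps):
            # average planar (x/z) speed of the root joint over a clip of (N, 3) positions
            d = np.diff(root_positions[:, [0, 2]], axis=0)
            return np.linalg.norm(d, axis=1).sum() * fps / (len(root_positions) - 1)

        def build_blend_space(clips, fps):
            # clips: dict name -> (N, 3) root trajectory; returns clips sorted by speed
            return sorted((root_speed(traj, fps), name) for name, traj in clips.items())

        def blend_weights(space, speed):
            # find the bracketing pair of clips and interpolate linearly between them
            for (s0, n0), (s1, n1) in zip(space, space[1:]):
                if s0 <= speed <= s1:
                    t = (speed - s0) / (s1 - s0)
                    return {n0: 1.0 - t, n1: t}
            # outside the sampled range: clamp to the slowest or fastest clip
            return {space[0][1]: 1.0} if speed < space[0][0] else {space[-1][1]: 1.0}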

    Data-driven techniques for animating virtual characters

    One of the key goals of current research in data-driven computer animation is the synthesis of new motion sequences from existing motion data. This thesis presents three novel techniques for synthesising the motion of a virtual character from existing motion data and develops a framework of solutions to key character animation problems. The first motion synthesis technique is based on the character’s locomotion composition process. This technique examines the ability to synthesise a variety of locomotion behaviours while easily specified constraints (footprints) are placed in three-dimensional space. This is achieved by analysing existing motion data and by assigning the locomotion behaviour transition process to transition graphs that provide information about this process. However, virtual characters should also be able to animate according to different style variations. Therefore, a second technique is presented to synthesise real-time style variations of the character’s motion. A novel technique is developed that uses correlation between two different motion styles and, by assigning the motion synthesis process to a parameterised maximum a posteriori (MAP) framework, retrieves the desired style content of the input motion in real time, enhancing the realism of the newly synthesised motion sequence. The third technique presents the ability to synthesise the motion of the character’s fingers either off-line or in real time during the performance capture process. The advantage of both variants is their ability to assign the motion searching process to motion features. The presented technique is able to estimate and synthesise a valid motion of the character’s fingers, enhancing the realism of the input motion. To conclude, this thesis demonstrates that these three novel techniques combine into a framework that enables the realistic synthesis of virtual character movements, eliminating post-processing and enabling fast synthesis of the required motion.
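
    The first technique's transition graphs, which supply information about the locomotion behaviour transition process, can be pictured as a graph search over behaviours. A minimal sketch with invented behaviour labels; the thesis derives the graph from analysis of the motion data rather than hand-authoring it.

        from collections import deque

        # hypothetical transition graph: behaviour -> behaviours reachable from it
        transitions = {
            "walk": {"run", "sidestep", "stop"},
            "run": {"walk", "jump"},
            "sidestep": {"walk"},
            "jump": {"run"},
            "stop": {"walk"},
        }

        def plan_behaviours(start, goal, graph):
            # breadth-first search for the shortest chain of behaviour transitions
            queue, seen = deque([[start]]), {start}
            while queue:
                path = queue.popleft()
                if path[-1] == goal:
                    return path
                for nxt in graph.get(path[-1], ()):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(path + [nxt])
            return None

        print(plan_behaviours("stop", "jump", transitions))  # ['stop', 'walk', 'run', 'jump']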

    Data-driven synthesis of realistic human motion using motion graphs

    M.S. thesis by Hüseyin Dirican, Department of Computer Engineering, Graduate School of Engineering and Science, Bilkent University, Ankara, 2014 (bibliographical references on leaves 53-56). Realistic human motion is an essential part of a diverse range of media, such as feature films, video games and virtual environments. Motion capture provides realistic human motion data using sensor technology. However, motion capture data is not flexible, and this drawback limits its utility in practice. In this thesis, we propose a two-stage approach that makes motion capture data reusable to synthesize new motions in real time via motion graphs. Starting from a dataset of various motions, we construct a motion graph of similar motion segments and calculate the parameters, such as blending parameters, needed in the second stage. In the second stage, we synthesize a new human motion in real time, depending on the blending technique selected. Three different blending techniques, namely linear blending, cubic blending and anticipation-based blending, are provided to the user. In addition, a motion clip preference approach, applied to the motion search algorithm, enables users to control the motion clip types in the resulting motion.
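
    The difference between the linear and cubic blending options comes down to the weight curve applied across the transition window. A minimal sketch, assuming poses are stored as per-joint angle vectors (linear interpolation stands in for proper quaternion blending); anticipation-based blending, which the thesis also offers, is not reproduced here.

        import numpy as np

        def linear_weight(t):
            return t

        def cubic_weight(t):
            # smoothstep: zero blend velocity at both ends of the transition
            return t * t * (3.0 - 2.0 * t)

        def blend_frames(pose_a, pose_b, t, weight_fn=cubic_weight):
            # pose_a, pose_b: dict joint -> np.array of angles; t in [0, 1] over the window
            w = weight_fn(float(np.clip(t, 0.0, 1.0)))
            return {j: (1.0 - w) * pose_a[j] + w * pose_b[j] for j in pose_a}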

    Modeling of predictive human movement coordination patterns for applications in computer graphics

    The planning of human body movements is highly predictive. Within a sequence of actions, the anticipation of a final task goal modulates the individual actions within the overall pattern of motion. An example is a sequence of steps that is coordinated with the grasping of an object at the end of the step sequence. In contrast to this property of natural human movements, real-time animation systems in computer graphics often model complex activities by a sequential concatenation of individual pre-stored movements, where only the movement before accomplishing the goal is adapted. We present a learning-based technique that models the highly adaptive predictive movement coordination in humans, illustrated for the example of the coordination of walking and reaching. The proposed system for the real-time synthesis of human movements models complex activities by a sequential concatenation of movements, which are approximated by the superposition of kinematic primitives that have been learned from trajectory data by anechoic demixing, using a step-wise regression approach. The kinematic primitives are then approximated by stable solutions of nonlinear dynamical systems (dynamic primitives) that can be embedded in control architectures. We present a control architecture that generates highly adaptive predictive full-body movements for reaching while walking, with highly human-like appearance. We demonstrate that the generated behavior is highly robust, even in the presence of strong perturbations that require the online insertion of additional steps in order to accomplish the desired task.
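
    The kinematic primitives here follow the anechoic mixing model: each joint-angle trajectory is a mean plus a weighted sum of time-shifted source signals, x_i(t) = m_i + Σ_j a_ij · s_j(t − τ_ij). The sketch below reconstructs a pose from given parameters; the mixing weights, delays and stand-in sinusoidal sources are hypothetical, since in the paper the sources are learned from trajectory data by anechoic demixing.

        import numpy as np

        def reconstruct_angles(mean, A, tau, sources, t):
            # anechoic mixing model: x_i(t) = mean_i + sum_j A[i, j] * s_j(t - tau[i, j])
            x = np.array(mean, dtype=float)
            for i in range(A.shape[0]):          # degrees of freedom (joint angles)
                for j in range(A.shape[1]):      # learned kinematic primitives
                    x[i] += A[i, j] * sources[j](t - tau[i, j])
            return x

        # stand-in periodic sources; in the paper these are learned by anechoic demixing
        sources = [lambda t: np.sin(2 * np.pi * t), lambda t: np.sin(4 * np.pi * t)]
        A = np.array([[0.8, 0.1], [0.3, 0.6]])          # mixing weights (hypothetical)
        tau = np.array([[0.00, 0.10], [0.25, 0.05]])    # per-joint delays (hypothetical)
        pose = reconstruct_angles([0.0, 0.0], A, tau, sources, t=0.4)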