7 research outputs found

    Robust on-line adaptive footplant detection and enforcement for locomotion

    A common problem in virtual character computer animation is the preservation of the basic foot-floor constraint (or footplant), which must first be detected and then enforced. This paper describes a system capable of generating motion while continuously preserving footplants in a real-time, dynamically evolving context. The system introduces a constraint detection method that improves on classical techniques by adaptively selecting threshold values according to motion type and quality. The footplants are then enforced using a numerical inverse kinematics solver. As opposed to previous approaches, we define the footplant by attaching two effectors to it, whose positions at the beginning of the constraint can be modified, for example to place the foot on the ground. However, the corrected posture at the beginning of the constraint is needed before the constraint starts, to ensure smoothness between the unconstrained and constrained states. We therefore present a new approach based on motion anticipation, which computes animation postures in advance according to time-evolving motion parameters such as locomotion speed and type. We illustrate our on-line approach with continuously modified locomotion patterns, and demonstrate its ability to correct motion artifacts such as foot sliding, to change the constraint position, and to modify a straight walk motion into a curved one.
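    The detection step lends itself to a compact illustration. Below is a minimal Python sketch of velocity- and height-based footplant detection with thresholds adapted to locomotion speed; the function name, the scaling rule, and the parameter values are illustrative assumptions, not the paper's exact formulation.

        # Hypothetical sketch: flag frames where the foot is low and nearly
        # still, with thresholds loosened as locomotion speed increases.
        def detect_footplants(foot_positions, dt, locomotion_speed,
                              base_height_thresh=0.03, base_speed_thresh=0.15):
            """Return a boolean per frame: True when the foot counts as planted.

            foot_positions: list of (x, y, z) foot positions per frame (y up).
            locomotion_speed: current speed, used to adapt the thresholds.
            """
            height_thresh = base_height_thresh * (1.0 + 0.5 * locomotion_speed)
            speed_thresh = base_speed_thresh * (1.0 + locomotion_speed)

            planted = []
            for i, (x, y, z) in enumerate(foot_positions):
                if i == 0:
                    speed = 0.0
                else:
                    px, py, pz = foot_positions[i - 1]
                    speed = ((x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2) ** 0.5 / dt
                planted.append(y < height_thresh and speed < speed_thresh)
            return planted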

    Imitation and social learning for synthetic characters

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2004. Includes bibliographical references (p. 137-149). We want to build animated characters and robots capable of rich social interactions with humans and each other, and who are able to learn by observing those around them. An increasing amount of evidence suggests that, in human infants, the ability to learn by watching others, and in particular, the ability to imitate, could be crucial precursors to the development of appropriate social behavior, and ultimately the ability to reason about the thoughts, intents, beliefs, and desires of others. We have created a number of imitative characters and robots, the latest of which is Max T. Mouse, an anthropomorphic animated mouse character who is able to observe the actions he sees his friend Morris Mouse performing, and compare them to the actions he knows how to perform himself. This matching process allows Max to accurately imitate Morris's gestures and actions, even when provided with limited synthetic visual input. Furthermore, by using his own perception, motor, and action systems as models for the behavioral and perceptual capabilities of others (a process known as Simulation Theory in the cognitive literature), Max can begin to identify simple goals and motivations for Morris's behavior, an important step towards developing characters with a full theory of mind. Finally, Max can learn about unfamiliar objects in his environment, such as food and toys, by observing and correctly interpreting Morris's interactions with these objects, demonstrating his ability to take advantage of socially acquired information. By Daphna Buchsbaum. S.M.
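    A minimal sketch of the matching step described above, assuming a nearest-neighbor comparison of an observed trajectory against the character's own action repertoire; the data layout and distance measure are assumptions made for illustration, not the thesis's actual perception or motor systems.

        # Hypothetical sketch: pick the known action whose trajectory is
        # closest (frame-by-frame squared distance) to the observed one.
        def match_observed_action(observed, repertoire):
            """observed: list of feature vectors (e.g. joint angles) over time.
            repertoire: dict mapping action name -> list of feature vectors."""
            def distance(a, b):
                n = min(len(a), len(b))
                return sum(sum((x - y) ** 2 for x, y in zip(a[i], b[i]))
                           for i in range(n)) / n
            return min(repertoire, key=lambda name: distance(observed, repertoire[name]))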

    Interactive techniques for motion deformation of articulated figures using prioritized constraints

    Convincingly animating virtual humans has become of great interest in many fields in recent years. In computer games, for example, virtual humans are often the main characters; failing to animate them realistically may wreck all the effort made to give the player a feeling of immersion. At the same time, computer-generated movies have become very popular, which has increased the demand for animation realism. Indeed, virtual humans are now the new stars in movies like Final Fantasy or Shrek, and are even used for special effects in movies like Matrix. In this context, virtual human animations not only need to be realistic, as in computer games, but also expressive, as real actors are. While creating animations from scratch is still widespread, it demands artistic skills and hours if not days to produce a few seconds of animation. For these reasons, there has been growing interest in motion capture: instead of creating a motion, the idea is to reproduce the movements of a live performer. However, motion capture is not perfect and still needs improvement. The motion capture process involves complex techniques and equipment, which often results in noisy animations that must be edited. Moreover, it is hard to foresee the final motion exactly; for example, it often happens that the director of a movie decides to change the script, and the animators then have to change part of or the whole animation. The aim of this thesis is therefore to provide animators with interactive tools that help them easily and rapidly modify preexisting animations. We first present our Inverse Kinematics solver, used to enforce kinematic constraints at each time of an animation. Afterward, we propose a motion deformation framework offering the user a way to specify prioritized constraints and to edit an initial animation so that it may be used in a new context (characters, environment, etc.). Finally, we introduce a semi-automatic algorithm to extract important motion features from motion capture animation, which may serve as a first guess for the animators when specifying the important characteristics an initial animation should respect.
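    As a rough illustration of what "prioritized constraints" means in an IK setting, the sketch below performs one step of a standard two-level nullspace-projection scheme, where a low-priority task is only allowed to act in the nullspace of a high-priority one. The damped pseudo-inverse and the variable names are assumptions made for clarity, not the thesis's exact solver.

        # Hypothetical sketch of one prioritized IK iteration (two priority levels).
        import numpy as np

        def prioritized_ik_step(J1, e1, J2, e2, damping=1e-2):
            """J1, e1: Jacobian and task error of the high-priority constraint.
            J2, e2: Jacobian and task error of the low-priority constraint.
            Returns the joint-angle update dq."""
            def damped_pinv(J):
                JJt = J @ J.T
                return J.T @ np.linalg.inv(JJt + damping * np.eye(JJt.shape[0]))

            J1_pinv = damped_pinv(J1)
            dq1 = J1_pinv @ e1                        # satisfy the high-priority task
            N1 = np.eye(J1.shape[1]) - J1_pinv @ J1   # nullspace projector of task 1
            # The low-priority task is solved only within that nullspace, so it
            # cannot disturb the high-priority constraint.
            dq2 = damped_pinv(J2 @ N1) @ (e2 - J2 @ dq1)
            return dq1 + N1 @ dq2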

    On-line locomotion synthesis for virtual humans

    Ever since the development of Computer Graphics in the industrial and academic worlds in the seventies, public knowledge and expertise have grown tremendously, notably because of the increasing fascination for Computer Animation. This specific field of Computer Graphics gathers numerous techniques, especially for the animation of characters or virtual humans in movies and video games. To create such high-fidelity animations, particular interest has been dedicated to motion capture, a technology which allows the 3D movement of a live performer to be recorded. The realism of the resulting motion is convincing. However, this technique offers little control to animators, as the recorded motion can only be played back. Recently, many advances based on motion capture have been published, concerning slight but precise modifications of an original motion or the parameterization of large motion databases. The challenge consists in combining motion realism with intuitive on-line motion control, while preserving real-time performance. In the first part of this thesis, we aim to add a brick to the wall of motion parameterization techniques based on motion capture, by introducing a generic motion model for locomotion and jump activities. For this purpose, we simplify the motion representation using a statistical method in order to facilitate the elaboration of an efficient parametric model. This model is structured in hierarchical levels, allowing intuitive motion synthesis with high-level parameters. In addition, we present a space and time normalization process to adapt our model to characters of various sizes. In the second part, we integrate this motion model into an animation engine, allowing for the generation of a continuous stream of motion for virtual humans. We provide two additional tools to improve the flexibility of our engine. Based on the concept of motion anticipation, we first introduce an on-line method for detecting and enforcing foot-ground constraints; hence, a straight-line walking motion can be smoothly modified into a curved one. Secondly, we propose an approach for the automatic and coherent synthesis of transitions from locomotion to jump motions (and inversely), taking into account their respective properties. Finally, we consider the interaction of a virtual human with its environment. Given initial and final conditions on the locomotion speed and foot positions, we propose a method which computes the corresponding trajectory. To illustrate this method, we propose a case study which mirrors as closely as possible the behavior of a human confronted with an obstacle: at any time, obstacles may be interactively created in front of a moving virtual human, and our method computes, on-line, a trajectory allowing the virtual human to jump precisely over the obstacle.
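    The first part's idea of simplifying the motion representation with a statistical method before building a parametric model can be sketched roughly as follows: reduce example locomotion clips with PCA, then blend their low-dimensional coefficients according to a high-level parameter such as speed. The function names, the linear blending rule, and the data layout are assumptions for illustration, not the thesis's actual model.

        # Hypothetical sketch: PCA over example clips, then synthesis by
        # interpolating the coefficients of the two examples bracketing a target speed.
        import numpy as np

        def build_pca_model(clips, num_components=10):
            """clips: array of shape (num_examples, num_frames * dofs)."""
            mean = clips.mean(axis=0)
            centered = clips - mean
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            basis = vt[:num_components]              # principal directions
            coeffs = centered @ basis.T              # one coefficient row per clip
            return mean, basis, coeffs

        def synthesize(mean, basis, coeffs, example_speeds, target_speed):
            order = np.argsort(example_speeds)
            speeds = np.asarray(example_speeds)[order]
            sorted_coeffs = coeffs[order]
            i = int(np.clip(np.searchsorted(speeds, target_speed), 1, len(speeds) - 1))
            w = (target_speed - speeds[i - 1]) / (speeds[i] - speeds[i - 1])
            blended = (1 - w) * sorted_coeffs[i - 1] + w * sorted_coeffs[i]
            return mean + blended @ basis            # back to the full motion vector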

    Motion Abstraction and Mapping with Spatial Constraints

    Motion Abstraction and Mapping with Spatial Constraints. Rama Bindiganavale and Norman I. Badler, Computer and Information Science Department, University of Pennsylvania, PA 19104-6389, USA. Abstract: A new technique is introduced to abstract and edit motion capture data with spatial constraints. Spatial proximities of end-effectors with tagged objects during zero-crossings in acceleration space are used to isolate significant events and abstract constraints from an agent's action. The abstracted data is edited and applied to another agent of a different anthropometric size, and a similar action is executed while maintaining the constraints. This technique is specifically useful for actions involving interactions of a human agent with itself and other objects. 1 Introduction: When one person mimics the actions of another, the two actions may be similar but not exact. The dissimilarities are mainly due to the differences in sizes between the two people, as well as individual performance or stylistic variations.
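    The event-abstraction rule described above can be sketched compactly: flag frames where an end-effector's acceleration crosses zero while the effector is close to a tagged object. The function name, the finite-difference scheme, and the proximity test are assumptions for illustration, not the paper's exact criteria.

        # Hypothetical sketch: significant events = acceleration zero-crossings
        # that occur near a tagged object.
        def abstract_constraint_events(effector_positions, object_position, dt,
                                       proximity=0.1):
            """Return indices of frames treated as significant constraint events."""
            vel = [[(b - a) / dt for a, b in zip(p0, p1)]
                   for p0, p1 in zip(effector_positions, effector_positions[1:])]
            acc = [[(b - a) / dt for a, b in zip(v0, v1)]
                   for v0, v1 in zip(vel, vel[1:])]

            events = []
            for i in range(1, len(acc)):
                # Zero-crossing in acceleration space: a sign change on any axis.
                crossed = any(a0 * a1 < 0 for a0, a1 in zip(acc[i - 1], acc[i]))
                pos = effector_positions[i + 1]      # frame aligned with acc[i]
                dist = sum((p - q) ** 2
                           for p, q in zip(pos, object_position)) ** 0.5
                if crossed and dist < proximity:
                    events.append(i + 1)
            return events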