27 research outputs found

    Simulating virtual humans in networked virtual environments

    In the past decade, networked virtual environments (NVEs) have been an increasingly active area of research, with the first commercial systems emerging recently. The graphical and behavioral representation of users within such systems is a particularly important issue whose development has lagged behind other issues such as network architectures and space structuring. We discuss the importance of using virtual humans within these systems and give a brief overview of several virtual human technologies, in particular those used for the simulation of crowds. As the main technical contribution, the paper presents the integration of these technologies with the COVEN-DIVE platform, the extension of the DIVE system developed within the COVEN project. In conjunction with this, we present our contributions through the COVEN project to the MPEG-4 standard concerning the representation of virtual humans.
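    For context on the MPEG-4 virtual-human representation mentioned above: MPEG-4 animates a shared humanoid skeleton through compact integer body animation parameters streamed per frame. The Python sketch below is a loose illustration of that idea only; the class, field names, and joint identifier are invented for this listing, not taken from the standard or the paper.

        # Minimal sketch of an MPEG-4-style body animation parameter (BAP) frame.
        # Field names and joint identifiers are invented, not the standard's.
        from dataclasses import dataclass, field

        @dataclass
        class BAPFrame:
            """One frame of joint rotations for a shared virtual-human skeleton."""
            timestamp_ms: int
            # MPEG-4 encodes joint angles as integers; we assume units of
            # 1e-5 radians here, in the same spirit of compactness.
            angles: dict[str, int] = field(default_factory=dict)

            def set_angle(self, joint: str, radians: float) -> None:
                self.angles[joint] = round(radians * 1e5)

            def get_angle(self, joint: str) -> float:
                return self.angles.get(joint, 0) / 1e5

        # A networked system would stream one such frame per avatar per tick,
        # so every client reconstructs the same posture from a few integers.
        frame = BAPFrame(timestamp_ms=40)
        frame.set_angle("l_shoulder_flexion", 0.35)
        print(frame.get_angle("l_shoulder_flexion"))  # -> 0.35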

    Dynamic obstacle avoidance for real-time character animation

    This paper proposes a novel method to control virtual characters in dynamic environments. A virtual character is animated by a locomotion and jumping engine, enabling production of continuous parameterized motions. At any time during runtime, flat obstacles (e.g. a puddle of water) can be created and placed in front of a character. The method first decides whether the character is able to get around or jump over the obstacle; the motion parameters are then modified accordingly. The transition from locomotion to jump is performed with an improved motion blending technique. While traditional blending approaches let the user choose the transition time and duration manually, our approach automatically controls transitions between motion patterns whose parameters are not known in advance. In addition, according to the animation context, blending operations are executed during a precise period of time to preserve specific physical properties. This ensures coherent movements over the parameter space of the original input motions. The initial locomotion type and speed are smoothly varied with respect to the required jump type and length. This variation is carefully computed in order to place the take-off foot as close to the created obstacle as possible.
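    As a rough illustration of the decision and blend-timing logic described above, here is a minimal Python sketch. The threshold, function names, and cosine ease curve are assumptions standing in for the paper's locomotion/jump engine and its physically informed transition timing.

        # Hedged sketch of the obstacle decision and blend-timing logic.
        # Thresholds, names, and the cosine ease are assumptions, not the
        # paper's engine.
        import math

        MAX_JUMP_LENGTH = 1.8  # assumed maximum jump length in metres

        def plan_over_obstacle(obstacle_depth: float, can_go_around: bool) -> str:
            """Decide whether the character detours, jumps, or stops."""
            if can_go_around:
                return "go_around"
            if obstacle_depth <= MAX_JUMP_LENGTH:
                return "jump"
            return "stop"

        def blend_weight(t: float, t_start: float, duration: float) -> float:
            """Locomotion-to-jump blend weight over an automatically chosen
            window: 0.0 is pure locomotion, 1.0 is pure jump. A smooth cosine
            ease stands in for the paper's physically derived transition."""
            s = min(max((t - t_start) / duration, 0.0), 1.0)
            return 0.5 - 0.5 * math.cos(math.pi * s)

        # Example: blend over the 0.4 s leading up to the predicted take-off.
        take_off = 2.0
        print(plan_over_obstacle(1.2, can_go_around=False))      # -> jump
        print(round(blend_weight(1.8, take_off - 0.4, 0.4), 2))  # -> 0.5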

    A Motion Control Scheme for Animating Expressive Arm Movements

    Current methods for figure animation involve a tradeoff between the level of realism captured in the movements and the ease of generating the animations. We introduce a motion control paradigm that circumvents this tradeoff: it provides the ability to generate a wide range of natural-looking movements with minimal user labor. Effort, which is one part of Rudolf Laban's system for observing and analyzing movement, describes the qualitative aspects of movement. Our motion control paradigm simplifies the generation of expressive movements by proceduralizing these qualitative aspects to hide the non-intuitive, quantitative aspects of movement. We build a model of Effort using a set of kinematic movement parameters that defines how a figure moves between goal keypoints. Our motion control scheme provides control through Effort's four-dimensional system of textual descriptors, providing a level of control thus far missing from behavioral animation systems and offering novel specification and editing capabilities on top of traditional keyframing and inverse kinematics methods. Since our Effort model is computationally inexpensive, Effort-based motion control systems can work in real time. We demonstrate our motion control scheme by implementing EMOTE (Expressive MOTion Engine), a character animation module for expressive arm movements. EMOTE works with inverse kinematics to control the qualitative aspects of end-effector-specified movements. The user specifies general movements by entering a sequence of goal positions for each hand, then expresses the essence of the movement by adjusting sliders for the Effort motion factors: Space, Weight, Time, and Flow. EMOTE produces a wide range of expressive movements, provides an easy-to-use interface (more intuitive than joint-angle interpolation curves or physical parameters), and features interactive editing and real-time motion generation.
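    To make the idea of proceduralizing Effort concrete, here is a hypothetical Python sketch mapping the four Effort factors to low-level kinematic parameters. The coefficients and parameter names are invented for illustration; EMOTE's actual model is considerably more elaborate.

        # Illustrative mapping from Effort factors to kinematic parameters.
        # Coefficients and parameter names are invented; EMOTE's actual
        # model is considerably more elaborate.
        from dataclasses import dataclass

        @dataclass
        class Effort:
            # Each factor ranges over Laban's bipolar scale from -1 to +1.
            space: float   # indirect (-1) .. direct (+1)
            weight: float  # light (-1) .. strong (+1)
            time: float    # sustained (-1) .. sudden (+1)
            flow: float    # free (-1) .. bound (+1)

        def kinematic_params(e: Effort) -> dict[str, float]:
            """Map Effort to low-level parameters of an arm-movement generator."""
            return {
                # Direct movements take straighter paths between keypoints.
                "path_curvature": 0.5 * (1.0 - e.space),
                # Sudden movements accelerate harder between keypoints.
                "acceleration_scale": 1.0 + 0.8 * e.time,
                # Strong movements get larger amplitude.
                "amplitude_scale": 1.0 + 0.5 * e.weight,
                # Bound movements damp overshoot; free movements allow it.
                "overshoot": 0.3 * (1.0 - e.flow),
            }

        # A punch-like quality: direct, strong, sudden, bound.
        print(kinematic_params(Effort(space=1, weight=1, time=1, flow=1)))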

    The Cadaver in the Machine: The Social Practices of Measurement and Validation in Motion Capture Technology

    Motion capture systems, used across various domains, make body representations concrete through technical processes. We argue that the measurement of bodies and the validation of measurements for motion capture systems can be understood as social practices. By analyzing the findings of a systematic literature review (N=278) through the lens of social practice theory, we show how these practices, and their varying attention to errors, become ingrained in motion capture design and innovation over time. Moreover, we show how contemporary motion capture systems perpetuate assumptions about human bodies and their movements. We suggest that social practices of measurement and validation are ubiquitous in the development of data- and sensor-driven systems more broadly, and provide this work as a basis for investigating hidden design assumptions and their potential negative consequences in human-computer interaction. (34 pages, 9 figures; to appear in the 2024 ACM CHI Conference on Human Factors in Computing Systems, CHI '24.)

    3D face modelling from sparse data

    EThOS - Electronic Theses Online Service, United Kingdom.

    Imitation and social learning for synthetic characters

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2004. Includes bibliographical references (p. 137-149). We want to build animated characters and robots capable of rich social interactions with humans and each other, and who are able to learn by observing those around them. An increasing amount of evidence suggests that, in human infants, the ability to learn by watching others, and in particular the ability to imitate, could be crucial precursors to the development of appropriate social behavior, and ultimately the ability to reason about the thoughts, intents, beliefs, and desires of others. We have created a number of imitative characters and robots, the latest of which is Max T. Mouse, an anthropomorphic animated mouse character who is able to observe the actions he sees his friend Morris Mouse performing and compare them to the actions he knows how to perform himself. This matching process allows Max to accurately imitate Morris's gestures and actions, even when provided with limited synthetic visual input. Furthermore, by using his own perception, motor, and action systems as models for the behavioral and perceptual capabilities of others (a process known as Simulation Theory in the cognitive literature), Max can begin to identify simple goals and motivations for Morris's behavior, an important step towards developing characters with a full theory of mind. Finally, Max can learn about unfamiliar objects in his environment, such as food and toys, by observing and correctly interpreting Morris's interactions with these objects, demonstrating his ability to take advantage of socially acquired information. By Daphna Buchsbaum, S.M.
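    The matching process described above, comparing observed actions against the character's own motor repertoire, can be illustrated with a toy Python sketch. The distance metric, data layout, and action library below are assumptions for illustration, not the thesis's actual perception and action systems.

        # Toy sketch of action matching: compare an observed pose trajectory
        # against the character's own repertoire and pick the closest known
        # action. Metric and data layout are assumptions, not the thesis's.
        import numpy as np

        def trajectory_distance(a: np.ndarray, b: np.ndarray) -> float:
            """Mean joint-angle distance between two (frames x joints)
            trajectories, resampling the observation to the template length."""
            idx = np.linspace(0, len(a) - 1, num=len(b)).round().astype(int)
            return float(np.mean(np.abs(a[idx] - b)))

        def match_action(observed: np.ndarray,
                         repertoire: dict[str, np.ndarray]) -> str:
            """Return the name of the known action closest to the observation."""
            return min(repertoire,
                       key=lambda name: trajectory_distance(observed, repertoire[name]))

        # Two known one-joint actions: a wave (oscillation) and a reach (ramp).
        t = np.linspace(0, 1, 30)[:, None]
        repertoire = {"wave": np.sin(6 * t), "reach": t}
        observed = np.sin(6 * np.linspace(0, 1, 45))[:, None]  # a longer wave
        print(match_action(observed, repertoire))  # -> wave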

    Motion Planning : from Digital Actors to Humanoid Robots

    The goal of this work is to develop motion planning algorithms for human-like figures, taking into account the geometry, kinematics, and dynamics of the mechanism and its environment. By motion planning we mean the ability to specify high-level directives and transform them into low-level instructions that produce sequences of joint values reproducing human movements, while avoiding obstacles in environments that may be more or less constrained. As a result, a directive such as "carry this plate from the table to the piano corner" can be translated into a series of intermediate goals and constraints that yield the appropriate motions of the robot's articulations, carrying out the requested action while avoiding the obstacles in the room. Our algorithms are based on the observation that humans do not plan their exact motions when heading to a location: we roughly plan our walking direction and, as we advance, execute the joint motions needed to reach the desired place. This has led us to design algorithms that: 1. Produce a rough collision-free path that takes a simplified model of the mechanism to the specified goal. 2. Use available controllers to generate a trajectory that assigns values to each of the mechanism's articulations so as to follow that path. 3. Iteratively modify these trajectories until all the geometric, kinematic, and dynamic constraints are satisfied.

    Throughout this work, we apply this three-stage approach to the problem of generating motions for human-like figures that manipulate bulky objects while walking. In the process, several interesting problems and their solutions are brought into focus: three-dimensional collision avoidance, two-handed object manipulation, cooperative manipulation among several characters or robots, and the combination of heterogeneous behaviors. The main contribution of this work is a model for the automatic generation of manipulation and locomotion motions. This model addresses the difficulties above in the context of bipedal mechanisms and rests on three principles: a functional decomposition of the mechanism's limbs, a model of cooperative manipulation, and a simplified model of the mechanism's ability to move through its environment. This work is, above all, one of synthesis. We make use of available techniques for controlling the locomotion of bipedal mechanisms (controllers), drawn from both computer animation and humanoid robotics, and connect them to a novel motion planner. The planner is controller-agnostic: it can produce collision-free motions with any controller whose inputs and outputs remain compatible. Naturally, the performance of our planner depends largely on the quality of the controller used.

    In this thesis, the motion planner is connected to different controllers and validated on different mechanisms, both virtual and physical. This work was carried out within joint research projects between France, Russia, and Japan, in which we provided the motion planning framework for the partners' controllers. Several papers in peer-reviewed international conferences have resulted from these collaborations; the present work compiles those results and provides a more comprehensive and detailed account of the system and its benefits, both across the different mechanisms and in comparison with alternative approaches.
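    The three-stage approach above (rough path for a reduced model, controller-generated trajectory, iterative constraint repair) can be summarized as a generic loop. The Python sketch below renders it under assumed interfaces; every function name is a placeholder, and the thesis's actual planner is far more involved.

        # Hedged sketch of the three-stage planning loop. The controller
        # interface, constraint check, and repair step are placeholders;
        # the thesis's planner is far more involved.
        from typing import Callable, List

        Path = List[tuple]        # rough waypoints for the simplified model
        Trajectory = List[dict]   # per-frame joint-value dictionaries

        def plan_motion(
            find_rough_path: Callable[[], Path],                 # stage 1
            controller: Callable[[Path], Trajectory],            # stage 2
            violates_constraints: Callable[[Trajectory], bool],  # stage 3
            repair: Callable[[Trajectory], Trajectory],          # stage 3
            max_iterations: int = 50,
        ) -> Trajectory:
            """Stage 1 finds a rough collision-free path for a reduced model;
            stage 2 lets any compatible controller turn it into full joint
            trajectories (the planner is controller-agnostic); stage 3
            iterates local repairs until all constraints hold."""
            path = find_rough_path()
            trajectory = controller(path)
            for _ in range(max_iterations):
                if not violates_constraints(trajectory):
                    return trajectory
                trajectory = repair(trajectory)
            raise RuntimeError("constraints could not be satisfied")

    Passing the controller in as a plain callable mirrors the planner's controller-agnostic design: any locomotion controller with compatible inputs and outputs can be substituted without changing the loop.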