
    Investigating User Experience Using Gesture-based and Immersive-based Interfaces on Animation Learners

    Creating animation is an exciting activity, but the long and laborious process can be extremely challenging. Keyframe animation is a complex, time-consuming technique in which the animator changes a character's poses by modifying the timing and spacing of an action, frame by frame. It demands a laborious, repetitive cycle of constantly reviewing the results to ensure the movement timing is accurate. A new approach to animation is needed to provide a more intuitive animating experience. With the evolution of interaction design and the spread of the Natural User Interface (NUI) in recent years, a NUI-based animation system is expected to offer better usability and efficiency for animation. This thesis investigates the effectiveness of gesture-based and immersive-based interfaces as part of animation systems. The practice-based element of this research is a prototype hand-gesture interface, created from the insights of reflective practice. An experimental design is employed to investigate the usability and efficiency of gesture-based and immersive-based interfaces in comparison with a conventional GUI/WIMP application. The findings showed that gesture-based and immersive-based interfaces can attract animators through the efficiency of the system, although participants showed no difference in usability preference between the two interfaces. Most participants were pleased with NUI interfaces and the new technologies used in the animation process, but for detailed work and fine control of the application they preferred the conventional GUI/WIMP. Despite the awkwardness of devising gesture-based and immersive-based interfaces for animation, the concept showed potential for a faster animation process, an enjoyable learning system, and stimulating interest in a kinaesthetic learning experience.
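
    As background for readers unfamiliar with the keyframe workflow this thesis critiques, the following is a minimal sketch of keyframe interpolation in Python; the pose representation and joint names are illustrative assumptions, not taken from the thesis.

```python
# Minimal sketch of linear keyframe interpolation, the mechanism behind
# the keyframe workflow described above: poses are set at sparse key
# frames and the in-between frames are computed automatically.

def interpolate_pose(keyframes, frame):
    """keyframes: sorted list of (frame_number, pose_dict); returns the
    pose at `frame` by linear interpolation between surrounding keys."""
    times = [t for t, _ in keyframes]
    if frame <= times[0]:
        return keyframes[0][1]
    if frame >= times[-1]:
        return keyframes[-1][1]
    for (t0, p0), (t1, p1) in zip(keyframes, keyframes[1:]):
        if t0 <= frame <= t1:
            u = (frame - t0) / (t1 - t0)  # normalized time between keys
            return {joint: p0[joint] + u * (p1[joint] - p0[joint])
                    for joint in p0}

# The animator sets an elbow pose only at frames 0 and 24; the system
# fills in frame 12.
keys = [(0, {"elbow": 0.0}), (24, {"elbow": 90.0})]
print(interpolate_pose(keys, 12))  # {'elbow': 45.0}
```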

    The Stretch-Engine: A Method for Creating Exaggeration in Animation Through Squash and Stretch

    Animators exaggerate character motion to emphasize personality and actions. Exaggeration is expressed by pushing a character's pose, changing the action's timing, or changing a character's form. The last method, referred to as squash and stretch, creates the most noticeable change in exaggeration; however, without practice, squash and stretch can adversely affect the animation. This work introduces a method for creating exaggeration in motion that focuses solely on squash and stretch to control changes in a character's form. It does so by displaying a limb's path of motion and altering the shape of that path to change the limb's form. This paper surveys existing tools for creating animation and exaggeration, then discusses their functionality and effectiveness and how they influenced the design of the Stretch-Engine. The Stretch-Engine is a prototype tool developed to demonstrate this approach, designed to integrate into existing animation software, Maya. It contains a bipedal-humanoid rig with the controls necessary for animation and the ability to squash and stretch, accessed through a user interface that lets the animator control squash and stretch by changing the shape of generated paths of motion. The method is evaluated by comparing animations of realistic motion to versions created with the Stretch-Engine. The stretched versions exaggerated their realistic counterparts, creating effects similar to Looney Tunes animation. The method fits within the animator's workflow and helps new artists visualize and control squash and stretch to create exaggeration.
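
    The Stretch-Engine itself is a Maya prototype and is not publicly available; the sketch below only illustrates the volume-preserving scaling at the heart of squash and stretch, with a speed-driven stretch factor assumed for illustration rather than taken from the paper.

```python
import math

# Minimal sketch of volume-preserving squash and stretch, the form
# change the Stretch-Engine controls. A limb segment is stretched along
# its direction of travel in proportion to speed; its cross-section is
# scaled by 1/sqrt(stretch) so the segment's volume stays constant.

def squash_stretch_scale(speed, gain=0.1, max_stretch=2.0):
    """Return (along, across) scale factors for a segment moving at `speed`."""
    along = min(1.0 + gain * speed, max_stretch)   # stretch along motion
    across = 1.0 / math.sqrt(along)                # preserve volume: along * across^2 == 1
    return along, across

for speed in (0.0, 5.0, 20.0):
    along, across = squash_stretch_scale(speed)
    print(f"speed={speed:5.1f}  stretch={along:.2f}  cross-section={across:.2f}")
```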

    A Process for the Semi-Automated Generation of Life-Sized, Interactive 3D Character Models for Holographic Projection

    By mixing digital data into the real world, Augmented Reality (AR) can deliver a potent immersive and interactive experience to its users. In many application contexts, this requires the capability to deploy animated, high-fidelity 3D character models. In this paper, we propose a novel approach that uses 3D scanning to efficiently transform an actor into a photorealistic, animated character. The generated 3D assistant must be able to perform recorded motion-capture data and to deliver dialogue with lip sync so that it can interact naturally with users. Our approach to creating these virtual AR assistants combines photogrammetric scanning, motion capture, and free-viewpoint video, integrated in Unity. We deploy the Occipital Structure sensor to acquire static high-resolution textured surfaces and a Vicon motion capture system to track series of movements. The capture process consists of scanning; reconstruction with Wrap 3 and Maya; texture-map editing in Photoshop to reduce artefacts; and rigging with Maya and MotionBuilder to make the models fit for animation and for lip sync with LipSyncPro. We test the approach in Unity by scanning two human models with 23 captured animations each. Our findings indicate that the major factors affecting result quality are environment setup, lighting, and processing constraints.
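
    Most stages of this pipeline run through GUI tools (Wrap 3, Maya, Photoshop, MotionBuilder, Unity) and are not scriptable end to end; the sketch below merely encodes the stage order and the artifact each stage produces as a reproducible checklist. All identifiers are illustrative.

```python
# Stage order of the capture pipeline described in the abstract, with
# the artifact each stage produces. A checklist, not an automation.
PIPELINE = [
    ("scan",        "raw textured surface (Occipital Structure sensor)"),
    ("reconstruct", "clean mesh (Wrap 3 + Maya)"),
    ("retexture",   "artefact-reduced texture maps (Photoshop)"),
    ("rig",         "skeleton-bound character (Maya + MotionBuilder)"),
    ("animate",     "motion-capture clips applied (Vicon data)"),
    ("lip_sync",    "visemes for dialogue (LipSyncPro)"),
    ("integrate",   "interactive AR assistant (Unity)"),
]

for i, (stage, artifact) in enumerate(PIPELINE, 1):
    print(f"{i}. {stage:<12} -> {artifact}")
```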

    Modeling variation of human motion

    The synthesis of realistic human motion with large variations and different styles is of growing interest in simulation applications such as the game industry, psychological experiments, and ergonomic analysis. Data-driven motion synthesis approaches are powerful tools for producing high-fidelity character animations, and with the development of motion-capture technologies, more and more motion data are publicly available. However, efficiently reusing a large amount of motion data to create new motions for arbitrary scenarios poses challenges, especially for unsupervised motion synthesis. This thesis presents a series of works that analyze and model the variation in human motion data. The goal is to learn statistical generative models that can create any number of new human animations with rich variations and styles; in our motion synthesis framework, these generative models are used by motion controllers to create new animations for different scenarios. The work is presented in three main chapters. First, we explore how variation is represented in motion data: learning a compact latent space that can expressively capture motion variation is essential for modeling such data, and we propose a novel motion latent-space learning approach that intrinsically handles the spatial-temporal properties of motion data. Second, we present our Morphable Graph framework for human motion modeling and synthesis in assembly-workshop scenarios; a series of studies applies statistical motion modeling and synthesis to complex assembly-workshop use cases. Learning the distribution of motion data provides a compact representation of motion variation and converts motion synthesis tasks into optimization problems. Finally, we show how the style variations of human activities can be modeled with a limited number of examples. Natural human movements display a rich repertoire of styles and personalities, but it is difficult to obtain enough examples for data-driven approaches. We propose a conditional variational autoencoder (CVAE) that combines the large variation in a neutral motion database with style information from a limited number of examples, and we show that our approach can generate any number of natural-looking variations of human motion in a style similar to the target.
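
    The thesis's models (deep latent spaces, Morphable Graphs, a CVAE) are substantially richer than anything that fits here; as a minimal stand-in for the core idea of a compact latent space over motion, the sketch below fits PCA to pose vectors and samples new poses from a Gaussian in that space. The data and dimensions are placeholders.

```python
import numpy as np

# Stand-in for "learn a low-dimensional distribution over poses, then
# sample it to generate variation". Random data replaces real mocap.
rng = np.random.default_rng(0)
poses = rng.normal(size=(500, 60))           # 500 frames, 60 degrees of freedom

mean = poses.mean(axis=0)
centered = poses - mean
# Principal directions via SVD; keep the top-k axes as the latent basis.
_, s, vt = np.linalg.svd(centered, full_matrices=False)
k = 8
basis = vt[:k]                                # (k, 60) latent axes
stdev = s[:k] / np.sqrt(len(poses) - 1)       # per-axis spread of the data

def sample_pose():
    """Draw a latent code and decode it back to a full pose vector."""
    z = rng.normal(size=k) * stdev
    return mean + z @ basis

print(sample_pose().shape)                    # (60,)
```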

    Applications of Gait Analysis Data Compression for 3D Character Animation

    We investigate a streamlined method for compression, approximation, and fast interpolation of gait-analysis data using Catmull-Rom splines. We are interested not only in raw compression, but also in extracting the most useful data from an animation for subsequent manipulation. Our method allows compression approaching 85 percent while the resulting animation remains indistinguishable from the original to human observers, yielding significant memory savings; the untransformed compressed animation is also potentially useful for gait retargeting.
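
    A minimal sketch of the idea, assuming uniform keyframe spacing and a synthetic joint-angle signal (the paper's data and exact scheme are not reproduced here): keep every Nth frame and reconstruct the rest with a Catmull-Rom spline.

```python
import math

def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom interpolation between p1 and p2 at t in [0, 1]."""
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

signal = [math.sin(0.1 * i) for i in range(100)]   # stand-in joint angle
step = 7                                           # keep every 7th frame
keys = signal[::step]

def reconstruct(i):
    """Rebuild frame i from the retained key frames only."""
    k, t = divmod(i, step)
    last = len(keys) - 1
    p0, p1 = keys[max(k - 1, 0)], keys[min(k, last)]
    p2, p3 = keys[min(k + 1, last)], keys[min(k + 2, last)]
    return catmull_rom(p0, p1, p2, p3, t / step)

max_err = max(abs(signal[i] - reconstruct(i)) for i in range(100))
print(f"kept {len(keys)}/{len(signal)} samples "
      f"({1 - len(keys)/len(signal):.0%} compression), max error {max_err:.4f}")
```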

    Combined filtering and keyframe reduction for motion capture data

    M.S. thesis by Onur Önder, Department of Computer Engineering, Institute of Engineering and Science, Bilkent University, Ankara, 2007. Includes bibliographical references (leaves 36-38).
    Two new methods for combined filtering and key-frame reduction of motion capture data are proposed. Filtering of motion capture data is necessary to eliminate jitter introduced by the capture system; although jitter removal is needed to obtain a more realistic animation, it can oversmooth the motion data if not done properly. Key-frame reduction, on the other hand, allows animators to edit motion data easily by representing animation curves with a significantly smaller number of key frames. The first proposed technique achieves key-frame reduction and jitter removal simultaneously by fitting a Hermite curve to the motion capture data using dynamic programming. The second applies curve-simplification algorithms to the motion capture data until the desired reduction is reached. The results of the two algorithms are evaluated and compared, and both subjective and objective results are presented.
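
    The thesis does not name its curve-simplification algorithm here; as one standard representative of that family, the sketch below applies Ramer-Douglas-Peucker to a synthetic animation curve, keeping the retained samples as key frames. It is an illustrative assumption, not the thesis's exact method.

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker on a list of (time, value); returns the keys."""
    if len(points) < 3:
        return points
    (x0, y0), (x1, y1) = points[0], points[-1]
    norm = math.hypot(x1 - x0, y1 - y0)
    # Find the interior point farthest from the chord between the endpoints.
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        x, y = points[i]
        d = abs((y1 - y0) * x - (x1 - x0) * y + x1 * y0 - y1 * x0) / norm
        if d > dmax:
            dmax, idx = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]       # chord is close enough
    left = rdp(points[:idx + 1], epsilon)    # recurse on both halves
    right = rdp(points[idx:], epsilon)
    return left[:-1] + right

curve = [(t, math.sin(0.2 * t)) for t in range(60)]   # stand-in motion curve
keys = rdp(curve, epsilon=0.01)
print(f"reduced {len(curve)} frames to {len(keys)} key frames")
```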