    Elckerlyc in practice - on the integration of a BML Realizer in real applications

    Building a complete virtual human application from scratch is a daunting task, and it makes sense to rely on existing platforms for behavior generation. When building such an interactive application, one needs to be able to adapt and extend the capabilities of the virtual human offered by the platform, without having to make invasive modifications to the platform itself. This paper describes how Elckerlyc, a novel platform for controlling a virtual human, offers these possibilities.
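
    For context, a minimal sketch (not from the paper) of the kind of input such a realizer consumes: the Python snippet below only composes a BML 1.0 block; the speech and gesture content is invented for illustration, and how the block is delivered to Elckerlyc depends on the particular integration and is not shown here.

# Minimal sketch: composing a BML 1.0 block of the kind a realizer such as
# Elckerlyc consumes. The behaviours are illustrative only; delivery to the
# realizer (socket, API call, file) is integration-specific and omitted.
BML_BLOCK = """\
<bml id="bml1" xmlns="http://www.bml-initiative.org/bml/bml-1.0">
  <speech id="speech1">
    <text>Hello, I am a virtual human.</text>
  </speech>
  <gesture id="gesture1" lexeme="BEAT" start="speech1:start"/>
</bml>
"""

if __name__ == "__main__":
    print(BML_BLOCK)  # inspect the block; sending it to a running realizer is not shown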

    Investigating User Experience Using Gesture-based and Immersive-based Interfaces on Animation Learners

    Creating animation is an exciting activity, but the long and laborious process can be extremely challenging. Keyframe animation is a complex, time-consuming technique in which the poses of characters are changed by modifying the time and space of an action, frame by frame. This involves the laborious, repetitive process of constantly reviewing the animation to make sure the movement timing is accurate. A new approach to animation is required to provide a more intuitive animating experience. With the evolution of interaction design and the Natural User Interface (NUI) becoming widespread in recent years, a NUI-based animation system is expected to offer better usability and efficiency that would benefit animation. This thesis investigates the effectiveness of gesture-based and immersive-based interfaces as part of animation systems. The practice-based element of this research is a prototype hand-gesture interface, created from experiences gained through reflective practice. An experimental design is employed to investigate the usability and efficiency of gesture-based and immersive-based interfaces in comparison with a conventional GUI/WIMP application. The findings showed that gesture-based and immersive-based interfaces appeal to animators in terms of the efficiency of the system, but there was no difference in usability preference between the two interfaces. Most participants were comfortable with NUI interfaces and the new technologies used in the animation process, but for detailed work and fine control of the application the conventional GUI/WIMP remained preferable. Despite the awkwardness of devising gesture-based and immersive-based interfaces for animation, the concept showed potential for a faster animation process, an enjoyable learning system, and stimulating interest in a kinaesthetic learning experience.

    Synchronizing Keyframe Facial Animation to Multiple Text-to-Speech Engines and Natural Voice with Fast Response Time

    This thesis aims to create an automated lip-synchronization system for real-time applications. Specifically, the system is required to be fast, consist of a limited number of keyframes with small memory requirements, and create fluid, believable animations that synchronize with text-to-speech engines as well as raw voice data. The algorithms utilize traditional keyframe animation and a novel method of keyframe selection. Additionally, phoneme-to-keyframe mapping, synchronization, and simple blending rules are employed. The algorithms provide blending between keyframe images, borrow information from neighboring phonemes, accentuate the phonemes b, p, and m, differentiate between keyframes for phonemes with allophonic variations, and provide prosodic variation by including emotion while speaking. The lip-sync animation synchronizes with multiple synthesized voices and with human speech. A fast and versatile online real-time Java chat interface was created to exhibit vivid facial animation. Results show that the animation algorithms are fast and produce accurate lip-synchronization. Additionally, surveys showed that the animations are visually pleasing and improve speech understandability 96% of the time. Applications for this project include internet chat, interactive teaching of foreign languages, animated news broadcasting, enhanced game technology, and cell phone messaging.
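
    To illustrate the kind of phoneme-to-keyframe mapping and blending the abstract describes (a hedged sketch, not the thesis code), the Python function below blends the current phoneme's keyframe with the upcoming one and gives extra weight to the bilabials b, p, and m; the mapping table, the weighting factor, and the timing format are all assumptions.

# Sketch only: map phonemes to viseme keyframes and blend between neighbours.
# The mapping, the 1.2 accent factor, and the (phoneme, start, end) timing
# format are assumptions, not the thesis' actual data.
PHONEME_TO_KEYFRAME = {
    "AA": "open", "IY": "wide", "UW": "round",
    "B": "closed", "P": "closed", "M": "closed",
    "F": "lip_bite", "V": "lip_bite",
}
ACCENTED = {"B", "P", "M"}  # bilabials get a stronger closure

def keyframe_weights(phonemes, t):
    """phonemes: list of (phoneme, start_sec, end_sec); t: current time in seconds.
    Returns {keyframe_name: weight}, blending the current and next phoneme."""
    weights = {}
    for i, (ph, start, end) in enumerate(phonemes):
        if not (start <= t < end):
            continue
        alpha = (t - start) / max(end - start, 1e-6)  # progress through phoneme, 0..1
        cur = PHONEME_TO_KEYFRAME.get(ph, "neutral")
        boost = 1.2 if ph in ACCENTED else 1.0
        weights[cur] = weights.get(cur, 0.0) + boost * (1.0 - alpha)
        if i + 1 < len(phonemes):  # borrow the upcoming keyframe (co-articulation)
            nxt = PHONEME_TO_KEYFRAME.get(phonemes[i + 1][0], "neutral")
            weights[nxt] = weights.get(nxt, 0.0) + alpha
        break
    total = sum(weights.values()) or 1.0
    return {k: v / total for k, v in weights.items()}

# Example: partway through a "B" that is followed by "AA",
# the result mixes the "closed" and "open" keyframes.
print(keyframe_weights([("B", 0.0, 0.4), ("AA", 0.4, 0.7)], 0.2))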

    Sketching-out virtual humans: From 2d storyboarding to immediate 3d character animation

    Virtual beings are playing a remarkable role in today’s public entertainment, while ordinary users are still treated as audiences due to the lack of appropriate expertise, equipment, and computer skills. In this paper, we present a fast and intuitive storyboarding interface, which enables users to sketch out 3D virtual humans, 2D/3D animations, and character intercommunication. We devised an intuitive “stick figure → fleshing-out → skin mapping” graphical animation pipeline, which realises the whole process of keyframing, 3D pose reconstruction, virtual human modelling, motion path/timing control, and the final animation synthesis by almost pure 2D sketching. A “creative model-based method” is developed, which emulates a human perception process, to generate 3D human bodies of varied sizes, shapes, and fat distributions. Meanwhile, our current system also supports sketch-based crowd animation and the storyboarding of 3D multiple-character intercommunication. This system has been formally tested by various users on Tablet PCs. After minimal training, even a beginner can create vivid virtual humans and animate them within minutes.
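
    The 3D pose reconstruction step in this kind of pipeline is often explained through the classic foreshortening construction: with known bone lengths and a roughly orthographic sketch, the depth offset of each child joint can be recovered up to a towards/away-from-camera ambiguity. The Python sketch below shows only that textbook idea under those assumptions; it is not the paper's "creative model-based method".

import math

def lift_bone(parent_3d, parent_2d, child_2d, bone_length, z_sign=+1):
    """Recover the child joint's 3D position from 2D sketch coordinates,
    assuming orthographic projection and a known bone length in the same
    units as the 2D coordinates. z_sign picks towards/away from the camera."""
    dx = child_2d[0] - parent_2d[0]
    dy = child_2d[1] - parent_2d[1]
    planar_sq = dx * dx + dy * dy
    # foreshortening: the shorter the bone appears, the larger its depth offset
    dz = math.sqrt(max(bone_length ** 2 - planar_sq, 0.0))
    x, y, z = parent_3d
    return (x + dx, y + dy, z + z_sign * dz)

# Example: an upper arm of length 5 drawn foreshortened to length 3 in 2D.
print(lift_bone((0.0, 0.0, 0.0), (0.0, 0.0), (3.0, 0.0), 5.0))  # -> (3.0, 0.0, 4.0)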

    3D performance capture for facial animation

    This work describes how a photogrammetry-based 3D capture system can be used as an input device for animation. The 3D Dynamic Capture System is used to capture the motion of a human face, which is extracted from a sequence of 3D models captured at TV frame rate. Initially, the positions of a set of landmarks on the face are extracted. These landmarks are then used to provide motion data in two different ways. First, a high-level description of the movements is extracted, and this can be used as input to a procedural animation package (i.e. CreaToon). Second, the landmarks can be used as registration points for a conformation process in which the model to be animated is modified to match the captured model. This approach gives a new sequence of models, which have the structure of the drawn model but the movement of the captured sequence.
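
    A rough picture of the conformation step (an assumed sketch, not the paper's actual algorithm): given corresponding landmarks on the drawn model and on a captured frame, each vertex of the drawn model can be displaced by an inverse-distance-weighted blend of the landmark offsets.

import numpy as np

def conform(vertices, src_landmarks, dst_landmarks, power=2.0):
    """vertices: (N, 3) drawn-model vertices; src_landmarks / dst_landmarks:
    (K, 3) corresponding landmark positions on the drawn and captured models.
    Returns the drawn vertices pulled towards the captured landmark layout."""
    offsets = dst_landmarks - src_landmarks                      # (K, 3) landmark motion
    d = np.linalg.norm(vertices[:, None, :] - src_landmarks[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power                       # (N, K) inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)
    return vertices + w @ offsets                                # blended per-vertex displacement

    Applying such a step to every captured frame would yield a sequence of models with the drawn model's structure but the captured motion, in the spirit of the abstract.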

    Markerless motion capture for 3D human model animation using depth camera

    3D animation is created using a keyframe-based system in 3D animation software such as Blender and Maya. Because of the long production time and the high level of expertise this requires, motion capture devices are used as an alternative, and the Microsoft Kinect v2 sensor is one of them. This research analyses the capabilities of the Kinect sensor in producing 3D human model animations, comparing motion capture with a keyframe-based animation system against a reference live motion performance. The quality, production time, and cost of both animation results were compared. The experimental results show that the motion capture system with the Kinect sensor consumed less time (only 2.6%) and cost (30%) in the long run (10 minutes of animation) compared to the keyframe-based system, but it produced lower-quality animation. This was due to reduced body-detection accuracy under occlusion. Moreover, the sensor’s constant assumption that the performer’s body faces forward makes it unreliable for a wide variety of movements. Furthermore, the standard test defined in this research covers the movements of most body parts and can be used to evaluate other motion capture systems.

    Designing the 3D Animation “Rahmat Allah yang Terindah” Using the Keyframe Method

    3D animation is one of the applications of computer graphics, and 3D animated films can be used as learning media to convey messages. In practice, many 3D animated films still produce stiff and unrealistic movements because of shortcomings in the animating process; this problem is the background of this research. The goal is to find the right method to produce animated movements that are not rigid, so that the message is conveyed well. The theme of this research is Islamic, centred on a mother’s love. The method used in the animating process is the keyframe method: a way of making 3D animation that moves from one point to the next until the object images form a unified visualization. The 3D animation was designed using Blender software and carried out in eight stages: story design, storyboarding, character modelling, texturing, rigging, animating, rendering, and editing. This research produces a 90-second 3D animated film in mp4 format. The keyframing method also proves effective and easy to apply for producing smooth animation movement when the interpolation value is made smaller.
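
    The keyframe workflow described above maps directly onto Blender's Python API; the snippet below is a minimal sketch meant to run inside Blender (the object, frame numbers, and positions are placeholders, not the film's assets) that inserts location keyframes and sets the interpolation of the resulting F-curve points, which is where the smoothness mentioned in the abstract is controlled.

import bpy

obj = bpy.context.active_object  # whichever object is being animated

obj.location = (0.0, 0.0, 0.0)
obj.keyframe_insert(data_path="location", frame=1)

obj.location = (2.0, 0.0, 1.0)
obj.keyframe_insert(data_path="location", frame=25)

# Smoothness is governed by the interpolation between keyframes; smaller
# intervals between keyframes and smooth interpolation give softer motion.
for fcurve in obj.animation_data.action.fcurves:
    for kp in fcurve.keyframe_points:
        kp.interpolation = 'BEZIER'  # or 'LINEAR' for constant-speed segments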