
    Elckerlyc in practice - on the integration of a BML Realizer in real applications

    Building a complete virtual human application from scratch is a daunting task, and it makes sense to rely on existing platforms for behavior generation. When building such an interactive application, one needs to be able to adapt and extend the capabilities of the virtual human offered by the platform, without having to make invasive modifications to the platform itself. This paper describes how Elckerlyc, a novel platform for controlling a virtual human, offers these possibilities.
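    BML itself is a small XML language; as a rough illustration of what an application hands to a realizer such as Elckerlyc, the hedged Python sketch below builds a minimal BML block and ships it over a plain socket. The transport, port and send_block helper are assumptions for illustration, not Elckerlyc's actual interface.

```python
# Hypothetical sketch: driving a BML realizer from an application without
# modifying the realizer itself. The socket transport and port are assumptions.
import socket

BML_BLOCK = """<bml id="greet" xmlns="http://www.bml-initiative.org/bml/bml-1.0">
  <speech id="s1"><text>Hello, nice to meet you.</text></speech>
  <gesture id="g1" lexeme="BEAT" start="s1:start" end="s1:end"/>
</bml>"""

def send_block(bml_xml: str, host: str = "localhost", port: int = 7500) -> None:
    """Ship a BML block to a (hypothetical) realizer endpoint over TCP."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(bml_xml.encode("utf-8"))

if __name__ == "__main__":
    send_block(BML_BLOCK)
```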

    Implementation of MPEG-4's Subdivision Surfaces Tools

    This work concerns the implementation of an MPEG-4 decoder for subdivision surfaces, a powerful 3D paradigm that compactly represents piecewise smooth surfaces. The study takes place in the framework of MPEG-4 AFX, the extension of the MPEG-4 standard that includes subdivision surfaces. This document introduces, in some detail, the theory of subdivision surfaces in the two forms present in MPEG-4: plain and detailed/wavelet subdivision surfaces. It concentrates in particular on wavelet subdivision surfaces, which permit progressive 3D mesh compression.
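    As a rough illustration of the plain (non-wavelet) case, the sketch below performs one step of simple midpoint subdivision on a triangle mesh; it is not the MPEG-4 AFX decoder, and real schemes such as Loop subdivision also reposition vertices toward a smooth limit surface, while wavelet subdivision additionally stores per-level detail coefficients.

```python
# Minimal sketch: split every triangle into four by inserting edge midpoints.
def subdivide(vertices, triangles):
    vertices = list(vertices)
    midpoint_cache = {}

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            ax, ay, az = vertices[i]
            bx, by, bz = vertices[j]
            vertices.append(((ax + bx) / 2, (ay + by) / 2, (az + bz) / 2))
            midpoint_cache[key] = len(vertices) - 1
        return midpoint_cache[key]

    new_triangles = []
    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_triangles += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return vertices, new_triangles
```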

    Instruction document on multimedia formats: optimal accessibility of audio, video and images

    We increasingly express ourselves through multimedia; Internet traffic already consists for the most part of audio and video. A variety of formats are used for this purpose, often without due consideration. This document provides background for the choices that can be made when making video and audio available. In this context, open standards are (at present) less common than closed standards. Nevertheless, open standards are more useful in terms of sustainable access to multimedia content. This document provides insight into the relevant considerations to help you make the right choice when selecting formats.

    LUCIA: An open source 3D expressive avatar for multimodal h.m.i.

    LUCIA is an MPEG-4 facial animation system developed at ISTC-CNR. It works on standard Facial Animation Parameters and speaks with the Italian version of the FESTIVAL TTS. To achieve an emotive/expressive talking head, LUCIA was built from real human data physically extracted with the ELITE optotracking movement analyzer. LUCIA can copy a real human by reproducing the movements of passive markers positioned on the face and recorded by the ELITE device, or it can be driven by an emotionally XML-tagged input text, thus realizing a true audio/visual emotive/expressive synthesis. Synchronization between visual and audio data is very important in order to create the correct WAV and FAP files needed for the animation. LUCIA's voice is based on the ISTC Italian version of the FESTIVAL-MBROLA packages, modified by means of an appropriate APML/VSML tagged language. LUCIA is available in two different versions: an open source framework and a "work in progress" WebGL version.
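    As a small illustration of the audio/visual synchronization constraint, the sketch below computes how many FAP frames are needed to cover a WAV file at a given FAP frame rate; the 25 fps default and the file name are assumptions for illustration, not LUCIA's actual pipeline.

```python
# Hedged sketch: the FAP stream must span the same duration as the audio.
import wave

def fap_frame_count(wav_path: str, fap_fps: float = 25.0) -> int:
    """Number of FAP frames needed to cover the whole audio file."""
    with wave.open(wav_path, "rb") as w:
        duration = w.getnframes() / w.getframerate()
    return round(duration * fap_fps)

# e.g. fap_frame_count("utterance.wav") -> 93 for a 3.72 s clip at 25 fps
```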

    Coarticulation and speech synchronization in MPEG-4 based facial animation

    In this paper, we present a novel coarticulation and speech synchronization framework compliant with MPEG-4 facial animation. The system we have developed uses the MPEG-4 facial animation standard and other developments to enable the creation, editing and playback of high-resolution 3D models and MPEG-4 animation streams, and it is compatible with well-known related systems such as Greta and Xface. It supports text-to-speech for dynamic speech synchronization. The framework enables real-time model simplification using quadric-based surfaces. Our coarticulation approach provides realistic and high-performance lip-sync animation, based on Cohen-Massaro's model of coarticulation adapted to the MPEG-4 facial animation (FA) specification. Preliminary experiments show that the coarticulation technique we have developed gives overall good and promising results when compared to related techniques.
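    For readers unfamiliar with the Cohen-Massaro idea, the hedged sketch below shows the dominance-function mechanism in miniature: each segment contributes an exponentially decaying dominance around its centre time, and the articulatory (or FAP) value at time t is the dominance-weighted average of the segment targets. The parameter values and data layout are illustrative, not those of the presented system.

```python
import math

def dominance(t, center, alpha=1.0, theta=12.0, c=1.0):
    """Exponential dominance of one segment at time t (seconds)."""
    return alpha * math.exp(-theta * abs(t - center) ** c)

def blended_target(t, segments):
    """F(t) = sum(D_i(t) * T_i) / sum(D_i(t)): dominance-weighted average of targets."""
    num = sum(dominance(t, s["center"], s["alpha"], s["theta"]) * s["target"] for s in segments)
    den = sum(dominance(t, s["center"], s["alpha"], s["theta"]) for s in segments)
    return num / den if den else 0.0

# Example: lip-opening trajectory across two overlapping segments (values illustrative).
segments = [
    {"center": 0.10, "alpha": 1.0, "theta": 12.0, "target": 0.8},
    {"center": 0.25, "alpha": 1.0, "theta": 12.0, "target": 0.2},
]
trajectory = [blended_target(0.01 * k, segments) for k in range(40)]
```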

    A model for adapting 3D graphics based on scalable coding, real-time simplification and remote rendering

    Most current multiplayer 3D games can only be played on dedicated platforms, requiring specifically designed content and communication over a predefined network. To overcome these limitations, the OLGA (On-Line GAming) consortium has devised a framework for developing distributed, multiplayer 3D games. Scalability at the level of content, platforms and networks is exploited to achieve the best trade-offs between complexity and quality.

    Representation and coding of 3D video data

    Deliverable D4.1 of the ANR PERSEE project. This report was produced within the framework of the ANR PERSEE project (no. ANR-09-BLAN-0170); specifically, it corresponds to deliverable D4.1 of the project.

    FacEMOTE: Qualitative Parametric Modifiers for Facial Animations

    We propose a control mechanism for facial expressions that applies a few carefully chosen parametric modifications to preexisting expression data streams. This approach applies to any facial animation resource expressed in the general MPEG-4 form, whether taken from a library of preset facial expressions, captured from live performance, or entirely manually created. The MPEG-4 Facial Animation Parameters (FAPs) represent a facial expression as a set of parameterized muscle actions, given as intensities of individual muscle movements over time. Our system varies expressions by changing the intensities and scope of sets of MPEG-4 FAPs. It creates variations in “expressiveness” across the face model rather than simply scaling, interpolating, or blending facial mesh node positions. The parameters are adapted from the Effort parameters of Laban Movement Analysis (LMA); we developed a mapping from their values onto sets of FAPs. The FacEMOTE parameters thus perturb a base expression to create a wide range of expressions. Such an approach could allow real-time face animations to change underlying speech or facial expression shapes dynamically according to current agent affect or user interaction needs.
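    As a hedged illustration of the general idea (not FacEMOTE's actual LMA-to-FAP mapping), the sketch below scales groups of FAP intensities by factors derived from the four Effort dimensions; the grouping of FAP indices and the linear mapping are assumptions for demonstration.

```python
# Illustrative perturbation of MPEG-4 FAP frames by Effort-derived scale factors.
EFFORT = {"space": 0.3, "weight": -0.2, "time": 0.5, "flow": 0.0}  # each in [-1, 1]

# Hypothetical grouping of FAP indices into facial regions.
FAP_GROUPS = {"brows": range(31, 39), "lips": range(3, 19), "jaw": range(1, 3)}

def group_scale(effort, group):
    """Map Effort values to a multiplicative intensity scale per facial region."""
    base = 1.0 + 0.5 * effort["time"]            # quicker movement reads as more intense
    if group == "brows":
        base *= 1.0 + 0.3 * effort["space"]      # direct/indirect attention widens brow action
    if group == "lips":
        base *= 1.0 + 0.3 * effort["weight"]     # strong/light affects articulation strength
    return base

def perturb(frame, effort=EFFORT):
    """frame: dict of fap_index -> intensity; returns a scaled copy."""
    out = dict(frame)
    for group, indices in FAP_GROUPS.items():
        s = group_scale(effort, group)
        for i in indices:
            if i in out:
                out[i] *= s
    return out
```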

    Source coding for transmission of reconstructed dynamic geometry: a rate-distortion-complexity analysis of different approaches

    Live 3D reconstruction of a human as a 3D mesh with commodity electronics is becoming a reality. Immersive applications (e.g. cloud gaming, tele-presence) benefit from effective transmission of such content over a bandwidth-limited link. In this paper we outline different approaches for compressing live reconstructed mesh geometry, based on distributing mesh reconstruction functions between sender and receiver, and we evaluate the rate-distortion-complexity trade-offs of different configurations. First, we investigate 3D mesh compression methods (dynamic and static) from MPEG-4. Second, we evaluate the option of using octree-based point cloud compression and receiver-side surface reconstruction.
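    As a rough sketch of the octree alternative, the code below emits one occupancy byte per occupied node, breadth-first, for points inside a unit cube; a real codec adds entropy coding, attribute coding and inter-frame prediction, so this only illustrates the geometry pass.

```python
# Minimal octree occupancy coding sketch (geometry only, no entropy coding).
def encode_octree(points, origin=(0.0, 0.0, 0.0), size=1.0, depth=6):
    """Return occupancy bytes for points inside the cube [origin, origin + size)^3."""
    stream = []
    queue = [(points, origin, size, 0)]
    while queue:
        pts, (ox, oy, oz), s, d = queue.pop(0)
        if d == depth or not pts:
            continue
        half = s / 2.0
        children = [[] for _ in range(8)]
        for x, y, z in pts:
            idx = (x >= ox + half) | ((y >= oy + half) << 1) | ((z >= oz + half) << 2)
            children[idx].append((x, y, z))
        stream.append(sum(1 << i for i, c in enumerate(children) if c))
        for i, c in enumerate(children):
            if c:
                child_origin = (ox + half * (i & 1),
                                oy + half * ((i >> 1) & 1),
                                oz + half * ((i >> 2) & 1))
                queue.append((c, child_origin, half, d + 1))
    return stream
```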