8 research outputs found

    Device-based decision-making for adaptation of three-dimensional content

    The goal of this research was the creation of an adaptation mechanism for the delivery of three-dimensional content. Adapting content to various network and terminal capabilities, as well as to different user preferences, is a key feature that needs to be investigated. Current state-of-the-art research on adaptation shows promising results for specific tasks and limited types of content, but is still not well suited to massive heterogeneous environments. In this research, we present a method for transmitting adapted three-dimensional content to multiple target devices. This paper presents theoretical and practical methods for adapting three-dimensional content, including shapes and animation. We also discuss practical details of the integration of our methods into the MPEG-21 and MPEG-4 architectures.
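The adaptation decision described above can be illustrated with a small sketch: given several pre-encoded variants of a 3D asset, pick the richest one that fits the target device's rendering and network budget. All names, fields and thresholds here are hypothetical illustrations, not the MPEG-21/MPEG-4 tools of the abstract.

```python
# Hypothetical device-based adaptation decision: choose the
# highest-resolution variant the terminal can render and fetch in time.
from dataclasses import dataclass

@dataclass
class Variant:
    triangles: int    # mesh complexity of this version of the asset
    size_bytes: int   # encoded size on the wire

@dataclass
class Device:
    max_triangles: int    # rendering budget of the terminal
    bandwidth_bps: float  # available network bandwidth
    max_latency_s: float  # acceptable download time

def select_variant(variants, device):
    """Return the richest variant the device can render and fetch in time."""
    feasible = [
        v for v in variants
        if v.triangles <= device.max_triangles
        and v.size_bytes * 8 / device.bandwidth_bps <= device.max_latency_s
    ]
    if not feasible:
        raise ValueError("no variant fits this device")
    return max(feasible, key=lambda v: v.triangles)

phone = Device(max_triangles=50_000, bandwidth_bps=5e6, max_latency_s=4.0)
catalog = [Variant(10_000, 400_000), Variant(50_000, 2_000_000),
           Variant(500_000, 20_000_000)]
best = select_variant(catalog, phone)  # the 50k-triangle variant fits
```

In a real MPEG-21 setting the device description and usage-environment constraints would come from standardized descriptors rather than a hand-written dataclass.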

    Coarticulation and speech synchronization in MPEG-4 based facial animation

    In this paper, we present a novel coarticulation and speech synchronization framework compliant with MPEG-4 facial animation. The system we have developed uses the MPEG-4 facial animation standard and related developments to enable the creation, editing and playback of high-resolution 3D models and MPEG-4 animation streams, and is compatible with well-known related systems such as Greta and Xface. It supports text-to-speech for dynamic speech synchronization. The framework enables real-time model simplification using quadric-based surface simplification. Our coarticulation approach provides realistic, high-performance lip-sync animation based on Cohen-Massaro's model of coarticulation adapted to the MPEG-4 facial animation (FA) specification. Preliminary experiments show that the coarticulation technique we have developed gives good and promising results compared to related techniques.
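The core idea of Cohen-Massaro-style coarticulation can be sketched briefly: each viseme exerts a time-decaying dominance function, and an animation parameter (e.g. an MPEG-4 FAP such as lip opening) is the dominance-weighted average of the per-viseme targets. The exponential shape and rate constants below are illustrative placeholders, not the paper's tuned values.

```python
# Minimal sketch of dominance-blended coarticulation.
import math

def dominance(t, center, magnitude=1.0, rate=4.0):
    """Negative-exponential dominance of a viseme centered at `center` (s)."""
    return magnitude * math.exp(-rate * abs(t - center))

def blended_parameter(t, visemes):
    """visemes: list of (center_time, target_value) pairs."""
    weights = [dominance(t, c) for c, _ in visemes]
    total = sum(weights)
    return sum(w * target for w, (_, target) in zip(weights, visemes)) / total

# Lip-opening targets: /p/ (closed, 0.0) at t=0.1 s, /a/ (open, 1.0) at t=0.3 s.
visemes = [(0.1, 0.0), (0.3, 1.0)]
midpoint = blended_parameter(0.2, visemes)  # halfway between the targets: 0.5
```

Because the dominance functions overlap, the lips start opening toward /a/ before /p/ is fully released, which is exactly the coarticulation effect the model captures.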

    Ubiquitous Scalable Graphics: An End-to-End Framework using Wavelets

    Advances in ubiquitous displays and wireless communications have fueled the emergence of exciting mobile graphics applications, including 3D virtual product catalogs, 3D maps, security monitoring systems and mobile games. The current trend of using cameras to capture geometry, material reflectance and other graphics elements means that very high-resolution inputs are available for rendering extremely photorealistic scenes. However, captured graphics content can be many gigabytes in size and must be simplified before it can be used on small mobile devices, which have limited resources such as memory, screen size and battery energy. Scaling and converting graphics content to a suitable rendering format involves running several software tools, and selecting the best resolution for a target mobile device is often done by trial and error, all of which takes time. Wireless errors can also corrupt transmitted content, and aggressive compression is needed for low-bandwidth wireless networks. Most rendering algorithms are currently optimized for visual realism and speed, but are not resource or energy efficient on mobile devices. This dissertation focuses on improving rendering performance by reducing the impact of these problems with UbiWave, an end-to-end framework enabling real-time mobile access to high-resolution graphics using wavelets. The framework tackles simplification, transmission, and resource-efficient rendering of wavelet-based graphics content on mobile devices by utilizing 1) a Perceptual Error Metric (PoI) for automatically computing the best resolution of graphics content for a given mobile display, eliminating guesswork and saving resources, 2) Unequal Error Protection (UEP) to improve resilience to wireless errors, 3) an Energy-efficient Adaptive Real-time Rendering (EARR) heuristic to balance energy consumption, rendering speed and image quality, and 4) an Energy-efficient Streaming Technique.
    The results facilitate a new class of mobile graphics applications that can gracefully adapt the lowest acceptable rendering resolution to wireless network conditions and to the availability of resources and battery energy on the mobile device.
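The wavelet scalability that UbiWave builds on can be shown with a toy one-dimensional Haar transform: the signal splits into coarse averages and detail coefficients, so a lower-resolution version is obtained by simply dropping the finest details before transmission. Real mesh and texture wavelets are multi-dimensional and multi-level; this sketch only illustrates the principle.

```python
# One level of the Haar wavelet transform on a 1-D signal.
def haar_forward(signal):
    """One Haar analysis step: (averages, details)."""
    avgs = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    dets = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avgs, dets

def haar_inverse(avgs, dets):
    """Exact reconstruction from averages and details."""
    out = []
    for a, d in zip(avgs, dets):
        out += [a + d, a - d]
    return out

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 0.0, 2.0]
avgs, dets = haar_forward(signal)          # avgs = [5.0, 11.0, 8.0, 1.0]
assert haar_inverse(avgs, dets) == signal  # full resolution is lossless
low_res = avgs  # a bandwidth-constrained client receives only half the data
```

In the framework described above, the perceptual error metric would decide how many detail levels a given display actually needs, and unequal error protection would shield the coarse coefficients more heavily than the details.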

    A head model with anatomical structure for facial modelling and animation

    In this dissertation, I describe a virtual head model with anatomical structure. The model is animated in a physics-based manner by means of muscle contractions that in turn cause skin deformations; the simulation is efficient enough to achieve real-time frame rates on current PC hardware. In my approach, the construction of head models is eased by deriving new models from a prototype, employing a deformation method that reshapes the complete virtual head structure. Without additional modeling tasks, this results in an immediately animatable model. The general deformation method allows for several applications, such as adaptation to individual scan data for the creation of animated head models of real persons. The basis for the deformation method is a set of facial feature points, which leads to other interesting uses when this set is chosen according to an anthropometric standard set of facial landmarks: I present algorithms for the simulation of human head growth and for the reconstruction of a face from a skull.
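The muscle-to-skin coupling described above can be sketched geometrically: a linear muscle pulls nearby skin vertices toward its attachment, with influence falling off with distance. This is a simplified, hypothetical illustration of the idea; the dissertation's model simulates actual muscle contraction dynamics rather than this purely geometric pull.

```python
# Simplified linear-muscle skin deformation with linear distance falloff.
import math

def deform(vertices, muscle_origin, muscle_insertion, contraction, radius):
    """Pull vertices near the insertion point toward the origin.

    contraction in [0, 1]; radius bounds the zone of influence."""
    ox, oy, oz = muscle_origin
    out = []
    for (x, y, z) in vertices:
        d = math.dist((x, y, z), muscle_insertion)
        if d >= radius:
            out.append((x, y, z))          # outside the influence zone
            continue
        w = contraction * (1.0 - d / radius)  # linear falloff with distance
        out.append((x + w * (ox - x), y + w * (oy - y), z + w * (oz - z)))
    return out

skin = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
moved = deform(skin, muscle_origin=(0.0, 2.0, 0.0),
               muscle_insertion=(0.0, 0.0, 0.0),
               contraction=0.5, radius=2.0)
# The vertex at the insertion moves halfway toward the origin;
# the distant vertex is untouched.
```

A physics-based simulation would additionally integrate mass, damping and spring forces per skin vertex over time instead of applying the displacement instantaneously.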

    Calculating the curvature shape characteristics of the human body from 3D scanner data.

    In recent years, there have been significant advances in the development and manufacturing of 3D scanners capable of capturing detailed (external) images of whole human bodies. Such hardware offers the opportunity to collect information that could be used to describe, interpret and analyse the shape of the human body for a variety of applications where shape information plays a vital role (e.g. apparel sizing and customisation; medical research in fields such as nutrition, obesity/anorexia and perceptual psychology; ergonomics for vehicle and furniture design). However, the representations delivered by such hardware typically consist of unstructured or partially structured point clouds, whereas it would be desirable to have models that make shape-related information more immediately accessible. This thesis describes a method for extracting the differential-geometry properties of the body surface from unorganized point cloud datasets. In effect, this is a way of constructing curvature maps that allows the detection of features on the surface that are deformable (such as ridges) rather than reformable under certain transformations. Such features could subsequently be used to interpret the topology of a human body and to enable classification according to its shape rather than its size (as is currently standard practice for many of the applications concerned). The background, motivation and significance of this research are presented in chapter one. Chapter two is a literature review describing previous and current attempts to model 3D objects in general and human bodies in particular, as well as the mathematical and technical issues associated with such modelling. Chapter three presents an overview of the methodology employed throughout the research, the assumptions regarding the data to be processed, and the strategy for evaluating the results of each stage of the methodology.
    Chapter four describes an algorithm (and some variations) for approximating the local surface geometry around a given point of the input data set by means of a least-squares minimization. The output of this algorithm is a surface patch described in analytic (implicit) form, which is necessary for the next step described below. The case is made for using implicit surfaces rather than more popular 3D surface representations such as parametric forms or height functions. Chapter five describes the processing needed to calculate curvature-related characteristics for each point of the input surface. This utilises the implicit surface patches generated by the algorithm described in the previous chapter, and enables the construction of a "curvature map" of the original surface, which incorporates rich information such as the principal curvatures, shape indices and curvature directions. Chapter six describes a family of algorithms for calculating features such as ridges and umbilic points on the surface from the curvature map, in a manner that bypasses the problem of separating a vector field (i.e. the principal curvature directions) across the entire surface of an object. An alternative approach, using focal surface information, is also briefly considered for comparison. The concluding chapter summarises the results of all steps of the processing and evaluates them against the requirements set out in chapter one. Directions for further research are also proposed.
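The fit-then-differentiate pipeline of chapters four and five can be illustrated in a few lines. For brevity this sketch fits a local height-field quadric z ≈ ax² + bxy + cy² in a frame where the fitted point is the origin and z approximates the normal, then reads the principal curvatures off the Hessian; note the thesis itself argues for implicit patches rather than height functions, so this is only an analogy to the method, not the method itself.

```python
# Least-squares local quadric fit and principal curvatures at the origin.
import numpy as np

def principal_curvatures(points):
    """points: (N, 3) neighbourhood samples in a local frame where the
    query point is the origin and the z axis approximates the normal."""
    P = np.asarray(points, dtype=float)
    x, y, z = P[:, 0], P[:, 1], P[:, 2]
    A = np.column_stack([x * x, x * y, y * y])       # quadric design matrix
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    hessian = np.array([[2 * a, b], [b, 2 * c]])     # second fundamental form
    k1, k2 = sorted(np.linalg.eigvalsh(hessian))     # principal curvatures
    return k1, k2

# Samples from the paraboloid z = x^2 + y^2, whose principal curvatures
# at the origin are both 2:
xs = np.linspace(-0.5, 0.5, 11)
pts = [(x, y, x * x + y * y) for x in xs for y in xs]
k1, k2 = principal_curvatures(pts)
```

From k1 and k2 one can then derive the shape index and curvature directions that populate the "curvature map" described in chapter five.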

    Eighth Biennial Report: April 2005 – March 2007


    Cognitive Foundations for Visual Analytics
