111 research outputs found

    Shaded computer graphic techniques for visualizing and interpreting analytic fluid flow models

    Mathematical models that predict the behavior of fluid flow in different experiments are simulated on digital computers. The simulations predict the values of fluid-flow parameters (pressure, temperature, and the velocity vector) at many points in the fluid. Visualizing the spatial variation of these parameters is important for comprehending and checking the generated data, for identifying regions of interest in the flow, and for communicating information about the flow to others effectively. State-of-the-art imaging techniques developed in the field of three-dimensional shaded computer graphics are applied to the visualization of fluid flow. The use of an imaging technique known as 'SCAN' for visualizing fluid flow is studied, and the results are presented.
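
    As a rough, hedged illustration of this kind of visualization (not the paper's SCAN technique): the sketch below shades the pressure field of a classic analytic flow, ideal flow past a cylinder, so that its spatial variation is visible as image intensity. All names and constants are our own.

    import numpy as np
    import matplotlib.pyplot as plt

    U, R = 1.0, 1.0                              # free-stream speed, cylinder radius
    y, x = np.mgrid[-3:3:400j, -3:3:400j]
    r2 = x**2 + y**2
    body = r2 < R**2                             # points inside the solid cylinder

    # Velocity of ideal (potential) flow past a cylinder, then Bernoulli
    # pressure variation ~ 0.5 * (U^2 - |velocity|^2)
    with np.errstate(divide='ignore', invalid='ignore'):
        u = U * (1 - R**2 * (x**2 - y**2) / r2**2)
        v = -2 * U * R**2 * x * y / r2**2
        p = 0.5 * (U**2 - (u**2 + v**2))
    p[body] = np.nan                             # blank out the solid body

    plt.imshow(p, origin='lower', extent=(-3, 3, -3, 3), cmap='viridis')
    plt.colorbar(label='relative pressure')
    plt.title('Shaded pressure field, ideal flow past a cylinder')
    plt.show()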

    Investigation and evaluation of a computer program to minimize three-dimensional flight time tracks

    The DC 8-D3 flight-planning program was slightly modified for three-dimensional flight planning for DC-10 aircraft. Several test runs of the modified program over the North Atlantic and North America were made to verify it. Whereas the previous program used geopotential height and temperature as meteorological data, the modified program uses wind direction, wind speed, and temperature received from the National Weather Service. A scanning program was written to collect the required weather information from the raw data, which arrive in a packed-decimal format. Two sets of weather data, the 12-hour and the 24-hour forecasts based on 0000 GMT, are used for the dynamic processes in the test runs. To save computing time, only the weather data for the North Atlantic and North America are stored in advance in a PCF file and then scanned one by one.
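
    The basic quantity such a planner minimizes is the sum of per-segment flight times under the forecast winds. A minimal sketch of that computation follows; the wind-triangle solution is standard air navigation, and the function name and numbers are illustrative, not taken from the program described above.

    import math

    def segment_time(distance_nm, track_deg, tas_kt, wind_dir_deg, wind_speed_kt):
        """Hours to fly one segment at true airspeed tas_kt under forecast wind.

        wind_dir_deg is the direction the wind blows FROM (meteorological
        convention, as in National Weather Service data).
        """
        wta = math.radians(wind_dir_deg - track_deg)  # wind-to-track angle
        cross = wind_speed_kt * math.sin(wta)         # crosswind component
        head = wind_speed_kt * math.cos(wta)          # headwind component
        # Ground speed from the wind triangle (aircraft crabs into the crosswind)
        gs = math.sqrt(tas_kt**2 - cross**2) - head
        return distance_nm / gs

    # Example: 300 nm segment, track 090, 480 kt TAS, 40 kt wind from 270
    print(round(segment_time(300, 90, 480, 270, 40), 3), "hours")  # ~0.577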

    Improvements on a simple muscle-based 3D face for realistic facial expressions

    Facial expressions play an important role in face-to-face communication. With the development of personal computers capable of rendering high-quality graphics, computer facial animation has produced more and more realistic facial expressions to enrich human-computer communication. In this paper, we present a simple muscle-based 3D face model that can produce realistic facial expressions in real time. We extend Waters' (1987) muscle model to generate bulges and wrinkles and to improve the combination of multiple muscle actions. In addition, we present techniques to reduce the computational burden of the muscle model.
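
    For readers unfamiliar with the base model being extended, the sketch below gives a minimal version of a Waters-style linear muscle: vertices inside the muscle's cone of influence are pulled toward its bone attachment, attenuated by angular and radial falloff. The falloff shapes and parameter names here are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def apply_linear_muscle(vertices, head, tail, contraction, cone_deg=40.0):
        """Pull vertices toward the muscle's bone attachment ('head').

        vertices: (n, 3) array; head/tail: (3,) attachment and insertion points.
        """
        axis = tail - head
        length = np.linalg.norm(axis)
        axis = axis / length
        cos_cone = np.cos(np.radians(cone_deg))
        out = vertices.astype(float).copy()
        for i, p in enumerate(vertices):
            d = p - head
            dist = np.linalg.norm(d)
            if dist == 0.0 or dist > length:
                continue                       # outside the zone of influence
            cos_a = float(np.dot(d, axis)) / dist
            if cos_a < cos_cone:
                continue                       # outside the influence cone
            angular = cos_a                                  # angular falloff
            radial = np.cos(0.5 * np.pi * dist / length)     # radial falloff
            out[i] = p - contraction * angular * radial * d  # pull toward head
        return out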

    Socially communicative characters for interactive applications

    Interactive Face Animation - Comprehensive Environment (iFACE) is a general-purpose software framework that encapsulates the functionality of a “face multimedia object” for a variety of interactive applications such as games and online services. iFACE exposes programming interfaces and provides authoring and scripting tools to design a face object, define its behaviours, and animate it through static or interactive situations. The framework is based on four parameterized spaces - Geometry, Mood, Personality, and Knowledge - that together form the appearance and behaviour of the face object. iFACE can function as a common “face engine” for design and runtime environments, simplifying the work of content and software developers.
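
    A hypothetical sketch of what a face object built on those four parameterized spaces might look like; the class and attribute names below are our own illustration, not iFACE's actual API.

    from dataclasses import dataclass, field

    @dataclass
    class FaceObject:
        # The four parameterized spaces named in the abstract
        geometry: dict = field(default_factory=dict)      # mesh, feature points
        mood: dict = field(default_factory=dict)          # short-term emotional state
        personality: dict = field(default_factory=dict)   # long-term behavioural traits
        knowledge: dict = field(default_factory=dict)     # rules, scripts, context

        def frame(self, t, script_event=None):
            """Combine the four spaces into one animation frame (placeholder)."""
            return {"time": t,
                    "geometry": self.geometry,
                    "expression": {**self.mood, **self.personality},
                    "event": script_event}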

    A framework for automatic and perceptually valid facial expression generation

    Facial expressions are facial movements that reflect a character's internal emotional state or respond to social communication. Realistic facial animation should consider at least two factors: a believable visual effect and valid facial movements. However, most research treats these two issues separately. In this paper, we present a framework for generating 3D facial expressions that considers both the visual effect and the movement dynamics. A facial expression mapping approach based on local geometry encoding is proposed, which encodes deformation in the 1-ring vector. This method can map subtle facial movements without being restricted by shape and topological constraints. Facial expression mapping is achieved in three steps: correspondence establishment, deviation transfer, and movement mapping. Deviation is transferred to the conformal face space by minimizing an error function that relates the neutral source model to the deformed face model through the transformation matrices of the 1-ring neighborhood; these matrices are independent of the face shape and the mesh topology. After facial expression mapping, dynamic parameters, generated using psychophysical methods, are integrated with the facial expressions to produce valid expressions. The efficiency and effectiveness of the proposed methods have been tested on various face models with different shapes and topological representations.
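
    The abstract does not reproduce the error function itself; a standard deformation-transfer formulation over 1-ring neighborhoods takes the following form (a hedged reconstruction, where v_i are neutral source vertices, v'_i their deformed positions, N(i) the 1-ring neighbors of vertex i, and T_i the per-vertex transformation matrix):

        E(\{T_i\}) = \sum_{i} \sum_{j \in N(i)} \left\| T_i \,(v_j - v_i) - (v'_j - v'_i) \right\|^2

    Minimizing E fits each T_i to the edge vectors of its own 1-ring alone, which is consistent with the claim that the transformation matrices are independent of overall face shape and mesh topology.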

    FacEMOTE: Qualitative Parametric Modifiers for Facial Animations

    We propose a control mechanism for facial expressions that applies a few carefully chosen parametric modifications to preexisting expression data streams. This approach applies to any facial animation resource expressed in the general MPEG-4 form, whether taken from a library of preset facial expressions, captured from live performance, or created entirely by hand. The MPEG-4 Facial Animation Parameters (FAPs) represent a facial expression as a set of parameterized muscle actions, given as intensities of individual muscle movements over time. Our system varies expressions by changing the intensities and scope of sets of MPEG-4 FAPs. It creates variations in “expressiveness” across the face model rather than simply scaling, interpolating, or blending facial mesh node positions. The parameters are adapted from the Effort parameters of Laban Movement Analysis (LMA); we developed a mapping from their values onto sets of FAPs. The FacEMOTE parameters thus perturb a base expression to create a wide range of expressions. Such an approach could allow real-time face animations to change underlying speech or facial expression shapes dynamically according to current agent affect or user interaction needs.
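
    As a hedged sketch of the core mechanism, perturbing per-frame FAP intensities with a few qualitative knobs, consider the following; the Effort-to-FAP mapping below is a deliberately crude placeholder, not the mapping the paper derives.

    import numpy as np

    def perturb_faps(fap_frames, weight_f=0.0, time_f=0.0):
        """fap_frames: (n_frames, n_faps) MPEG-4 FAP intensity array.

        weight_f, time_f: qualitative factors in [-1, 1], loosely standing in
        for LMA Effort 'Weight' and 'Time' (placeholder semantics).
        """
        out = fap_frames.astype(float) * (1.0 + 0.5 * weight_f)  # overall intensity
        # Nonlinear time warp: 'sudden' vs 'sustained' onset (placeholder)
        n = len(out)
        t = np.linspace(0.0, 1.0, n)
        warped = t ** (1.0 + 0.5 * time_f)
        idx = np.clip(np.round(warped * (n - 1)).astype(int), 0, n - 1)
        return out[idx]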

    A Human Body Modelling System for Motion Studies

    The need to visualize and interpret human body movement data from experiments and simulations has led to the development of a new, computerized, three-dimensional representation for the human body. Based on a skeleton of joints and segments, the model is manipulated by specifying joint positions with respect to arbitrary frames of reference. The external form is modelled as the union of overlapping spheres which define the surface of each segment. The properties of the segment-and-sphere model include: the ability to use any connected portion of the body, so that selected movements can be examined without computing movements of undesired parts; a naming mechanism for describing parts within a segment; and a collision detection algorithm for finding contacts or illegal intersections of the body with itself or other objects. One of the most attractive features of this model is its simple hidden-surface-removal algorithm. Since spheres always project onto a plane as disks, a solid, shaded, realistically formed raster display of the model can be generated efficiently by simply overlaying the disks from the backmost to the frontmost. A three-dimensional animated display on a line-drawing device is based on drawing circles. Examples of the three-dimensional figure as viewed on these different display media are presented. The flexibility of the representation is enhanced by a method for decomposing an object into spheres, given one or more of its cross-sections, so that the data-input problem is significantly simplified should other models be desired. Using data from existing simulation programs, movements of the model have been computed and displayed, yielding very satisfactory results. Various transportation-related applications are proposed.
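
    A minimal sketch of the disk-painting idea described above, with orthographic projection and illustrative depth shading assumed (not the original system's code): sorting the spheres back to front and overlaying their projected disks yields a correct image with no per-pixel depth test.

    import matplotlib.pyplot as plt

    # (x, y, z, radius): z is depth, larger z = nearer to the viewer
    spheres = [(0.0, 0.0, 0.0, 1.0), (0.6, 0.4, 0.5, 0.7), (-0.5, 0.6, 1.0, 0.5)]

    fig, ax = plt.subplots()
    for x, y, z, r in sorted(spheres, key=lambda s: s[2]):   # backmost first
        shade = min(0.4 + 0.3 * z, 1.0)                      # crude depth shading
        ax.add_patch(plt.Circle((x, y), r, color=str(shade)))
    ax.set_aspect('equal')
    ax.set_xlim(-2, 2)
    ax.set_ylim(-2, 2)
    plt.show()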

    A movable jaw model for the human face

    Although there is a great deal of work on facial animation, there is little research on the effect of jaw motion on the movement of the face. The complex nature of the jaw bones makes it difficult to implement all the motions the jaw can perform. The human jaw has two widely separated, identical joints that behave like a single joint. The widely separated joints of the mandible (lower jaw bone) allow it to translate in any direction and/or rotate about any axis in three-dimensional space, although its movements are somewhat restricted by physical constraints and patterns of muscle activity. A simplified jaw model that covers the major movements of the jaw is proposed in this paper. The lower jaw in the model can rotate around the axis connecting the two ends of the jaw and make small translational motions in any direction in 3D space. The face is modeled as a two-layer model attached to the jaw. The inner layer of the face moves kinematically as dictated by the jaw. The outer layer moves under the influence of the springs connecting it to the inner layer; its motion is calculated using spring-mass equations. Eating and chewing actions are simulated as applications of the model.
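
    The outer-layer update the abstract describes reduces to integrating damped spring forces toward the kinematically driven inner layer. A minimal sketch, with illustrative constants and zero spring rest length assumed:

    import numpy as np

    def step_outer_layer(pos, vel, anchor, k=80.0, c=4.0, m=1.0, dt=0.005):
        """One explicit-Euler step for the skin nodes.

        pos, vel: (n, 3) outer-layer positions and velocities;
        anchor:   (n, 3) inner-layer points moved rigidly by the jaw.
        """
        force = -k * (pos - anchor) - c * vel    # spring toward anchor + damping
        vel = vel + (force / m) * dt
        pos = pos + vel * dt
        return pos, vel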

    A 3D talking head for mobile devices based on unofficial iOS WebGL support

    In this paper we present the implementation of a WebGL talking head for iOS mobile devices (Apple iPhone and iPad). It works on standard MPEG-4 Facial Animation Parameters (FAPs) and speaks with the Italian version of the FESTIVAL TTS. It is entirely based on real human data: 3D kinematic information is used to create the lip articulatory model and to drive the talking face directly, generating human facial movements. In the last year we developed the WebGL version of the avatar. WebGL, the 3D graphics technology for the web, is currently supported by the major desktop web browsers, but no official support has yet been given on the main mobile platforms, although the Firefox beta enables it on Android phones. Starting from iOS 5, WebGL is enabled only for the advertisement library class (which is intended for placing ad banners in applications). We have been able to use this feature to visualize and animate our WebGL talking head.

    iFace: Facial Expression Training System
