
    Integrating realistic human group behaviors into a networked 3D virtual environment

    Distributed Interactive Simulation DIS-Java-VRML Working Group. Includes supplementary material from a CD-ROM containing the work of all three Working Group members, in compressed format. Virtual humans operating inside large-scale virtual environments (VEs) are typically controlled as single entities, and coordination of group activity and movement is usually the responsibility of their real-world human controllers. Georeferenced coordinate systems, single-precision versus double-precision number representation, and network delay requirements make group operations difficult. Mounting multiple humans inside shared or single vehicles (e.g. air-assault, mechanized infantry, or small boat/riverine operations) with high fidelity is often impossible. The approach taken in this thesis is to reengineer the DIS-Java-VRML Capture the Flag game, geolocated at Fort Irwin, California, to allow the inclusion of human entities. Human operators are given the capability of aggregating with or mounting nonhuman entities for coordinated actions. Additionally, rapid content creation of human entities is addressed through the development of a native tag set for the Humanoid Animation (H-Anim) 1.1 Specification in Extensible 3D (X3D). Conventions are demonstrated for integrating the DIS-Java-VRML and H-Anim draft standards using either VRML97 or X3D encodings. The result of this work is an interface for aggregating and controlling articulated humans, using an existing model with a standardized motion library, in a networked virtual environment. Virtual human avatars can be mounted on and unmounted from aggregation entities. Simple demonstration examples show coordinated tactical maneuver among multiple humans with and without vehicles. Live 3D visualization of animated humanoids on realistic terrain is then portrayed inside freely available web browsers. Approved for public release; distribution is unlimited.
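    A minimal sketch of what a native H-Anim tag set in X3D enables: humanoid skeletons expressed as declarative XML that a program can generate. The snippet below uses Python's standard library and the HAnim* node names of the current X3D HAnim component (the thesis targeted the draft H-Anim 1.1 tag set, whose names differed in detail); the DEF names and joint centers are illustrative.

        # A minimal sketch (not the thesis code) of generating an X3D scene with an
        # H-Anim humanoid skeleton stub, using only the standard library.
        import xml.etree.ElementTree as ET

        x3d = ET.Element("X3D", profile="Immersive", version="3.3")
        scene = ET.SubElement(x3d, "Scene")
        humanoid = ET.SubElement(scene, "HAnimHumanoid", DEF="Soldier1",
                                 name="Soldier1", version="1.1")

        # A two-joint chain: humanoid root and right shoulder, each with a
        # center of rotation (values illustrative, in metres).
        root = ET.SubElement(humanoid, "HAnimJoint", DEF="hanim_HumanoidRoot",
                             name="HumanoidRoot", center="0 0.824 0.028",
                             containerField="skeleton")
        shoulder = ET.SubElement(root, "HAnimJoint", DEF="hanim_r_shoulder",
                                 name="r_shoulder", center="-0.196 1.407 -0.049",
                                 containerField="children")

        print(ET.tostring(x3d, encoding="unicode"))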

    Analysis domain model for shared virtual environments

    The field of shared virtual environments, which also encompasses online games and social 3D environments, has a system landscape consisting of multiple solutions that share great functional overlap, yet there is little interoperability between them. A shared virtual environment has an associated problem domain that is highly complex, raising difficult challenges for the development process, starting with the architectural design of the underlying system. This paper has two main contributions. The first is a broad domain analysis of shared virtual environments, which enables developers to have a better understanding of the whole rather than only its parts. The second is a reference domain model for discussing and describing solutions - the Analysis Domain Model.
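    To make "reference domain model" concrete, here is an illustrative sketch of how a few core concepts of a shared virtual environment might be captured as types; the class and field names are assumptions for illustration, not the paper's actual Analysis Domain Model.

        # Illustrative types only; not the paper's model.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Entity:
            entity_id: str
            position: tuple  # (x, y, z) in world coordinates

        @dataclass
        class Avatar(Entity):
            user_name: str = ""  # the human participant controlling this entity

        @dataclass
        class Region:
            name: str
            entities: List[Entity] = field(default_factory=list)

        @dataclass
        class World:
            regions: List[Region] = field(default_factory=list)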

    Virtual human modelling and animation for real-time sign language visualisation

    Magister Scientiae - MSc. This thesis investigates the modelling and animation of virtual humans for real-time sign language visualisation. Sign languages are fully developed natural languages used by Deaf communities all over the world. These languages are communicated in a visual-gestural modality by the use of manual and non-manual gestures and are completely different from spoken languages. Manual gestures include the use of hand shapes, hand movements, hand locations and orientations of the palm in space. Non-manual gestures include the use of facial expressions, eye-gazes, and head and upper body movements. Both manual and non-manual gestures must be performed for sign languages to be correctly understood and interpreted. To effectively visualise sign languages, a virtual human system must have models of adequate quality and be able to perform both manual and non-manual gesture animations in real-time. Our goal was to develop a methodology and establish an open framework, using various standards and open technologies, to model and animate virtual humans of adequate quality to effectively visualise sign languages. This open framework is to be used in a Machine Translation system that translates from a verbal language such as English to any sign language. Standards and technologies we employed include H-Anim, MakeHuman, Blender, Python and SignWriting. We found it necessary to adapt and extend H-Anim to effectively visualise sign languages. The adaptations and extensions we made to H-Anim include imposing joint rotational limits, developing flexible hands, and the addition of facial bones based on the MPEG-4 Facial Definition Parameters facial feature points for facial animation. By using these standards and technologies, we found that we could circumvent a few difficult problems, such as: modelling high quality virtual humans; adapting and extending H-Anim; creating a sign language animation action vocabulary; blending between animations in an action vocabulary; sharing animation action data between our virtual humans; and effectively visualising South African Sign Language.
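    One of the H-Anim adaptations named above, imposing joint rotational limits, can be sketched as a simple per-joint clamp on Euler angles. The limit table below is hypothetical; real limits would come from anatomical data for each joint.

        # A minimal sketch of joint rotational limits; the limit values are
        # hypothetical, not the thesis's actual anatomical data.
        JOINT_LIMITS = {
            # joint name: ((min_x, max_x), (min_y, max_y), (min_z, max_z)) in degrees
            "r_elbow": ((0.0, 145.0), (0.0, 0.0), (-90.0, 90.0)),
            "r_wrist": ((-80.0, 70.0), (-20.0, 30.0), (-15.0, 15.0)),
        }

        def clamp_rotation(joint, angles):
            """Clamp an (x, y, z) Euler rotation to the joint's allowed ranges."""
            limits = JOINT_LIMITS.get(joint)
            if limits is None:
                return angles  # unconstrained joint
            return tuple(max(lo, min(hi, a)) for a, (lo, hi) in zip(angles, limits))

        print(clamp_rotation("r_elbow", (160.0, 0.0, -120.0)))  # -> (145.0, 0.0, -90.0)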

    Evaluating Extensible 3D (X3D) Graphics For Use in Software Visualisation

    3D web software visualisation has always been expensive, special purpose, and hard to program. Most of the technologies used require large amounts of scripting, are not reliable on all platforms, are binary formats, or are no longer maintained. We can make end-user web software visualisation of object-oriented programs cheap, portable, and easy by using Extensible 3D (X3D) Graphics, a new open standard. In this thesis we outline our experience with X3D and discuss its suitability as an output format for software visualisation.
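    A hedged sketch of the underlying idea: because X3D is declarative XML, a visualisation can be emitted by a short script, for example one box per class with height proportional to a metric. The class names and metric values here are invented for illustration.

        # A hedged sketch: software metrics rendered as X3D geometry.
        # One box per class, height proportional to method count (invented data).
        metrics = {"Parser": 42, "Lexer": 17, "SymbolTable": 8}

        shapes = []
        for i, (cls, methods) in enumerate(metrics.items()):
            h = methods * 0.1
            shapes.append(
                f'<Transform translation="{i * 3} {h / 2} 0">'
                f'<Shape><Box size="2 {h} 2"/>'
                f'<Appearance><Material diffuseColor="0.2 0.5 0.8"/></Appearance>'
                f'</Shape></Transform>'
            )

        x3d_doc = ('<X3D profile="Interchange" version="3.3"><Scene>'
                   + "".join(shapes) + "</Scene></X3D>")
        print(x3d_doc)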

    Virtual Movement from Natural Language Text

    It is a challenging task for machines to follow textual instructions. Properly understanding and using the meaning of textual instructions in application areas such as robotics and animation is very difficult for machines, and interpreting textual instructions for the automatic generation of the corresponding motions (e.g. exercises), and validating those movements, are difficult tasks. To approach our initial goal of having machines properly understand textual instructions and generate motions accordingly, we recorded five different exercises in random order, performed by seven amateur performers, using a Microsoft Kinect device. During the recording, we found that the same exercise was interpreted differently by each human performer even though they were given identical textual instructions. We performed a quality assessment study on the derived data using a crowdsourcing approach. Later, we tested the inter-rater agreement for different types of visualization and found that the RGB-based visualization showed the best agreement among the annotators, with an animation of a virtual character in second place. In the next phase we worked with physical exercise instructions. Physical exercise is an everyday activity domain in which textual exercise descriptions usually focus on body movements, and body movements are a common element across a broad range of activities that are of interest for robotic automation. Our main goal is to develop a text-to-animation system that can be used in different application areas, including multiple-purpose robots whose operations are based on textual instructions; it could also be used in other text-to-scene and text-to-animation systems. Generating text-based animation for physical exercises requires natural language understanding (NLU), including understanding of non-declarative sentences, as well as the extraction of semantic information from complex syntactic structures with a large number of potential interpretations. Despite a comparatively high density of semantic references to body movements, exercise instructions still contain large amounts of underspecified information. Detecting and bridging or filling such underspecified elements is extremely challenging when relying on methods from NLU alone; humans, however, can often add such implicit information with ease due to its embodied nature. We present a process that combines a semantic parser and a Bayesian network. The semantic parser extracts all the information present in the instruction that is needed to generate the animation. The Bayesian network supplies the knowledge needed to extract the information that is implicit in the instruction; this information is very important for correctly generating the animation, and is easy for a human to infer but very difficult for machines. We updated the Bayesian network using crowdsourcing. The combination of the semantic parser and the Bayesian network explicates the information contained in textual movement instructions so that an animated execution of the motion sequences, performed by a virtual humanoid character, can be rendered. To generate the animation from this information we used two markup languages: Behaviour Markup Language for 2D animation, and Humanoid Animation (H-Anim), encoded in the Virtual Reality Modeling Language, for 3D animation.
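    A hedged sketch of the described pipeline: the semantic parser yields explicit slots, and a Bayesian-network-style conditional table (reduced here to a single hand-rolled entry) fills in what the instruction leaves implicit. All slot names and probabilities are invented for illustration.

        # Toy pipeline sketch: parser output + conditional table, not the
        # dissertation's actual parser or network.
        parsed = {"action": "raise", "body_part": "arm", "side": None}  # "Raise your arm"

        # P(side | action, body_part): the instruction rarely says which arm,
        # but crowdsourced performances could show a preference we encode here.
        P_SIDE = {("raise", "arm"): {"right": 0.7, "left": 0.2, "both": 0.1}}

        def fill_implicit(slots):
            """Fill an underspecified slot with its most probable value."""
            if slots["side"] is None:
                dist = P_SIDE.get((slots["action"], slots["body_part"]), {})
                if dist:
                    slots["side"] = max(dist, key=dist.get)
            return slots

        print(fill_implicit(parsed))  # -> side filled with 'right'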

    Donald P. Brutzman: a biography

    Design and implement large-scale networked underwater virtual worlds using Web-accessible 3D graphics and network streams. Integrate sensors, models and datasets for real-time interactive use by scientists, underwater robots, ships and students of all ages.

    Platform Independent Real-Time X3D Shaders and their Applications in Bioinformatics Visualization

    Since the introduction of programmable Graphics Processing Units (GPUs) and procedural shaders, hardware vendors have each developed their own real-time shading language standards, none of which is fully platform independent. Although real-time programmable shader technology can be used to develop 3D applications on a single system, this platform dependence keeps shader technology away from 3D Internet applications. The primary purpose of this dissertation is to design a framework for translating different shader formats into platform-independent shaders and embedding them in Extensible 3D (X3D) scenes for 3D web applications. The framework includes a back-end core shader converter, which translates shaders among different shading languages through an XML middle layer, and a shader library containing a basic set of shaders that developers can load and extend. The framework is then applied to applications in biomolecular visualization.
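    A hedged sketch of the XML-middle-layer idea: shader parameters and a body are held in an (invented) XML intermediate form, from which one back end emits GLSL source. A real converter would also parse HLSL, Cg, and other formats into the same intermediate representation.

        # Sketch only: the IR schema below is invented, not the dissertation's.
        import xml.etree.ElementTree as ET

        IR = """<shader name="tint" stage="fragment">
          <uniform type="vec4" name="tintColor"/>
          <body>gl_FragColor = tintColor;</body>
        </shader>"""

        def emit_glsl(ir_xml):
            """Emit GLSL source from the XML intermediate form."""
            shader = ET.fromstring(ir_xml)
            lines = [f'uniform {u.get("type")} {u.get("name")};'
                     for u in shader.findall("uniform")]
            lines += ["void main() {", "    " + shader.find("body").text.strip(), "}"]
            return "\n".join(lines)

        print(emit_glsl(IR))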

    Declarative modeling based on knowledge

    Modern 3D imaging technology allows the creation of virtual worlds, and of the creatures that populate them, with such a high level of detail that in film special effects it can be difficult to tell which elements are computer-generated; video games, too, have approached photographic realism. However, this technology lies in the skilled hands of designers, artists, and programmers, who need weeks to years of training with the tools to achieve such results. Declarative modeling is a method for creating models by supplying properties that describe the model's components. Applied to computer graphics, declarative modeling can be used to construct the virtual world: establishing the layout of objects by computing each object's position from its spatial relations, generating the context needed for animation and scene design, and producing the scene output used by a visualization and animation system. This thesis presents research devoted to the use of declarative modeling to create virtual environments by exploiting knowledge about the scene's context. Knowledge is used to ease the description task by automating what can be deduced, such as customary uses and functionalities. It is also fundamental to making the produced result match, as closely as possible, what the designer expects from the description provided. Finally, knowledge eases the transition from the data model to the underlying architecture that takes on the task of animating and evolving the scene.
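    A hedged sketch of the declarative placement step: the designer states spatial relations, and a resolver (trivial here) computes coordinates. The relation vocabulary and object sizes are invented for illustration.

        # Sketch of declarative placement; not the thesis's actual resolver.
        sizes = {"table": (1.2, 0.8, 0.7), "lamp": (0.2, 0.2, 0.5)}  # (w, d, h), z up
        relations = [("table", "at", (0.0, 0.0, 0.0)), ("lamp", "on", "table")]

        def resolve(relations):
            """Turn declared relations into concrete object positions."""
            positions = {}
            for obj, rel, ref in relations:
                if rel == "at":
                    positions[obj] = ref
                elif rel == "on":  # place obj centred on top of ref
                    x, y, z = positions[ref]
                    positions[obj] = (x, y, z + sizes[ref][2])
            return positions

        print(resolve(relations))  # lamp ends up at z = table height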

    Semantics for virtual humans

    The population of virtual worlds with Virtual Humans is increasing rapidly, driven by people who want to create a virtual life parallel to the real one (e.g. Second Life). Evolving technology is steadily providing the elements needed to increase realism within these virtual worlds by creating believable Virtual Humans. However, creating the resources needed to achieve this believability is a difficult task, mainly because of the complexity of the creation process of Virtual Humans. Even though many resources already exist, reusing them is difficult because not enough information is provided to evaluate whether a model contains the desired characteristics. Additionally, the knowledge involved in the creation of Virtual Humans is neither well known nor well disseminated. There are several different creation techniques, different software components, and several processes to carry out before having a Virtual Human capable of populating a virtual environment. The creation of Virtual Humans involves: a geometrical representation with an internal control structure; motion synthesis with different animation techniques; and higher-level controllers and descriptors to simulate human-like behavior such as individuality, cognition, and interaction capabilities. All these processes require expertise from different fields of knowledge, such as mathematics, artificial intelligence, computer graphics, and design. Furthermore, there is neither a common framework nor a common understanding of how the elements involved in the creation, development, and interaction of Virtual Humans fit together. Therefore, there is a need to describe (1) existing resources, (2) Virtual Humans' composition and features, (3) a creation pipeline, and (4) the different levels and fields of knowledge involved. This thesis presents an explicit representation of Virtual Humans and their features, providing a conceptual framework of interest to all people involved in the creation and development of these characters. The dissertation focuses on a semantic description of Virtual Humans. Creating a semantic description involves gathering related knowledge, reaching agreement among experts on the definition of concepts, validating the ontology design, and so on. All these procedures are presented, and an Ontology for Virtual Humans is described in detail, together with the validations that led to the resulting ontology. The goals of creating such an ontology are to promote the reusability of existing resources, to create shared knowledge of the creation and composition of Virtual Humans, and to support new research in the fields involved in the development of believable Virtual Humans and virtual environments. Finally, this thesis presents several developments that aim to demonstrate the ontology's usability and reusability. These developments particularly support research on specialized knowledge of Virtual Humans and the population of virtual environments, and improve the believability of these characters.
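    A hedged sketch of what an explicit, semantic description of a Virtual Human can look like as RDF triples, using the rdflib library. The namespace and terms below are assumptions for illustration, not the thesis's actual Ontology for Virtual Humans.

        # Illustrative triples only; the vh# namespace and terms are invented.
        from rdflib import Graph, Namespace, Literal
        from rdflib.namespace import RDF, RDFS

        VH = Namespace("http://example.org/vh#")  # hypothetical namespace
        g = Graph()
        g.bind("vh", VH)

        g.add((VH.VirtualHuman, RDF.type, RDFS.Class))
        g.add((VH.hasSkeleton, RDF.type, RDF.Property))
        g.add((VH.anna, RDF.type, VH.VirtualHuman))
        g.add((VH.anna, VH.hasSkeleton, Literal("H-Anim LOA-2")))  # reuse metadata

        print(g.serialize(format="turtle"))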