
    Integrating realistic human group behaviors into a networked 3D virtual environment

    Distributed Interactive Simulation DIS-Java-VRML Working Group. Includes supplementary material from a CD-ROM containing the work of all three Working Group members, in compressed format.
    Virtual humans operating inside large-scale virtual environments (VE) are typically controlled as single entities. Coordination of group activity and movement is usually the responsibility of their real-world human controllers. Georeferenced coordinate systems, single-precision versus double-precision number representation, and network delay requirements make group operations difficult. Mounting multiple humans inside shared or single vehicles (e.g., air-assault operations, mechanized infantry operations, or small boat/riverine operations) with high fidelity is often impossible. The approach taken in this thesis is to reengineer the DIS-Java-VRML Capture the Flag game, geolocated at Fort Irwin, California, to allow the inclusion of human entities. Human operators are given the capability of aggregating or mounting nonhuman entities for coordinated actions. Additionally, rapid content creation of human entities is addressed through the development of a native tag set for the Humanoid Animation (H-Anim) 1.1 Specification in Extensible 3D (X3D). Conventions are demonstrated for integrating the DIS-Java-VRML and H-Anim draft standards using either VRML97 or X3D encodings. The result of this work is an interface to aggregate and control articulated humans using an existing model with a standardized motion library in a networked virtual environment. Virtual human avatars can be mounted on and unmounted from aggregation entities. Simple demonstration examples show coordinated tactical maneuver among multiple humans with and without vehicles. Live 3D visualization of animated humanoids on realistic terrain is then portrayed inside freely available web browsers.
    Approved for public release; distribution is unlimited.
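
    As a purely illustrative aside (not taken from the thesis), the single- versus double-precision issue mentioned above can be seen in a few lines of Java: geocentric coordinates are on the order of millions of meters, so a 32-bit float cannot resolve the sub-meter offsets that separate closely grouped avatars, while a 64-bit double can. All names and values below are assumptions made for the sketch.

        // Hypothetical demo of the precision problem for georeferenced coordinates.
        public class PrecisionDemo {
            public static void main(String[] args) {
                double baseMeters = 6_378_137.0;   // roughly the Earth's radius in meters
                double offsetMeters = 0.25;        // spacing between two avatars in a group

                float  singleA = (float) baseMeters;
                float  singleB = (float) (baseMeters + offsetMeters);
                double doubleA = baseMeters;
                double doubleB = baseMeters + offsetMeters;

                // With 32-bit floats the two positions typically collapse to the same value,
                // so closely spaced group members can no longer be told apart.
                System.out.println("float  delta: " + (singleB - singleA));   // prints 0.0
                System.out.println("double delta: " + (doubleB - doubleA));   // prints 0.25
            }
        }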

    A dynamic three-dimensional network visualization program for integration into CyberCIEGE and other network visualization scenarios

    Detailed information and an intellectual understanding of a network's topology and vulnerabilities are invaluable for better securing computer networks. Network protocol analyzers and intrusion detection systems can provide this additional information. In particular, game-based trainers, such as CyberCIEGE, have been shown to improve the level of training and understanding of network security professionals. This thesis' objective is to enhance these applications by developing NTAV3D, the Network Topology and Attack Visualizer (Three-Dimensional). NTAV3D is a tool that displays network topology, vulnerabilities, and attacks in an interactive, three-dimensional environment. This augments the design and gameplay of CyberCIEGE by increasing player interaction and data display. Additionally, NTAV3D can be expanded to provide this capability to network analysis and intrusion detection tools. Furthermore, NTAV3D expands on ideas and results from related work on the best ways to visualize network topology, vulnerabilities, and attacks. NTAV3D was created using open-source software technologies including Xj3D, X3D, Java, and XML. It is also one of the first applications to be built with only the Xj3D toolkit. Therefore, the development process allowed evaluation of these technologies, resulting in recommendations for future improvements.
    http://archive.org/details/adynamicthreedim109453384
    US Navy (USN) authors.
    Approved for public release; distribution is unlimited.
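
    As a rough, hypothetical sketch of the general approach described above (not NTAV3D's actual code), a topology-to-X3D step can be as simple as turning each host into a colored shape and each link into a line segment, producing scene text that an X3D browser or the Xj3D toolkit can load. The Host/Link records and the toX3D method below are assumptions made for the illustration.

        import java.util.List;

        // Hypothetical sketch: emit an X3D scene for a tiny network topology.
        // Hosts become colored boxes; links become line segments between them.
        public class TopologyToX3D {

            record Host(String name, double x, double y, double z, boolean vulnerable) {}
            record Link(int from, int to) {}

            static String toX3D(List<Host> hosts, List<Link> links) {
                StringBuilder sb = new StringBuilder("<X3D profile='Immersive'>\n<Scene>\n");
                for (Host h : hosts) {
                    String color = h.vulnerable() ? "1 0 0" : "0 1 0";   // red marks a vulnerable host
                    sb.append(String.format(
                        "<Transform translation='%.1f %.1f %.1f'><Shape>"
                        + "<Appearance><Material diffuseColor='%s'/></Appearance>"
                        + "<Box size='1 1 1'/></Shape></Transform>%n",
                        h.x(), h.y(), h.z(), color));
                }
                sb.append("<Shape><IndexedLineSet coordIndex='");
                for (Link l : links) {
                    sb.append(l.from()).append(' ').append(l.to()).append(" -1 ");
                }
                sb.append("'><Coordinate point='");
                for (Host h : hosts) {
                    sb.append(h.x()).append(' ').append(h.y()).append(' ').append(h.z()).append(' ');
                }
                sb.append("'/></IndexedLineSet></Shape>\n</Scene>\n</X3D>\n");
                return sb.toString();
            }

            public static void main(String[] args) {
                var hosts = List.of(new Host("router", 0, 0, 0, false),
                                    new Host("server", 4, 0, 0, true));
                System.out.println(toX3D(hosts, List.of(new Link(0, 1))));
            }
        }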

    The Role of Emotional and Facial Expression in Synthesised Sign Language Avatars

    This thesis explores the role that underlying emotional facial expressions might have with regard to understandability in sign language avatars. Focusing specifically on Irish Sign Language (ISL), we examine the Deaf community's requirement for a visual-gestural language as well as some linguistic attributes of ISL which we consider fundamental to this research. Unlike spoken language, visual-gestural languages such as ISL have no standard written representation. Given this, we compare current methods of written representation for signed languages as we consider which, if any, is the most suitable transcription method for the medical receptionist dialogue corpus. A growing body of work is emerging from the field of sign language avatar synthesis. These works are now at a point where they can benefit greatly from introducing methods currently used in the field of humanoid animation and, more specifically, the application of morphs to represent facial expression. The hypothesis underpinning this research is that augmenting an existing avatar (eSIGN) with various combinations of the 7 widely accepted universal emotions identified by Ekman (1999), delivered as underlying emotional facial expressions (EFEs), will make that avatar more human-like. This research accepts as true that this is a factor in improving usability and understandability for ISL users. Using human evaluation methods (Huenerfauth et al., 2008), the research compares an augmented set of avatar utterances against a baseline set with regard to two key areas: comprehension and naturalness of facial configuration. We outline our approach to the evaluation, including our choice of ISL participants, interview environment, and evaluation methodology. Remarkably, the results of this manual evaluation show that there was very little difference between the comprehension scores of the baseline avatars and those augmented with EFEs. However, after comparing the comprehension results for the synthetic human avatar "Anna" against the caricature-type avatar "Luna", the synthetic human avatar Anna was the clear winner. The qualitative feedback gave us insight into why comprehension scores were not higher for each avatar, and we feel that this feedback will be invaluable to the research community in the future development of sign language avatars. Other questions asked in the evaluation focused on sign language avatar technology in a more general manner. Significantly, participant feedback in regard to these questions indicates a rise in the level of literacy amongst Deaf adults as a result of mobile technology.
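
    As a hedged illustration of the morph technique the abstract refers to (not the eSIGN avatar's actual implementation), facial morphs are commonly applied by adding weighted per-vertex offsets for each active expression to a neutral face mesh; the class and parameter names below are invented for the sketch.

        // Hypothetical sketch of linear morph-target (blend-shape) blending.
        // Each morph stores per-vertex offsets from the neutral face; an emotion's
        // intensity (0..1) scales how strongly its offsets are applied.
        public class MorphBlender {

            // neutral: flattened vertex positions (x0,y0,z0, x1,y1,z1, ...)
            // morphs:  one offset array per expression, same layout/length as neutral
            // weights: intensity per expression, typically in [0, 1]
            static float[] blend(float[] neutral, float[][] morphs, float[] weights) {
                float[] out = neutral.clone();
                for (int m = 0; m < morphs.length; m++) {
                    float w = weights[m];
                    if (w == 0f) continue;              // skip inactive expressions
                    for (int i = 0; i < out.length; i++) {
                        out[i] += w * morphs[m][i];     // add weighted offset
                    }
                }
                return out;
            }
        }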

    Interfaces for human-centered production and use of computer graphics assets

    The abstract is in the attachment.

    The use of mobile phones as service-delivery devices in a sign language machine translation system

    Masters of Science
    This thesis investigates the use of mobile phones as service-delivery devices in a sign language machine translation system. Four sign language visualization methods were evaluated on mobile phones; three of these were synthetic sign language visualization methods. Three factors were considered: the intelligibility of the sign language as rendered by each method, the power consumption, and the bandwidth usage associated with each method. The average intelligibility rate was 65%, with some methods achieving intelligibility rates of up to 92%. The average size was 162 KB and, on average, the power consumption increased to 180% of the idle state across all methods. This research forms part of the Integration of Signed and Verbal Communication: South African Sign Language Recognition and Animation (SASL) project at the University of the Western Cape and serves as an integration platform for the group's research. In order to perform this research, a machine translation system that uses mobile phones as service-delivery devices was developed, as well as a 3D avatar for mobile phones. It was concluded that mobile phones are suitable service-delivery platforms for sign language machine translation systems.
    South Africa

    Model-Driven Development of Interactive Multimedia Applications

    The development of highly interactive multimedia applications is still a challenging and complex task. In addition to the application logic, multimedia applications typically provide a sophisticated user interface with integrated media objects. As a consequence, the development process involves different experts for software design, user interface design, and media design. There is still a lack of concepts for a systematic development process which integrates these aspects. This thesis provides a model-driven development approach addressing this problem. To this end, it introduces the Multimedia Modeling Language (MML), a visual modeling language supporting a design phase in multimedia application development. The language is based on well-established software engineering concepts, such as UML 2, and integrates concepts from the areas of multimedia development and model-based user interface development. MML allows the generation of code skeletons from the models. The core idea is to generate code skeletons which can be directly processed in multimedia authoring tools. In this way, the strengths of both are combined: authoring tools are used to perform the creative development tasks, while models are used to design the overall application structure and to enable a well-coordinated development process. This is demonstrated using the professional authoring tool Adobe Flash. MML is supported by modeling and code generation tools which have been used to validate the approach over several years in various student projects and teaching courses. Additional prototypes have been developed to demonstrate, e.g., the ability to generate code for different target platforms. Finally, it is discussed how models can contribute in general to a better integration of well-structured software development and creative visual design.
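
    To make the code-skeleton idea concrete, here is a minimal, hypothetical sketch (not MML's actual generator or metamodel): the generator walks model elements and emits stub classes whose bodies are then completed in the authoring tool. The MediaComponent record and generateSkeleton method are assumptions made for the example.

        import java.util.List;

        // Hypothetical sketch of template-based skeleton generation from a design model.
        public class SkeletonGenerator {

            record MediaComponent(String name, List<String> operations) {}

            static String generateSkeleton(MediaComponent c) {
                StringBuilder sb = new StringBuilder();
                sb.append("// Generated skeleton -- complete the bodies in the authoring tool\n");
                sb.append("class ").append(c.name()).append(" {\n");
                for (String op : c.operations()) {
                    sb.append("    function ").append(op).append("() {\n");
                    sb.append("        // TODO: implement in the Flash authoring tool\n");
                    sb.append("    }\n");
                }
                sb.append("}\n");
                return sb.toString();
            }

            public static void main(String[] args) {
                var player = new MediaComponent("VideoPlayer", List.of("play", "pause", "stop"));
                System.out.println(generateSkeleton(player));
            }
        }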