
    A Cloud Based Disaster Management System

    The combination of wireless sensor networks (WSNs) and 3D virtual environments opens a new paradigm for their use in natural disaster management applications. It is important to have a realistic virtual environment, based on datasets received from WSNs, in which to prepare a backup rescue scenario with an acceptable response time. This paper describes a complete cloud-based system that collects data from wireless sensor nodes deployed in real environments and then builds a 3D environment in near real time to reflect the incident detected by the sensors (fire, gas leak, etc.). The system’s purpose is to serve as a training environment in which a rescue team can develop various rescue plans before applying them in real emergency situations. The proposed cloud architecture combines 3D data streaming and sensor data collection to build an efficient network infrastructure that meets the strict network latency requirements of 3D mobile disaster applications. Compared with existing systems, the proposed system covers the full pipeline. First, it collects data from sensor nodes and transfers it using an enhanced Routing Protocol for Low-Power and Lossy Networks (RPL). A 3D modular visualizer with a dynamic game engine was also developed in the cloud for near real-time 3D rendering, which benefits highly complex rendering algorithms and less powerful devices. An Extensible Markup Language (XML) atomic action concept is used to inject 3D scene modifications into the game engine without stopping or restarting it. Finally, a multi-objective multiple traveling salesman problem algorithm (AHP-MTSP) is proposed to generate an efficient rescue plan by assigning robots and multiple unmanned aerial vehicles to disaster target locations while minimizing a set of predefined, situation-dependent objectives. The results demonstrate that the immediate feedback obtained from the reconstructed 3D environment helps investigate what-if scenarios, allowing effective rescue plans to be prepared with an appropriate management effort.
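
    The multi-objective assignment step can be pictured with a small sketch. The Python fragment below is a hypothetical, heavily simplified illustration of the AHP-MTSP idea: pre-computed AHP-style weights combine two objectives (travel distance and workload balance), and robots/UAVs are greedily assigned to disaster targets. The function names, weights, and the greedy strategy are assumptions made for illustration, not the paper's algorithm.

        # Hypothetical sketch of the AHP-MTSP idea: objectives are combined with
        # AHP-style weights and agents (robots/UAVs) are greedily assigned to
        # disaster targets. All names and the greedy strategy are illustrative.
        from math import dist

        def assign_targets(agents, targets, weights):
            """Greedily assign each target to the agent with the lowest weighted cost."""
            plan = {name: [] for name in agents}
            for target in targets:
                def cost(item):
                    name, pos = item
                    travel = dist(pos, target)   # objective 1: travel distance
                    load = len(plan[name])       # objective 2: workload balance
                    return weights["distance"] * travel + weights["balance"] * load
                best = min(agents.items(), key=cost)[0]
                plan[best].append(target)
                agents[best] = target            # the agent moves to the target
            return plan

        agents = {"robot1": (0.0, 0.0), "uav1": (50.0, 50.0)}
        targets = [(10.0, 5.0), (45.0, 60.0), (12.0, 8.0)]
        print(assign_targets(agents, targets, {"distance": 0.7, "balance": 0.3}))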

    Comparing and Evaluating Real Time Character Engines for Virtual Environments

    As animated characters increasingly become vital parts of virtual environments, the engines that drive these characters increasingly become vital parts of virtual environment software. This paper gives an overview of the state of the art in character engines and proposes a taxonomy of the features commonly found in them. This taxonomy can be used as a tool for comparing and evaluating different engines. To demonstrate this, we use it to compare three engines. The first is Cal3D, the most commonly used open-source engine. We also introduce two engines created by the authors, Piavca and HALCA. The paper ends with a brief discussion of some other popular engines.

    A Dynamic Platform for Developing 3D Facial Avatars in a Networked Virtual Environment

    Avatar facial expressions and animation in 3D collaborative virtual environment (CVE) systems are reconstructed through a complex manipulation of muscles, bones, and wrinkles in 3D space. The need for a fast and easy reconstruction approach has emerged in recent years due to its applications in various domains: 3D disaster management, virtual shopping, and military training. In this work we propose a new script language based on atomic parametric actions to easily produce real-time facial animation. To minimize use of the game engine, we introduce a script-based component in which the user provides short script fragments to feed the engine with new animations on the fly. At runtime, when an embedded animation is required, an XML file is created and injected into the game engine without stopping or restarting the engine. The resulting animation method preserves real-time performance because the modification occurs not in the 3D code that describes the CVE and its objects but in the action scenario that rules when an animation happens or might happen in a specific situation.
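
    To make the injection mechanism concrete, the following Python fragment is a minimal hypothetical sketch: atomic actions arrive as short XML fragments, are parsed, and are drained by the running engine loop on its next tick, so the engine never stops or restarts. The tag and attribute names are illustrative assumptions, not the system's actual schema.

        # Minimal sketch of runtime XML action injection, assuming an engine
        # loop that drains a queue once per frame. Tag/attribute names are
        # illustrative, not the actual schema of the system described above.
        import queue
        import xml.etree.ElementTree as ET

        action_queue = queue.Queue()

        def inject(xml_fragment: str) -> None:
            """Parse a short script fragment and hand it to the running engine."""
            action_queue.put(ET.fromstring(xml_fragment))

        def engine_tick() -> None:
            """One frame of the engine loop: apply pending actions, then render."""
            while not action_queue.empty():
                action = action_queue.get()
                print(f"applying {action.get('type')} to {action.get('avatar')} "
                      f"over {action.get('duration')}s")
            # ...regular frame rendering would continue here...

        inject('<action type="smile" avatar="agent01" duration="1.5"/>')
        engine_tick()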

    A Cloud-Based Extensible Avatar For Human Robot Interaction

    Adding an interactive avatar to a human-robot interface requires the development of tools that animate the avatar so as to simulate an intelligent conversation partner. Here we describe a toolkit that supports interactive avatar modeling for human-computer interaction. The toolkit uses cloud-based speech-to-text software that provides active listening, a cloud-based AI to generate appropriate textual responses to user queries, and a cloud-based text-to-speech engine to generate utterances for this text. This output is combined with a cloud-based 3D avatar animation synchronized to the spoken response. Generated text responses are embedded within an XML structure that allows the nature of the avatar animation to be tuned to simulate different emotional states, and an expression package controls the avatar's facial expressions. The rendering latency this introduces is obscured through parallel processing and an idle-loop process that animates the avatar between utterances. The efficiency of the approach is validated through a formal user study.
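
    The stage structure of such a toolkit can be sketched as a short pipeline. The Python fragment below is a schematic assumption of the flow described above: cloud speech-to-text, a response generator that wraps the reply in an XML structure carrying an emotion attribute for the animation layer, and text-to-speech plus avatar animation as the output stage. All service calls are stubs, and the names and XML schema are illustrative.

        # Schematic sketch of the cloud avatar pipeline described above. Every
        # stage is a stub; a real deployment would call cloud STT/AI/TTS services.
        import xml.etree.ElementTree as ET

        def speech_to_text(audio: bytes) -> str:
            return "hello robot"              # stub for a cloud STT service

        def generate_reply(text: str) -> ET.Element:
            # The emotion attribute tunes the avatar animation, as in the XML
            # structure described above; "friendly" is an illustrative value.
            reply = ET.Element("utterance", emotion="friendly")
            reply.text = f"You said: {text}"
            return reply

        def speak_and_animate(reply: ET.Element) -> None:
            print("TTS:", reply.text)         # stub for a cloud TTS engine
            print("animate avatar, emotion:", reply.get("emotion"))

        speak_and_animate(generate_reply(speech_to_text(b"...")))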

    Anonymous Panda: preserving anonymity and expressiveness in online mental health platforms

    Digital solutions that allow people to seek treatment, such as online psychological interventions and other technology-mediated therapies, have been developed to assist individuals with mental health disorders. Such approaches may raise privacy concerns about the use of people’s data and the safety of their mental health information. This work uses cutting-edge computer graphics technology to develop a novel system capable of increasing anonymity while maintaining expressiveness in computer-mediated mental health interventions. According to our preliminary findings, we were able to customize a realistic avatar using Live Link, Metahumans, and Unreal Engine 4 (UE4) with the same emotional depth as a real person. These findings also showed that the virtual avatars’ inability to express themselves through hand motion gave the impression that they were acting in an unnatural way. By adding hand tracking with the Leap Motion Controller, we improved our understanding of the prospective use of ultra-realistic virtual human avatars in video-conferencing therapy; both studies helped us understand how vital facial and body expressions are and how problematic their absence is in communicating with others.

    Machinima And Video-based Soft Skills Training

    Multimedia training methods have traditionally relied heavily on video-based technologies, and significant research has shown these to be very effective training tools. However, producing video is time- and resource-intensive. Machinima (pronounced 'muh-sheen-eh-mah') technologies are based on video gaming technology: video game technology is manipulated into unique scenarios for entertainment or for training and practice applications, and machinima converts these unique scenarios into video vignettes that tell a story. These vignettes can be interconnected with branching points in much the same way that educational videos are interconnected as vignettes between decision points. This study addressed the effectiveness of machinima-based soft-skills education using avatar actors versus traditional video teaching using human actors. This research also investigated the difference in presence reactions between video vignettes produced with avatar actors and those produced with human actors. Results indicated that the difference in training and/or practice effectiveness is statistically insignificant for presence, interactivity, quality, and the skill of assertiveness. The skill of active listening presented a mixed result, indicating the need for careful attention to detail in situations where body language and facial expressions are critical to communication. This study demonstrates that a significant opportunity exists for the use of avatar actors in video-based instruction.

    On the Development of Adaptive and User-Centred Interactive Multimodal Interfaces

    Multimodal systems have attracted increased attention in recent years, which has made possible important improvements in the technologies for recognition, processing, and generation of multimodal information. However, many issues related to multimodality remain unclear, for example, the principles that make it possible to resemble human-human multimodal communication. This chapter focuses on some of the most important challenges that researchers have recently envisioned for future multimodal interfaces. It also describes current efforts to develop intelligent, adaptive, proactive, portable, and affective multimodal interfaces.

    Application-driven visual computing towards Industry 4.0 (2018)

    The thesis gathers contributions in three fields: 1. Interactive Virtual Agents (IVAs): autonomous, modular, scalable, ubiquitous, and engaging for the user; these IVAs can interact with users in a natural way. 2. Immersive VR/AR environments: VR in production planning, product design, process simulation, testing, and verification; the Virtual Operator shows how VR and co-bots can work together in a safe environment, while in the Augmented Operator, AR presents relevant information to the worker in a non-intrusive way. 3. Interactive management of 3D models: online management and visualization of multimedia CAD models through automatic conversion of CAD models to the Web; Web3D technology enables the visualization of and interaction with these models on low-power mobile devices. These contributions have also made it possible to analyze the challenges posed by Industry 4.0, and the thesis provides a proof of concept for some of them in human factors, simulation, visualization, and model integration.