2,922 research outputs found

    A Cloud Based Disaster Management System

    Get PDF
    The combination of wireless sensor networks (WSNs) and 3D virtual environments opens a new paradigm for their use in natural disaster management applications. It is important to have a realistic virtual environment based on datasets received from WSNs to prepare a backup rescue scenario with an acceptable response time. This paper describes a complete cloud-based system that collects data from wireless sensor nodes deployed in real environments and then builds a 3D environment in near real-time to reflect the incident detected by the sensors (fire, gas leaks, etc.). The system's purpose is to serve as a training environment in which a rescue team can develop various rescue plans before they are applied in real emergency situations. The proposed cloud architecture combines 3D data streaming and sensor data collection to build an efficient network infrastructure that meets the strict network latency requirements of 3D mobile disaster applications. Compared to other existing systems, the proposed system covers the complete pipeline. First, it collects data from sensor nodes and transfers it using an enhanced Routing Protocol for Low-Power and Lossy Networks (RPL). A 3D modular visualizer with a dynamic game engine was also developed in the cloud for near real-time 3D rendering, an advantage for highly complex rendering algorithms and less powerful devices. An Extensible Markup Language (XML) atomic-action concept was used to inject 3D scene modifications into the game engine without stopping or restarting the engine. Finally, a multi-objective multiple traveling salesman problem (AHP-MTSP) algorithm is proposed to generate an efficient rescue plan by assigning robots and multiple unmanned aerial vehicles to disaster target locations, while minimizing a set of predefined objectives that depend on the situation.
The results demonstrate that immediate feedback obtained from the reconstructed 3D environment can help investigate what-if scenarios, allowing for the preparation of effective rescue plans with appropriate management effort.
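The multi-objective assignment underlying the rescue-plan step can be illustrated as a greedy scalarization: each target is given to the agent (robot or UAV) minimizing a weighted sum of per-objective costs. This is a minimal sketch; all names, coordinates, and weights are hypothetical, and the paper's actual AHP-MTSP algorithm is more elaborate.

```python
import math

# Hypothetical sketch of multi-objective target assignment: each disaster
# target is greedily given to the rescue agent that minimizes a weighted
# sum of a travel-distance objective and a workload-balance objective.

def assign_targets(agents, targets, weights):
    """agents: {name: (x, y)}, targets: list of (x, y), weights: (w_dist, w_load)."""
    w_dist, w_load = weights
    plan = {name: [] for name in agents}
    pos = dict(agents)  # current position of each agent
    for tx, ty in targets:
        def cost(name):
            ax, ay = pos[name]
            dist = math.hypot(tx - ax, ty - ay)  # travel-distance objective
            load = len(plan[name])               # workload-balance objective
            return w_dist * dist + w_load * load
        best = min(plan, key=cost)
        plan[best].append((tx, ty))
        pos[best] = (tx, ty)  # agent continues from the assigned target
    return plan

plan = assign_targets(
    {"ugv1": (0, 0), "uav1": (10, 0)},
    [(1, 1), (9, 1), (5, 5)],
    weights=(1.0, 2.0),
)
```

A real MTSP solver would also re-order each agent's tour; the sketch only shows how competing objectives collapse into one assignment cost.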

    An Overview of Self-Adaptive Technologies Within Virtual Reality Training

    Get PDF
    This overview presents the current state of the art of self-adaptive technologies within virtual reality (VR) training. Virtual reality training and assessment is increasingly used in five key areas: medical, industrial & commercial training, serious games, rehabilitation, and remote training such as Massive Open Online Courses (MOOCs). Adaptation can be applied to five core technologies of VR: haptic devices, stereo graphics, adaptive content, assessment, and autonomous agents. Automation of VR training can contribute to automation of actual procedures, including remote and robot-assisted surgery, which reduces injury and improves the accuracy of the procedure. Automated haptic interaction can enable tele-presence and virtual-artefact tactile interaction from either remote or simulated environments. Automation, machine learning, and data-driven features play an important role in providing trainee-specific, individually adaptive training content. Data from trainee assessment can form an input to autonomous systems for customised training and automated difficulty levels matched to individual requirements. Self-adaptive technology has previously been developed within individual technologies of VR training. One conclusion of this research is that no combined framework yet exists; an enhanced, portable framework that unifies automation of these core technologies would be beneficial, producing a reusable automation framework for VR training.
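The assessment-driven difficulty adaptation mentioned above can be sketched as a simple feedback loop; the class, band values, and smoothing factor below are invented for illustration and do not come from the surveyed systems.

```python
# A minimal sketch (all names and thresholds hypothetical) of data-driven
# difficulty adaptation: trainee assessment scores feed a smoothed running
# estimate, and the difficulty level steps up or down to keep measured
# performance inside a target success-rate band.

class AdaptiveDifficulty:
    def __init__(self, level=1, target=(0.6, 0.8), smoothing=0.5):
        self.level = level
        self.low, self.high = target   # desired success-rate band
        self.alpha = smoothing
        self.estimate = 0.7            # running success-rate estimate

    def record(self, score):
        """score: assessment result in [0, 1] for the last exercise."""
        self.estimate = self.alpha * score + (1 - self.alpha) * self.estimate
        if self.estimate > self.high:
            self.level += 1            # trainee is coasting: raise difficulty
            self.estimate = 0.7        # re-centre the estimate after a change
        elif self.estimate < self.low and self.level > 1:
            self.level -= 1            # trainee is struggling: ease off
            self.estimate = 0.7
        return self.level
```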

    Real-Time Affective Support to Promote Learner’s Engagement

    Get PDF
    Research has shown that learning processes can be enriched and enhanced by the presence of affective interventions. The goal of this dissertation was to design, implement, and evaluate an affective agent that provides affective support in real time in order to enrich the student's learning experience and performance by inducing and/or maintaining a productive learning path. This work combined research and best practices from affective computing, intelligent tutoring systems, and educational technology to address the design and implementation of an affective agent and corresponding pedagogical interventions. It included the incorporation of the affective agent into an Exploratory Learning Environment (ELE) adapted for this research. A gendered, three-dimensional, animated, human-like character accompanied by text- and speech-based dialogue visually represented the proposed affective agent. The agent's pedagogical interventions considered inputs from the ELE (interface, model-building, and performance events) and from the user (emotional and cognitive events). The user's emotional events, captured by biometric sensors and processed by a decision-level fusion algorithm for a multimodal system, in combination with the events from the ELE, informed the production-rule-based behavior engine that defines and triggers pedagogical interventions. The pedagogical interventions focused on affective dimensions and occurred in the form of affective dialogue prompts and animations. An experiment was conducted to assess the impact of the affective agent, Hope, on the student's learning experience and performance. In terms of the student's learning experience, the effect of the agent was analyzed in four components: perception of the instructional material, perception of the usefulness of the agent, ELE usability, and the affective responses from the agent triggered by the student's affective states.
Additionally, in terms of the student's performance, the effect of the agent was analyzed in five components: tasks completed, time spent solving a task, planning time while solving a task, usage of the provided help, and attempts needed to successfully complete a task. The findings from the experiment did not provide the anticipated results related to the effect of the agent; however, the results provided insights for improving diverse components in the design of affective agents, as well as the behavior engines and algorithms that detect, represent, and handle affective information.
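The production-rule-based behavior engine described above can be sketched as a first-match rule table over fused emotional and ELE events; the conditions, thresholds, and intervention names below are invented for illustration and do not reproduce the dissertation's actual rules.

```python
# Hypothetical sketch of a production-rule behavior engine: fused events
# (emotion label plus ELE activity features) are matched against ordered
# rules, and the first matching rule's intervention fires.

RULES = [
    # (condition over the fused event, intervention to trigger)
    (lambda ev: ev["emotion"] == "frustration" and ev["errors"] >= 3,
     "empathetic_prompt"),
    (lambda ev: ev["emotion"] == "boredom" and ev["idle_seconds"] > 60,
     "re-engagement_prompt"),
    (lambda ev: ev["emotion"] == "engagement",
     "positive_reinforcement"),
]

def select_intervention(fused_event):
    """Return the first intervention whose condition matches, else None."""
    for condition, intervention in RULES:
        if condition(fused_event):
            return intervention
    return None

event = {"emotion": "frustration", "errors": 4, "idle_seconds": 5}
```

Rule order encodes priority here; a fuller engine would also track conflict resolution and cooldowns so the agent does not interrupt too often.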

    The Medical Exploration Toolkit: An Efficient Support for Visual Computing in Surgical Planning and Training

    Full text link

    Real-Time Storytelling with Events in Virtual Worlds

    Get PDF
    We present an accessible interactive narrative tool for creating stories among a virtual populace inhabiting a fully realized 3D virtual world. Our system supports two modalities: assisted authoring, where a human storyteller designs stories using a storyboard-like interface called CANVAS, and exploratory authoring, where a human author experiences a story as it happens in real time and makes on-the-fly narrative trajectory changes using a tool called Storycraft. In both cases, our system analyzes the semantic content of the world and the narrative being composed, and provides automated assistance such as completing partially specified stories with causally complete sequences of intermediate actions. At its core, our system revolves around events: pre-authored multi-actor task sequences describing interactions between groups of actors and props. These events integrate complex animation and interaction tasks with precision control and expose them as atoms of narrative significance to the story direction systems. Events are an accessible tool and conceptual metaphor for assembling narrative arcs, providing a tightly coupled solution to the problem of converting author intent into real-time animation synthesis. Our system allows simple and straightforward macro- and microscopic control over large numbers of virtual characters with diverse and sophisticated behavior capabilities, and reduces the complicated action space of an interactive narrative by providing analysis and user assistance in the form of semi-automation and recommendation services.
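An event as an atom of narrative significance can be sketched as a pre-authored task sequence with role slots that get bound to concrete actors and props at instantiation time; the class and field names below are illustrative assumptions, not the CANVAS/Storycraft API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a narrative "event": a pre-authored multi-actor
# task sequence with named role slots; instantiation binds concrete
# characters/props to the roles, yielding a runnable task list.

@dataclass
class NarrativeEvent:
    name: str
    roles: tuple                                       # e.g. ("speaker", "listener")
    preconditions: list = field(default_factory=list)  # predicates on world state
    task_sequence: list = field(default_factory=list)  # animation/interaction tasks

    def instantiate(self, **bindings):
        """Bind concrete participants to the event's roles."""
        missing = [r for r in self.roles if r not in bindings]
        if missing:
            raise ValueError(f"unbound roles: {missing}")
        return {"event": self.name,
                "tasks": [(task, bindings) for task in self.task_sequence]}

converse = NarrativeEvent(
    name="Converse",
    roles=("speaker", "listener"),
    task_sequence=["approach", "face", "gesture", "speak"],
)
inst = converse.instantiate(speaker="alice", listener="bob")
```

Exposing only event names and role slots to the story-direction layer is what keeps the authoring interface decoupled from the animation details underneath.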

    Virtual Reality Games for Motor Rehabilitation

    Get PDF
    This paper presents a fuzzy-logic-based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed; we show that it is possible to estimate user emotion with a software-only method.
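A software-only, FLAME-style estimate can be sketched as fuzzification of in-game variables followed by simple fuzzy rules; the variables, membership shapes, and rules below are assumptions for illustration, not the paper's actual model.

```python
# Minimal sketch (hypothetical variables and rules) of fuzzy-logic emotion
# estimation: game-state variables are fuzzified with triangular membership
# functions, and min/max fuzzy rules map them to emotion intensities,
# with no physiological sensing involved.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_emotions(health, threat):
    """health, threat in [0, 1]; returns emotion -> intensity in [0, 1]."""
    low_health = triangular(health, -0.01, 0.0, 0.5)
    high_health = triangular(health, 0.5, 1.0, 1.01)
    high_threat = triangular(threat, 0.5, 1.0, 1.01)
    return {
        # rule: fear = low_health AND high_threat (fuzzy AND = min)
        "fear": min(low_health, high_threat),
        # rule: confidence = high_health AND NOT high_threat
        "confidence": min(high_health, 1.0 - high_threat),
    }

emotions = estimate_emotions(health=0.2, threat=0.9)
```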

    An Examination of the Impact of Computer-Based Animations and Visualization Sequence on Learners' Understanding of Hadley Cells in Atmospheric Circulation

    Get PDF
    Research examining animation use for student learning has been conducted over the last two decades across a multitude of instructional environments and content areas. The extensive construction and implementation of animations in learning resulted from the availability of powerful computing systems and the perceived advantages the novel medium offered for delivering dynamic representations of complex systems beyond the human perceptual scale. Animations replaced or supplemented text and static diagrams of system functioning and were predicted to significantly improve learners' conceptual understanding of target systems. However, subsequent research has not consistently discovered affordances to understanding and, in some cases, has shown that animation use is detrimental to system understanding, especially for content-area novices (Lowe 2004; Mayer et al. 2005). This study sought to determine whether animation inclusion in an authentic learning context improved student understanding of an introductory earth science concept, Hadley Cell circulation. In addition, the study sought to determine whether the timing of animation examination improved conceptual understanding. A quasi-experimental pretest-posttest design administered in an undergraduate science lecture and laboratory course compared four learning conditions: text and static diagrams with no animation use, animation use prior to the examination of text and static diagrams, animation use following the examination of text and static diagrams, and animation use during the examination of text and static diagrams. Additionally, procedural data for a sample of three students in each condition were recorded and analyzed through the lens of self-regulated learning (SRL) behaviors, with the aim of determining whether qualitative differences existed between the cognitive processes employed. Results indicated that animation use did not improve understanding across all conditions.
However, learners able to employ animations while reading and examining the static diagrams, and to a lesser extent after reading the system description, showed evidence of higher levels of system understanding on posttest assessments. Procedural data revealed few differences between groups, with one exception: learners given access to animations during the learning episode chose to examine and coordinate the representations more frequently. These results indicate a new finding on the use of animation: a sequence effect that improves understanding of Hadley Cells in atmospheric circulation.

    Maine Educators Describe Innovative Technology Uses in K-12 Education

    Get PDF