    The virtual playground: an educational virtual reality environment for evaluating interactivity and conceptual learning

    The research presented in this paper investigates user interaction in immersive virtual learning environments (VLEs), focusing on the role and effect of interactivity on conceptual learning. The goal has been to examine whether young users' learning improves through interacting in (i.e. exploring, reacting to, and acting upon) an immersive virtual environment (VE) compared to non-interactive or non-immersive environments. Empirical work was carried out with more than 55 primary school students between the ages of 8 and 12, in different between-group experiments: an exploratory study, a pilot study, and a large-scale experiment. The latter was conducted in a virtual environment designed to simulate a playground. In this ‘Virtual Playground’, each participant was asked to complete a set of tasks designed to address arithmetical ‘fractions’ problems. Three different conditions, two experimental virtual reality (VR) conditions and a non-VR condition, which varied the levels of activity and interactivity, were designed to evaluate how children accomplish the various tasks. Pre-tests, post-tests, interviews, video, audio, and log files were collected for each participant and analyzed both quantitatively and qualitatively. This paper presents a selection of case studies extracted from the qualitative analysis, which illustrate the variety of approaches taken by children in the VEs in response to visual cues and system feedback. Results suggest that the fully interactive VE aided children in problem solving but did not provide as strong evidence of conceptual change as expected; rather, it was the passive VR environment, where activity was guided by a virtual robot, that seemed to support student reflection and recall, leading to indications of conceptual change.

    Real Virtuality: A Code of Ethical Conduct. Recommendations for Good Scientific Practice and the Consumers of VR-Technology

    The goal of this article is to present a first list of ethical concerns that may arise from research and personal use of virtual reality (VR) and related technology, and to offer concrete recommendations for minimizing those risks. Many of the recommendations call for focused research initiatives. In the first part of the article, we discuss the relevant evidence from psychology that motivates our concerns. In Section “Plasticity in the Human Mind,” we cover some of the main results suggesting that one’s environment can influence one’s psychological states, as well as recent work on inducing illusions of embodiment. Then, in Section “Illusions of Embodiment and Their Lasting Effect,” we go on to discuss recent evidence indicating that immersion in VR can have psychological effects that last after leaving the virtual environment. In the second part of the article, we turn to the risks and recommendations. We begin, in Section “The Research Ethics of VR,” with the research ethics of VR, covering six main topics: the limits of experimental environments, informed consent, clinical risks, dual-use, online research, and a general point about the limitations of a code of conduct for research. Then, in Section “Risks for Individuals and Society,” we turn to the risks of VR for the general public, covering four main topics: long-term immersion, neglect of the social and physical environment, risky content, and privacy. We offer concrete recommendations for each of these 10 topics, summarized in Table 1.

    Natural user interfaces for interdisciplinary design review using the Microsoft Kinect

    As markets demand engineered products faster, waiting on the cyclical design processes of the past is not an option. Instead, industry is turning to concurrent design and interdisciplinary teams. When these teams collaborate, engineering CAD tools play a vital role in conceptualizing and validating designs. These tools require significant user investment to master, due to challenging interfaces and an overabundance of features, and these challenges often prohibit team members from using them to explore designs. This work presents a method allowing users to interact with a design using intuitive gestures and head tracking, all while keeping the model in a CAD format. Specifically, Siemens' Teamcenter® Lifecycle Visualization Mockup (Mockup) was used to display design geometry while modifications were made through a set of gestures captured by a Microsoft Kinect™ in real time. This proof-of-concept program allowed a user to rotate the scene, activate Mockup's immersive menu, move the immersive wand, and manipulate the view based on head position. This work also evaluates gesture usability and task completion time for the proof-of-concept system. A cognitive model evaluation method was used to evaluate the premise that gesture-based user interfaces are easier to use and learn, in terms of time, than a traditional mouse-and-keyboard interface. Using a cognitive model analysis tool allowed the rapid testing of interaction concepts without the significant overhead of user studies and full development cycles. The analysis demonstrated that the Kinect™ is a feasible interaction mode for CAD/CAE programs. In addition, the analysis pointed out limitations in the gesture interface's ability to compete, time-wise, with easily accessible customizable menu options.
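
    The paper's own code is not reproduced here; as a rough, hedged illustration of the head-tracked view manipulation it describes, the Python sketch below maps head displacement (as a skeletal tracker might report it) to clamped scene rotation. The Kinect and Teamcenter/Mockup APIs are not shown; the head_offset_m input, the gain, and the clamp limits are illustrative assumptions.

    # Minimal sketch of head-coupled view control. Hypothetical values
    # stand in for Kinect skeletal-tracking output; no SDK is used.
    from dataclasses import dataclass

    @dataclass
    class Camera:
        yaw_deg: float = 0.0
        pitch_deg: float = 0.0

    def update_view(camera: Camera, head_offset_m: tuple[float, float],
                    gain_deg_per_m: float = 40.0) -> Camera:
        """Map lateral/vertical head displacement (meters from a calibrated
        rest position) to clamped scene yaw/pitch."""
        dx, dy = head_offset_m
        camera.yaw_deg = max(-60.0, min(60.0, dx * gain_deg_per_m))
        camera.pitch_deg = max(-30.0, min(30.0, dy * gain_deg_per_m))
        return camera

    # Example: the user leans 0.25 m right and 0.10 m up.
    cam = update_view(Camera(), (0.25, 0.10))
    print(f"yaw={cam.yaw_deg:.1f} deg, pitch={cam.pitch_deg:.1f} deg")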

    Design and evaluation of a natural interface for remote operation of underwater robots

    Nowadays, an increasing need for robotic intervention systems can be observed in all kinds of hazardous environments. In all these intervention systems, the human expert continues to play a central role from the decision-making point of view. For instance, in underwater domains, when manipulation capabilities are required, only commercially available Remotely Operated Vehicles can be used, normally built on master-slave architectures that place all responsibility on the pilot. Thus, the role played by human–machine interfaces represents a crucial point in current intervention systems. This paper presents a User Interface Abstraction Layer and introduces a new procedure to control an underwater robot vehicle by using a new intuitive and immersive interface, which shows the user only the most relevant information about the current mission. Finally, some experiments were carried out to compare a traditional setup with the new procedure, demonstrating the reliability and feasibility of our approach. This research was partly supported by Spanish Ministry of Research and Innovation DPI2011-27977-C03 (TRITON Project).

    A natural interface for remote operation of underwater robots

    Nowadays, an increasing need for robotic intervention systems can be observed in all kinds of hazardous environments. In all these intervention systems, the human expert continues to play a central role from the decision-making point of view. For instance, in underwater domains, when manipulation capabilities are required, only commercially available Remotely Operated Vehicles can be used, normally built on master-slave architectures that place all responsibility on the pilot. Thus, the role played by human–machine interfaces represents a crucial point in current intervention systems. This paper presents a User Interface Abstraction Layer and introduces a new procedure to control an underwater robot vehicle by using a new intuitive and immersive interface, which shows the user only the most relevant information about the current mission. We conducted an experiment and found that user preference and performance were highest in the immersive condition with joystick navigation. This research was partly supported by Spanish Ministry of Research and Innovation DPI2011-27977-C03 (TRITON Project).
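
    The two abstracts above name a User Interface Abstraction Layer but do not publish its API, so the following is only a plausible sketch of the idea: concrete input devices are normalized into one neutral command type, so the vehicle controller never depends on the device. All class and field names are illustrative assumptions, not the authors' interface.

    # Sketch of a device-agnostic command layer for an underwater vehicle.
    from dataclasses import dataclass
    from typing import Protocol

    @dataclass
    class MotionCommand:
        surge: float      # forward/backward, normalized to [-1, 1]
        heave: float      # up/down
        yaw_rate: float   # rotation about the vertical axis

    class InputDevice(Protocol):
        def read(self) -> MotionCommand: ...

    class JoystickDevice:
        """Stand-in for a real joystick driver; axes are injected here."""
        def __init__(self, axes: tuple[float, float, float]):
            self.axes = axes
        def read(self) -> MotionCommand:
            x, z, r = self.axes
            return MotionCommand(surge=x, heave=z, yaw_rate=r)

    def _clamp(v: float) -> float:
        return max(-1.0, min(1.0, v))

    def dispatch(device: InputDevice) -> MotionCommand:
        """The vehicle side consumes MotionCommand only, so immersive,
        joystick, or keyboard front ends can be swapped freely."""
        cmd = device.read()
        return MotionCommand(_clamp(cmd.surge), _clamp(cmd.heave),
                             _clamp(cmd.yaw_rate))

    print(dispatch(JoystickDevice((0.4, -0.2, 0.1))))

    An immersive, gesture-driven front end would implement the same read() contract, which is what would let an experiment swap interfaces while holding the control path constant.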

    Enhancing Perceived Safety in Human–Robot Collaborative Construction Using Immersive Virtual Environments

    Advances in robotics now permit humans to work collaboratively with robots. However, humans often feel unsafe working alongside robots. Our knowledge of how to help humans overcome this issue is limited by two challenges. First, it is difficult, expensive, and time-consuming to prototype robots and set up the various work situations needed to conduct studies in this area. Second, we lack strong theoretical models to predict and explain perceived safety and its influence on human–robot work collaboration (HRWC). To address these issues, we introduce the Robot Acceptance Safety Model (RASM) and employ immersive virtual environments (IVEs) to examine perceived safety of working on tasks alongside a robot. Results from a between-subjects experiment done in an IVE show that separation of work areas between robots and humans increases perceived safety by promoting team identification and trust in the robot. In addition, the more participants felt it was safe to work with the robot, the more willing they were to work alongside the robot in the future.
    University of Michigan Mcubed Grant: Virtual Prototyping of Human-Robot Collaboration in Unstructured Construction Environments. Peer Reviewed.
    https://deepblue.lib.umich.edu/bitstream/2027.42/145620/1/You et al. forthcoming in AutCon.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/145620/4/You et al. 2018.pdf (Published Version)

    Sensory Manipulation as a Countermeasure to Robot Teleoperation Delays: System and Evidence

    In the field of robotics, robot teleoperation for remote or hazardous environments has become increasingly vital. A major challenge is the lag between command and action, which degrades operator awareness and performance and increases mental strain. Even with advanced technology, mitigating these delays, especially in long-distance operations, remains challenging. Current solutions largely focus on machine-based adjustments, yet there is a gap in using human perception to improve the teleoperation experience. This paper presents a unique method of sensory manipulation to help humans adapt to such delays. Drawing from motor learning principles, it suggests that modifying sensory stimuli can lessen the perception of these delays. Instead of introducing new skills, the approach uses existing motor coordination knowledge, aiming to minimize the need for extensive training or complex automation. A study with 41 participants explored the effects of altered haptic cues in delayed teleoperation. These cues were sourced from advanced physics engines and robot sensors. Results highlighted benefits such as reduced task time and improved perception of visual delays. Real-time haptic feedback significantly contributed to reduced mental strain and increased confidence. This research emphasizes human adaptation as a key element in robot teleoperation, advocating for improved teleoperation efficiency via swift human adaptation rather than solely optimizing robots for delay adjustment.
    Comment: Submitted to Scientific Reports
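
    As a rough numerical illustration of the core idea (immediate, locally simulated haptic feedback masking a delayed remote response), the toy loop below models a one-dimensional virtual wall. The delay length, stiffness, and one-line "physics engine" are all assumptions for illustration, not the study's apparatus.

    # Toy model: the operator feels contact force from a local model at
    # once, while the remote robot acts on commands DELAY_STEPS ticks late.
    from collections import deque

    DELAY_STEPS = 5      # round-trip delay in control ticks (assumed)
    WALL_POS = 1.0       # virtual wall position in the local model
    STIFFNESS = 50.0     # N/m, illustrative

    def local_haptic_force(pos: float) -> float:
        """Zero-delay contact force predicted by the local model."""
        return -STIFFNESS * (pos - WALL_POS) if pos > WALL_POS else 0.0

    link = deque([0.0] * DELAY_STEPS)    # simulated communication channel
    pos = 0.0
    for tick in range(12):
        pos += 0.12                      # operator pushes steadily forward
        link.append(pos)
        remote_pos = link.popleft()      # what the robot acts on, late
        force = local_haptic_force(pos)  # felt immediately by the operator
        print(f"t={tick:2d} cmd={pos:4.2f} remote={remote_pos:4.2f} "
              f"force={force:6.2f}")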