
    A natural interface for remote operation of underwater robots

    Nowadays, an increasing need for intervention robotic systems can be observed in all kinds of hazardous environments. In all these intervention systems, the human expert continues to play a central role in decision making. For instance, in underwater domains, when manipulation capabilities are required, only commercially available Remotely Operated Vehicles can be used, normally built on master-slave architectures that place all responsibility on the pilot. Thus, the human-machine interface represents a crucial component of current intervention systems. This paper presents a User Interface Abstraction Layer and introduces a new procedure to control an underwater robot vehicle through a new intuitive and immersive interface, which shows the user only the most relevant information about the current mission. We conducted an experiment and found that user preference and performance were highest in the immersive condition with joystick navigation. This research was partly supported by the Spanish Ministry of Research and Innovation, DPI2011-27977-C03 (TRITON Project).

    Design and evaluation of a natural interface for remote operation of underwater robots

    Nowadays, an increasing need for intervention robotic systems can be observed in all kinds of hazardous environments. In all these intervention systems, the human expert continues to play a central role in decision making. For instance, in underwater domains, when manipulation capabilities are required, only commercially available Remotely Operated Vehicles can be used, normally built on master-slave architectures that place all responsibility on the pilot. Thus, the human-machine interface represents a crucial component of current intervention systems. This paper presents a User Interface Abstraction Layer and introduces a new procedure to control an underwater robot vehicle through a new intuitive and immersive interface, which shows the user only the most relevant information about the current mission. Finally, experiments were carried out to compare a traditional setup with the new procedure, demonstrating the reliability and feasibility of our approach. This research was partly supported by the Spanish Ministry of Research and Innovation, DPI2011-27977-C03 (TRITON Project).
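A minimal sketch of what such a User Interface Abstraction Layer might look like, decoupling the input device from the vehicle protocol. The class names, axis conventions, deadzone value, and two-thruster mixing below are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of a User Interface Abstraction Layer (UIAL):
# raw device events are translated into device-agnostic motion intents,
# which are then mapped onto a specific vehicle's actuators.

from dataclasses import dataclass

@dataclass
class MotionIntent:
    surge: float  # forward/backward, normalized to [-1, 1]
    yaw: float    # turn rate, normalized to [-1, 1]

def joystick_to_intent(axis_x: float, axis_y: float,
                       deadzone: float = 0.1) -> MotionIntent:
    """Map raw joystick axes to a normalized intent, ignoring jitter in the deadzone."""
    def filt(v: float) -> float:
        return 0.0 if abs(v) < deadzone else max(-1.0, min(1.0, v))
    return MotionIntent(surge=filt(-axis_y), yaw=filt(axis_x))

def intent_to_thrusters(intent: MotionIntent) -> tuple:
    """Differential-thrust mixing for a hypothetical two-thruster vehicle (port, starboard)."""
    port = intent.surge + intent.yaw
    stbd = intent.surge - intent.yaw
    scale = max(1.0, abs(port), abs(stbd))  # keep outputs in [-1, 1]
    return port / scale, stbd / scale
```

Because the interface layer only emits `MotionIntent` values, swapping the joystick for another input device, or the ROV for a simulator, would not touch the rest of the pipeline.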

    Exploring 3D Chemical Plant Using VRML

    The research project focused on how virtual reality could create an immersive environment and improve the design of a chemical plant. The main problem is the difficulty of designing a chemical plant, since a 2D plant layout cannot provide a real walk-through. The aim of this project is to develop and design a 3D chemical plant that allows users to explore the virtual plant environment freely. The objectives of this project are to design and develop a 3D chemical plant in the virtual environment; to enable users to walk through the chemical plant; and, at the same time, to evaluate the effectiveness of the implementation of the 3D chemical plant. To complete the project, the framework used is based on the waterfall model. This study also examines the structure and existing use of VRML (the international standard for 3D modelling on the internet) in construction and architectural practice as a means of investigating its role and potential for extensible construction information visualization in chemical plants. The phases involved in the framework used for project development are the initiation phase, design specification, project development, integration and testing, and lastly project implementation. The development tools used in the project are VRML and 3D Max 6. As a result of the evaluation conducted, a mean of 3.5 on the satisfaction ranking shows that most evaluators were satisfied with the project and felt that the realism of the 3D chemical plant and the suitability of its colors and textures will improve the design of chemical plants in a virtual environment. In conclusion, the research project shows that VR/VE is very useful and has a positive impact for chemical engineers designing a chemical plant.
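Since VRML is a plain-text scene format, plant geometry can be generated programmatically. The sketch below emits a minimal VRML97 scene in which each vessel is a `Transform` holding a `Shape`; the node layout follows the VRML97 standard, while the vessel names, positions, and sizes are invented examples, not the project's actual plant data.

```python
# Generate a tiny VRML97 scene: one DEF'd Transform per plant vessel,
# each containing a colored Box as a stand-in for the real geometry.

def vrml_vessel(name, position, size, color=(0.6, 0.6, 0.7)):
    x, y, z = position
    sx, sy, sz = size
    r, g, b = color
    return (
        f"DEF {name} Transform {{\n"
        f"  translation {x} {y} {z}\n"
        "  children Shape {\n"
        f"    appearance Appearance {{ material Material {{ diffuseColor {r} {g} {b} }} }}\n"
        f"    geometry Box {{ size {sx} {sy} {sz} }}\n"
        "  }\n"
        "}\n"
    )

def vrml_scene(vessels):
    # Every VRML97 file must begin with this header line.
    return "#VRML V2.0 utf8\n\n" + "\n".join(vrml_vessel(*v) for v in vessels)

scene = vrml_scene([
    ("Reactor", (0, 1, 0), (2, 2, 2)),
    ("StorageTank", (5, 1, 0), (3, 2, 3)),
])
```

The resulting text can be saved as a `.wrl` file and opened in any VRML-capable viewer for the kind of free walkthrough the project describes.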

    Inviwo -- A Visualization System with Usage Abstraction Levels

    The complexity of today's visualization applications demands specific visualization systems tailored for the development of these applications. Frequently, such systems utilize levels of abstraction to improve the application development process, for instance by providing a data flow network editor. Unfortunately, these abstractions result in several issues, which need to be circumvented through an abstraction-centered system design. Often, a high level of abstraction hides low-level details, which makes it difficult to directly access the underlying computing platform, which would be important to achieve optimal performance. Therefore, we propose a layer structure developed for modern and sustainable visualization systems, allowing developers to interact with all contained abstraction levels. We refer to these interaction capabilities as usage abstraction levels, since we target application developers with various levels of experience. We formulate the requirements for such a system, derive the desired architecture, and present how the concepts have been realized, by way of example, within the Inviwo visualization system. Furthermore, we address several specific challenges that arise during the realization of such a layered architecture, such as communication between different computing platforms, performance-centered encapsulation, as well as layer-independent development by supporting cross-layer documentation and debugging capabilities.
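A data flow network editor of the kind mentioned above can be reduced to a small core: processors declare named inputs, and the system evaluates them in dependency order. Inviwo itself is a C++ system; this Python toy only illustrates the abstraction level, and the processor names and functions are invented.

```python
# Toy data-flow network: each processor names its inputs, and evaluate()
# resolves processors once all of their inputs are available.

class Processor:
    def __init__(self, name, inputs, fn):
        self.name, self.inputs, self.fn = name, inputs, fn

def evaluate(processors, sources):
    """Repeatedly run every processor whose inputs are ready (simple fixed-point pass)."""
    values = dict(sources)
    pending = list(processors)
    while pending:
        ready = [p for p in pending if all(i in values for i in p.inputs)]
        if not ready:
            raise ValueError("cycle or missing input in network")
        for p in ready:
            values[p.name] = p.fn(*(values[i] for i in p.inputs))
            pending.remove(p)
    return values

# A two-processor chain over a fake scalar "volume".
net = [
    Processor("normalize", ["volume"], lambda v: [x / max(v) for x in v]),
    Processor("threshold", ["normalize"], lambda v: [x for x in v if x > 0.5]),
]
out = evaluate(net, {"volume": [1, 4, 2, 8]})
```

The point of the sketch is the separation of concerns: an application developer wires processors in the network, while the processor author works at a lower layer against the actual data.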

    A Testing and Experimenting Environment for Microscopic Traffic Simulation Utilizing Virtual Reality and Augmented Reality

    Microscopic traffic simulation (MTS) is the emulation of real-world traffic movements in a virtual environment with various traffic entities. Typically, the movements of the vehicles in MTS follow some predefined algorithms, e.g., car-following models, lane changing models, etc. Moreover, existing MTS models only provide a limited capability of two- and/or three-dimensional displays that often restrict the user’s viewpoint to a flat screen. Their downscaled scenes neither provide a realistic representation of the environment nor allow different users to simultaneously experience or interact with the simulation model from different perspectives. These limitations neither allow the traffic engineers to effectively disseminate their ideas to various stakeholders of different backgrounds nor allow the analysts to have realistic data about the vehicle or pedestrian movements. This dissertation intends to alleviate those issues by creating a framework and a prototype for a testing environment where MTS can have inputs from user-controlled vehicles and pedestrians to improve their traffic entity movement algorithms as well as have an immersive M3 (multi-mode, multi-perspective, multi-user) visualization of the simulation using Virtual Reality (VR) and Augmented Reality (AR) technologies. VR environments are created using highly realistic 3D models and environments. With modern game engines and hardware available on the market, these VR applications can provide a highly realistic and immersive experience for a user. Different experiments performed by real users in this study prove that utilizing VR technology for different traffic related experiments generated much more favorable results than the traditional displays. Moreover, using AR technologies for pedestrian studies is a novel approach that allows a user to walk in the real world and the simulation world at a one-to-one scale. This capability opens a whole new avenue of user experiment possibilities. 
On top of that, the in-environment communication chat system allows researchers to perform different Advanced Driver Assistance System (ADAS) studies without ever needing to leave the simulation environment. Last but not least, the distributed nature of the framework enables users to participate from different geographic locations with their choice of display device (desktop, smartphone, VR, or AR). The prototype developed for this dissertation is readily available on a test webpage, and a user can easily download the prototype application without needing to install anything. The user can also run the remote MTS server and then connect their client application to it.
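The abstract notes that vehicle movements in MTS follow predefined car-following models. One standard example of that class is the Intelligent Driver Model (IDM), sketched below; the parameter values are common illustrative defaults, not ones taken from the dissertation.

```python
import math

# Intelligent Driver Model (IDM): one widely used car-following model.
# Returns the follower's acceleration given its speed, the bumper-to-bumper
# gap to the leader, and the closing speed dv = v_follower - v_leader.

def idm_acceleration(v, gap, dv, v0=30.0, T=1.5, a=1.0, b=2.0, s0=2.0):
    """v, dv in m/s; gap in m. v0: desired speed, T: time headway,
    a: max acceleration, b: comfortable braking, s0: minimum gap."""
    # Desired dynamic gap: minimum gap plus headway and closing-speed terms.
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a * b)))
    # Free-road term minus interaction term.
    return a * (1 - (v / v0) ** 4 - (s_star / gap) ** 2)
```

Stepping each simulated vehicle with such a rule every frame produces the baseline traffic that the dissertation's framework then augments with user-controlled vehicles and pedestrians.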

    Visualization and Analysis Tools for Neuronal Tissue

    The complex nature of neuronal cellular and circuit structure poses challenges for understanding tissue organization. New techniques in electron microscopy allow large datasets to be acquired from serial sections of neuronal tissue. These techniques reveal all cells in an unbiased fashion, so their segmentation produces complex structures that must be inspected and analyzed. Although several software packages provide 3D representations of these structures, they are limited to monoscopic projection and are tailored to the visualization of generic 3D data. On the other hand, stereoscopic display has been shown to improve the immersive experience, with significant gains in understanding spatial relationships and identifying important features. To leverage those benefits, we have developed a 3D immersive virtual reality display system that, besides presenting data visually, allows augmenting and interacting with them in a form that facilitates human analysis. To achieve a useful system for neuroscientists, we have developed the BrainTrek system, a suite of software applications for the organization, rendering, visualization, and modification of neuron model scenes. A mid-range CAVE system provides high-vertex-count rendering of an immersive 3D environment. Standard head and wand tracking allows movement control and modification of the scene via the on-screen 3D menu, while a tablet touch screen provides multiple navigation modes and a 2D menu. Graphics optimization allows theoretically limitless volumes to be presented, and an on-screen mini-map lets users quickly orient themselves. A custom voice note-taking mechanism has been added, allowing scenes to be described and revisited. Finally, ray-casting support enables numerous analytical features, including 3D distance and volume measurements, computation and presentation of statistics, and point-and-click retrieval and presentation of raw electron microscopy data.
The extension of this system to the Unity3D platform provides a low-cost alternative to the CAVE. It allows users to visualize, explore, and annotate 3D cellular data across multiple platforms and modalities, ranging from different operating systems and hardware platforms (e.g., tablets, PCs, or stereo head-mounted displays) to operating in an online or offline fashion. Such an approach has the potential not only to address the visualization and analysis needs of neuroscientists, but also to become a tool for educational purposes, as well as for crowdsourcing the annotation of the vast amounts of upcoming neuronal data.
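The point-and-click measurements described above rest on ray casting: a ray from the viewer through the cursor is intersected with scene geometry, and distances are taken between hit points. The sketch below uses sphere proxies for simplicity; BrainTrek itself intersects segmented mesh data, so this is only the geometric idea, with invented inputs.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Nearest hit point of a ray (unit-length direction) with a sphere, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c  # quadratic coefficient a == 1 for a unit direction
    if disc < 0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2
    if t < 0:
        return None  # sphere is behind the ray origin
    return tuple(origin[i] + t * direction[i] for i in range(3))

def distance(p, q):
    """Euclidean distance between two picked 3D points."""
    return math.sqrt(sum((p[i] - q[i]) ** 2 for i in range(3)))
```

Two successive picks give two hit points, and `distance` yields the 3D measurement that is then displayed in the scene.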

    The matrix revisited: A critical assessment of virtual reality technologies for modeling, simulation, and training

    A convergence of affordable hardware, current events, and decades of research has advanced virtual reality (VR) from the research lab into the commercial marketplace. Since its inception in the 1960s, and over the next three decades, the technology was portrayed as a rarely used, high-end novelty for special applications. Despite the high cost, applications have expanded into defense, education, manufacturing, and medicine. The promise of VR for entertainment arose in the early 1990s, and by 2016 several consumer VR platforms had been released. With VR now accessible in the home and the isolationist lifestyle adopted due to the COVID-19 global pandemic, VR is now viewed as a potential tool to enhance remote education. Drawing upon over 17 years of experience across numerous VR applications, this dissertation examines the optimal use of VR technologies in the areas of visualization, simulation, training, education, art, and entertainment. It will be demonstrated that VR is well suited for education and training applications, with modest advantages in simulation. Using this context, the case is made that VR can play a pivotal role in the future of education and training in a globally connected world.

    Recent Advancements in Augmented Reality for Robotic Applications: A Survey

    Robots are expanding from industrial applications into daily life, in areas such as medical robotics, rehabilitative robotics, social robotics, and mobile/aerial robotic systems. In recent years, augmented reality (AR) has been integrated into many robotic applications, including medical, industrial, human–robot interaction, and collaboration scenarios. In this work, AR for both medical and industrial robot applications is reviewed and summarized. For medical robot applications, we investigated the integration of AR in (1) preoperative and surgical task planning; (2) image-guided robotic surgery; (3) surgical training and simulation; and (4) telesurgery. AR for industrial scenarios is reviewed in (1) human–robot interactions and collaborations; (2) path planning and task allocation; (3) training and simulation; and (4) teleoperation control/assistance. In addition, the limitations and challenges are discussed. Overall, this article serves as a valuable resource for those working in the field of AR and robotics research, offering insights into the recent state of the art and prospects for improvement.

    Computational interaction techniques for 3D selection, manipulation and navigation in immersive VR

    3D interaction provides a natural interplay for HCI. Many techniques involving diverse sets of hardware and software components have been proposed, which has generated an explosion of Interaction Techniques (ITes), Interactive Tasks (ITas), and input devices, thus increasing the heterogeneity of tools in 3D User Interfaces (3DUIs). Moreover, most of those techniques are based on general formulations that fail to fully exploit human capabilities for interaction. This is because, while 3D interaction enables naturalness, it also produces complexity and limitations when using 3DUIs. In this thesis, we aim to generate approaches that better exploit humans' high potential for interaction by combining human factors, mathematical formalizations, and computational methods. Our approach focuses on exploring the close coupling between specific ITes and ITas while addressing common issues of 3D interaction. We specifically focus on the stages of interaction within Basic Interaction Tasks (BITas), i.e., data input, manipulation, navigation, and selection. Common limitations of these tasks are: (1) the complexity of mapping generation for input devices; (2) fatigue in mid-air object manipulation; (3) space constraints in VR navigation; and (4) low accuracy in 3D mid-air selection. Along with two chapters of introduction and background, this thesis presents five main works. Chapter 3 focuses on the design of mid-air gesture mappings based on human tacit knowledge. Chapter 4 presents a solution to user fatigue in mid-air object manipulation. Chapter 5 addresses space limitations in VR navigation. Chapter 6 describes an analysis and a correction method for drift effects in scale-adaptive VR navigation; and Chapter 7 presents a hybrid 3D/2D technique that allows precise selection of virtual objects in highly dense environments (e.g., point clouds).
Finally, we conclude by discussing how the contributions obtained from this exploration provide techniques and guidelines for designing more natural 3DUIs.
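Mid-air selection in dense point clouds, the problem Chapter 7 addresses, is often approximated with cone casting: candidate points inside an angular cone around the pointing ray are ranked by angular deviation. The sketch below shows that generic family of techniques, not the thesis's actual hybrid 3D/2D method; aperture and test points are invented.

```python
import math

def cone_select(origin, direction, points, aperture_deg=5.0):
    """Return the point with the smallest angle to the ray (unit-length
    direction), considering only points inside the selection cone."""
    best, best_angle = None, math.radians(aperture_deg)
    for p in points:
        v = [p[i] - origin[i] for i in range(3)]
        norm = math.sqrt(sum(x * x for x in v))
        if norm == 0:
            continue  # point coincides with the viewpoint
        cos_a = sum(v[i] * direction[i] for i in range(3)) / norm
        angle = math.acos(max(-1.0, min(1.0, cos_a)))
        if angle < best_angle:
            best, best_angle = p, angle
    return best
```

Ranking by angle rather than requiring an exact ray hit is what makes selection tolerant of hand jitter, one of the accuracy problems the thesis identifies for 3D mid-air selection.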

    Integration of multiple data types in 3-D immersive virtual reality (VR) environments

    Intelligent sensors have begun to play a key part in the monitoring and maintenance of complex infrastructures. Sensors have the capability not only to provide raw data, but also to provide information by indicating the reliability of their measurements. The effect of this added information is a voluminous increase in the total data that is gathered. If an operator is required to perceive the state of a complex system, novel methods must be developed for sifting through enormous data sets. Virtual reality (VR) platforms are proposed as ideal candidates for performing this task: a virtual world allows the user to experience a complex system that is gathering a multitude of sensor data; such environments are referred to as Integrated Awareness models. This thesis presents techniques for visualizing such multiple data sets, specifically graphical, measurement, and health data, inside a 3D VR environment. The focus of this thesis is to develop pathways to generate the required 3D models without sacrificing visual fidelity. The tasks include creating the visual representation, integrating multi-sensor measurements, creating user-specific visualizations, and a performance evaluation of the completed virtual environment.
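One concrete piece of such an integrated-awareness view is mapping each sensor's reported health onto the color of its 3D model, so that degraded sensors stand out in the virtual scene. The color ramp, threshold, and sensor names below are invented for illustration, not taken from the thesis.

```python
def health_to_rgb(health):
    """Linearly blend from red (health=0) to green (health=1)."""
    h = max(0.0, min(1.0, health))
    return (round(1.0 - h, 3), round(h, 3), 0.0)

def flag_degraded(sensors, threshold=0.4):
    """Return names of sensors whose reported health falls below threshold."""
    return [name for name, h in sensors.items() if h < threshold]

# Hypothetical readings: each sensor reports a normalized health score.
readings = {"strain_01": 0.9, "accel_02": 0.2}
colors = {name: health_to_rgb(h) for name, h in readings.items()}
```

In a full system, `colors` would drive the material of each sensor's node in the 3D scene, and `flag_degraded` would feed an alert list shown alongside the measurement data.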