20 research outputs found

    Subtle Sensing: Detecting Differences in the Flexibility of Virtually Simulated Molecular Objects

    During VR demos we have performed over the last few years, many participants (in the absence of any haptic feedback) have commented on their perceived ability to 'feel' differences between simulated molecular objects. The mechanisms for such 'feeling' are not entirely clear: observing from outside VR, one can see that there is nothing physical for participants to 'feel'. Here we outline exploratory user studies designed to evaluate the extent to which participants can distinguish quantitative differences in the flexibility of VR-simulated molecular objects. The results suggest that an individual's capacity to detect differences in molecular flexibility is enhanced when they can interact with and manipulate the molecules, as opposed to merely observing the same interaction. Building on these results, we intend to carry out further studies investigating humans' ability to sense quantitative properties of VR simulations without haptic technology.

    Sampling molecular conformations and dynamics in a multiuser virtual reality framework

    We describe a framework for interactive molecular dynamics in a multiuser virtual reality (VR) environment, combining rigorous cloud-mounted atomistic physics simulations with commodity VR hardware, which we have made accessible to readers (see isci.itch.io/nsb-imd). It allows users to visualize and sample, with atomic-level precision, the structures and dynamics of complex molecular structures “on the fly” and to interact with other users in the same virtual environment. A series of controlled studies, in which participants were tasked with a range of molecular manipulation goals (threading methane through a nanotube, changing helical screw sense, and tying a protein knot), quantitatively demonstrates that users within the interactive VR environment can complete sophisticated molecular modeling tasks more quickly than they can using conventional interfaces, especially for molecular pathways and structural transitions whose conformational choreographies are intrinsically three-dimensional. This framework should accelerate progress in nanoscale molecular engineering areas including conformational mapping, drug development, synthetic biology, and catalyst design. More broadly, our findings highlight the potential of VR in scientific domains where three-dimensional dynamics matter, spanning research and education.
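
    As a rough illustration of how a user-applied force can be coupled into a running simulation of this kind, the sketch below adds a controller-driven spring force to a plain velocity-Verlet step. It is a minimal, assumption-laden sketch, not the authors' framework: `forces_fn`, `controller_pull`, and all parameter values are placeholders.

```python
import numpy as np

def velocity_verlet_step(pos, vel, masses, forces_fn, user_force, dt=1e-3):
    """One integration step with an extra user-applied force (iMD-style).

    pos, vel: (N, 3) arrays; masses: (N,) array.
    forces_fn: callable returning the physical forces for given positions.
    user_force: (N, 3) array of forces injected from the VR controller
                (zero everywhere except the grabbed atoms).
    """
    f = forces_fn(pos) + user_force
    acc = f / masses[:, None]
    pos_new = pos + vel * dt + 0.5 * acc * dt**2
    f_new = forces_fn(pos_new) + user_force
    vel_new = vel + 0.5 * (acc + f_new / masses[:, None]) * dt
    return pos_new, vel_new

def controller_pull(pos, target, atom_index=0, k=100.0):
    """Simple spring pulling one atom toward the controller position."""
    user_force = np.zeros_like(pos)
    user_force[atom_index] = k * (target - pos[atom_index])
    return user_force
```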

    Audio Focus: Interactive spatial sound coupled with haptics to improve sound source location in poor visibility

    In an effort to simplify human resource management and reduce costs, control towers are increasingly designed to operate remotely rather than being located directly at the airport. This concept, known as the Remote Control Tower, offers a “digital” working context because the view of the runways is broadcast remotely via cameras located at the physical airport. This gives researchers and engineers the opportunity to develop novel interaction techniques. However, this technology relies on the sense of sight, which already carries most of the operator's information and interaction and is becoming overloaded. In this paper, we focus on the design and testing of new forms of interaction that rely on the human senses of hearing and touch. More precisely, our study aims to quantify the contribution of a multimodal interaction technique based on spatial sound and vibrotactile feedback to improving aircraft location. Applied to the Remote Tower environment, the ultimate purpose is to enhance Air Traffic Controllers' perception and increase safety. Three interaction modalities were compared with 22 Air Traffic Controllers in a simulated environment. The experimental task consisted of locating aircraft at different airspace positions using the senses of hearing and touch under two visibility conditions. In the first modality (spatial sound only), the sound sources (i.e. aircraft) all had the same amplification factor. In the second modality (called Audio Focus), the amplification factor of the sound sources located along the participant's head sagittal axis was increased, while the intensity of the sound sources located outside this axis was decreased. In the last modality, Audio Focus was coupled with vibrotactile feedback to additionally indicate the vertical positions of aircraft. Behavioural (accuracy and response time) and subjective (questionnaire) results showed significantly higher performance in poor visibility when using the Audio Focus interaction. In particular, interactive spatial sound gave participants notably higher accuracy in degraded visibility than spatial sound alone, and accuracy improved further when coupled with vibrotactile feedback. Meanwhile, response times were significantly longer with the Audio Focus modality (with or without vibrotactile feedback), while remaining acceptably short. This study can be seen as the first step in the development of a novel interaction technique that uses sound as a means of locating objects when the sense of sight alone is not enough.
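
    The angular weighting described for Audio Focus can be sketched roughly as follows: sources within a cone around the head's sagittal (facing) axis are amplified and the rest attenuated. This is only an illustrative guess at the shape of such a gain function; the function name, cone width, and gain values are assumptions, not the authors' implementation.

```python
import numpy as np

def audio_focus_gains(head_forward, source_dirs, boost=2.0, cut=0.5, width_deg=15.0):
    """Per-source amplification for an 'Audio Focus'-style interaction.

    head_forward: unit vector along the listener's sagittal (facing) axis.
    source_dirs:  (N, 3) unit vectors from the head to each sound source.
    Sources within `width_deg` of the facing direction are boosted,
    the others attenuated. Parameter values are illustrative only.
    """
    cos_angle = source_dirs @ head_forward          # cosine of off-axis angle
    on_axis = cos_angle >= np.cos(np.radians(width_deg))
    return np.where(on_axis, boost, cut)

# Example: two aircraft, one straight ahead and one to the right.
forward = np.array([0.0, 0.0, -1.0])
sources = np.array([[0.0, 0.0, -1.0], [1.0, 0.0, 0.0]])
print(audio_focus_gains(forward, sources))          # -> [2.  0.5]
```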

    The use of extended reality (XR), wearable, and haptic technologies for learning across engineering disciplines

    According to the literature, the majority of engineering degrees are still taught using traditional 19th-century teaching and learning methods. Technology has recently been introduced to help improve the way these degrees are taught. This chapter therefore discusses the state of the art and applications of extended reality (XR) technologies, including virtual and augmented reality (VR and AR), as well as wearable and haptic devices, in engineering education. These technologies have demonstrated great potential for application in engineering education and practice. Empirical research supports the view that these pedagogical modalities provide additional channels for information presentation and delivery, facilitating sensemaking in learning and teaching. The integration of VR, AR, wearable, and haptic devices into learning environments can enhance user engagement and create immersive user experiences. This chapter explores their potential applicability in the teaching and learning of engineering.

    Multi-finger grasps in a dynamic environment

    Most current state-of-the-art haptic devices render only a single force; however, almost all human grasps are characterised by multiple forces and torques applied by the fingers and palm of the hand to the object. In this chapter we begin by considering the different types of grasp and then consider the physics of rigid objects that is needed for correct haptic rendering. We then describe an algorithm to represent the forces associated with grasp in a natural manner. The power of the algorithm is that it considers only the capabilities of the haptic device and requires no model of the hand, and thus applies to most practical grasp types. The technique is sufficiently general that it would also apply to multi-hand interactions, and hence to collaborative interactions where several people interact with the same rigid object. Key concepts in friction and rigid body dynamics are discussed and applied to the problem of rendering multiple forces, allowing the person to choose their grasp on a virtual object and perceive the resulting movement via the forces in a natural way. The algorithm also generalises well to support computation of multi-body physics.
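
    A minimal sketch of the rigid-body bookkeeping such an approach relies on is given below: contact forces from several finger or device contact points are accumulated into a net force and torque about the centre of mass, which is then integrated forward in time. This is a generic illustration only; it omits friction, constraints, and the chapter's actual rendering algorithm, and all names and parameters are assumptions.

```python
import numpy as np

def accumulate_wrench(contact_points, contact_forces, com):
    """Net force and torque (about the centre of mass) from several
    finger/device contact points acting on one rigid object."""
    points = np.asarray(contact_points)
    forces = np.asarray(contact_forces)
    net_force = forces.sum(axis=0)
    net_torque = np.cross(points - com, forces).sum(axis=0)
    return net_force, net_torque

def integrate_rigid_body(state, net_force, net_torque, mass, inertia_inv, dt):
    """Explicit Euler update of a free rigid body (illustrative only).

    state: dict with centre-of-mass position 'com', linear velocity 'vel',
    and angular velocity 'omega'; inertia_inv: world-frame inverse inertia (3x3).
    """
    state["vel"] += (net_force / mass) * dt
    state["omega"] += inertia_inv @ net_torque * dt
    state["com"] += state["vel"] * dt
    return state
```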

    A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging

    Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes or mesh data derived from the analysis of such data or from simulations. The advent of new imaging technologies, such as lightsheet microscopy, has confronted users with an ever-growing amount of data, with terabytes of imaging data created within a single day. With the possibility of gentler and higher-performance imaging, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of these data and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed into or embedded in full applications. In order to better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers, is becoming an invaluable tool. In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh and large volumetric data containing multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features and discuss its use with VR/AR hardware and in distributed rendering. In addition to the visualisation framework, we present a series of case studies where scenery provides tangible benefit in developmental and systems biology: with Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets by tracking eye gaze in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude. We further introduce ideas for moving towards virtual-reality-based laser ablation and perform a user study to gain insight into performance, acceptance, and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas on how to render the highly efficient Adaptive Particle Representation, and finally, we present sciview, an ImageJ2/Fiji plugin that makes the features of scenery available to a wider audience.
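
    To illustrate the gaze-ray idea behind Bionic Tracking (rather than scenery's actual API), the sketch below samples voxel positions along the eye-gaze ray through a volume; the brightest sampled voxel could then, for example, be taken as the tracked cell position for the current timepoint. Function and parameter names are hypothetical.

```python
import numpy as np

def gaze_ray_samples(head_pos, gaze_dir, volume_shape, voxel_size,
                     step=1.0, max_dist=200.0):
    """Sample voxel coordinates along an eye-gaze ray through a volume.

    head_pos, gaze_dir: headset position and normalised gaze direction in
    world units; voxel_size: world size of one voxel. Returns the integer
    voxel indices that fall inside the volume.
    """
    distances = np.arange(0.0, max_dist, step)
    points = head_pos + np.outer(distances, gaze_dir)      # world coordinates
    voxels = np.floor(points / voxel_size).astype(int)
    inside = np.all((voxels >= 0) & (voxels < np.array(volume_shape)), axis=1)
    return voxels[inside]
```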

    Interactive molecular docking with haptics and advanced graphics

    Biomolecular interactions underpin many of the processes that make up life. Molecular docking is the study of these interactions in silico. Interactive docking applications put the user in control of the docking process, allowing them to use their knowledge and intuition to determine how molecules bind together. Interactive molecular docking applications often use haptic devices as a method of controlling the docking process. These devices allow the user to easily manipulate the structures in 3D space whilst feeling the forces that occur in response to their manipulations. As a result of the force refresh rate requirements of haptic devices, haptic-assisted docking applications are often limited, in that they model the interacting proteins as rigid, use low-fidelity visualisations, or require expensive proprietary equipment. The research in this thesis aims to address some of these limitations. Firstly, the development of a visualisation algorithm capable of rendering a depiction of a deforming protein at an interactive refresh rate, with per-pixel shadows and ambient occlusion, is discussed. Then, a novel approach to modelling molecular flexibility whilst maintaining a stable haptic refresh rate is developed. Together these algorithms are presented within Haptimol FlexiDock, the first haptic-assisted molecular docking application to support receptor flexibility with high-fidelity graphics whilst also maintaining interactive refresh rates on both the haptic device and the visual display. Using Haptimol FlexiDock, docking experiments were performed between two protein-ligand pairs: Maltodextrin Binding Protein and maltose, and Glutamine Binding Protein and glucose. When the ligand was placed in its approximate binding site, the direction of over 80% of the intra-molecular movement aligned with that seen in the experimental structures. Furthermore, over 50% of the expected backbone motion was present in the structures generated with FlexiDock. Calculating the deformation of a biomolecule in real time whilst maintaining an interactive refresh rate on the haptic device (>500 Hz) is a breakthrough in the field of interactive molecular docking, as previous approaches either model protein flexibility but fail to achieve the required haptic refresh rate, or do not consider biomolecular flexibility at all.
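
    One common way to reconcile a slow deformation update with the >500 Hz demanded by a haptic device is a multirate scheme: a fast haptic loop always renders the most recently computed force while a slower thread updates the deformation model. The sketch below shows only that generic pattern, not the FlexiDock algorithm; `device_send` and the rates are hypothetical placeholders.

```python
import threading
import time

class SharedForceModel:
    """Slow thread writes the latest deformation-derived force; the fast
    haptic loop always reads the most recent value (generic multirate
    pattern, not the FlexiDock implementation)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._force = (0.0, 0.0, 0.0)

    def update(self, force):          # called by the slow simulation thread
        with self._lock:
            self._force = force

    def read(self):                   # called by the fast haptic loop
        with self._lock:
            return self._force

def haptic_loop(model, device_send, rate_hz=1000):
    """Render the most recent force to the device at the haptic rate."""
    period = 1.0 / rate_hz
    while True:
        device_send(model.read())
        time.sleep(period)
```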