
    A mechatronic shape display based on auxetic materials

    Shape displays enable people to touch simulated surfaces. A common architecture for such devices uses a mechatronic pin matrix. Beyond their complexity and high cost, these matrix displays suffer from sharp edges due to the discrete representation, which reduces their ability to render a large continuous surface when the hand slides across it. We propose using an engineered auxetic material actuated by a smaller number of motors. The material bends in multiple directions, feeling smooth and rigid to the touch. A prototype implementation uses nine actuators on a 220 mm square section of material. It can display a range of surface curvatures under the palm of a user without aliased edges. In this work we use an auxetic skeleton to provide rigidity on a soft material and demonstrate the potential of this class of surface through user experiments.
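    The core idea of driving a continuous surface from a sparse actuator grid can be sketched numerically: the auxetic material behaves roughly like a physical interpolant between the pins. The following is a minimal sketch, assuming a hypothetical 3x3 grid of actuator heights (the paper's prototype uses nine actuators); the heights and the bilinear model are illustrative, not the authors' mechanics.

```python
# Sketch: a sparse actuator grid sampled as a smooth surface.
# The 3x3 heights (mm) below are invented for illustration; the real
# device's auxetic sheet interpolates physically, not bilinearly.

def bilinear(grid, u, v):
    """Sample a (rows x cols) height grid at normalized coords u, v in [0, 1]."""
    rows, cols = len(grid), len(grid[0])
    x = u * (cols - 1)
    y = v * (rows - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, cols - 1), min(y0 + 1, rows - 1)
    fx, fy = x - x0, y - y0
    top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
    bot = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
    return top * (1 - fy) + bot * fy

heights = [[0.0, 5.0, 0.0],
           [5.0, 10.0, 5.0],
           [0.0, 5.0, 0.0]]   # a gentle dome under the palm

print(bilinear(heights, 0.5, 0.5))   # centre of the dome
print(bilinear(heights, 0.25, 0.5))  # partway down the slope
```

Any point on the 220 mm square maps to a height without the step discontinuities of a pin matrix, which is the aliasing problem the auxetic skeleton removes.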

    A Virtual Testbed for Fish-Tank Virtual Reality: Improving Calibration with a Virtual-in-Virtual Display

    With the development of novel calibration techniques for multimedia projectors and curved projection surfaces, volumetric 3D displays are becoming easier and more affordable to build. The basic requirements include a display shape that defines the volume (e.g. a sphere, cylinder, or cuboid) and a tracking system that provides each user's location for perspective-corrected rendering. When coupled with modern graphics cards, these displays are capable of high resolution, low latency, high frame rate, and even stereoscopic rendering; however, as many previous studies have shown, every component must be precisely calibrated for a compelling 3D effect. While human perceptual requirements have been extensively studied for head-tracked displays, most studies featured seated users in front of a flat display. It remains unclear whether results from these flat-display studies are applicable to newer, walk-around displays with enclosed or curved shapes. To investigate these issues, we developed a virtual testbed for volumetric head-tracked displays that can measure calibration accuracy of the entire system in real time. We used this testbed to investigate visual distortions of prototype curved displays, improve existing calibration techniques, study the importance of stereo to performance and perception, and validate perceptual calibration with novice users. Our experiments show that stereo is important for task performance but requires more accurate calibration, and that novice users can make effective use of perceptual calibration tools. We also propose a novel, real-time calibration method that can be used to fine-tune an existing calibration using perceptual feedback. The findings from this work can be used to build better head-tracked volumetric displays with an unprecedented amount of 3D realism and intuitive calibration tools for novice users.
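    The perspective-corrected rendering at the heart of fish-tank VR is, for a planar screen, the classic off-axis (asymmetric) frustum computed from the tracked eye position. The sketch below shows that computation for a screen centred in the z = 0 plane; all dimensions are illustrative, and the curved displays studied in the paper need an additional warp on top of this.

```python
# Sketch of head-coupled (off-axis) perspective for a planar screen.
# Eye position and screen size are example values in metres.

def off_axis_frustum(eye, screen_w, screen_h, near):
    """Frustum extents (left, right, bottom, top) at the near plane for
    an eye at (x, y, z) relative to the centre of a screen_w x screen_h
    screen lying in the z = 0 plane (eye z > 0, in front of the screen)."""
    ex, ey, ez = eye
    scale = near / ez              # project screen edges onto the near plane
    left = (-screen_w / 2 - ex) * scale
    right = (screen_w / 2 - ex) * scale
    bottom = (-screen_h / 2 - ey) * scale
    top = (screen_h / 2 - ey) * scale
    return left, right, bottom, top

# A centred eye gives a symmetric frustum; moving the head right
# skews the frustum left, which is what makes the volume look stable.
print(off_axis_frustum((0.0, 0.0, 0.5), 0.4, 0.3, 0.1))
print(off_axis_frustum((0.1, 0.0, 0.5), 0.4, 0.3, 0.1))
```

Small errors in the tracked eye position feed directly into these extents, which is why the abstract stresses that every component of the tracking and display chain must be precisely calibrated.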

    Creating and controlling visual environments using BonVision.

    Real-time rendering of closed-loop visual environments is important for next-generation understanding of brain function and behaviour, but is often prohibitively difficult for non-experts to implement and is limited to a few laboratories worldwide. We developed BonVision as easy-to-use, open-source software for the display of virtual or augmented reality, as well as standard visual stimuli. BonVision has been tested on humans and mice, and is capable of supporting new experimental designs in other animal models of vision. As the architecture is based on the open-source Bonsai graphical programming language, BonVision benefits from native integration with experimental hardware. BonVision therefore enables easy implementation of closed-loop experiments, including real-time interaction with deep neural networks, and communication with behavioural and physiological measurement and manipulation devices.
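    The closed-loop structure the abstract describes is, at its core, a per-frame cycle: measure behaviour, update the stimulus state, render. BonVision expresses this as a Bonsai dataflow graph; the library-free sketch below shows only the control structure, with simulated running speeds standing in for real behavioural input.

```python
# Minimal sketch of a closed-loop visual experiment: each frame, a
# behavioural measurement (here, simulated running speed) updates the
# stimulus before the next render. The gain and speeds are invented
# for illustration; BonVision would express this as a dataflow.

def closed_loop(speeds, gain=1.0):
    """Integrate running speed into a virtual corridor position, frame by frame."""
    position = 0.0
    trace = []
    for v in speeds:
        position += gain * v      # behaviour drives the visual scene
        trace.append(position)    # the position that would be rendered
    return trace

print(closed_loop([1.0, 2.0, 0.0, 3.0]))  # [1.0, 3.0, 3.0, 6.0]
```

The difficulty in practice is not this loop but keeping its latency low and synchronising it with acquisition hardware, which is where Bonsai's native hardware integration matters.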

    Molecular Dynamics Visualization (MDV): Stereoscopic 3D Display of Biomolecular Structure and Interactions Using the Unity Game Engine

    Molecular graphics systems are visualization tools which, upon integration into a 3D immersive environment, provide a unique virtual reality experience for research and teaching of biomolecular structure, function and interactions. We have developed a molecular structure and dynamics application, the Molecular Dynamics Visualization tool, that uses the Unity game engine combined with large-scale, multi-user, stereoscopic visualization systems to deliver an immersive display experience, particularly with a large cylindrical projection display. The application is structured to separate the biomolecular modeling and visualization systems. The biomolecular model loading and analysis system was developed as a stand-alone C# library and provides the foundation for the custom visualization system built in Unity. All visual models displayed within the tool are generated using Unity-based procedural mesh-building routines. A 3D user interface was built to allow seamless dynamic interaction with the model while it is viewed in 3D space. Biomolecular structure analysis and display capabilities are exemplified with a range of complex systems involving cell membranes, protein folding and lipid droplets.
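    A procedural mesh-building routine of the kind the abstract mentions boils down to emitting vertex and triangle-index lists, the same data a Unity Mesh consumes through its vertices and triangles properties. The sketch below builds a UV sphere (e.g. a space-filling atom) in Python standing in for the tool's C# routines; resolution and radius are arbitrary example values.

```python
# Sketch of procedural mesh generation: vertex positions plus a flat
# triangle index list for a UV sphere. Python stands in here for the
# Unity/C# routines described in the abstract.
import math

def uv_sphere(radius, n_lat, n_lon):
    """Return (vertices, triangle_indices) for a latitude/longitude sphere."""
    verts, tris = [], []
    for i in range(n_lat + 1):
        theta = math.pi * i / n_lat            # latitude angle, pole to pole
        for j in range(n_lon + 1):
            phi = 2 * math.pi * j / n_lon      # longitude angle
            verts.append((radius * math.sin(theta) * math.cos(phi),
                          radius * math.cos(theta),
                          radius * math.sin(theta) * math.sin(phi)))
    stride = n_lon + 1
    for i in range(n_lat):
        for j in range(n_lon):
            a = i * stride + j                 # top-left corner of the quad
            tris += [a, a + stride, a + 1,     # two triangles per quad
                     a + 1, a + stride, a + stride + 1]
    return verts, tris

verts, tris = uv_sphere(1.0, 8, 16)
print(len(verts), len(tris) // 3)  # vertex and triangle counts
```

Generating geometry this way, rather than loading fixed models, is what lets the tool rebuild its visual representation as the molecular dynamics trajectory plays.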

    Development of a virtual reality milling machine for knowledge learning and skill training

    Current methods of training personnel on high-cost machine tools involve both classroom and hands-on practical training. The practical training requires the operation of costly equipment, and the trainee has to be under close personal supervision. The main aim of this project is to reduce the amount of practical training and its inherent cost, time, danger, personal injury risk and material requirements by utilising virtual reality technology. In this study, an investigation into the use of virtual reality for training operators and students to use a milling machine was carried out. The investigation was divided into two sections. The first was the development of the milling machine in a 3D virtual environment, where the real machine was reconstructed in virtual space by creating objects and assembling them together. The complete milling machine was then modelled and rendered so that it could be viewed from all viewpoints. The second section was to add motion to the virtual world: the machine was given the same functions as the real machine by attaching Superscape Control Language (SCL) scripts to the objects. The developed milling machine allows users to choose the material, speed and feed rate. Upon activation, the virtual machine simulates the machining process, and instantaneous data on the machined part can be generated. The results were satisfactory: the milling machine was modelled successfully and performed according to the tasks set. Using the developed virtual model, the ability to train students and operators to use the milling machine has been achieved.
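    The material, speed and feed choices the trainee makes are related by the standard milling formulas, which a simulator of this kind would evaluate to generate its instantaneous machining data. The sketch below uses those textbook relations; the cutting speed and chip load values are illustrative, not taken from the paper.

```python
# Standard milling relations a virtual machine would simulate.
# Example inputs (cutting speed for mild steel, 10 mm end mill,
# 4 teeth, 0.05 mm/tooth chip load) are illustrative only.
import math

def spindle_rpm(cutting_speed_m_min, cutter_diameter_mm):
    """Spindle speed: n = 1000 * Vc / (pi * D), in rev/min."""
    return 1000 * cutting_speed_m_min / (math.pi * cutter_diameter_mm)

def feed_rate_mm_min(rpm, teeth, chip_load_mm):
    """Table feed: vf = n * z * fz, in mm/min."""
    return rpm * teeth * chip_load_mm

rpm = spindle_rpm(30.0, 10.0)
print(round(rpm))                             # ~955 rev/min
print(round(feed_rate_mm_min(rpm, 4, 0.05)))  # ~191 mm/min
```

Exposing these relations interactively is precisely where a virtual machine is safer than a real one: a trainee can pick a wildly wrong feed and see the consequence without breaking a cutter.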

    Multi-touch Detection and Semantic Response on Non-parametric Rear-projection Surfaces

    The ability of human beings to physically touch our surroundings has had a profound impact on our daily lives. Young children learn to explore their world by touch; likewise, many simulation and training applications benefit from natural touch interactivity. As a result, modern interfaces supporting touch input are ubiquitous. Typically, such interfaces are implemented on integrated touch-display surfaces with simple geometry that can be mathematically parameterized, such as planar surfaces and spheres; for more complicated non-parametric surfaces, such parameterizations are not available. In this dissertation, we introduce a method for generalizable optical multi-touch detection and semantic response on uninstrumented non-parametric rear-projection surfaces using an infrared-light-based multi-camera multi-projector platform. In this paradigm, touch input allows users to manipulate complex virtual 3D content that is registered to and displayed on a physical 3D object. Detected touches trigger responses with specific semantic meaning in the context of the virtual content, such as animations or audio responses. The broad problem of touch detection and response can be decomposed into three major components: determining if a touch has occurred, determining where a detected touch has occurred, and determining how to respond to a detected touch. Our fundamental contribution is the design and implementation of a relational lookup table architecture that addresses these challenges through the encoding of coordinate relationships among the cameras, the projectors, the physical surface, and the virtual content. Detecting the presence of touch input primarily involves distinguishing between touches (actual contact events) and hovers (near-contact proximity events). We present and evaluate two algorithms for touch detection and localization utilizing the lookup table architecture. 
One of the algorithms, a bounded plane sweep, is additionally able to estimate hover-surface distances, which we explore for interactions above surfaces. The proposed method is designed to operate with low latency and to be generalizable. We demonstrate touch-based interactions on several physical parametric and non-parametric surfaces, and we evaluate both system accuracy and the accuracy of typical users in touching desired targets on these surfaces. In a formative human-subject study, we examine how touch interactions are used in the context of healthcare and present an exploratory application of this method in patient simulation. A second study highlights the advantages of touch input on content-matched physical surfaces achieved by the proposed approach, such as decreases in induced cognitive load, increases in system usability, and increases in user touch performance. In this experiment, novice users were nearly as accurate when touching targets on a 3D head-shaped surface as when touching targets on a flat surface, and their self-perception of their accuracy was higher.
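    The relational lookup table reduces the three questions (if, where, and how to respond) to table queries once a fingertip is detected. The sketch below is a toy version of that idea under stated assumptions: the pixel-to-surface entries, depth values, and touch threshold are all invented for illustration, and a real table would encode the full camera/projector/surface/content relationships the dissertation describes.

```python
# Toy sketch of the relational-lookup idea: a precomputed table maps a
# camera pixel to the physical-surface depth and the virtual content
# registered there, so touch vs hover becomes a constant-time lookup
# plus a distance threshold. All values below are invented.

TOUCH_THRESHOLD_MM = 8.0   # beyond this gap, a fingertip is a hover

# pixel -> (surface depth from camera in mm, semantic content id)
lookup = {
    (120, 80): (412.0, "left_eye"),
    (300, 90): (405.0, "right_eye"),
    (210, 200): (390.0, "mouth"),
}

def classify(pixel, fingertip_depth_mm):
    """Return (event, content) for a detected fingertip at a camera pixel."""
    surface_depth, content = lookup[pixel]
    gap = surface_depth - fingertip_depth_mm   # hover-surface distance
    if gap <= TOUCH_THRESHOLD_MM:
        return "touch", content
    return "hover", content

print(classify((210, 200), 388.0))  # ('touch', 'mouth')
print(classify((120, 80), 380.0))   # ('hover', 'left_eye')
```

Because the expensive geometric relationships are baked into the table offline, the per-frame work stays cheap, which is how the method achieves the low latency the abstract claims.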

    A Novel Haptic Texture Display Based on Image Processing


    State of the Art of Virtual Reality Simulation Technology and Its Applications in 2005

    The School of Mining Engineering at the University of New South Wales (UNSW) has been developing immersive, interactive computer-based training simulators for a number of years with research funding provided by Coal Services (CS), the Australian Coal Association Research Program (ACARP) and the Australian Research Council (ARC). The virtual reality (VR) simulators are being developed to improve the effectiveness of training in the Australian coal mining industry with a view to enhancing health and safety. VR theatres have been established at UNSW and at the Newcastle Mines Rescue Station (NMRS). A range of experienced and inexperienced mining personnel has already had the opportunity to train in them. A capability in immersive, interactive virtual reality training has been established, and the reaction to the new technology has been positive, confirming the benefits to be gained from going to the next stage in developing this capability. Given the significant advances in computer technology that have occurred since this research was initiated at UNSW, it was considered wise to undertake a study of the 'State of the Art of Virtual Reality Simulation Technology and Its Application in 2005'. This should enable informed decisions to be made on technologies and techniques that could further enhance the simulators and give insight into how the existing VR capability at UNSW can be placed on a sustainable foundation. This Research Overview summarises the findings of the study. It recommends the continued development and testing of the simulators towards a system that presents users with high-fidelity imagery and function based on 3D models developed using real mine plans, safety data and manufacturers' drawings. The simulators should remain modular in design, so that equipment can be updated and added easily over time. Different mine training scenarios and models based on sound educational principles should be developed with major input from experienced mining industry personnel. The simulations that have been developed, that is, Self-Escape, Rib Stability, and Sprains and Strains, should also continue to be developed and refined. The study has confirmed that such simulations are a powerful visualisation and training tool for enhancing the understanding of mine safety procedures and operations in the coal mining industry. This Scoping Study was undertaken with funding provided by the JCB Health and Safety Trust, administered by Coal Services Pty Limited. The support of the Trust and trustees is gratefully acknowledged. The contributors of information are also gratefully acknowledged.