
    VizLab: The Design and Implementation of An Immersive Virtual Environment System Using Game Engine Technology and Open Source Software

    Virtual Reality (VR) is a term used to describe computer-simulated environments that can immerse users in a real or imagined world, and immersive display systems are an essential component of experiencing such environments. Developing VR applications is time-consuming and resource-intensive: the separate components must be integrated, and relying on public-domain open source software adds further complexity to development. The VizLab Virtual Reality System was created to meet these challenges and provide an integrated suite of tools for VR system development. VizLab supports the development of VR applications by using game engine and CAVE system technology. The system consists of software modules that provide rendering, texturing, collision, physics, window/viewport management, cluster synchronization, input management, multi-processing, stereoscopic 3D, and networking. VizLab combines the main functional aspects of a game engine and a CAVE system for an improved approach to developing VR applications, virtual environments, and immersive environments.
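
    The abstract describes VizLab as a set of cooperating subsystems driven by a game-engine-style frame loop. The sketch below is purely illustrative of that kind of modular composition; the class and method names are hypothetical and are not VizLab's actual API.

```python
# Hypothetical sketch of the modular composition described above: each
# subsystem exposes an update step and the application loop ticks them in
# order every frame. Names are illustrative, not VizLab's actual API.

class Subsystem:
    def update(self, dt: float) -> None:
        raise NotImplementedError

class InputModule(Subsystem):
    def update(self, dt):
        pass  # poll trackers, wands and keyboards here

class PhysicsModule(Subsystem):
    def update(self, dt):
        pass  # integrate rigid bodies and resolve collisions here

class RenderModule(Subsystem):
    def update(self, dt):
        pass  # draw one stereo frame per viewport here

class VRApplication:
    def __init__(self, subsystems):
        self.subsystems = subsystems

    def run_frame(self, dt=1 / 60):
        # A clustered CAVE deployment would additionally synchronise the
        # render nodes here before swapping buffers.
        for subsystem in self.subsystems:
            subsystem.update(dt)

app = VRApplication([InputModule(), PhysicsModule(), RenderModule()])
app.run_frame()
```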

    Rapid Prototyping for Virtual Environments

    Developing Virtual Environment (VE) applications is challenging: developers must have expertise in the target VE technologies in addition to expertise in the problem domain, and each new VE technology imposes a significant learning curve even on experienced VE developers. The proposed solution relies on synthesis to automate the migration of a VE application to a new, unfamiliar VE platform or technology. To this end, the Common Scene Definition Framework (CSDF) is developed, which serves as a superset/model representation of the target virtual world. Input modules populate the framework with the capabilities of the virtual world imported from the VRML 2.0 and X3D formats. Synthesis capability is built into the framework to synthesize the virtual world into a subset of the VRML 2.0, VRML 1.0, X3D, Java3D, JavaFX, JavaME, and OpenGL technologies, which may reside on different platforms. Interfaces are designed to keep the framework extensible to different and new VE formats and technologies. The framework demonstrated the ability to quickly synthesize a working prototype of the input virtual environment in different VE formats.
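
    The abstract describes CSDF as a superset scene model populated by importers and synthesised into technology-specific subsets by exporters. The sketch below illustrates that import/synthesise pattern in miniature; the class names and the toy X3D output are hypothetical and are not drawn from CSDF itself.

```python
# Illustrative sketch (not the actual CSDF API) of the import/synthesise
# pattern: a superset scene model, importers that populate it, and exporters
# that synthesise subsets of it for specific VE technologies.
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    name: str
    geometry: str | None = None           # e.g. "Box", "IndexedFaceSet"
    children: list["SceneNode"] = field(default_factory=list)

class Importer:                            # e.g. a VRML 2.0 or X3D reader
    def load(self, path: str) -> SceneNode: ...

class Exporter:                            # e.g. an X3D or Java3D backend
    def synthesise(self, root: SceneNode) -> str: ...

class X3DExporter(Exporter):
    def synthesise(self, root: SceneNode) -> str:
        # Emit only the subset of the superset model that X3D can express.
        shapes = "".join(f"<Shape><{c.geometry}/></Shape>"
                         for c in root.children if c.geometry)
        return f"<X3D><Scene>{shapes}</Scene></X3D>"

world = SceneNode("root", children=[SceneNode("crate", geometry="Box")])
print(X3DExporter().synthesise(world))
```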

    Web-based Simulation and Training Environment for Laparoscopic Camera Calibration

    Endoscopic cameras are increasingly employed in image-guided procedures, where the video images must be registered to data from other modalities. However, such cameras are susceptible to distortion, requiring calibration before images can be used for registration, tracking, and 3D reconstruction. Camera calibration is typically learned in a laboratory setting, where configuring and adjusting the physical setup is tedious and not necessarily conducive to learning. A centralized resource that uses interactive 3D components is therefore needed for training in camera calibration. In this project, a web-based training environment for camera calibration, called SimCAM, is implemented. SimCAM was developed using the Web Graphics Library (WebGL), the Open Computer Vision (OpenCV) library, and custom software components; WebGL and OpenCV were used to simulate camera distortions and the calibration task. The main contributions include the implementation and validation of SimCAM. SimCAM was validated with a content validity study, in which it was found to be useful as an introduction to camera calibration. Future work involves improving the supporting material and implementing more features, such as uncertainty propagation.
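
    For readers unfamiliar with the calibration task SimCAM teaches, the snippet below shows the standard OpenCV chessboard workflow (detect corners, estimate intrinsics and distortion, undistort a frame). It is ordinary OpenCV usage, not SimCAM's own code, and the image file names are placeholders.

```python
# Standard OpenCV chessboard calibration, shown only to illustrate the task
# SimCAM teaches; image paths are placeholders.
import cv2
import numpy as np

pattern = (9, 6)                                  # inner corners of the board
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in ["view0.png", "view1.png", "view2.png"]:   # hypothetical images
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate intrinsics and lens distortion, then undistort one frame.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
undistorted = cv2.undistort(cv2.imread("view0.png"), K, dist)
print("reprojection RMS error:", rms)
```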

    A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging

    Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired with state-of-the-art microscopes, or mesh data derived from the analysis of such data or from simulations. The advent of new imaging technologies, such as lightsheet microscopy, confronts users with an ever-growing amount of data, with terabytes of imaging data created within a single day. With the possibility of gentler and higher-performance imaging, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of these data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed into or embedded within full applications. To better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers, is becoming an invaluable tool. In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh data and large volumetric data with multiple views, timepoints, and colour channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features and discuss its use with VR/AR hardware and in distributed rendering. In addition to the visualisation framework, we present a series of case studies in which scenery provides tangible benefit in developmental and systems biology: with Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets via eye gaze tracked in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude. We further introduce ideas for moving towards virtual reality-based laser ablation and perform a user study to gain insight into performance, acceptance, and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas for rendering the highly efficient Adaptive Particle Representation, and finally, we present sciview, an ImageJ2/Fiji plugin that makes the features of scenery available to a wider audience.
    Contents: Abstract; Foreword and Acknowledgements; Overview and Contributions; Part I - Introduction (1 Fluorescence Microscopy; 2 Introduction to Visual Processing; 3 A Short Introduction to Cross Reality; 4 Eye Tracking and Gaze-based Interaction); Part II - VR and AR for Systems Biology (5 scenery — VR/AR for Systems Biology; 6 Rendering; 7 Input Handling and Integration of External Hardware; 8 Distributed Rendering; 9 Miscellaneous Subsystems; 10 Future Development Directions); Part III - Case Studies (11 Bionic Tracking: Using Eye Tracking for Cell Tracking; 12 Towards Interactive Virtual Reality Laser Ablation; 13 Rendering the Adaptive Particle Representation; 14 sciview — Integrating scenery into ImageJ2 & Fiji); Part IV - Conclusion (15 Conclusions and Outlook); Backmatter & Appendices (A Questionnaire for VR Ablation User Study; B Full Correlations in VR Ablation Questionnaire; C Questionnaire for Bionic Tracking User Study); List of Tables; List of Figures; Bibliography; Selbstständigkeitserklärung (declaration of authorship)
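
    As a rough illustration of the gaze-based tracking idea behind Bionic Tracking, the numpy sketch below follows a cell by sampling the volume along a gaze ray and keeping the brightest sample at each timepoint. scenery itself is a Kotlin/JVM framework, so this Python code is conceptual only and does not use the scenery API.

```python
# Conceptual sketch only: follow a cell by casting the headset's gaze ray
# through the volume and keeping the brightest sample along the ray.
import numpy as np

def track_along_gaze(volume, origin, direction, n_samples=200):
    """volume: 3D intensity array; origin/direction: gaze ray in voxel coords."""
    direction = direction / np.linalg.norm(direction)
    ts = np.linspace(0, volume.shape[0], n_samples)
    points = origin + ts[:, None] * direction
    idx = np.clip(points.round().astype(int), 0, np.array(volume.shape) - 1)
    intensities = volume[idx[:, 0], idx[:, 1], idx[:, 2]]
    return points[np.argmax(intensities)]          # estimated cell position

rng = np.random.default_rng(0)
vol = rng.random((64, 64, 64))                     # stand-in for one timepoint
print(track_along_gaze(vol, origin=np.array([0.0, 32, 32]),
                       direction=np.array([1.0, 0, 0])))
```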

    Technical Integration of Hippocampus, Basal Ganglia and Physical Models for Spatial Navigation

    Computational neuroscience is increasingly moving beyond modelling individual neurons or neural systems to consider the integration of multiple models, often constructed by different research groups. We report on our preliminary technical integration of recent hippocampal formation, basal ganglia, and physical environment models, together with visualisation tools, as a case study in the use of Python across the modelling tool-chain; we do not present new modelling results here. The architecture incorporates leaky-integrator and rate-coded neurons, a 3D environment with collision detection and tactile sensors, 3D graphics, and 2D plots. We found Python to be a flexible platform, offering a significant reduction in development time without a correspondingly significant increase in execution time. We illustrate this by implementing part of the model in various alternative languages and coding styles and comparing their execution times. For very large-scale system integration, communication with other languages and parallel execution may be required, which we demonstrate using the BRAHMS framework's Python bindings.
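
    A minimal example of the rate-coded leaky-integrator units mentioned above, integrated with forward Euler in plain numpy; the parameter values are illustrative and are not taken from the integrated models.

```python
# Leaky integrator: tau * dV/dt = -V + I(t), advanced with forward Euler.
import numpy as np

def simulate_leaky_integrator(inputs, tau=0.02, dt=0.001):
    """Return the unit's activation trace for a sequence of input samples."""
    v = 0.0
    trace = np.empty(len(inputs))
    for t, i_t in enumerate(inputs):
        v += dt / tau * (-v + i_t)      # Euler step of the leaky integrator
        trace[t] = v
    return trace

drive = np.concatenate([np.zeros(100), np.ones(400)])    # step input
print(simulate_leaky_integrator(drive)[-1])               # approaches 1.0
```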

    Computational Modeling of Biological Neural Networks on GPUs: Strategies and Performance

    Simulating biological neural networks is an important task for computational neuroscientists attempting to model and analyze brain activity and function. As these networks become larger and more complex, the computational power required grows significantly, often requiring the use of supercomputers or compute clusters. An emerging low-cost, highly accessible alternative to many of these resources is the Graphics Processing Unit (GPU): specialized, massively parallel graphics hardware that has seen increasing use as a general-purpose computational accelerator, thanks largely to NVIDIA's CUDA programming interface. We evaluated the relative benefits and limitations of GPU-based tools for large-scale neural network simulation and analysis, first by developing an agent-inspired spiking neural network simulator and then by adapting a neural signal decoding algorithm. Under certain network configurations, the simulator was able to outperform an equivalent MPI-based parallel implementation run on a dedicated compute cluster, while the decoding algorithm implementation consistently outperformed its serial counterpart. Additionally, the GPU-based simulator was able to visualize network spiking activity in real time thanks to its close integration with standard computer graphics APIs. The GPU was shown to provide significant performance benefits under certain circumstances while lagging behind in others. Given the complex nature of these research tasks, a hybrid strategy that combines GPU- and CPU-based approaches provides greater performance than either alone.
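
    The performance argument rests on the fact that a spiking-network update applies the same arithmetic to every neuron, which maps naturally onto GPU threads. The sketch below shows such a data-parallel update with numpy; cupy exposes a largely numpy-compatible API, so in principle the same array code could run on an NVIDIA GPU. This is not the simulator described in the abstract, and the network parameters are made up.

```python
# Data-parallel leaky integrate-and-fire update over a whole population.
import numpy as np            # "import cupy as np" is a common GPU swap (assumption)

n = 2_000
v = np.zeros(n, dtype=np.float32)                       # membrane potentials
w = (np.random.rand(n, n) * 0.001).astype(np.float32)   # synaptic weights
threshold, dt, tau = 1.0, 1e-3, 0.02

def step(v, external):
    spikes = v >= threshold                 # boolean spike vector
    v = np.where(spikes, 0.0, v)            # reset neurons that fired
    syn = w @ spikes.astype(np.float32)     # propagate spikes through weights
    return v + dt / tau * (-v + external + syn), spikes

v, spiked = step(v, external=np.full(n, 1.5, dtype=np.float32))
print("neurons spiking this step:", int(spiked.sum()))
```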

    Desktop haptic virtual assembly using physically-based part modeling

    This research investigates the feasibility of using a desktop haptic virtual environment as a design tool for evaluating assembly operations. Bringing virtual reality characteristics such as stereo vision to the desktop further promotes the use of this technology in the everyday engineering design process. In creating such a system, the affordability and availability of hardware and software tools were taken into consideration. The resulting application combines several software packages, including VR Juggler, ODE (Open Dynamics Engine), OPAL (Open Physics Abstraction Layer), OpenHaptics, and the OpenGL/GLM/GLUT libraries, to explore the benefits and limitations of combining haptics with physically based modeling. The equipment used to display stereo graphics includes a StereoGraphics emitter, CrystalEyes shutter glasses, and a high-refresh-rate CRT monitor. One- or two-handed force feedback is obtained from various PHANTOM haptic devices from SensAble Technologies. The application's ability to handle complex part interactions is tested on two different computer systems that approximate the higher and lower end of a typical engineer's workstation. Different test scenarios are analyzed and results presented with regard to collision detection and physical response accuracy.
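
    A common way to connect a haptic device to a physically simulated part is a spring-damper "virtual coupling" between the stylus position and the part; the sketch below illustrates only that general idea, with made-up constants, and deliberately omits the OpenHaptics and ODE/OPAL calls an actual implementation would use.

```python
# Spring-damper virtual coupling between a haptic stylus and a simulated part.
import numpy as np

K, B, MASS, DT = 400.0, 5.0, 0.1, 0.001      # stiffness, damping, part mass, step

def couple(device_pos, part_pos, part_vel):
    """Return (force on the part, equal-and-opposite force rendered to the user)."""
    force = K * (device_pos - part_pos) - B * part_vel
    return force, -force

part_pos = np.zeros(3)
part_vel = np.zeros(3)
for _ in range(1000):                         # 1 kHz haptic-rate loop
    device_pos = np.array([0.05, 0.0, 0.0])   # hypothetical stylus reading
    f_part, f_user = couple(device_pos, part_pos, part_vel)
    part_vel += f_part / MASS * DT            # integrate the part (no collisions here)
    part_pos += part_vel * DT

print(part_pos)                               # the part is pulled toward the stylus
```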