
    Proceedings of the 1993 Conference on Intelligent Computer-Aided Training and Virtual Environment Technology, Volume 1

    These proceedings are organized in the same manner as the conference's contributed sessions, with the papers grouped by topic area. These areas are as follows: VE (virtual environment) training for Space Flight, Virtual Environment Hardware, Knowledge Acquisition for ICAT (Intelligent Computer-Aided Training) & VE, Multimedia in ICAT Systems, VE in Training & Education (1 & 2), Virtual Environment Software (1 & 2), Models in ICAT Systems, ICAT Commercial Applications, ICAT Architectures & Authoring Systems, ICAT Education & Medical Applications, Assessing VE for Training, VE & Human Systems (1 & 2), ICAT Theory & Natural Language, ICAT Applications in the Military, VE Applications in Engineering, Knowledge Acquisition for ICAT, and ICAT Applications in Aerospace.

    A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging

    Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes, or mesh data from analysis of such data or simulations. The advent of new imaging technologies, such as lightsheet microscopy, has resulted in the users being confronted with an ever-growing amount of data, with even terabytes of imaging data created within a day. With the possibility of gentler and more high-performance imaging, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of this data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed or embedded into full applications. In order to better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers are becoming invaluable tools. In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh and large volumetric data, containing multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features, and discuss its use with VR/AR hardware and in distributed rendering. In addition to the visualisation framework, we present a series of case studies, where scenery can provide tangible benefit in developmental and systems biology: With Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets via tracking eye gaze in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude. 
We further introduce ideas to move towards virtual reality-based laser ablation, and perform a user study to gain insight into performance, acceptance, and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas on how to render the highly efficient Adaptive Particle Representation. Finally, we present sciview, an ImageJ2/Fiji plugin that makes the features of scenery available to a wider audience.

    Contents: Abstract; Foreword and Acknowledgements; Overview and Contributions; Part I - Introduction (1 Fluorescence Microscopy, 2 Introduction to Visual Processing, 3 A Short Introduction to Cross Reality, 4 Eye Tracking and Gaze-based Interaction); Part II - VR and AR for Systems Biology (5 scenery — VR/AR for Systems Biology, 6 Rendering, 7 Input Handling and Integration of External Hardware, 8 Distributed Rendering, 9 Miscellaneous Subsystems, 10 Future Development Directions); Part III - Case Studies (11 Bionic Tracking: Using Eye Tracking for Cell Tracking, 12 Towards Interactive Virtual Reality Laser Ablation, 13 Rendering the Adaptive Particle Representation, 14 sciview — Integrating scenery into ImageJ2 & Fiji); Part IV - Conclusion (15 Conclusions and Outlook); Backmatter & Appendices (A Questionnaire for VR Ablation User Study, B Full Correlations in VR Ablation Questionnaire, C Questionnaire for Bionic Tracking User Study, List of Tables, List of Figures, Bibliography, Selbstständigkeitserklärung (Declaration of Authorship))

    Phenomenal regression as a potential metric of veridical perception in virtual environments

    It is known that limitations of the visual presentation and sense of presence in a virtual environment (VE) can result in deficits of spatial perception, such as the documented depth compression phenomena. Investigating size and distance percepts in a VE is an active area of research, where different groups have measured the deficit by employing skill-based tasks such as walking, throwing, or simply judging sizes and distances. A psychological trait called phenomenal regression (PR), first identified in the 1930s by Thouless, offers a measure that does not rely on either judgement or skill. PR describes a systematic error made by subjects when asked to match the perspective projections of two stimuli displayed at different distances. Thouless' work found that this error is not mediated by a subject's prior knowledge of its existence, nor can it be consciously manipulated, since it measures an individual's innate reaction to visual stimuli. Furthermore, he demonstrated that, in the real world, PR is affected by the depth cues available for viewing a scene. When applied in a VE, PR therefore potentially offers a direct measure of perceptual veracity that is independent of participants' skill in judging size or distance. Experimental work has been conducted, and a statistically significant correlation of individuals' measured PR values (their 'Thouless ratio', or TR) between virtual and physical stimuli was found. A further experiment manipulated focal depth to mitigate the mismatch that occurs between accommodation and vergence cues in a VE. The resulting statistically significant effect on TR demonstrates that it is sensitive to changes in viewing conditions in a VE. Both experiments demonstrate key properties of PR that contribute to establishing it as a robust indicator of VE quality. The first property is that TR exhibits temporal stability during the period of testing, and the second is that it differs between individuals.
This is advantageous as it yields empirical values that can be investigated using regression analysis. This work contributes to VE domains in which it is desirable to replicate an accurate perception of space, such as training and telepresence, where PR would be a useful tool for comparing subjective experience between a VE and the real world, or between different VEs.
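The abstract uses the 'Thouless ratio' without defining it. In the perception literature, Thouless's index of phenomenal regression is commonly computed from three sizes: the participant's matched size, the size predicted by pure perspective projection, and the real object size. A minimal sketch of that standard log-ratio formulation follows; the variable names are illustrative and not taken from the abstract:

```python
import math

def thouless_ratio(p: float, s: float, r: float) -> float:
    """Thouless's index of phenomenal regression to the real object.

    p: the participant's matched size for the far stimulus
    s: the size predicted by pure perspective projection
    r: the real (physical) size of the stimulus

    Returns 0.0 for pure perspective matching and 1.0 for
    perfect size constancy (full regression to the real size).
    """
    return (math.log(p) - math.log(s)) / (math.log(r) - math.log(s))

# A far object of real size 10 projects like a near object of size 5;
# a participant who matches it at 8 shows partial regression,
# i.e. a TR strictly between 0 and 1.
tr = thouless_ratio(8.0, 5.0, 10.0)
```

Because TR is a ratio of log differences, it is insensitive to the units in which the sizes are measured, which makes it convenient for comparing viewing conditions (e.g. virtual versus physical stimuli) across individuals, as the abstract describes.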

    Laboratory Directed Research and Development Annual Report - Fiscal Year 2000


    Models and Analysis of Vocal Emissions for Biomedical Applications

    These proceedings of the biennial MAVEBA Workshop collect the scientific papers presented as both oral and poster contributions during the conference. The main subjects are: the development of theoretical and mechanical models as an aid to the study of the main phonatory dysfunctions, as well as biomedical engineering methods for the analysis of voice signals and images as a support to clinical diagnosis and the classification of vocal pathologies.

    Militarized visualities
