
    Prop-Based Haptic Interaction with Co-location and Immersion: an Automotive Application

    Most research on 3D user interfaces provides only a single sensory modality. One challenge is to integrate several sensory modalities into a seamless system while preserving each modality's immersion and performance factors. This paper concerns manipulation tasks and proposes a visuo-haptic system integrating immersive visualization, force and tactile feedback, and co-location. An industrial application is presented.

    Evaluating 3D pointing techniques

    This dissertation investigates various issues related to the empirical evaluation of 3D pointing interfaces. In this context, the term "3D pointing" is appropriated from the analogous 2D pointing literature to refer to 3D point selection tasks, i.e., specifying a target in three-dimensional space. Such pointing interfaces are required for interaction with virtual 3D environments, e.g., in computer games and virtual reality. Researchers have developed and empirically evaluated many such techniques. Yet, several technical issues and human factors complicate evaluation. Moreover, results tend not to be directly comparable between experiments, as these experiments usually use different methodologies and measures. Based on well-established methods for comparing 2D pointing interfaces, this dissertation investigates different aspects of 3D pointing. The main objective of this work is to establish methods for direct and fair comparison of 2D and 3D pointing interfaces. This dissertation proposes and then validates an experimental paradigm for evaluating 3D interaction techniques that rely on pointing. It also investigates technical considerations such as latency and device noise. Results show that the mouse outperforms the other tested 3D input techniques by between 10% and 60% in all conditions. Moreover, a monoscopic cursor tends to perform better than a stereo cursor on a stereo display, by as much as 30% for deep targets. Results suggest that common 3D pointing techniques are best modelled by first projecting target parameters (i.e., distance and size) to the screen plane.
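The closing claim, that 3D pointing is best modelled by first projecting target distance and size onto the screen plane, can be sketched with a simple pinhole projection followed by the Shannon formulation of Fitts' index of difficulty. The function and parameter names below are illustrative assumptions, not the dissertation's actual code:

```python
import math

def projected_id(start_pos, target_pos, target_size, eye, screen_z):
    """Fitts' index of difficulty (bits) after projecting 3D target
    parameters onto the screen plane (pinhole model, viewer looking
    down +z). Illustrative sketch, not the dissertation's exact model.
    """
    def project(p):
        # Similar triangles: scale offsets from the eye by the ratio
        # of screen distance to point depth.
        k = (screen_z - eye[2]) / (p[2] - eye[2])
        return (eye[0] + k * (p[0] - eye[0]),
                eye[1] + k * (p[1] - eye[1]))

    sx, sy = project(start_pos)
    tx, ty = project(target_pos)
    amplitude = math.hypot(tx - sx, ty - sy)      # on-screen distance
    k = (screen_z - eye[2]) / (target_pos[2] - eye[2])
    width = target_size * k                       # on-screen size
    return math.log2(amplitude / width + 1)       # Shannon formulation
```

For example, with the eye at the origin and the screen at z = 1, a move of 0.4 units between two targets at depth 2 with size 0.1 projects to amplitude 0.2 and width 0.05, giving an index of difficulty of log2(5) ≈ 2.32 bits.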

    An Arm-Mounted Accelerometer and Gyro-Based 3D Control System

    This thesis examines the performance of a wearable accelerometer/gyroscope-based system for capturing arm motions in 3D. Two experiments conforming to ISO 9241-9 specifications for non-keyboard input devices were performed. The first, modeled after the Fitts' law paradigm described in ISO 9241-9, used the wearable system to control a telemanipulator, compared against joystick control and the user's unaided arm. The throughputs were 5.54 bits/s, 0.74 bits/s, and 0.80 bits/s, respectively. The second experiment used the wearable system to control a cursor in a 3D fish-tank virtual reality setup. The participants performed a 3D Fitts' law task with three selection methods: button clicks, dwell, and a twist gesture. Error rates were 6.82%, 0.00%, and 3.59%, respectively. Throughput ranged from 0.8 to 1.0 bits/s. The thesis includes detailed analyses of lag and other issues that present user interface challenges for systems that employ human-mounted sensor inputs to control a telemanipulator apparatus.
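The throughput figures quoted above are conventionally computed per ISO 9241-9 as effective index of difficulty divided by movement time. A minimal sketch of that calculation (the trial data below are hypothetical, for illustration only, not the thesis measurements):

```python
import math
import statistics

def throughput(distances, errors, movement_times):
    """ISO 9241-9 effective throughput in bits/s.

    distances:      nominal movement amplitude per trial (e.g. pixels)
    errors:         signed endpoint error along the task axis per trial
    movement_times: movement time per trial, in seconds
    """
    # Effective width: 4.133 x the standard deviation of endpoint error
    we = 4.133 * statistics.stdev(errors)
    # Effective amplitude: mean distance actually covered
    de = statistics.mean(distances)
    # Shannon formulation of the effective index of difficulty (bits)
    ide = math.log2(de / we + 1)
    # Throughput = bits of difficulty per second of movement
    return ide / statistics.mean(movement_times)

# Hypothetical trial data, for illustration only
tp = throughput(
    distances=[200.0, 205.0, 198.0, 202.0],
    errors=[3.0, -4.0, 5.0, -2.5],
    movement_times=[0.9, 1.1, 1.0, 0.95],
)
```

Using the effective (post-hoc) width rather than the nominal target width is what makes throughputs comparable across devices with different error rates.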

    A Virtual Testbed for Fish-Tank Virtual Reality: Improving Calibration with a Virtual-in-Virtual Display

    With the development of novel calibration techniques for multimedia projectors and curved projection surfaces, volumetric 3D displays are becoming easier and more affordable to build. The basic requirements include a display shape that defines the volume (e.g. a sphere, cylinder, or cuboid) and a tracking system to provide each user's location for perspective-corrected rendering. When coupled with modern graphics cards, these displays are capable of high resolution, low latency, high frame rate, and even stereoscopic rendering; however, as many previous studies have shown, every component must be precisely calibrated for a compelling 3D effect. While human perceptual requirements have been extensively studied for head-tracked displays, most studies featured seated users in front of a flat display. It remains unclear whether results from these flat display studies are applicable to newer, walk-around displays with enclosed or curved shapes. To investigate these issues, we developed a virtual testbed for volumetric head-tracked displays that can measure calibration accuracy of the entire system in real time. We used this testbed to investigate visual distortions of prototype curved displays, improve existing calibration techniques, study the importance of stereo to performance and perception, and validate perceptual calibration with novice users. Our experiments show that stereo is important for task performance but requires more accurate calibration, and that novice users can make effective use of perceptual calibration tools. We also propose a novel, real-time calibration method that can be used to fine-tune an existing calibration using perceptual feedback. The findings from this work can be used to build better head-tracked volumetric displays with an unprecedented amount of 3D realism and intuitive calibration tools for novice users.

    3D interaction with scientific data: an experimental and perceptual approach


    A Comparative Study of Desktop, Fishtank, and Cave Systems for the Exploration of Volume Rendered Confocal Data Sets

    We present a participant study that compares biological data exploration tasks using volume renderings of laser confocal microscopy data across three environments that vary in level of immersion: a desktop, fishtank, and cave system. For the tasks, data, and visualization approach used in our study, we found that subjects qualitatively preferred and quantitatively performed better in the cave compared with the fishtank and desktop. Subjects performed real-world biological data analysis tasks that emphasized understanding spatial relationships, including characterizing the general features in a volume, identifying colocated features, and reporting geometric relationships such as whether clusters of cells were coplanar. After analyzing data in each environment, subjects were asked to choose which environment they wanted to analyze additional data sets in; subjects uniformly selected the cave environment.

    Real-Time Distributed Aircraft Simulation through HLA

    This paper presents ongoing research carried out in the context of the PRISE (Research Platform for Embedded Systems Engineering) project. This platform has been designed to evaluate and validate new embedded system concepts and techniques through a special hardware and software environment. Since much of the actual embedded equipment is not available, its behavior is simulated using the HLA architecture, an IEEE standard for distributed simulation, and a run-time infrastructure called CERTI, developed at ONERA. HLA is currently widely used in many simulation applications, but the limited performance of RTIs raises doubts over the feasibility of HLA federations with real-time requirements. This paper addresses the problem of achieving real-time performance with HLA. Several experiments are discussed using well-known aircraft simulators such as Microsoft Flight Simulator, FlightGear, and X-Plane connected with the CERTI run-time infrastructure. The added value of these activities is to demonstrate that, with a set of innovative solutions, HLA is well suited to meeting hard real-time constraints.

    Spatial cognition in virtual environments

    Since the last decades of the past century, Virtual Reality (VR) has developed not only as a set of helpful applications in the medical field (training for surgeons, but also rehabilitation tools) but also as a research methodology. There is still no scientific consensus on whether using this technology in research on cognitive processes allows us to generalize results found in a Virtual Environment (VE) to human behavior and cognition in the real world. Differences found in basic perceptual processes (for example, depth perception) suggest a substantial gap between the visual environmental representations afforded by virtual scenarios and those of reality. On the other hand, the literature contains many studies attesting to the reliability of VEs in more than one field (training and rehabilitation, but also some research paradigms). The main aim of this thesis is to investigate whether, and in which cases, these two views can be integrated, shedding new light on the use of VR in research. Through experiments conducted at the "Virtual Development and Training Center" of the Fraunhofer Institute in Magdeburg, we addressed both low-level spatial processes (within a distance-estimation paradigm) and high-level spatial cognition (using a navigation and visuospatial planning task called "3D Maps"), while also addressing practical problems such as the use of stereoscopy in VEs and simulator sickness during navigation in immersive VEs. The results of our research fill some gaps in the literature on spatial cognition in VR and suggest that the use of VEs in research is reliable, particularly when the processes under investigation are of higher complexity.
    In such cases, the human brain adapts well even to a "new" reality like that offered by VR, provided there is a familiarization period and the possibility to interact with the environment; behavior then proceeds "as if" the environment were real. What is still strongly lacking is the possibility of a fully multisensory experience, which is important for getting the best from this kind of "visualization" of an artificial world. At the low level, we confirm earlier findings that the visual system perceives important spatial cues, such as depth and the relationships between objects, differently in VR, so VR and reality cannot be treated as similar environments. The idea that VR is a "different" reality, offering potentially unlimited possibilities of use, even overcoming some physical limits of the real world, and that this "new" reality can be acquired by our cognitive system simply by interacting with it, is discussed in the conclusions of this work.
