
    Investigating Embodied Interaction in Near-Field Perception-Action Re-Calibration on Performance in Immersive Virtual Environments

    Immersive Virtual Environments (IVEs) are becoming more accessible and more widely utilized for training. Previous research has shown that matching visual and proprioceptive information is important for calibration. Many state-of-the-art Virtual Reality (VR) systems, commonly known as Immersive Virtual Environments (IVEs), are created for training users in tasks that require accurate manual dexterity. Unfortunately, these systems can suffer from technical limitations that may force a de-coupling of visual and proprioceptive information due to interference, latency, and tracking error. It has also been suggested that closed-loop feedback of travel and locomotion in an IVE can overcome the compression of visually perceived depth at medium-field distances in the virtual world [33, 47]. Very few experiments have examined the carryover effects of multi-sensory feedback in IVEs during dexterous manual 3D user interaction in overcoming distortions in near-field or interaction-space depth perception, or the relative importance of visual and proprioceptive information in calibrating users' distance judgments. In the first part of this work, we examined the recalibration of movements when the visually reached distance is scaled differently than the physically reached distance. We present an empirical evaluation of how visually distorted movements affect users' reach to near-field targets in an IVE. In a between-subjects design, participants provided manual reaching distance estimates during three sessions: a baseline measure without feedback (open-loop distance estimation), a calibration session with visual and proprioceptive feedback (closed-loop distance estimation), and a post-interaction session without feedback (open-loop distance estimation). Subjects were randomly assigned to one of three visual feedback conditions in the closed-loop session, during which they reached to the target while holding a tracked stylus: i) the Minus condition (-20% gain), in which the visual stylus appeared at 80% of the distance of the physical stylus; ii) the Neutral condition (0% or no gain), in which the visual stylus was co-located with the physical stylus; and iii) the Plus condition (+20% gain), in which the visual stylus appeared at 120% of the distance of the physical stylus. In all conditions, there was evidence of visuo-motor calibration, in that users' accuracy in physically reaching to the target locations improved over trials. Scaled visual feedback was shown to calibrate distance judgments within an IVE, with estimates being farthest in the post-interaction session after calibrating to visual information appearing nearer (Minus condition), and nearest after calibrating to visual information appearing farther (Plus condition). The same pattern was observed during closed-loop physical reach responses: participants generally tended to physically reach farther in the Minus condition and closer in the Plus condition to the perceived location of the targets, as compared to the Neutral condition, in which participants' physical reach was more accurate to the perceived location of the target. We then characterized the properties of human reach motion in the presence or absence of visuo-haptic feedback in real environments and IVEs within a participant's maximum arm reach. Our goal was to understand how physical reaching actions to the perceived location of targets, in the presence or absence of visuo-haptic feedback, differ between real and virtual viewing conditions.
Typically, participants reach to the perceived location of objects in the 3D environment to perform selection and manipulation actions during 3D interaction, in applications such as virtual assembly or rehabilitation. In these tasks, participants typically have distorted perceptual information in the IVE as compared to the real world, in part due to technological limitations such as a restricted visual field of view, resolution, latency and jitter. In an empirical evaluation, we asked the following questions: i) how do the perceptual differences between the virtual and real world affect our ability to accurately reach to the locations of 3D objects, and ii) how do the motor responses of participants differ between the presence and absence of visual and haptic feedback? We examined factors such as the velocity and distance of physical reaching behavior between the real world and the IVE, both in the presence and absence of visuo-haptic information. The results suggest that physical reach responses vary systematically between real and virtual environments, especially in situations involving the presence or absence of visuo-haptic feedback. Our study provides a methodological framework for the analysis of reaching motions for selection and manipulation with novel 3D interaction metaphors, and for characterizing visuo-haptic versus non-visuo-haptic physical reaches in virtual and real-world situations. While research has demonstrated that self-avatars can enhance one's sense of presence and improve distance perception, the effects of self-avatar fidelity on near-field distance estimation had yet to be investigated. Thus, we investigated the effect of the visual fidelity of the self-avatar on users' depth judgments, reach boundary perception, and the properties of physical reach motion. Previous research has demonstrated that a self-avatar representation of the user enhances the sense of presence [37], and that even a static notion of an avatar can improve distance estimation at far distances [59, 48]. In this study, performance with a virtual avatar was also compared to real-world performance. Three levels of fidelity were tested: 1) an immersive self-avatar with realistic limbs, 2) a low-fidelity self-avatar showing only joint locations, and 3) an end-effector only. There were four primary hypotheses. First, we hypothesized that the mere presence of a self-avatar or end-effector would calibrate users' interaction-space depth perception in an IVE; therefore, participants' distance judgments would improve after the calibration phase regardless of the self-avatar's visual fidelity. Second, the magnitude of the change from pre-test to post-test would differ significantly based on the visual detail of the self-avatar presented to participants (self-avatar vs. low-fidelity self-avatar and end-effector). Third, we predicted that distance estimation accuracy would be highest in the immersive self-avatar condition and lowest in the end-effector condition. Fourth, we predicted that the properties of physical reach responses would vary systematically between the different visual fidelity conditions. The results suggest that reach estimates become more accurate as the visual fidelity of the avatar increases, with accuracy for high-fidelity avatars approaching real-world performance as compared to the low-fidelity and end-effector conditions.
Overall, in all conditions, reach estimates became more accurate after receiving feedback during the calibration phase. Lastly, we examined factors such as path length, time to complete the task, and the average velocity and acceleration of physical reach motion, and compared all the IVE conditions with the real world. The results suggest that physical reach responses vary systematically between the VR viewing conditions and the real world.
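
To make the gain manipulation concrete, here is a minimal sketch, assuming a simple tracked-stylus setup; the -20%/0%/+20% gain values come from the abstract, while the function and variable names and the choice of eye-referenced scaling are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Gain conditions named in the abstract: Minus (-20%), Neutral (0%), Plus (+20%).
GAIN = {"minus": -0.20, "neutral": 0.0, "plus": +0.20}

def visual_stylus_position(physical_pos, reach_origin, condition):
    """Scale the rendered stylus position along the reach direction.

    physical_pos, reach_origin: 3D points (metres) in the same tracking frame.
    With a -20% gain the visual stylus appears at 80% of the physical reach
    distance; with +20% it appears at 120%.
    """
    origin = np.asarray(reach_origin, dtype=float)
    reach_vector = np.asarray(physical_pos, dtype=float) - origin
    return origin + (1.0 + GAIN[condition]) * reach_vector

# Example: a physical reach 0.5 m straight ahead of the reach origin.
print(visual_stylus_position([0.0, 0.0, 0.5], [0.0, 0.0, 0.0], "plus"))  # -> 0.6 m ahead
```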

    Virtually the same? How impaired sensory information in virtual reality may disrupt vision for action

    Virtual reality (VR) is a promising tool for expanding the possibilities of psychological experimentation and implementing immersive training applications. Despite a recent surge in interest, there remains an inadequate understanding of how VR impacts basic cognitive processes. Due to the artificial presentation of egocentric distance cues in virtual environments, a number of cues to depth in the optic array are impaired or placed in conflict with each other. Moreover, realistic haptic information is all but absent from current VR systems. The resulting conflicts could impact not only the execution of motor skills in VR but also raise deeper concerns about basic visual processing, and the extent to which virtual objects elicit neural and behavioural responses representative of real objects. In this brief review, we outline how the novel perceptual environment of VR may affect vision for action, by shifting users away from a dorsal mode of control. Fewer binocular cues to depth, conflicting depth information and limited haptic feedback may all impair the specialised, efficient, online control of action characteristic of the dorsal stream. A shift from dorsal to ventral control of action may create a fundamental disparity between virtual and real-world skills that has important consequences for how we understand perception and action in the virtual world.

    Low Latency Rendering with Dataflow Architectures

    The research presented in this thesis concerns latency in VR and synthetic environments. Latency is the end-to-end delay experienced by the user of an interactive computer system, between their physical actions and the perceived response to those actions. Latency is a product of the various processing, transport and buffering delays present in any current computer system. For many computer-mediated applications, latency can be distracting, but it is not critical to the utility of the application. Synthetic environments, on the other hand, attempt to facilitate direct interaction with a digitised world. Direct interaction here implies the formation of a sensorimotor loop between the user and the digitised world - that is, the user makes predictions about how their actions affect the world, and sees these predictions realised. By facilitating the formation of this loop, the synthetic environment allows users to directly sense the digitised world, rather than the interface, and induces perceptions such as that of the digital world existing as a distinct physical place. This has many applications for knowledge transfer and efficient interaction through the use of enhanced communication cues. The complication is that the formation of the sensorimotor loop that underpins this is highly dependent on the fidelity of the virtual stimuli, including latency. The main research questions we ask are how the characteristics of dataflow computing can be leveraged to improve the temporal fidelity of the visual stimuli, and what implications this has for other aspects of fidelity. Secondarily, we ask what effects latency itself has on user interaction. We test the effects of latency on physical interaction at levels previously hypothesized but unexplored. We also test for a previously unconsidered effect of latency on higher-level cognitive functions. To do this, we create prototype image generators for interactive systems and virtual reality, using dataflow computing platforms. We integrate these into real interactive systems to gain practical experience of the perceptible benefits of alternative rendering approaches, but also of the implications when they are subject to the constraints of real systems. We quantify the differences between our systems and traditional systems using latency and objective image fidelity measures. We use our novel systems to perform user studies into the effects of latency. Our high-performance apparatuses allow experimentation at latencies lower than previously tested in comparable studies. The low-latency apparatuses are designed to minimise what is currently the largest delay in traditional rendering pipelines, and we find that the approach is successful in this respect. Our 3D low-latency apparatus achieves lower latencies and higher fidelities than traditional systems; the conditions under which it can do so are highly constrained, however. We do not foresee dataflow computing shouldering the bulk of the rendering workload in the future, but rather facilitating the augmentation of the traditional pipeline with a very high-speed local loop. This may be an image distortion stage or otherwise. Our latency experiments revealed that many predictions about the effects of low latency should be re-evaluated, and that experimenting in this range requires great care.
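
Since the abstract defines end-to-end latency as the sum of the processing, transport and buffering delays in the pipeline, a minimal back-of-the-envelope sketch is shown below; the stage names and millisecond values are illustrative assumptions, not measurements from the thesis.

```python
# Illustrative pipeline-stage delays in milliseconds (assumed values only):
# tracking, transport, application update, rendering, and display buffering.
stage_delays_ms = {
    "tracker_sample": 4.0,
    "transport": 1.0,
    "application_update": 8.0,
    "render": 11.0,
    "display_buffering": 16.7,  # roughly one frame of double buffering at 60 Hz
}

# End-to-end latency is the accumulation of every stage between the user's
# physical action and the displayed response.
end_to_end_ms = sum(stage_delays_ms.values())
print(f"Estimated end-to-end latency: {end_to_end_ms:.1f} ms")
```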

    The Effect of Anthropometric Properties of Self-Avatars on Action Capabilities in Virtual Reality

    The field of Virtual Reality (VR) has seen a steady, exponential uptake in the last decade and is being continuously incorporated into areas of popular interest like healthcare, training, recreation and gaming. This steady upward trend and prolonged popularity has resulted in numerous extravagant virtual environments, some that aim to mimic real-life experiences like combat training, while others intend to provide unique experiences that may otherwise be difficult to recreate, like flying over ancient Egypt as a bird. These experiences often showcase highly realistic graphics, intuitive interactions and unique avatar embodiment scenarios with the help of various tracking sensors, high-definition graphic displays, sound systems, etc. The literature suggests that estimates and affordance judgments in VR scenarios such as the ones described above are affected by the properties and the nature of the avatar embodied by the user. Therefore, to provide users with the finest experiences, it is crucial to understand the interaction between the embodied self and the action capabilities afforded by it in the surrounding virtual environment. In a series of studies aimed at exploring the effect of gender-matched, body-scaled self-avatars on the user's perception, we investigate the effect of self-avatars on the perception of the size of objects in an immersive virtual environment (IVE) and how this perception affects the actions one can perform as compared to the real world. In the process, we make use of newer tracking technology and graphic displays to investigate the perceived differences between real-world environments and their virtual counterparts, to understand how the spatial properties of the environment and the embodied self-avatars affect affordances by means of passability judgments. We describe techniques for creating and mapping VR environments onto their real-world counterparts, and for creating gender-matched, body-scaled self-avatars with real-time full-body tracking. The first two studies investigate how newer graphical displays and off-the-shelf tracking devices can be utilized to create salient gender-matched, body-scaled self-avatars, and their effect on the judgment of passability as a result of the embodied body schema. The study involves creating complex scripts that automate the process of mapping virtual worlds onto their real-world counterparts within a 1 cm margin of error, and creating self-avatars that match the height, limb proportions and shoulder width of the participant using tracking sensors. The experiment involves making judgments about the passability of an adjustable doorway in the real world and in a virtual to-scale replica of the real-world environment. The results demonstrated that the perception of affordances in IVEs is comparable to the real world, but the behavior leading to it differs in VR. Also, the body-scaled self-avatars generated provide salient information, yielding performance similar to the real world. Several insights and guidelines related to creating veridical virtual environments and realistic self-avatars emerged from this effort. The third study investigates how the presence of body-scaled self-avatars affects the perception of the size of virtual handheld objects, and the influence on passability of the person-plus-virtual-object system created by lifting the said virtual object. This is crucial to understand, as VR simulations now often utilize self-avatars that carry objects while maneuvering through the environment.
How users interact with these handheld objects can influence what they do in critical scenarios where split-second decisions can change the outcome, such as combat training, role-playing games, first-person shooters, thrill rides, physiotherapy, etc. It has also been reported that the avatar itself can influence the perception of the size of virtual objects, in turn influencing action capabilities. There is ample research on different interaction techniques for manipulating objects in a virtual world, but the question of how the objects affect our action capabilities upon interaction remains unanswered, especially when the haptic feedback associated with holding a real object is mismatched or missing. The study investigates this phenomenon by having participants interact with virtual objects of different sizes and make frontal and lateral passability judgments to an adjustable aperture, similar to the first experiment. The results suggest that the presence of self-avatars significantly affects affordance judgments. Interestingly, frontal and lateral judgments in IVEs seem to be similar, unlike in the real world. Investigating the concept of the embodied body schema and its influence on action capabilities further, the fourth study looks at how embodying self-avatars that vary slightly from one's real-world body affects performance and behavior in dynamic affordance scenarios. In this particular study, we change the eye height of the participants in the presence or absence of self-avatars that are either bigger, smaller or the same size as the participant. We then investigate how this change in eye height and in the anthropometric properties of the self-avatar affects their judgments when crossing streets with oncoming traffic in virtual reality. We also evaluate any changes in perceived walking speed as a result of embodying altered self-avatars. The findings suggest that the presence of self-avatars results in safer crossing behavior; however, scaling the eye height or the avatar does not seem to affect perceived walking speed. A detailed discussion of all the findings can be found in the manuscript.
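
The passability judgments described in these studies are often modelled in the affordance literature as a ratio of aperture width to the frontal width of the person (or of the person-plus-held-object system). The sketch below is a minimal illustration of that idea; the function name, the way the held object widens the system, and the critical ratio of roughly 1.3 are assumptions drawn from the general literature, not parameters or results of these experiments.

```python
def is_passable(aperture_width_m, shoulder_width_m, held_object_width_m=0.0,
                critical_ratio=1.3):
    """Return True if the doorway affords passage without shoulder rotation.

    The frontal width of the person-plus-object system is taken as the larger
    of the shoulder width and the held object's width (a simplifying
    assumption). critical_ratio=1.3 is a commonly cited value from the
    affordance literature, not a finding of these studies.
    """
    frontal_width = max(shoulder_width_m, held_object_width_m)
    return aperture_width_m / frontal_width >= critical_ratio

# Example: a 55 cm doorway, 45 cm shoulders, holding a 60 cm-wide virtual box.
print(is_passable(0.55, 0.45, 0.60))  # False: the held object widens the system
```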

    Improving everyday computing tasks with head-mounted displays

    The proliferation of consumer-affordable head-mounted displays (HMDs) has brought a rash of entertainment applications for this burgeoning technology, but relatively little research has been devoted to exploring its potential home and office productivity applications. Can the unique characteristics of HMDs be leveraged to improve users’ ability to perform everyday computing tasks? My work strives to explore this question. One significant obstacle to using HMDs for everyday tasks is the fact that the real world is occluded while wearing them. Physical keyboards remain the most performant devices for text input, yet using a physical keyboard is difficult when the user can’t see it. I developed a system for aiding users typing on physical keyboards while wearing HMDs and performed a user study demonstrating the efficacy of my system. Building on this foundation, I developed a window manager optimized for use with HMDs and conducted a user survey to gather feedback. This survey provided evidence that HMD-optimized window managers can provide advantages that are difficult or impossible to achieve with standard desktop monitors. Participants also provided suggestions for improvements and extensions to future versions of this window manager. I explored the issue of distance compression, wherein users tend to underestimate distances in virtual environments relative to the real world, which could be problematic for window managers or other productivity applications seeking to leverage the depth dimension through stereoscopy. I also investigated a mitigation technique for distance compression called minification. I conducted multiple user studies, providing evidence that minification makes users’ distance judgments in HMDs more accurate without causing detrimental perceptual side effects. This work also provided some valuable insight into the human perceptual system. Taken together, this work represents valuable steps toward leveraging HMDs for everyday home and office productivity applications. I developed functioning software for this purpose, demonstrated its efficacy through multiple user studies, and also gathered feedback for future directions by having participants use this software in simulated productivity tasks.
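
Minification of the kind investigated here is typically implemented by rendering with a geometric field of view (FOV) that is wider than the display's physical FOV, so that imagery subtends smaller visual angles. The sketch below expresses that relationship under a simple pinhole-projection assumption; the function name and example FOV values are illustrative and are not the specific implementation or parameters used in this work.

```python
import math

def minification_factor(display_fov_deg, geometric_fov_deg):
    """Tangent-space scale applied to imagery when rendering with a geometric
    FOV wider than the display's physical FOV (pinhole-model assumption).
    Values below 1.0 mean the imagery is minified."""
    return (math.tan(math.radians(display_fov_deg) / 2)
            / math.tan(math.radians(geometric_fov_deg) / 2))

# Example (assumed numbers): a 90-degree display rendered with a 110-degree
# geometric FOV minifies imagery to about 70% of its natural tangent size.
print(round(minification_factor(90.0, 110.0), 2))
```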

    Design For Auditory Displays: Identifying Temporal And Spatial Information Conveyance Principles

    Designing auditory interfaces is a challenge for current human-systems developers. This is largely due to a lack of theoretical guidance for directing how best to use sounds in today's visually rich graphical user interfaces. This dissertation provided a framework for guiding the design of audio interfaces to enhance human-systems performance. This doctoral research involved reviewing the literature on conveying temporal and spatial information using audio, using this knowledge to build three theoretical models to aid the design of auditory interfaces, and empirically validating select components of the models. The three models included an audio integration model that outlines an end-to-end process for adding sounds to interactive interfaces, a temporal audio model that provides a framework for guiding the timing of the integration of these sounds to meet human performance objectives, and a spatial audio model that provides a framework for adding spatialization cues to interface sounds. Each model is coupled with a set of design guidelines theorized from the literature; thus combined, the developed models put forward a structured process for integrating sounds in interactive interfaces. The developed models were subjected to a three-phase validation process that included a review by Subject Matter Experts (SMEs), to assess the face validity of the developed models, and two empirical studies. For the SME review, which assessed the utility of the developed models and identified opportunities for improvement, a panel of three audio experts was selected to respond to a Strengths, Weaknesses, Opportunities, and Threats (SWOT) validation questionnaire. Based on the SWOT analysis, the main strengths of the models were that they provide a systematic approach to auditory display design and that they integrate a wide variety of knowledge sources in a concise manner. The main weaknesses were the lack of a structured process for amending the models with new principles, branches that were not considered parallel or completely distinct, and a lack of guidance on selecting interface sounds. The main opportunity identified by the experts was the ability of the models to provide a seminal body of knowledge that can be used for building and validating auditory display designs. The main threats identified were that users may not know where to start and end with each model, that the models may not provide comprehensive coverage of all uses of auditory displays, and that the models may act as a restrictive influence on designers or be used inappropriately. Based on the SWOT analysis results, several changes were made to the models prior to the empirical studies. Two empirical evaluation studies were then conducted to test the theorized design principles derived from the revised models. The first study assessed the utility of audio cues for training a temporal pacing task, and the second study combined temporal (i.e., pace) and spatial audio information, with a focus on examining integration issues. In the pace study, four auditory conditions were used for training pace: 1) a metronome, 2) non-spatial auditory earcons, 3) a spatialized auditory earcon, and 4) no audio cues for pace training. Sixty-eight people participated in the study. A pre-/post-test between-subjects experimental design was used, with eight training trials. The measure used for assessing pace performance was the average deviation from a predetermined desired pace.
The results demonstrated that a metronome was not effective in training participants to maintain a desired pace, while spatial and non-spatial earcons were effective strategies for pace training. Moreover, an examination of post-training performance as compared to pre-training suggested some transfer of learning. Design guidelines were extracted for integrating auditory cues for pace training tasks in virtual environments. In the second empirical study, combined temporal (pacing) and spatial (location of entities within the environment) information was presented. Three spatialization conditions were used: 1) high fidelity, using subjective selection of a best-fit head-related transfer function; 2) low fidelity, using a generalized head-related transfer function; and 3) no spatialization. A pre-/post-test between-subjects experimental design was used, with eight training trials. The performance measures were the average deviation from the desired pace, and the time and accuracy to complete the task. The results of the second study demonstrated that temporal, non-spatial auditory cues were effective in influencing pace while other cues were present. On the other hand, spatialized auditory cues did not result in significantly faster task completion. Based on these results, a set of design guidelines was proposed that can be used to direct the integration of spatial and temporal auditory cues for supporting training tasks in virtual environments. Taken together, the developed models and the associated guidelines provide a theoretical foundation from which to direct the user-centered design of auditory interfaces.
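
The pace measure used in both studies, the average deviation from a predetermined desired pace, can be computed straightforwardly; the sketch below is a minimal illustration with assumed names and units, not the dissertation's analysis code.

```python
def average_pace_deviation(step_intervals_s, desired_interval_s):
    """Mean absolute deviation of observed step (or segment) intervals from
    the desired pace, in seconds. Names and units are illustrative assumptions."""
    return (sum(abs(t - desired_interval_s) for t in step_intervals_s)
            / len(step_intervals_s))

# Example: a target pace of one step every 0.8 s.
print(average_pace_deviation([0.75, 0.82, 0.90, 0.78], 0.8))  # ~0.048 s
```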

    Toward New Ecologies of Cyberphysical Representational Forms, Scales, and Modalities

    Research on tangible user interfaces commonly focuses on tangible interfaces acting alone or in comparison with screen-based multi-touch or graphical interfaces. In contrast, hybrid approaches can be seen as the norm for established mainstream interaction paradigms. This dissertation describes interfaces that support complementary information mediations, representational forms, and scales, toward an ecology of systems embodying hybrid interaction modalities. I investigate systems combining tangible and multi-touch interaction, as well as systems combining tangible and virtual reality interaction. For each of them, I describe work focusing on design and fabrication aspects, as well as work focusing on reproducibility, engagement, legibility, and perception aspects.

    Augmented reality and scene examination

    The research presented in this thesis explores the impact of Augmented Reality on human performance, and compares this technology with Virtual Reality and with a head-mounted video feed for a variety of tasks that relate to scene examination. The motivation for the work was the question of whether Augmented Reality could provide a vehicle for training in crime scene investigation. The Augmented Reality application was developed using fiducial markers in the Windows Presentation Foundation, running on a wearable computer platform; Virtual Reality was developed using the Crytek game engine to present a photo-realistic 3D environment; and a video feed was provided through a head-mounted webcam. All media were presented through head-mounted displays of similar resolution, providing the sole source of visual information to participants in the experiments. The experiments were designed to increase the amount of mobility required to conduct the search task, i.e., from rotation in the horizontal or vertical plane through to movement around a room. In each experiment, participants were required to find objects and subsequently recall their location. It is concluded that human performance is affected not merely by the medium through which the world is perceived but, moreover, by the constraints governing how movement in the world is controlled.
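
The thesis built its AR application with fiducial markers in Windows Presentation Foundation; as a purely illustrative alternative (not the thesis's implementation), the sketch below shows fiducial-marker detection from a head-mounted webcam frame using OpenCV's ArUco module, assuming opencv-contrib-python 4.7 or later and a webcam at device index 0.

```python
import cv2

# Illustrative only: detect ArUco fiducial markers in one camera frame.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)  # assumed: head-mounted webcam is device 0
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    print("Detected marker ids:", None if ids is None else ids.ravel().tolist())
cap.release()
```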