
    Improving everyday computing tasks with head-mounted displays

    The proliferation of consumer-affordable head-mounted displays (HMDs) has brought a rash of entertainment applications for this burgeoning technology, but relatively little research has been devoted to exploring its potential home and office productivity applications. Can the unique characteristics of HMDs be leveraged to improve users’ ability to perform everyday computing tasks? My work strives to explore this question. One significant obstacle to using HMDs for everyday tasks is the fact that the real world is occluded while wearing them. Physical keyboards remain the most performant devices for text input, yet using a physical keyboard is difficult when the user can’t see it. I developed a system for aiding users typing on physical keyboards while wearing HMDs and performed a user study demonstrating the efficacy of my system. Building on this foundation, I developed a window manager optimized for use with HMDs and conducted a user survey to gather feedback. This survey provided evidence that HMD-optimized window managers can provide advantages that are difficult or impossible to achieve with standard desktop monitors. Participants also provided suggestions for improvements and extensions to future versions of this window manager. I explored the issue of distance compression, wherein users tend to underestimate distances in virtual environments relative to the real world, which could be problematic for window managers or other productivity applications seeking to leverage the depth dimension through stereoscopy. I also investigated a mitigation technique for distance compression called minification. I conducted multiple user studies, providing evidence that minification makes users’ distance judgments in HMDs more accurate without causing detrimental perceptual side effects. This work also provided some valuable insight into the human perceptual system. 
Taken together, this work represents valuable steps toward leveraging HMDs for everyday home and office productivity applications. I developed functioning software for this purpose, demonstrated its efficacy through multiple user studies, and also gathered feedback for future directions by having participants use this software in simulated productivity tasks.
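Minification of the kind described above is commonly implemented by rendering with a geometric field of view (GFOV) larger than the display's physical field of view (DFOV), which scales the on-screen imagery down. A minimal sketch of the resulting image scale factor, assuming symmetric vertical FOVs (function names are illustrative, not from the dissertation):

```python
import math

def focal_length(fov_deg: float) -> float:
    """Normalized focal length for a symmetric field of view."""
    return 1.0 / math.tan(math.radians(fov_deg) / 2.0)

def minification_factor(display_fov_deg: float, geometric_fov_deg: float) -> float:
    """Scale applied to imagery when rendering with a geometric FOV (GFOV)
    different from the display FOV (DFOV). GFOV > DFOV yields a factor
    below 1, i.e. the imagery is minified."""
    return focal_length(geometric_fov_deg) / focal_length(display_fov_deg)

# Rendering with a GFOV 10% wider than a 100-degree display minifies imagery:
m = minification_factor(100.0, 110.0)  # < 1.0
```

Under this formulation, a factor of exactly 1 corresponds to veridical rendering, and the studies above ask whether factors below 1 make distance judgments more accurate.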

    Distance Perception in Virtual Environment through Head-mounted Displays

    Head-mounted displays (HMDs) are popular and affordable wearable display devices that facilitate immersive and interactive viewing experiences. Numerous studies have reported that people typically underestimate distances in HMDs. This dissertation describes a series of research experiments that examined the influence of FOV and peripheral vision on distance perception in HMDs, and it attempts to provide useful information to HMD manufacturers and software developers for improving the perceptual performance of HMD-based virtual environments. This document is divided into two main parts. The first part describes two experiments that examined distance judgments in Oculus Rift HMDs. Unlike the numerous studies that found significant distance compression, our Experiments I and II, using the Oculus Rift DK1 and DK2, found that people could judge distances near-accurately between 2 and 5 meters. In the second part of this document, we describe four experiments that examined the influence of FOV and human peripheral vision on distance perception in HMDs and explored some potential approaches to augmenting peripheral vision in HMDs. In Experiment III, we reconfirmed the peripheral stimulation effect found by Jones et al. using bright peripheral frames. We also discovered that there is no linear correlation between the stimulation and peripheral brightness. In Experiment IV, we examined the interaction between peripheral brightness and distance judgments using peripheral frames with different relative luminances. We found that there exists a brightness threshold, i.e., a minimum brightness level that's required to trigger the peripheral stimulation effect that improves distance judgments in HMD-based virtual environments. In Experiment V, we examined the influence of applying a pixelation effect in the periphery, which simulates the visual experience of having a low-resolution peripheral display around the viewports.
The results showed that adding the pixelated peripheral frame significantly improves distance judgments in HMDs. Lastly, our Experiment VI examined the influence of image size and shape on distance perception in HMDs. We found that making the frame thinner to increase the FOV of the imagery improves distance judgments. This result supports the hypothesis that FOV influences distance judgments in HMDs. It also suggests that image shape may have no influence on distance judgments in HMDs.

    A perceptual calibration method to ameliorate the phenomenon of non-size-constancy in heterogeneous VR displays

    The interruption of the action-perception loop in virtual reality (VR) makes it challenging to understand the effects of different display factors on spatial perception. For example, studies have reported a lack of size constancy: the perceived size of an object does not remain constant as its distance increases. This phenomenon is closely related to the reported underestimation of distances in VR, whose causes remain unclear. Despite efforts to improve spatial cues through display technology and computer graphics, some interest has started to focus on the human side. In this study, we propose a perceptual calibration method that can ameliorate the effects of non-size-constancy in heterogeneous VR displays. The method was validated in a perceptual matching experiment comparing performance between an HTC Vive HMD and a four-wall CAVE system. Results show that perceptual calibration based on interpupillary distance increments can partially resolve the phenomenon of non-size-constancy in VR.
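Under a simple geometric model of stereoscopic viewing (an assumption for illustration; the paper's calibration procedure is perceptual, and this is not its formulation), scaling the rendered inter-camera separation by a gain g rescales triangulated distances, and hence perceived size under size-distance invariance, by 1/g:

```python
def predicted_scale(ipd_gain: float) -> float:
    """Geometric prediction: scaling the rendered camera separation by
    `ipd_gain` scales triangulated distances (and, under size-distance
    invariance, perceived size) by 1 / ipd_gain."""
    if ipd_gain <= 0:
        raise ValueError("IPD gain must be positive")
    return 1.0 / ipd_gain

# A 5% IPD increment geometrically predicts a ~4.8% reduction in
# triangulated distance, which is the lever the calibration adjusts:
s = predicted_scale(1.05)
```

This illustrates why small per-user IPD increments are a plausible knob for counteracting non-size-constancy across heterogeneous displays.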

    The Effects of Head-Centric Rest Frames on Egocentric Distance Perception in Virtual Reality

    It has been shown through several research investigations that users tend to underestimate distances in virtual reality (VR). Virtual objects that appear close to users wearing a head-mounted display (HMD) might be located at a farther distance in reality. This discrepancy between the actual distance and the distance observed by users in VR has been found to hinder users from benefiting from the full in-VR immersive experience, and several efforts have been directed toward finding the causes and developing tools that mitigate this phenomenon. One hypothesis that stands out in the field of spatial perception is the rest frame hypothesis (RFH), which states that visual frames of reference (RFs), defined as fixed reference points of view in a virtual environment (VE), contribute to minimizing sensory mismatch. RFs have been shown to promote better eye-gaze stability and focus, reduce VR sickness, and improve visual search, along with other benefits. However, their effect on distance perception in VEs has not been evaluated. To explore and better understand the potential effects that RFs can have on distance perception in VR, we used a blind walking task to examine the effect of three head-centric RFs (a mesh mask, a nose, and a hat) on egocentric distance estimation. We performed a mixed-design study where we compared the effect of each of our chosen RFs across different environmental conditions and target distances in different 3D environments. We found that at near and mid-field distances, certain RFs can improve the user's distance estimation accuracy and reduce distance underestimation. Additionally, we found that participants judged distance more accurately in cluttered environments compared to uncluttered environments. Our findings show that the characteristics of the 3D environment are important in distance-estimation-dependent tasks in VR and that the addition of head-centric RFs, a simple avatar augmentation method, can lead to meaningful improvements in distance judgments, user experience, and task performance in VR.

    Spatial Displays and Spatial Instruments

    The conference proceedings topics are divided into two main areas: (1) issues of spatial and picture perception raised by graphical electronic displays of spatial information; and (2) design questions raised by the practical experience of designers actually defining new spatial instruments for use in new aircraft and spacecraft. Each topic is considered from both a theoretical and an applied direction. Emphasis is placed on discussion of phenomena and determination of design principles.

    Sensorimotor interfaces : towards enactivity in HCI

    This thesis explores the application of enactive techniques to human computer interaction, focusing on how devices following ‘sensorimotor’ principles can be blended with interface goals to lead to new perceptual experiences. Building sensorimotor interfaces is an exciting, emerging field of research facing challenges surrounding application, design, training and uptake. To tackle these challenges, this thesis cuts a line of investigation from a review of enactivity in the related field of sensory substitution and augmentation devices, to a schematic taxonomy, model and design guide of ‘the sensorimotor interface’; developed from a theoretically-grounded, enactive approach to cognition. Device, interaction and training guidelines are drawn from this model, formalising the application of the enactive approach to HCI. A readily-available consumer device is then characterised and calibrated in preparation for testing the model validity and associated insights. The process highlights the effects of accessible, easily-implemented calibrations, and the importance of mixed-method approaches in assessing sensorimotor interface potential. The calibrated device is utilised to conduct a detailed, methodological investigation into how concurrently available sensory information affects and contributes to uptake of novel sensorimotor skills. Robust statistical modelling concludes that sensory concurrency has a profound effect on the comprehension and integration of enactive haptic signals, and that efforts to carefully control the nature and degree of sensory concurrency improve user comprehension and enjoyability when engaging with novel sensorimotor tasks, while reducing confusion and stress. The work is concluded by speculation on how the presented derivations, methods and observations can be used to directly influence future sensorimotor interface design in HCI.
This thesis therefore constitutes a primer to the principles and history of sensory substitution and augmentation, details the requirements and limitations of the enactive approach in academia and industry, and brings enactivity forward as an accessible, viable and exciting methodology in interaction design.

    Impact of Imaging and Distance Perception in VR Immersive Visual Experience

    Virtual reality (VR) headsets have evolved to include unprecedented viewing quality. Meanwhile, they have become lightweight, wireless, and low-cost, which has opened the door to new applications and a much wider audience. VR headsets can now provide users with greater understanding of events and accuracy of observation, making decision-making faster and more effective. However, the spread of immersive technologies has seen a slow take-up, with the adoption of virtual reality limited to a few applications, typically related to entertainment. This reluctance appears to be due to the often-necessary change of operating paradigm and some scepticism towards the "VR advantage". The need therefore arises to evaluate the contribution that a VR system can make to user performance, for example in monitoring and decision-making. This will help system designers understand when immersive technologies can be proposed to replace or complement standard display systems such as a desktop monitor. In parallel to the evolution of VR headsets there has been that of 360° cameras, which are now capable of instantly acquiring photographs and videos in stereoscopic 3D (S3D) at very high resolutions. 360° images are innately suited to VR headsets, where the captured view can be observed and explored through the natural rotation of the head. Acquired views can even be experienced and navigated from the inside as they are captured. The combination of omnidirectional images and VR headsets has opened up a new way of creating immersive visual representations. We call it: photo-based VR. This represents a new methodology that combines traditional model-based rendering with high-quality omnidirectional texture-mapping. Photo-based VR is particularly suitable for applications related to remote visits and realistic scene reconstruction, useful for monitoring and surveillance systems, control panels and operator training.
The presented PhD study investigates the potential of photo-based VR representations. It starts by evaluating the role of immersion and user performance in today's graphical visual experience, then uses that as a reference to develop and evaluate new photo-based VR solutions. With the current literature on the photo-based VR experience and associated user performance being very limited, this study builds new knowledge from the proposed assessments. We conduct five user studies on a few representative applications, examining how visual representations can be affected by system factors (camera- and display-related) and how they can influence human factors (such as realism, presence, and emotions). Particular attention is paid to realistic depth perception, in support of which we develop targeted solutions for photo-based VR. They are intended to provide users with a correct perception of space dimensions and object size. We call it: true-dimensional visualization. The presented work contributes to unexplored fields, including photo-based VR and true-dimensional visualization, offering immersive system designers a thorough comprehension of the benefits, potential, and types of applications in which these new methods can make a difference. This thesis manuscript and its findings have been partly presented in scientific publications. In particular, five conference papers in Springer and IEEE symposia, [1], [2], [3], [4], [5], and one journal article in an IEEE periodical [6], have been published.
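Omnidirectional texture-mapping of the kind described typically projects a 360° equirectangular image onto a sphere surrounding the viewer. A minimal sketch of the standard direction-to-texture-coordinate mapping (standard computer graphics, not code from the thesis; the axis convention, with -z forward and y up, is an assumption):

```python
import math

def equirect_uv(x, y, z):
    """Map a unit view direction to (u, v) in an equirectangular image.
    u spans longitude (0..1 around the horizon); v spans latitude
    (0 at the top pole, 1 at the bottom). Assumes -z forward, y up."""
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)
    v = 0.5 - math.asin(y) / math.pi
    return u, v

# Looking straight ahead (along -z) samples the center of the image:
u, v = equirect_uv(0.0, 0.0, -1.0)  # (0.5, 0.5)
```

In a photo-based VR viewer, each frame the head orientation rotates the per-pixel view directions, and this mapping selects the texels to display, which is why head rotation alone suffices to explore the captured scene.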

    The visual perception of distance in action space.

    This work examines our perception of distance within action space (about 2 to 30 m), an ability that is important for various actions. Two general problems are addressed: what information can be used to judge distance accurately, and how is it processed? The dissertation is in two parts. The first part considers the "what" question. Subjects' distance judgments were examined in real, altered, and virtual environments using perceptual tasks or actions to assess the role of a variety of intrinsic and environmental depth cues. The findings show that the perception of angular declination, or height in the visual field, is largely veridical, and that a target is visually located on the projection line from the observer's eyes to it. It is also shown that a continuous ground texture is essential for veridical space perception. Of multiple textural cues, linear perspective is a strong cue for representing the ground and hence judging distance, but compression is a relatively ineffective cue. In the second part, the sequential surface integration process (SSIP) hypothesis is proposed to explain the processing of depth information. The hypothesis asserts that an accurate representation of the ground surface is critical for veridical space perception and that a global ground representation is formed by an integrative process that samples and combines local information over space and time. Confirming this, the experiments found that information from an extended ground area is necessary for judging distance accurately, and that distance was underestimated when an observer's view was restricted to the local ground area around the target. The SSIP hypothesis also suggests that, to build an accurate ground representation, the integrative process might start from near space, where rich depth cues can provide a reliable initial representation, and then progressively extend to distant areas. This is also confirmed by the finding that subjects could judge distance accurately by scanning local patches of the ground surface from near to far, but not in the reverse direction.
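The angular-declination finding has a simple geometric reading: for a target on a flat ground plane, egocentric distance follows from eye height and the angle of declination below the horizon. A minimal illustration (standard ground-plane geometry, not text from the dissertation; the example eye height is arbitrary):

```python
import math

def distance_from_declination(eye_height_m, declination_deg):
    """Distance along a flat ground plane to a target seen at
    `declination_deg` below the horizon by an observer whose eyes are
    `eye_height_m` above the ground: d = h / tan(theta)."""
    return eye_height_m / math.tan(math.radians(declination_deg))

# A steeper perceived declination implies a shorter judged distance,
# consistent with underestimation when the ground surface is impoverished:
d = distance_from_declination(1.6, 10.0)
```

This relation is why an accurate ground representation matters: any error in registering the declination angle, or in representing the ground surface it is measured against, propagates directly into the distance judgment.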

    The role of phonology in visual word recognition: evidence from Chinese

    Posters - Letter/Word Processing V: abstract no. 5024
    The hypothesis of bidirectional coupling of orthography and phonology predicts that phonology plays a role in visual word recognition, as observed in the effects of feedforward and feedback spelling-to-sound consistency on lexical decision. However, because orthography and phonology are closely related in alphabetic languages (homophones in alphabetic languages are usually orthographically similar), it is difficult to exclude an influence of orthography on phonological effects in visual word recognition. Chinese languages contain many written homophones that are orthographically dissimilar, allowing a test of the claim that phonological effects can be independent of orthographic similarity. We report a study of visual word recognition in Chinese based on a mega-analysis of lexical decision performance with 500 characters. The results from multiple regression analyses, after controlling for orthographic frequency, stroke number, and radical frequency, showed main effects of feedforward and feedback consistency, as well as interactions between these variables and phonological frequency and number of homophones. Implications of these results for resonance models of visual word recognition are discussed.

    Interactive effects of orthography and semantics in Chinese picture naming

    Posters - Language Production/Writing: abstract no. 4035
    Picture-naming performance in English and Dutch is enhanced by presentation of a word that is similar in form to the picture name. However, it is unclear whether this facilitation has an orthographic or a phonological locus. We investigated the loci of the facilitation effect in Cantonese Chinese speakers by manipulating semantic, orthographic, and phonological similarity at three SOAs (−100, 0, and +100 msec). We identified an effect of orthographic facilitation that was independent of and larger than phonological facilitation across all SOAs. Semantic interference was also found at SOAs of −100 and 0 msec. Critically, an interaction of semantics and orthography was observed at an SOA of +100 msec. This interaction suggests that independent effects of orthographic facilitation on picture naming are located either at the level of semantic processing or at the lemma level, and are not due to the activation of picture-name segments at the level of phonological retrieval.