
    Integrative visual augmentation content and its optimization based on human visual processing

    In many daily visual tasks, our brain is remarkably good at prioritizing visual information. Nonetheless, it cannot always perform optimally, especially in an increasingly demanding world. Supplementary visual guidance could enrich our lives in many respects, at both the individual and the population scale. Through rapid technological advances such as VR and AR systems, diverse visual cues show powerful potential to deliberately guide attention and improve users' performance in daily tasks. Existing solutions, however, face the challenge of overloading the user and overriding their natural strategy with excessive visual information once digital content is superimposed on the real-world environment. Subtle augmentation content that takes human visual processing into account is an essential milestone towards AR systems that are adaptive and supportive rather than overwhelming. The focus of the present thesis was therefore to investigate how manipulating the spatial and temporal properties of visual cues affects human performance. Based on the findings of three studies published in peer-reviewed journals, I consider various challenging everyday settings and propose perceptually optimal augmentation solutions. I furthermore discuss possible extensions of the present work and recommendations for future research in this exciting field.

    gEYEded: Subtle and Challenging Gaze-Based Player Guidance in Exploration Games

    This paper investigates the effects of gaze-based player guidance on perceived game experience, performance, and challenge in a first-person exploration game. In contrast to existing research, the proposed approach takes the game context into account, providing players not only with guidance but also with an engaging, exploration-focused game experience. This is achieved by incorporating gaze-sensitive areas that indicate the location of relevant game objects. A comparative study was carried out to validate our concept and to examine whether a game supported by a gaze guidance feature triggers a more immersive game experience than a crosshair guidance version and a version without any guidance support. Overall, our findings reveal a more positive impact of the gaze-based guidance approach on experience and performance compared to the other two conditions. However, subjects had a similar impression of the game challenge in all conditions.
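
    The gaze-sensitive-area mechanism described above can be sketched in a few lines. The following Python fragment is purely illustrative and not taken from the paper; the names (GazeArea, dwell_threshold_s) and the dwell-time trigger are assumptions about one plausible implementation.

        # Illustrative sketch only: a gaze-sensitive area that reveals a hint
        # once the player's gaze has dwelled inside it long enough.
        from dataclasses import dataclass, field

        @dataclass
        class GazeArea:
            center: tuple                    # (x, y) in normalized screen coordinates
            radius: float                    # extent of the sensitive area
            dwell_threshold_s: float = 0.5   # assumed dwell time before the hint shows
            _dwell_s: float = field(default=0.0, repr=False)

            def update(self, gaze_xy: tuple, dt: float) -> bool:
                """Accumulate dwell time while gaze is inside; True = show the hint."""
                dx = gaze_xy[0] - self.center[0]
                dy = gaze_xy[1] - self.center[1]
                inside = dx * dx + dy * dy <= self.radius * self.radius
                self._dwell_s = self._dwell_s + dt if inside else 0.0
                return self._dwell_s >= self.dwell_threshold_s

        # Hypothetical usage at 90 Hz: area.update(gaze_xy=(0.48, 0.52), dt=1/90)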

    Real virtuality: emerging technology for virtually recreating reality


    The use of embedded context-sensitive attractors for clinical walking test guidance in virtual reality

    Virtual reality is increasingly used in rehabilitation and can provide additional motivation when working towards therapeutic goals. However, a particular problem for patients concerns their ability to plan routes in unfamiliar environments. The aim of this study was therefore to explore how visual cues, namely embedded context-sensitive attractors, can guide attention and walking direction in VR for clinical walking interventions. The study used a butterfly as the embedded context-sensitive attractor to guide participant locomotion around the clinical figure-of-eight walk test, limiting the use of verbal instructions. We investigated the effect of varying the number of attractors on figure-of-eight path following, and whether there are any negative impacts on perceived autonomy or workload. A total of 24 participants took part in the study and completed six attractor conditions in a counterbalanced order. They also experienced a control VE (no attractors) at the beginning and end of the protocol. Each VE condition lasted 1 minute and manipulated the number of attractors (singular or multiple) alongside the placement of turning markers (virtual trees) used to represent the cones used in clinical settings for the figure-of-eight walk test. Results suggested that embedded context-sensitive attractors can be used to guide walking direction along a figure of eight in VR without impacting perceived autonomy and workload. However, there appears to be a saturation point with regard to the effectiveness of attractors: too few objects in a VE may reduce feelings of intrinsic motivation, and too many objects in a VE may reduce the effectiveness of attractors for guiding individuals along a figure-of-eight path. We conclude by indicating future research directions for attractors and their use as a guide for walking direction.
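
    A simple way to picture the attractor placement manipulated above is to distribute waypoints along a figure-of-eight curve. The Python sketch below uses a simple lemniscate parameterization; the curve, its dimensions, and the even spacing are assumptions for illustration, not the study's actual layout.

        # Illustrative sketch only: candidate attractor positions evenly spaced
        # (in parameter t) along a figure-of-eight walking path.
        import math

        def figure_eight_waypoints(n_attractors: int, half_width_m: float = 2.5):
            """Return n_attractors (x, y) positions on a figure-of-eight curve."""
            waypoints = []
            for i in range(n_attractors):
                t = 2.0 * math.pi * i / n_attractors
                x = half_width_m * math.sin(t)                 # long axis of the eight
                y = half_width_m * math.sin(t) * math.cos(t)   # crossing lobes
                waypoints.append((x, y))
            return waypoints

        print(figure_eight_waypoints(6))  # e.g. six attractors, a "multiple" condition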

    Eyewear Computing – Augmenting the Human with Head-Mounted Wearable Assistants

    The seminar was composed of workshops and tutorials on head-mounted eye tracking, egocentric vision, optics, and head-mounted displays. The seminar welcomed 30 academic and industry researchers from Europe, the US, and Asia with diverse backgrounds, including wearable and ubiquitous computing, computer vision, developmental psychology, optics, and human-computer interaction. In contrast to several previous Dagstuhl seminars, we used an ignite talk format to reduce the time of talks to one half-day and to leave the rest of the week for hands-on sessions, group work, general discussions, and socialising. The key results of this seminar are 1) the identification of key research challenges and summaries of breakout groups on multimodal eyewear computing, egocentric vision, security and privacy issues, skill augmentation and task guidance, eyewear computing for gaming, as well as prototyping of VR applications, 2) a list of datasets and research tools for eyewear computing, 3) three small-scale datasets recorded during the seminar, 4) an article in ACM Interactions entitled “Eyewear Computers for Human-Computer Interaction”, as well as 5) two follow-up workshops on “Egocentric Perception, Interaction, and Computing” at the European Conference on Computer Vision (ECCV) and on “Eyewear Computing” at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp).

    MoPeDT: A Modular Head-Mounted Display Toolkit to Conduct Peripheral Vision Research

    Peripheral vision plays a significant role in human perception and orientation. However, its relevance for human-computer interaction, especially for head-mounted displays, has not been fully explored yet. In the past, a few specialized appliances were developed to display visual cues in the periphery, each designed for a single specific use case only. A multi-purpose headset to exclusively augment peripheral vision did not exist until now. We introduce MoPeDT: Modular Peripheral Display Toolkit, a freely available, flexible, reconfigurable, and extendable headset for conducting peripheral vision research. MoPeDT can be built with a 3D printer and off-the-shelf components. It features multiple spatially configurable near-eye display modules and full 3D tracking inside and outside the lab. With our system, researchers and designers may easily develop and prototype novel peripheral vision interaction and visualization techniques. We demonstrate the versatility of our headset with several possible applications for spatial awareness, balance, interaction, feedback, and notifications. We conducted a small study to evaluate the usability of the system. We found that participants were largely not irritated by the peripheral cues, but the headset's comfort could be further improved. We also evaluated our system against established heuristics for human-computer interaction toolkits to show how MoPeDT adapts to changing requirements, lowers the entry barrier for peripheral vision research, and facilitates expressive power through the combination of modular building blocks. (Comment: Accepted IEEE VR 2023 conference paper.)
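
    As a concrete illustration of driving multiple spatially configurable near-eye display modules, one might route a directional cue to the module nearest the event's direction. The Python sketch below is hypothetical, not MoPeDT's actual interface; the module layout and function names are assumptions.

        # Hypothetical sketch (not MoPeDT's real API): route a directional cue
        # to the nearest of several peripheral display modules.

        # Assumed module layout: azimuth angles in degrees, 0 = straight ahead.
        MODULE_AZIMUTHS_DEG = [-90, -45, 45, 90]

        def nearest_module(event_azimuth_deg: float) -> int:
            """Index of the display module closest to the event direction."""
            return min(range(len(MODULE_AZIMUTHS_DEG)),
                       key=lambda i: abs(MODULE_AZIMUTHS_DEG[i] - event_azimuth_deg))

        # A notification arriving from 60 degrees to the right lights module index 2.
        print(nearest_module(60.0))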

    LoCoMoTe – a framework for classification of natural locomotion in VR by task, technique and modality

    Get PDF
    Virtual reality (VR) research has provided overviews of locomotion techniques, how they work, their strengths, and the overall user experience. Considerable research has investigated new methodologies, particularly machine learning, to develop redirection algorithms. To best support the development of redirection algorithms through machine learning, we must understand how best to replicate human navigation and behaviour in VR, which can be supported by the accumulation of results produced through live-user experiments. However, it can be difficult to identify, select, and compare relevant research without a pre-existing framework in an ever-growing research field. This work therefore aimed to facilitate the ongoing structuring and comparison of the VR-based natural walking literature by providing a standardised framework for researchers to utilise. We applied thematic analysis to the study methodology descriptions of 140 VR-based papers that contained live-user experiments. From this analysis, we developed the LoCoMoTe framework with three themes: navigational decisions, technique implementation, and modalities. The LoCoMoTe framework provides a standardised approach to structuring and comparing experimental conditions. The framework should be continually updated to categorise and systematise knowledge, and to aid in identifying research gaps and framing discussions.
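
    To make the three themes concrete, an experimental condition could be coded as a simple record under the framework. The Python sketch below is illustrative only; the field names and example vocabulary are assumptions, not LoCoMoTe's published coding scheme.

        # Illustrative sketch: one way to encode an experimental condition under
        # LoCoMoTe's three themes. Field values are invented examples.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class LocomotionCondition:
            navigational_decision: str   # theme 1, e.g. "guided route"
            technique: str               # theme 2, e.g. "redirected walking"
            modality: str                # theme 3, e.g. "visual"

        condition = LocomotionCondition("guided route", "redirected walking", "visual")
        print(condition)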