MoPeDT: A Modular Head-Mounted Display Toolkit to Conduct Peripheral Vision Research
Peripheral vision plays a significant role in human perception and orientation. However, its relevance for human-computer interaction, especially head-mounted displays, has not been fully explored yet. In the past, a few specialized appliances were developed to display visual cues in the periphery, each designed for a single specific use case only. A multi-purpose headset to exclusively augment peripheral vision did not yet exist. We introduce MoPeDT: Modular Peripheral Display Toolkit, a freely available, flexible, reconfigurable, and extendable headset for conducting peripheral vision research. MoPeDT can be built with a 3D printer and off-the-shelf components. It features multiple spatially configurable near-eye display modules and full 3D tracking inside and outside the lab. With our system, researchers and designers may easily develop and prototype novel peripheral vision interaction and visualization techniques. We demonstrate the versatility of our headset with several possible applications for spatial awareness, balance, interaction, feedback, and notifications. We conducted a small study to evaluate the usability of the system. We found that participants were largely not irritated by the peripheral cues, but the headset's comfort could be further improved. We also evaluated our system against established heuristics for human-computer interaction toolkits to show how MoPeDT adapts to changing requirements, lowers the entry barrier for peripheral vision research, and facilitates expressive power in the combination of modular building blocks. Comment: Accepted IEEE VR 2023 conference paper.
RadialLight: Exploring radial peripheral LEDs for directional cues in head-mounted displays
Current head-mounted displays (HMDs) for Virtual Reality (VR) and Augmented Reality (AR) have a limited field-of-view (FOV). This limited FOV further decreases the already restricted human visual range and amplifies the problem of objects going out of view. Therefore, we explore the utility of augmenting HMDs with RadialLight, a peripheral light display implemented as 18 radially positioned LEDs around each eye to cue direction towards out-of-view objects. We first investigated direction estimation accuracy of multi-colored cues presented to one versus two eyes. We then evaluated direction estimation accuracy and search time performance for locating out-of-view objects in two representative 360° video VR scenarios. Key findings show that participants could not distinguish between LED cues presented to one or both eyes simultaneously, participants estimated LED cue direction within a maximum average deviation of 11.8°, and out-of-view objects in less distracting scenarios were selected faster. Furthermore, we provide implications for building peripheral HMDs.
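The radial cue described above amounts to quantizing an object's bearing onto one of 18 evenly spaced LEDs. As a minimal sketch of that idea (the layout convention, 0° = up with clockwise indexing, is an assumption, not the paper's implementation):

```python
NUM_LEDS = 18  # radially positioned LEDs around each eye, per the abstract


def nearest_led(bearing_deg: float) -> int:
    """Map an out-of-view object's bearing (0 deg = up, clockwise,
    hypothetical convention) to the index of the nearest LED (0..17).

    With 18 LEDs, adjacent LEDs are 20 deg apart, so the cue can be
    at most 10 deg from the true bearing.
    """
    step = 360.0 / NUM_LEDS  # 20 degrees between adjacent LEDs
    return round(bearing_deg / step) % NUM_LEDS


# An object bearing 95 deg clockwise from up maps to LED index 5,
# since 95 / 20 = 4.75 rounds to 5.
```

Note the quantization error bound (±10°) is close to the 11.8° average estimation deviation the study reports, which is one way to read that result.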
Tangible user interfaces: past, present and future directions
In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs including perspectives from cognitive science, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.
Neuro-electronic technology in medicine and beyond
This dissertation looks at the technology and social issues involved with interfacing electronics directly to the human nervous system, in particular the methods for both reading and stimulating nerves. The development and use of cochlear implants is discussed and compared with recent developments in artificial vision. The final sections consider a future for non-medicinal applications of neuro-electronic technology. Social attitudes towards use for both medicinal and non-medicinal purposes are discussed, and the viability of use in the latter case assessed.
I Am The Passenger: How Visual Motion Cues Can Influence Sickness For In-Car VR
This paper explores the use of VR Head Mounted Displays (HMDs) in-car and in-motion for the first time. Immersive HMDs are becoming everyday consumer items and, as they offer new possibilities for entertainment and productivity, people will want to use them during travel in, for example, autonomous cars. However, their use is confounded by motion sickness caused in part by the restricted visual perception of motion conflicting with physically perceived vehicle motion (accelerations/rotations detected by the vestibular system). Whilst VR HMDs restrict visual perception of motion, they could also render it virtually, potentially alleviating sensory conflict. To study this problem, we conducted the first on-road and in-motion study to systematically investigate the effects of various visual presentations of the real-world motion of a car on the sickness and immersion of VR HMD-wearing passengers. We established new baselines for VR in-car motion sickness and found that there is no single best presentation with respect to balancing sickness and immersion. Instead, user preferences suggest that different solutions are required for differently susceptible users to make in-car VR usable. This work provides formative insights for VR designers and an entry point for further research into enabling the use of VR HMDs, and the rich experiences they offer, when travelling.
Virtual Reality and Choreographic Practice:The Potential for New Creative Methods
Virtual reality (VR) is becoming an increasingly intriguing space for dancers and choreographers. Choreographers may find new possibilities emerging in using virtual reality to create movement, and the WhoLoDancE: Whole-Body Interaction Learning for Dance Education project is developing tools to assist in this process. The interdisciplinary team, which includes dancers, choreographers, educators, artists, coders, technologists and system architects, has collaborated in engaging, discussing, analysing, testing and working with end-users to help with thinking about the issues that emerge in the creation of these tools. The paper sets out to explore the creative potential of VR in the context of WhoLoDancE and how this may offer new insights for the choreographer and dancer. We pay attention to the virtual environment, the virtual performance and the virtual dancer as some of the key components equipping the choreographer in the creative process and informing the dancing body. The cyclical process from live body to virtual and back to the dancing body as a choreographic device is an innovative way to approach practice. This approach may lead to new insights and innovations in choreographic methods that may extend beyond the project and ultimately take dance performance in a new direction.
Augmenting low-fidelity flight simulation training devices via amplified head rotations
Due to economic and operational constraints, there is an increasing demand from aviation operators and training manufacturers to extract maximum training usage from the lower fidelity suite of flight simulators. It is possible to augment low-fidelity flight simulators to achieve equivalent performance compared to high-fidelity setups but at reduced cost and greater mobility. In particular for visual manoeuvres, the virtual reality technique of head-tracking amplification for virtual view control enables full field-of-regard access even with limited field-of-view displays. This research quantified the effects of this technique on piloting performance, workload and simulator sickness by applying it to a fixed-base, low-fidelity, low-cost flight simulator. In two separate simulator trials, participants had to land a simulated aircraft from a visual traffic circuit pattern whilst scanning for airborne traffic.
Initially, a single augmented display was compared to the common triple display setup in front of the pilot. Starting from the base leg, pilots exhibited tighter turns closer to the desired ground track and conducted visual scans more actively using the augmented display. This was followed up by a second experiment to quantify the scalability of augmentation towards larger displays and fields of view. Task complexity was increased by starting the traffic pattern from the downwind leg. Triple displays in front of the pilot yielded the best compromise, delivering flight performance and traffic detection scores just below those of the triple projectors but without an increase in track deviations; the pilots were also less prone to simulator sickness symptoms.
This research demonstrated that head augmentation yields clear benefits of quick user adaptation, low cost and ease of systems integration, together with the capability to negate the impact of display size, without incurring significant penalties in workload or simulator sickness. The impact of this research is that it facilitates future flight training solutions using this augmentation technique to meet budgetary and mobility requirements. This enables deployment of simulators in large numbers to deliver expanded mission rehearsal previously unattainable within this class of low-fidelity simulators, with no restrictions on transfer to other training media.
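The core of head-tracking amplification is a gain applied to the tracked head angle, so a small physical head turn sweeps the virtual view across the full field of regard despite a limited field-of-view display. A minimal sketch of that mapping (the gain value and the clamping range are illustrative assumptions, not values from the thesis):

```python
def amplified_yaw(physical_yaw_deg: float, gain: float = 3.0) -> float:
    """Map physical head yaw to virtual camera yaw via an amplification gain.

    With a hypothetical gain of 3.0, a comfortable +/-60 deg physical head
    turn covers the full +/-180 deg field of regard. The result is clamped
    so the virtual view cannot wrap past directly behind the pilot.
    """
    virtual = physical_yaw_deg * gain
    return max(-180.0, min(180.0, virtual))


# A 60 deg physical head turn yields a 180 deg virtual view rotation.
```

In practice such systems often use a nonlinear gain (small near the centre, larger toward the edges) to keep the forward view stable, but the linear form above captures the principle the abstract describes.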