1,202 research outputs found
Mid-air haptic rendering of 2D geometric shapes with a dynamic tactile pointer
An important challenge that affects ultrasonic mid-air haptics, in contrast to physical touch, is that we lose certain exploratory procedures such as contour following. This makes the task of perceiving geometric properties and shape identification more difficult. Meanwhile, the growing interest in mid-air haptics and their application to various new areas requires an improved understanding of how we perceive specific haptic stimuli, such as icons and control dials, in mid-air. We address this challenge by investigating static and dynamic methods of displaying 2D geometric shapes in mid-air. We display a circle, a square, and a triangle, in either a static or a dynamic condition, using ultrasonic mid-air haptics. In the static condition, the shapes are presented as a full outline in mid-air, while in the dynamic condition a tactile pointer is moved around the perimeter of the shapes. We measure participants' accuracy and confidence in identifying shapes in two controlled experiments (n1 = 34, n2 = 25). Results reveal that in the dynamic condition people recognise shapes significantly more accurately and with higher confidence. We also find that representing polygons as a set of individually drawn haptic strokes, with a short pause at the corners, drastically enhances shape recognition accuracy. Our research supports the design of mid-air haptic user interfaces in application scenarios such as in-car interactions or assistive technology in education.
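The dynamic condition above, a single tactile focal point traced along the shape's perimeter with a short pause at each corner, can be sketched as pure geometry. This is a minimal illustration only: the function name `trace_perimeter` and the speed, pause, and sampling values are assumptions, not the authors' parameters.

```python
import math

def trace_perimeter(vertices, speed=0.2, corner_pause=0.1, dt=0.01):
    """Generate (t, x, y) samples that move a tactile pointer around a
    closed polygon at constant speed, dwelling briefly at each corner
    so each edge reads as an individually drawn stroke."""
    samples, t = [], 0.0
    n = len(vertices)
    for i in range(n):
        (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
        seg_len = math.hypot(x1 - x0, y1 - y0)
        steps = max(1, round(seg_len / (speed * dt)))
        for k in range(steps + 1):          # endpoints repeat at corners,
            f = k / steps                   # which is where the dwell sits
            samples.append((t, x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
            t += dt
        t += corner_pause                   # pause before the next stroke
    return samples
```

Feeding these timestamped positions to an ultrasonic focal-point renderer would reproduce the stroke-by-stroke presentation the experiments found most recognisable.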
Performance Envelopes of Virtual Keyboard Text Input Strategies in Virtual Reality
Virtual and Augmented Reality deliver engaging interaction experiences that can transport and extend the capabilities of the user. To ensure these paradigms are more broadly usable and effective, however, it is necessary to also deliver many of the conventional functions of a smartphone or personal computer. It remains unclear how conventional input tasks, such as text entry, can best be translated into virtual and augmented reality. In this paper we examine the performance potential of four alternative text entry strategies in virtual reality (VR). These four strategies are selected to provide full coverage of two fundamental design dimensions: i) physical surface association; and ii) number of engaged fingers. Specifically, we examine typing with index fingers on a surface and in mid-air, and typing using all ten fingers on a surface and in mid-air. The central objective is to evaluate the human performance potential of these four typing strategies without being constrained by current tracking and statistical text decoding limitations. To this end we introduce an auto-correction simulator that uses knowledge of the stimulus to emulate statistical text decoding within constrained experimental parameters, and we use high-precision motion tracking hardware to visualise and detect fingertip interactions. We find that alignment of the virtual keyboard with a physical surface delivers significantly faster entry rates than a mid-air keyboard. Also, users overwhelmingly fail to effectively engage all ten fingers in mid-air typing, resulting in slower entry rates and higher error rates compared to just using two index fingers. In addition to identifying the envelopes of human performance for the four strategies investigated, we also provide a detailed analysis of the underlying features that distinguish each strategy in terms of its performance and behaviour. This work was supported by Facebook Reality Labs and by EPSRC (grant EP/R004471/1).
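The auto-correction simulator is described only at a high level. One way to emulate stimulus-aware decoding is to accept a touch as the intended character whenever it lands near that key's centre, falling back to the nearest key otherwise. The sketch below is a hypothetical simplification under those assumptions, not the paper's decoder; the layout coordinates and `radius` threshold are invented for illustration.

```python
import math

# Hypothetical key centres (x, y) for a QWERTY layout, in key-width units,
# with each row offset by half a key relative to the one above it.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEYS = {ch: (col + 0.5 * row, float(row))
        for row, line in enumerate(ROWS)
        for col, ch in enumerate(line)}

def decode(touches, stimulus, radius=1.5):
    """Stimulus-aware decoding: output the intended character when the
    touch lands within `radius` of its key centre; otherwise fall back
    to the literal nearest key (an uncorrected error)."""
    out = []
    for (x, y), intended in zip(touches, stimulus):
        ix, iy = KEYS[intended]
        if math.hypot(x - ix, y - iy) <= radius:
            out.append(intended)
        else:
            out.append(min(KEYS, key=lambda k: math.hypot(x - KEYS[k][0],
                                                          y - KEYS[k][1])))
    return "".join(out)
```

The design choice this illustrates is that knowing the stimulus lets the simulator forgive plausible noise while still penalising touches that no realistic decoder could rescue.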
Comparing Hand Gestures and a Gamepad Interface for Locomotion in Virtual Environments
Hand gestures are a new and promising interface for locomotion in virtual environments. While several previous studies have proposed different hand gestures for virtual locomotion, little is known about how they differ in performance and user preference in virtual locomotion tasks. In this paper, we present three hand gesture interfaces and their locomotion algorithms: the Finger Distance gesture, the Finger Number gesture, and the Finger Tapping gesture. These gestures were inspired by previous studies of gesture-based locomotion interfaces and are typical gestures that people are familiar with from daily life. Implementing these hand gesture interfaces in the present study enabled us to compare them systematically. In addition, to compare their usability with gamepad-based locomotion interfaces, we also designed and implemented a gamepad interface based on the Xbox One controller. We conducted empirical studies comparing these four interfaces in two virtual locomotion tasks. A desktop setup was used instead of sharing a head-mounted display among participants because of Covid-19 concerns. Through these tasks, we assessed the performance and user preference of the interfaces for speed control and waypoint navigation. Results showed that the user preference and performance of the Finger Distance gesture were close to those of the gamepad interface, and that the Finger Number gesture in turn performed comparably to the Finger Distance gesture. Our study demonstrates that the Finger Distance and Finger Number gestures are very promising interfaces for virtual locomotion. We also discuss why the Finger Tapping gesture needs further improvement before it can be used for virtual walking.
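The abstract names the Finger Distance gesture as a speed-control interface but does not give its mapping. A plausible sketch, assuming a linear mapping from thumb-index fingertip distance to forward speed (the function name, distance range, and maximum speed are all hypothetical, not the authors' implementation):

```python
import math

def finger_distance_speed(thumb, index, d_min=0.02, d_max=0.10, v_max=3.0):
    """Map the thumb-index fingertip distance (metres) to a forward
    locomotion speed in [0, v_max] m/s via clamped linear interpolation:
    pinched fingers stop the avatar, a fully open pinch moves at v_max."""
    d = math.dist(thumb, index)            # Euclidean fingertip distance
    f = (d - d_min) / (d_max - d_min)      # normalise to the usable range
    return v_max * min(1.0, max(0.0, f))   # clamp to [0, 1], then scale
```

A mapping of this shape would be evaluated per tracking frame, with the resulting speed applied along the current view or pointing direction.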
Electrotactile feedback applications for hand and arm interactions: A systematic review, meta-analysis, and future directions
Haptic feedback is critical in a broad range of human-machine and human-computer interaction applications. However, the high cost and low portability/wearability of haptic devices remain unresolved issues, severely limiting the adoption of this otherwise promising technology. Electrotactile interfaces have the advantage of being more portable and wearable thanks to their smaller actuators, lower power consumption, and lower manufacturing cost. Applications of electrotactile feedback have been explored in human-computer and human-machine interaction for facilitating hand-based interactions in areas such as prosthetics, virtual reality, robotic teleoperation, surface haptics, portable devices, and rehabilitation. This paper presents a technological overview of electrotactile feedback, as well as a systematic review and meta-analysis of its applications for hand-based interactions. We discuss the different electrotactile systems according to the type of application, and we quantitatively aggregate the findings to offer a high-level overview of the state of the art and to suggest future directions. Electrotactile feedback systems showed increased portability/wearability, and they were successful in rendering and/or augmenting most tactile sensations, eliciting perceptual processes, and improving performance in many scenarios. However, knowledge gaps (e.g., embodiment), technical drawbacks (e.g., recurrent calibration, electrode durability), and methodological limitations (e.g., sample size) were detected, which should be addressed in future studies.
Comment: 18 pages, 1 table, 8 figures, under review in Transactions on Haptics. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. Upon acceptance of the article by IEEE, the preprint will be replaced with the accepted version.
Interaction Methods for Smart Glasses : A Survey
Since the launch of Google Glass in 2014, smart glasses have mainly been designed to support micro-interactions. Their ultimate goal of becoming an augmented reality interface has not yet been attained because their controls remain cumbersome. Augmented reality involves superimposing interactive computer graphics onto physical objects in the real world. This survey reviews current research issues in the area of human-computer interaction for smart glasses. It first studies the smart glasses available on the market and then investigates the interaction methods proposed in the wide body of literature. These methods can be classified into hand-held, touch, and touchless input; this paper focuses mainly on touch and touchless input. Touch input can be further divided into on-device and on-body input, while touchless input can be classified into hands-free and freehand input. Next, we summarize the existing research efforts and trends, in which touch and touchless input are evaluated against a total of eight interaction goals. Finally, we discuss several key design challenges and the possibility of multi-modal input for smart glasses. Peer reviewed
Haptics: Science, Technology, Applications
This open access book constitutes the proceedings of the 12th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2020, held in Leiden, The Netherlands, in September 2020. The 60 papers presented in this volume were carefully reviewed and selected from 111 submissions. They were organized in topical sections on haptic science, haptic technology, and haptic applications. This year's focus is on accessibility.