Mid-air haptic rendering of 2D geometric shapes with a dynamic tactile pointer
An important challenge that affects ultrasonic mid-air haptics, in contrast to physical touch, is that we lose certain exploratory procedures such as contour following. This makes the task of perceiving geometric properties and identifying shapes more difficult. Meanwhile, the growing interest in mid-air haptics and their application to various new areas requires an improved understanding of how we perceive specific haptic stimuli, such as icons and control dials in mid-air. We address this challenge by investigating static and dynamic methods of displaying 2D geometric shapes in mid-air. We display a circle, a square, and a triangle, in either a static or a dynamic condition, using ultrasonic mid-air haptics. In the static condition, the shapes are presented as a full outline in mid-air, while in the dynamic condition, a tactile pointer is moved around the perimeter of each shape. We measure participants' accuracy and confidence in identifying shapes in two controlled experiments (n1 = 34, n2 = 25). Results reveal that in the dynamic condition people recognise shapes significantly more accurately and with higher confidence. We also find that representing polygons as a set of individually drawn haptic strokes, with a short pause at the corners, drastically enhances shape recognition accuracy. Our research supports the design of mid-air haptic user interfaces in application scenarios such as in-car interactions or assistive technology in education.
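A minimal sketch of how such a dynamic pointer trajectory could be generated, assuming the shape is traced edge by edge at constant speed with a dwell at each corner; the speed, pause duration and sample rate below are illustrative, not the paper's values:

```python
import math

def polygon_pointer_path(vertices, speed, corner_pause, rate=200):
    """Sample 2D positions for a tactile pointer that traces a polygon
    stroke by stroke, pausing at every corner.

    vertices     -- polygon corners as (x, y) tuples, in metres
    speed        -- pointer speed along each edge, m/s
    corner_pause -- dwell time at each corner, seconds
    rate         -- output sample rate in Hz (device update rate)
    """
    samples = []
    n = len(vertices)
    for i in range(n):
        ax, ay = vertices[i]
        bx, by = vertices[(i + 1) % n]
        # Dwell at the corner that starts this stroke.
        samples += [(ax, ay)] * int(corner_pause * rate)
        # Interpolate along the edge at constant speed.
        length = math.hypot(bx - ax, by - ay)
        steps = max(1, int(length / speed * rate))
        for s in range(steps):
            t = s / steps
            samples.append((ax + t * (bx - ax), ay + t * (by - ay)))
    return samples

# A 6 cm square traced at 10 cm/s with 100 ms corner pauses.
square = [(0, 0), (0.06, 0), (0.06, 0.06), (0, 0.06)]
path = polygon_pointer_path(square, speed=0.1, corner_pause=0.1)
```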
Do That, There: An Interaction Technique for Addressing In-Air Gesture Systems
When users want to interact with an in-air gesture system, they must first address it. This involves finding where to gesture, so that their actions can be sensed, and how to direct their input towards that system, so that they do not also affect other systems or cause unwanted effects. This is an important problem [6] which lacks a practical solution. We present an interaction technique that uses multimodal feedback to help users address in-air gesture systems. The feedback tells them how (“do that”) and where (“there”) to gesture, using light, audio and tactile displays. By doing that there, users can direct their input to the system they wish to interact with, in a place where their gestures can be sensed. We discuss the design of our technique and three experiments investigating its use, finding that users can “do that” reliably (93.2%–99.9% success) while locating “there” accurately (within 51 mm–80 mm) and quickly (in 3.7 s).
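The addressing problem itself can be made concrete with a small sketch: check whether a tracked hand lies inside the sensing volume and choose cues accordingly. The zone geometry and cue names here are hypothetical placeholders, not the feedback design evaluated in the paper.

```python
from dataclasses import dataclass

@dataclass
class SensingZone:
    """Axis-aligned box where in-air gestures can be sensed (metres)."""
    centre: tuple  # (x, y, z)
    size: tuple    # (width, height, depth)

    def contains(self, hand):
        return all(abs(h - c) <= s / 2
                   for h, c, s in zip(hand, self.centre, self.size))

def addressing_feedback(hand, zone):
    """Pick cues telling the user how ('do that') and where ('there')."""
    if zone.contains(hand):
        # In range: confirm that gestures made here will be sensed.
        return {"light": "steady", "audio": "chime", "tactile": "pulse"}
    # Out of range: nudge the user back towards the zone centre.
    direction = "left" if zone.centre[0] < hand[0] else "right"
    return {"light": f"blink_{direction}", "audio": None, "tactile": None}

zone = SensingZone(centre=(0.0, 0.0, 0.5), size=(0.4, 0.4, 0.4))
print(addressing_feedback((0.5, 0.0, 0.5), zone))  # hand too far right
```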
LeviSense: a platform for the multisensory integration in levitating food and insights into its effect on flavour perception
Eating is one of the most multisensory experiences in everyday life. All five of our senses (i.e. taste, smell, vision, hearing and touch) are involved, even if we are not aware of it. However, while multisensory integration has been well studied in psychology, there is no single platform for systematically testing the effects of different stimuli. This gap leaves unresolved challenges in the design of taste-based immersive experiences. Here, we present LeviSense: the first system designed for multisensory integration in gustatory experiences based on levitated food. Our system enables the systematic exploration of different sensory effects on eating experiences. It also opens up new opportunities for other professionals (e.g., molecular gastronomy chefs) looking for innovative taste-delivery platforms. We describe the design process behind LeviSense and conduct two experiments to test a subset of the crossmodal combinations (i.e., taste and vision, taste and smell). Our results show how different lighting and smell conditions affect perceived taste intensity, pleasantness, and satisfaction. We discuss how LeviSense creates new technical, creative, and expressive possibilities in a series of emerging design spaces within Human-Food Interaction.
EyeScout: Active Eye Tracking for Position and Movement Independent Gaze Interaction with Large Public Displays
While gaze holds a lot of promise for hands-free interaction with public displays, remote eye trackers with their confined tracking box restrict users to a single stationary position in front of the display. We present EyeScout, an active eye tracking system that combines an eye tracker mounted on a rail system with a computational method to automatically detect and align the tracker with the user's lateral movement. EyeScout addresses key limitations of current gaze-enabled large public displays by offering two novel gaze-interaction modes for a single user: in "Walk then Interact" the user can walk up to an arbitrary position in front of the display and interact, while in "Walk and Interact" the user can interact even while on the move. We report on a user study showing that EyeScout is well perceived by users, extends a public display's sweet spot into a sweet line, and reduces gaze interaction kick-off time to 3.5 seconds, a 62% improvement over state-of-the-art solutions. We discuss sample applications that demonstrate how EyeScout can enable position- and movement-independent gaze interaction with large public displays.
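One plausible way to realise the alignment step is a proportional controller that slides the tracker carriage towards the user's lateral position, as sketched below; the gain, speed limit and deadband are illustrative assumptions rather than EyeScout's actual control method.

```python
def rail_step(tracker_x, user_x, dt, gain=2.0, max_speed=0.5, deadband=0.02):
    """One control update for an eye tracker carriage on a rail.

    tracker_x -- current carriage position along the rail (m)
    user_x    -- user's lateral position from body tracking (m)
    dt        -- time since the last update (s)
    """
    error = user_x - tracker_x
    if abs(error) < deadband:
        return tracker_x  # close enough; avoid jittering the rail
    velocity = max(-max_speed, min(max_speed, gain * error))
    return tracker_x + velocity * dt

# Simulate the carriage catching up with a user standing 1 m to the side.
x = 0.0
for _ in range(100):
    x = rail_step(x, user_x=1.0, dt=0.05)
```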
Seamless and Secure VR: Adapting and Evaluating Established Authentication Systems for Virtual Reality
Virtual reality (VR) headsets are enabling a wide range of new opportunities for users. For example, in the near future users may be able to visit virtual shopping malls and virtually join international conferences. These and many other scenarios pose new questions with regard to privacy and security, in particular the authentication of users within the virtual environment. As a first step towards seamless VR authentication, this paper investigates the direct transfer of well-established concepts (PIN, Android unlock patterns) into VR. In a pilot study (N = 5) and a lab study (N = 25), we adapted existing mechanisms and evaluated their usability and security for VR. The results indicate that both PINs and patterns are well suited for authentication in VR. We found that the usability of both methods matched the performance known from the physical world. In addition, the private visual channel makes authentication harder to observe, indicating that authentication in VR using traditional concepts already achieves a good balance in the trade-off between usability and security. The paper contributes to a better understanding of authentication within VR environments by providing the first investigation of established authentication methods within VR, and presents a base layer for the design of future authentication schemes intended for use in VR environments only.
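Once a PIN or pattern has been captured in VR, verification reduces to a standard credential check. A minimal sketch, assuming a salted PBKDF2 hash and a constant-time comparison (common practice, not necessarily the study's implementation):

```python
import hashlib
import hmac

def verify_pin(entered: str, stored_hash: bytes, salt: bytes) -> bool:
    """Check a PIN entered on a virtual keypad against a salted hash."""
    candidate = hashlib.pbkdf2_hmac("sha256", entered.encode(), salt, 100_000)
    # compare_digest avoids leaking information through timing.
    return hmac.compare_digest(candidate, stored_hash)

# An unlock pattern can reuse the same check by encoding the visited
# grid nodes as a string, e.g. the path 0 -> 1 -> 2 -> 5 as "0125".
salt = b"example-salt"
stored = hashlib.pbkdf2_hmac("sha256", b"1234", salt, 100_000)
assert verify_pin("1234", stored, salt)
```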
MistForm: adaptive shape changing fog screens
We present MistForm, a shape changing fog display that can support one or two users interacting with either 2D or 3D content. MistForm combines affordances from both shape changing interfaces and mid-air displays. For example, a concave display can keep content within comfortable reach of a single user, while a convex shape can support several users engaged in individual tasks. MistForm also enables unique interaction possibilities by exploiting the synergies between shape changing interfaces and mid-air fog displays. For instance, moving the screen will affect its brightness and blurriness at specific locations around the display, creating areas of similar visibility (for collaboration) or different visibility (for personalized content). We describe the design of MistForm and analyse its inherent challenges, such as image distortion and uneven brightness on dynamic curved surfaces. We provide a machine learning approach to characterize the shape of the screen and a rendering algorithm to remove aberrations. Finally, we explore novel interaction possibilities and reflect on their potential and limitations.
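To give a flavour of the correction involved, the sketch below computes a per-pixel brightness gain from an estimated screen shape; the inverse-square and cosine terms, and all parameters, are assumptions standing in for the learned shape model and rendering algorithm described in the paper.

```python
import numpy as np

def brightness_gain(depth, normal, ref_depth=1.0):
    """Per-pixel projector gain to even out brightness on a curved fog screen.

    depth  -- HxW distances from the projector to the fog sheet (m)
    normal -- HxWx3 unit surface normals of the sheet
    Assumes projection rays roughly along +z; a calibrated system would
    use the true per-pixel ray directions.
    """
    falloff = (depth / ref_depth) ** 2                      # inverse-square
    cos_theta = np.clip(np.abs(normal[..., 2]), 0.1, 1.0)   # oblique incidence
    return np.clip(falloff / cos_theta, 0.0, 4.0)           # cap extreme gains

# Uniform 4x4 patch facing the projector, 1.2 m away.
gain = brightness_gain(np.full((4, 4), 1.2),
                       np.dstack([np.zeros((4, 4)),
                                  np.zeros((4, 4)),
                                  np.ones((4, 4))]))
```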