Evaluation of Haptic Patterns on a Steering Wheel
Infotainment systems can increase mental workload and divert
visual attention away from the road ahead. Providing the
information from these systems through the tactile channel on
the steering wheel might improve driving behaviour and
safety. This paper describes an
investigation into the perceivability of haptic feedback patterns
using an actuated surface on a steering wheel. Six solenoids
were embedded along the rim of the steering wheel creating
three bumps under each palm. Maximally, four of the six
solenoids were actuated simultaneously, resulting in 56 patterns
to test. Participants were asked to keep to the middle of the
road in the driving simulator as well as possible. Overall
recognition accuracy of the haptic patterns was 81.3%, where
identification rate increased with decreasing number of active
solenoids (up to 92.2% for a single solenoid). There was no
significant increase in lane deviation or steering angle during
haptic pattern presentation. These results suggest that drivers
can reliably distinguish between cutaneous patterns presented
on the steering wheel. Our findings can assist in delivering
non-critical messages to the driver (e.g. driving performance,
incoming text messages, etc.) without decreasing driving performance
or increasing perceived mental workload.
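The pattern count in the abstract follows directly from the hardware description: with six solenoids and at most four actuated simultaneously, the number of distinct non-empty patterns is C(6,1) + C(6,2) + C(6,3) + C(6,4). A minimal sketch of that enumeration (the constant names are illustrative, not from the paper):

```python
from itertools import combinations

# Six solenoids along the steering-wheel rim; at most four may be
# actuated at the same time. Each pattern is a non-empty subset of
# solenoid indices of size 1..4.
SOLENOIDS = 6
MAX_ACTIVE = 4

patterns = [combo
            for k in range(1, MAX_ACTIVE + 1)
            for combo in combinations(range(SOLENOIDS), k)]

# C(6,1) + C(6,2) + C(6,3) + C(6,4) = 6 + 15 + 20 + 15 = 56
print(len(patterns))  # → 56
```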
An Empirical Evaluation On Vibrotactile Feedback For Wristband System
With the rapid development of mobile computing, wearable wrist-worn
devices are becoming more and more popular. However, the current
vibrotactile feedback patterns of most wrist-worn devices are too
simple to enable effective interaction in
nonvisual scenarios. In this paper, we propose the wristband system with four
vibrating motors placed in different positions in the wristband, providing
multiple vibration patterns to transmit multi-semantic information for users in
eyes-free scenarios. After a comparative analysis of nine patterns
in a pilot experiment, we applied five vibrotactile patterns in the
main experiments (positional up and down, horizontal diagonal,
clockwise circular, and total vibration). Two experiments with the
same 12 participants followed the same procedure, one in the lab and
one outdoors. According to the experimental
results, users could effectively distinguish the five patterns both
in the lab and outdoors, with approximately 90% accuracy (except the
clockwise circular vibration in the outdoor experiment), showing
that these five vibration patterns can be used to output
multi-semantic information. The system can be applied to eyes-free
interaction scenarios for wrist-worn devices.
Mid-air haptic rendering of 2D geometric shapes with a dynamic tactile pointer
An important challenge that affects ultrasonic mid-air haptics, in contrast to physical touch, is that we lose certain exploratory procedures such as contour following. This makes the task of perceiving geometric properties and shape identification more difficult. Meanwhile, the growing interest in mid-air haptics and their application to various new areas requires an improved understanding of how we perceive specific haptic stimuli, such as icons and control dials in mid-air. We address this challenge
by investigating static and dynamic methods of displaying 2D geometric shapes in mid-air. We display a circle, a square, and a triangle, in either a static or dynamic condition, using ultrasonic mid-air haptics. In the static condition, the shapes are presented as a full outline in mid-air, while in the dynamic condition, a tactile pointer is moved around the perimeter of the shapes. We measure participants’ accuracy and confidence of identifying
shapes in two controlled experiments (n1 = 34, n2 = 25). Results reveal that in the dynamic condition people recognise shapes significantly more accurately, and with higher confidence. We also find that representing polygons as a set of individually drawn haptic strokes, with a short pause at the corners, drastically enhances shape recognition accuracy. Our research supports the design of mid-air haptic user interfaces in application scenarios
such as in-car interactions or assistive technology in education.
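The dynamic condition described above amounts to driving the ultrasonic focal point around the polygon's perimeter as a timed sequence of positions, drawing each edge as a stroke and holding briefly at corners. A minimal sketch of such a trajectory generator, assuming a renderer that consumes timed (x, y) samples (all function and parameter names, speeds, and pause durations are illustrative, not taken from the paper):

```python
import math

def trace_polygon(vertices, speed=0.05, dt=0.01, corner_pause=0.1):
    """Return (t, x, y) samples for a tactile pointer drawn stroke by
    stroke along a closed polygon, pausing at each corner.

    vertices: list of (x, y) in metres; speed in m/s; dt sample period
    in s; corner_pause in s. Values here are placeholders.
    """
    t = 0.0
    samples = []
    n = len(vertices)
    for i in range(n):
        (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
        length = math.hypot(x1 - x0, y1 - y0)
        steps = max(1, int(length / (speed * dt)))
        # draw one edge as an individual stroke
        for s in range(steps + 1):
            f = s / steps
            samples.append((t, x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
            t += dt
        # short pause at the corner: hold the focal point in place
        for _ in range(int(corner_pause / dt)):
            samples.append((t, x1, y1))
            t += dt
    return samples

# Example: a triangle with roughly 3 cm sides
tri = [(0.0, 0.0), (0.03, 0.0), (0.015, 0.026)]
path = trace_polygon(tri)
```

The per-edge strokes and explicit corner hold mirror the finding that individually drawn strokes with a short pause at the corners improve shape recognition.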
Learning to Represent Haptic Feedback for Partially-Observable Tasks
The sense of touch, being the earliest sensory system to develop in
the human body [1], plays a critical part in our daily interaction
with the environment.
In order to successfully complete a task, many manipulation interactions
require incorporating haptic feedback. However, manually designing a feedback
mechanism can be extremely challenging. In this work, we consider manipulation
tasks that need to incorporate tactile sensor feedback in order to modify a
provided nominal plan. To handle partial observability, we present a new
framework that models the task as a partially observable Markov decision
process (POMDP) and learns an appropriate representation of haptic feedback
which can serve as the state for a POMDP model. The model, which is
parametrized by deep recurrent neural networks, uses variational
Bayes methods to
optimize the approximate posterior. Finally, we build on deep Q-learning to be
able to select the optimal action in each state without access to a simulator.
We test our model on a PR2 robot on multiple tasks of turning a knob
until it clicks.
Comment: IEEE International Conference on Robotics and Automation (ICRA), 201
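The core idea above is that a recurrent network folds the history of haptic observations into a latent state, which then serves both as the POMDP state and as the input to a Q-network. A toy sketch of that wiring (random untrained weights, no variational training loop; all dimensions and names are illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 8-dim haptic observation, 16-dim latent state,
# 4 discrete actions.
OBS, HID, ACTIONS = 8, 16, 4

W_h = rng.normal(0, 0.1, (HID, HID))   # recurrent weights
W_o = rng.normal(0, 0.1, (HID, OBS))   # observation weights
W_q = rng.normal(0, 0.1, (ACTIONS, HID))  # Q-value head

def encode(observations):
    """Fold a sequence of haptic observations into a latent state,
    playing the role of the learned belief-state representation."""
    h = np.zeros(HID)
    for o in observations:
        h = np.tanh(W_h @ h + W_o @ o)
    return h

def q_values(h):
    """Map the latent state to one Q-value per discrete action."""
    return W_q @ h

# Greedy action selection from a short observation history
obs_seq = [rng.normal(size=OBS) for _ in range(5)]
state = encode(obs_seq)
action = int(np.argmax(q_values(state)))
```

In the paper the encoder is trained with variational Bayes to approximate the posterior over states, and the Q-function with deep Q-learning; this sketch only shows the data flow, not the training.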
Evaluating Multimodal Driver Displays of Varying Urgency
Previous studies have evaluated audio, visual and tactile warnings for drivers, highlighting the importance of conveying the appropriate level of urgency through the signals. However, these modalities have never been combined exhaustively with different urgency levels and tested while using a driving simulator. This paper describes two experiments investigating all multimodal combinations of such warnings along three different levels of designed urgency. The warnings were first evaluated in terms of perceived urgency and perceived annoyance in the context of a driving simulator. The results showed that the perceived urgency matched the designed urgency of the warnings. More urgent warnings were also rated as more annoying, but the effect on annoyance was smaller than the effect on urgency. The warnings were then tested for recognition time when presented during a simulated driving task. It was found that warnings of high urgency induced quicker and more accurate responses than warnings of medium and of low urgency. In both studies, the number of modalities used in warnings (one, two or three) affected both subjective and objective responses. More modalities led to higher ratings of urgency and annoyance, with annoyance again affected less than urgency. More modalities also led to quicker responses. These results provide implications for multimodal warning design and reveal how modalities and modality combinations can influence participant responses during a simulated driving task.