What Is the Gaze Behavior of Pedestrians in Interactions with an Automated Vehicle When They Do Not Understand Its Intentions?
Interactions between pedestrians and automated vehicles (AVs) will increase significantly as AVs become more widespread. However, pedestrians often do not have enough trust in AVs, particularly when they are confused about an AV's intention in an interaction. This study seeks to evaluate whether pedestrians clearly understand the driving intentions of AVs in interactions and presents experimental research on the relationship between the gaze behavior of pedestrians and their understanding of the intentions of the AV. The hypothesis investigated in this study was that the less a pedestrian understands the driving intentions of the AV, the longer the duration of their gazing behavior will be. A pedestrian-vehicle interaction experiment was designed to verify the proposed hypothesis. A robotic wheelchair was used as both the manually driven vehicle (MV) and the AV for interacting with pedestrians, while the pedestrians' gaze data and their subjective evaluations of the driving intentions were recorded. The experimental results supported our hypothesis: there was a negative correlation between the pedestrians' gaze duration on the AV and their understanding of the driving intentions of the AV. Moreover, the gaze duration of most pedestrians on the MV was shorter than that on the AV. Therefore, we conclude with two recommendations for designers of external human-machine interfaces (eHMIs): (1) when a pedestrian is engaged in an interaction with an AV, the driving intentions of the AV should be provided; (2) if the pedestrian still gazes at the AV after the AV displays its driving intentions, the AV should provide clearer information about its driving intentions.

Comment: 10 pages, 10 figures
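The reported negative correlation could be checked, for instance, with a rank correlation between per-participant gaze duration and subjective understanding ratings. A minimal sketch, assuming hypothetical data and a Likert-scale understanding score (neither is from the study):

```python
# Hypothetical sketch: testing whether gaze duration is negatively
# correlated with understanding of the AV's intentions. The data values
# and variable names are illustrative, not from the experiment.
from scipy.stats import spearmanr

# Per-participant gaze duration on the AV (seconds) and subjective
# understanding of the AV's driving intentions (assumed 1-7 Likert scale).
gaze_duration_s = [2.1, 3.4, 5.0, 1.2, 4.3, 2.8]
understanding = [6, 5, 2, 7, 3, 5]

rho, p_value = spearmanr(gaze_duration_s, understanding)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A significantly negative rho would support the hypothesis.
```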
Collaborative Control for a Robotic Wheelchair: Evaluation of Performance, Attention, and Workload
Powered wheelchair users often struggle to drive safely and effectively, and in more critical cases can only get around when accompanied by an assistant. To address these issues, we propose a collaborative control mechanism that assists the user as and when they require help. The system uses a multiple-hypotheses method to predict the driver's intentions and, if necessary, adjusts the control signals to achieve the desired goal safely. The main emphasis of this paper is on a comprehensive evaluation, where we not only look at the system performance but, perhaps more importantly, characterise the user performance in an experiment that combines eye tracking with a secondary task. Without assistance, participants experienced multiple collisions whilst driving around the predefined route. Conversely, when they were assisted by the collaborative controller, not only did they drive more safely, but they were also able to pay less attention to their driving, resulting in a reduced cognitive workload. We discuss the importance of these results and their implications for other applications of shared control, such as brain-machine interfaces, where it could be used to compensate for both the low frequency and the low resolution of the user input.
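One way to realise such a multiple-hypotheses scheme is to keep a belief over candidate goals, update it from the user's input, and blend user and autonomous commands in proportion to that belief. A minimal sketch under those assumptions (the goal set, likelihood model, and blending rule are invented for illustration, not the paper's exact method):

```python
# Illustrative sketch of collaborative control: maintain a probability over
# candidate goals (the "multiple hypotheses"), update it from the user's
# joystick direction, and blend user and autonomous commands.
import numpy as np

goals = np.array([[5.0, 0.0], [0.0, 5.0], [-5.0, 0.0]])  # candidate goal positions
belief = np.ones(len(goals)) / len(goals)                # uniform prior

def update_belief(belief, user_cmd, position):
    """Raise the probability of goals the user's command points toward."""
    to_goals = goals - position
    to_goals /= np.linalg.norm(to_goals, axis=1, keepdims=True)
    likelihood = np.exp(3.0 * to_goals @ user_cmd)       # directional agreement
    belief = belief * likelihood
    return belief / belief.sum()

def blend(user_cmd, position, belief):
    """Mix the user's command with a command toward the most likely goal."""
    g = goals[np.argmax(belief)]
    auto_cmd = (g - position) / np.linalg.norm(g - position)
    alpha = belief.max()                                 # assist more when confident
    return alpha * auto_cmd + (1 - alpha) * user_cmd

position = np.zeros(2)
user_cmd = np.array([1.0, 0.1]); user_cmd /= np.linalg.norm(user_cmd)
belief = update_belief(belief, user_cmd, position)
print("belief:", belief, "blended command:", blend(user_cmd, position, belief))
```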
Analyzing the Impact of Cognitive Load in Evaluating Gaze-based Typing
Gaze-based virtual keyboards provide an effective interface for text entry by eye movements. The efficiency and usability of these keyboards have traditionally been evaluated with conventional text entry performance measures such as words per minute, keystrokes per character, and backspace usage. However, in comparison to traditional text entry approaches, gaze-based typing involves natural eye movements that are highly correlated with human cognition. Employing eye gaze as an input can lead to excessive mental demand, and in this work we argue the need to include cognitive load as an eye typing evaluation measure. We evaluate three variations of gaze-based virtual keyboards, which implement different designs in terms of word suggestion positioning. The conventional text entry metrics indicate no significant difference in the performance of the different keyboard designs. However, STFT (short-time Fourier transform) based analysis of EEG signals indicates differences in the mental workload of participants while interacting with these designs. Moreover, the EEG analysis provides insights into the variation in users' cognitive load across different typing phases and intervals, which should be considered in order to improve eye typing usability.

Comment: 6 pages, 4 figures, IEEE CBMS 201
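As an illustration of the kind of analysis the abstract describes, a workload proxy can be computed from EEG by taking the STFT and comparing band powers over time. A minimal sketch, assuming a single channel, a 256 Hz sampling rate, and the commonly used theta/alpha power ratio as the workload index (all assumptions, not the paper's exact pipeline):

```python
# Hedged sketch: estimating mental workload from EEG via STFT band power.
# A rising theta/alpha ratio is one common workload indicator; the channel
# choice, window length, and band edges here are assumptions.
import numpy as np
from scipy.signal import stft

fs = 256                                    # sampling rate (Hz), assumed
eeg = np.random.randn(fs * 60)              # stand-in for one EEG channel (60 s)

f, t, Z = stft(eeg, fs=fs, nperseg=fs * 2)  # 2-second analysis windows
power = np.abs(Z) ** 2

theta = power[(f >= 4) & (f < 8)].mean(axis=0)    # theta band power over time
alpha = power[(f >= 8) & (f < 13)].mean(axis=0)   # alpha band power over time
workload_index = theta / alpha

print("mean theta/alpha ratio:", workload_index.mean())
```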
Reference Resolution in Multi-modal Interaction: Position paper
In this position paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the need for more research on reference resolution in multimodal contexts. In multimodal interaction, the human conversational partner can use more than one modality to convey his or her message to an environment in which a computer detects and interprets signals from different modalities. We show some naturally arising problems and how they are treated in different contexts. No generally applicable solutions are given.
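To make the reference resolution problem concrete, consider combining evidence from two modalities, speech and pointing, to pick the intended referent of an utterance like "put that there". A toy sketch (object names, scores, and weights are invented; the position paper proposes no such algorithm):

```python
# Toy multimodal reference resolution: score each candidate object by a
# weighted combination of per-modality evidence and pick the best one.
candidates = {
    "red_cube":  {"speech_match": 0.3, "pointing": 0.9},
    "blue_ball": {"speech_match": 0.7, "pointing": 0.2},
}

def resolve(candidates, w_speech=0.5, w_point=0.5):
    """Return the candidate with the highest combined cross-modal score."""
    def score(ev):
        return w_speech * ev["speech_match"] + w_point * ev["pointing"]
    return max(candidates, key=lambda name: score(candidates[name]))

print(resolve(candidates))  # -> "red_cube" when pointing evidence dominates
```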
A motion system for social and animated robots
This paper presents an innovative motion system that is used to control the motions and animations of a social robot. The social robot Probo is used to study Human-Robot Interaction (HRI), with a special focus on Robot-Assisted Therapy (RAT). When used for therapy, it is important that a social robot is able to create an "illusion of life" so as to become a believable character that can communicate with humans. The design of the motion system in this paper is based on insights from the animation industry. It combines operator-controlled animations with low-level autonomous reactions such as attention and emotional state. The motion system has a Combination Engine, which combines motion commands triggered by a human operator with motions that originate from different units of the robot's cognitive control architecture. This results in an interactive robot that seems alive and has a certain degree of "likeability". The Godspeed Questionnaire Series is used to evaluate the animacy and likeability of the robot in China, Romania, and Belgium.
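One simple way such a combination could work is a per-joint weighted average of the motion proposals coming from the operator and the autonomous units. A hedged sketch in that spirit (joint names, weights, and the blending rule are illustrative assumptions, not Probo's actual engine):

```python
# Hedged sketch of a "Combination Engine": operator-triggered animations and
# autonomous reactions (attention, emotion) each propose joint targets, and
# the engine blends them with priority weights.
def combine(commands):
    """commands: list of (weight, {joint: angle}) proposals.
    Returns the weighted-average angle per joint."""
    totals, weights = {}, {}
    for w, pose in commands:
        for joint, angle in pose.items():
            totals[joint] = totals.get(joint, 0.0) + w * angle
            weights[joint] = weights.get(joint, 0.0) + w
    return {j: totals[j] / weights[j] for j in totals}

operator_anim = (0.7, {"head_pan": 30.0, "eyebrow": 10.0})  # scripted animation
attention     = (0.2, {"head_pan": -15.0})                  # gaze toward a stimulus
emotion       = (0.1, {"eyebrow": -20.0})                   # current emotional state

print(combine([operator_anim, attention, emotion]))
# head_pan = (0.7*30 + 0.2*-15) / 0.9 = 20.0; eyebrow = (0.7*10 + 0.1*-20) / 0.8 = 6.25
```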
Multimodal Polynomial Fusion for Detecting Driver Distraction
Distracted driving is deadly, claiming 3,477 lives in the U.S. in 2015 alone. Although there has been a considerable amount of research on modeling the distracted behavior of drivers under various conditions, accurate automatic detection using multiple modalities, and especially the contribution of the speech modality to improved accuracy, has received little attention. This paper introduces a new multimodal dataset for distracted driving behavior and discusses automatic distraction detection using features from three modalities: facial expression, speech, and car signals. Detailed multimodal feature analysis shows that adding more modalities monotonically increases the predictive accuracy of the model. Finally, a simple and effective multimodal fusion technique using a polynomial fusion layer shows superior distraction detection results compared to the baseline SVM and neural network models.

Comment: INTERSPEECH 201
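The idea of a polynomial fusion layer can be sketched as combining per-modality embeddings through element-wise products, so the classifier sees cross-modal interaction terms rather than only a concatenation. A minimal PyTorch sketch under those assumptions (the dimensions, the pairwise-product form, and all layer names are invented, not the paper's exact formulation):

```python
# Toy second-order polynomial fusion: linear terms plus pairwise products
# of the three modality embeddings (face, speech, car signals).
import torch
import torch.nn as nn

class PolynomialFusion(nn.Module):
    def __init__(self, dim, n_classes):
        super().__init__()
        self.face = nn.LazyLinear(dim)    # facial-expression features -> shared space
        self.speech = nn.LazyLinear(dim)  # speech features -> shared space
        self.car = nn.LazyLinear(dim)     # car-signal features -> shared space
        self.out = nn.Linear(6 * dim, n_classes)

    def forward(self, f, s, c):
        f, s, c = self.face(f), self.speech(s), self.car(c)
        # concatenate linear terms with pairwise interaction terms
        fused = torch.cat([f, s, c, f * s, f * c, s * c], dim=-1)
        return self.out(fused)

model = PolynomialFusion(dim=32, n_classes=2)  # 2 classes: distracted / attentive
logits = model(torch.randn(8, 100), torch.randn(8, 40), torch.randn(8, 10))
print(logits.shape)  # torch.Size([8, 2])
```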