Saliency difference based objective evaluation method for a superimposed screen of the HUD with various background
The head-up display (HUD) is an emerging device which can project information
on a transparent screen. The HUD has been used in airplanes and vehicles, and
it is usually placed in front of the operator's view. In the case of the
vehicle, the driver can see not only various information on the HUD but also
the backgrounds (driving environment) through the HUD. However, the projected
information on the HUD may interfere with the colors in the background because
the HUD is transparent. For example, a red message on the HUD will be less
noticeable when there is an overlap between it and the red brake light from the
front vehicle. As the first step to solve this issue, how to evaluate the
mutual interference between the information on the HUD and backgrounds is
important. Therefore, this paper proposes a method to evaluate the mutual
interference based on saliency. It can be evaluated by comparing the HUD part
cut from a saliency map of a measured image with the HUD image. Comment: 10 pages, 5 figures, 1 table, accepted by IFAC-HMS 201
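The comparison step described in this abstract (cutting the HUD region out of the measured image's saliency map and comparing it against the HUD image's own saliency) could be sketched as below. The function name, the use of precomputed saliency maps, and the mean-absolute-difference metric are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def hud_saliency_difference(scene_saliency, hud_saliency, hud_box):
    """Compare the HUD region cut from a scene saliency map with the
    saliency map of the HUD image alone.

    scene_saliency : 2-D array, saliency of the measured image (HUD + background)
    hud_saliency   : 2-D array, saliency of the HUD image by itself
    hud_box        : (row, col, height, width) of the HUD region in the scene

    Returns the mean absolute saliency difference over the HUD region.
    A large value suggests the background is changing how salient the
    projected information appears (e.g. a red message over a red brake light).
    """
    r, c, h, w = hud_box
    crop = scene_saliency[r:r + h, c:c + w]
    if crop.shape != hud_saliency.shape:
        raise ValueError("HUD box must match the HUD saliency map size")
    return float(np.mean(np.abs(crop - hud_saliency)))
```

With identical saliency in and out of context the difference is zero; any background interference pushes it above zero, giving a single objective score per background.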
What Is the Gaze Behavior of Pedestrians in Interactions with an Automated Vehicle When They Do Not Understand Its Intentions?
Interactions between pedestrians and automated vehicles (AVs) will increase
significantly as AVs become more widespread. However, pedestrians often do not
have enough trust in AVs, particularly when they are confused about an AV's
intention in an interaction. This study seeks to evaluate whether pedestrians clearly
understand the driving intentions of AVs in interactions and presents
experimental research on the relationship between gaze behaviors of pedestrians
and their understanding of the intentions of the AV. The hypothesis
investigated in this study was that the less the pedestrian understands the
driving intentions of the AV, the longer the duration of their gazing behavior
will be. A pedestrian-vehicle interaction experiment was designed to verify
the proposed hypothesis. A robotic wheelchair was used as the manual driving
vehicle (MV) and AV for interacting with pedestrians while pedestrians' gaze
data and their subjective evaluation of the driving intentions were recorded.
The experimental results supported our hypothesis as there was a negative
correlation between the pedestrians' gaze duration on the AV and their
understanding of the driving intentions of the AV. Moreover, the gaze duration
of most of the pedestrians on the MV was shorter than that on the AV. Therefore,
we conclude with two recommendations to designers of external human-machine
interfaces (eHMI): (1) when a pedestrian is engaged in an interaction with an
AV, the driving intentions of the AV should be provided; (2) if the pedestrian
still gazes at the AV after the AV displays its driving intentions, the AV
should provide clearer information about its driving intentions. Comment: 10 pages, 10 figures
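The key result above is a negative correlation between pedestrians' gaze duration on the AV and their understanding of its intentions. A minimal sketch of how such a correlation could be computed is shown below; the numbers are purely illustrative, not the study's measurements:

```python
import numpy as np

# Hypothetical per-pedestrian data (illustrative only): gaze duration on the
# AV in seconds, and subjective understanding of the AV's intention (1-5).
gaze_duration = np.array([1.2, 2.8, 4.1, 5.5, 6.9])
understanding = np.array([5.0, 4.0, 3.0, 2.0, 1.0])

# Pearson correlation coefficient; a negative value is what the hypothesis
# predicts: the less the pedestrian understands, the longer the gaze.
r = np.corrcoef(gaze_duration, understanding)[0, 1]
print(round(r, 3))
```

A value of r near -1 for such data is consistent with the paper's finding that longer gazing signals poorer understanding, which motivates both eHMI recommendations.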
Protein-Protein Interaction Prediction by using Amino Acids Interaction Pattern in Rigid-Body Docking Process