937 research outputs found
What Is the Gaze Behavior of Pedestrians in Interactions with an Automated Vehicle When They Do Not Understand Its Intentions?
Interactions between pedestrians and automated vehicles (AVs) will increase
significantly as AVs become more popular. However, pedestrians often do not
sufficiently trust AVs, particularly when they are confused about an AV's
intention in an interaction. This study seeks to evaluate whether pedestrians clearly
understand the driving intentions of AVs in interactions and presents
experimental research on the relationship between gaze behaviors of pedestrians
and their understanding of the intentions of the AV. The hypothesis
investigated in this study was that the less the pedestrian understands the
driving intentions of the AV, the longer the duration of their gazing behavior
will be. A pedestrian--vehicle interaction experiment was designed to verify
the proposed hypothesis. A robotic wheelchair was used as the manual driving
vehicle (MV) and AV for interacting with pedestrians while pedestrians' gaze
data and their subjective evaluation of the driving intentions were recorded.
The experimental results supported our hypothesis as there was a negative
correlation between the pedestrians' gaze duration on the AV and their
understanding of the driving intentions of the AV. Moreover, the gaze duration
of most of the pedestrians on the MV was shorter than that on an AV. Therefore,
we conclude with two recommendations to designers of external human-machine
interfaces (eHMI): (1) when a pedestrian is engaged in an interaction with an
AV, the driving intentions of the AV should be provided; (2) if the pedestrian
still gazes at the AV after the AV displays its driving intentions, the AV
should provide clearer information about its driving intentions.
Comment: 10 pages, 10 figures
Agreeing to Cross: How Drivers and Pedestrians Communicate
The contribution of this paper is twofold. The first is a novel dataset for
studying behaviors of traffic participants while crossing. Our dataset contains
more than 650 samples of pedestrian behaviors in various street configurations
and weather conditions. These examples were selected from approximately 240 hours of
driving on city, suburban, and urban roads. The second contribution is an
analysis of our data from the point of view of joint attention. We identify
what types of non-verbal communication cues road users use at the point of
crossing, their responses, and under what circumstances the crossing event
takes place. It was found that in more than 90% of the cases pedestrians gaze
at the approaching cars prior to crossing in non-signalized crosswalks. The
crossing action, however, depends on additional factors such as time to
collision (TTC), the driver's explicit reaction, or the structure of the crosswalk.
Comment: 6 pages, 6 figures
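As an illustrative aside (not part of the original study), time to collision is conventionally computed as the current gap distance divided by the closing speed between vehicle and pedestrian. A minimal sketch in Python, with hypothetical values:

```python
def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Time to collision in seconds: gap distance divided by closing speed.

    Returns infinity if the vehicle is not closing on the pedestrian,
    i.e., no collision is projected under constant-velocity assumptions.
    """
    if closing_speed_mps <= 0:
        return float("inf")
    return distance_m / closing_speed_mps

# Hypothetical example: a car 30 m away approaching at 10 m/s
print(time_to_collision(30.0, 10.0))  # 3.0 s
```

This constant-velocity TTC is only one of several formulations; studies like the one above may use more elaborate estimates that account for acceleration.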
Autonomous Vehicles Drive into Shared Spaces: eHMI Design Concept Focusing on Vulnerable Road Users
In comparison to conventional traffic designs, shared spaces promote a more
pleasant urban environment with slower motorized movement, smoother traffic,
and less congestion. In the foreseeable future, shared spaces will be populated
with a mixture of autonomous vehicles (AVs) and vulnerable road users (VRUs)
like pedestrians and cyclists. However, a driverless AV lacks a way to
communicate with VRUs when they need to negotiate an agreement, which poses
new challenges to the safety and smoothness of the
traffic. To find a feasible solution for integrating AVs seamlessly into
shared-space traffic, we first identified the possible issues that the
shared-space designs have not considered for the role of AVs. Then an online
questionnaire asked participants how they would like the driver
of a manually driven vehicle to communicate with VRUs in a shared space. We
found that when the driver wanted to make suggestions to the VRUs in a
negotiation, participants thought that communication via the driver's body
language was necessary. In addition, when the driver conveyed information about
her/his intentions and cautions to the VRUs, participants selected different
communication methods with respect to their transport modes (as a driver,
pedestrian, or cyclist). These results suggest that novel eHMIs might be useful
for AV-VRU communication when the original drivers are not present. Hence, a
potential eHMI design concept was proposed for different VRUs to meet their
various expectations. In the end, we further discussed the effects of the eHMIs
on improving sociality in shared spaces and on autonomous driving systems.
Importance of Instruction for Pedestrian-Automated Driving Vehicle Interaction with an External Human Machine Interface: Effects on Pedestrians' Situation Awareness, Trust, Perceived Risks and Decision Making
Compared to a manually driven vehicle (MV), an automated driving vehicle lacks
a way to communicate with pedestrians through the driver during an interaction,
because the driver usually does not participate in driving
tasks. Thus, an external human machine interface (eHMI) can be viewed as a
novel explicit communication method for providing driving intentions of an
automated driving vehicle (AV) to pedestrians when they need to negotiate in an
interaction, e.g., an encountering scene. However, the eHMI may not guarantee
that the pedestrians will fully recognize the intention of the AV. In this
paper, we propose that the instruction of the eHMI's rationale can help
pedestrians correctly understand the driving intentions and predict the
behavior of the AV, and thus their subjective feelings (i.e., dangerous
feeling, trust in the AV, and feeling of relief) and decision-making are also
improved. The results of an interaction experiment in a road-crossing scene
indicate that the participants found it more difficult to be aware of the situation
when they encountered an AV w/o eHMI than when they encountered an MV;
further, the participants' subjective feelings and hesitation in
decision-making also deteriorated significantly. When the eHMI was used in the
AV, the situational awareness, subjective feelings and decision-making of the
participants regarding the AV w/ eHMI were improved. After the instruction, it
was easier for the participants to understand the driving intention and predict
driving behavior of the AV w/ eHMI. Further, the subjective feelings and the
hesitation related to decision-making improved and reached the same
level as for the MV.
Comment: 5 figures, Accepted by IEEE IV202
Virtual Reality based Study to Analyse Pedestrian Attitude towards Autonomous Vehicles
What are pedestrians' attitudes towards vehicles that have no human driver? In this paper, we use virtual reality to simulate a virtual scene in which pedestrians interact with driverless vehicles. In this exploratory study, 15 users encountered a driverless vehicle at a crosswalk in the virtual scene. Data were collected in the form of video and audio recordings, semi-structured interviews, and participant sketches explaining the crosswalk scenes they experienced. An interaction design framework for vehicle-pedestrian interaction with autonomous vehicles has been suggested, which can be used to design and model driverless vehicle behaviour before autonomous vehicle technology is deployed widely.