Audio-Visual Understanding of Passenger Intents for In-Cabin Conversational Agents
Building multimodal dialogue understanding capabilities situated in the
in-cabin context is crucial for enhancing passenger comfort in autonomous
vehicle (AV) interaction systems. To this end, understanding passenger intents
from spoken interactions and vehicle vision systems is a key component in
developing contextual and visually grounded conversational agents for AVs.
Towards this goal, we explore AMIE (Automated-vehicle Multimodal In-cabin
Experience), the in-cabin agent responsible for handling multimodal
passenger-vehicle interactions. In this work, we discuss the benefits of a
multimodal understanding of in-cabin utterances by incorporating
verbal/language input together with the non-verbal/acoustic and visual cues
from inside and outside the vehicle. Our experimental results show that the
multimodal approach outperforms text-only baselines, yielding improved intent
detection performance.
Comment: ACL 2020 - Second Grand-Challenge and Workshop on Multimodal Language (Challenge-HML)
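
The abstract does not spell out implementation details, but the core idea it describes (fusing text, acoustic, and visual features for utterance-level intent detection) can be illustrated with a minimal sketch. The model below is a hypothetical late-fusion classifier in PyTorch; the feature dimensions, the concatenation-based fusion, and the intent label count are illustrative assumptions, not the architecture reported in the paper.

# Hypothetical sketch of multimodal intent detection via late fusion.
# Feature dimensions, fusion strategy, and number of intents are illustrative
# assumptions; they are not taken from the AMIE paper.
import torch
import torch.nn as nn

class LateFusionIntentClassifier(nn.Module):
    def __init__(self, text_dim=768, audio_dim=128, visual_dim=512,
                 hidden_dim=256, num_intents=10):
        super().__init__()
        # One small projection per modality (text, acoustic, visual).
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)
        # Fused representation -> intent logits.
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Linear(3 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_intents),
        )

    def forward(self, text_feat, audio_feat, visual_feat):
        # Project each modality, concatenate, and classify the utterance.
        fused = torch.cat([
            self.text_proj(text_feat),
            self.audio_proj(audio_feat),
            self.visual_proj(visual_feat),
        ], dim=-1)
        return self.classifier(fused)

# Example: one utterance represented by pre-extracted per-modality features.
model = LateFusionIntentClassifier()
logits = model(torch.randn(1, 768), torch.randn(1, 128), torch.randn(1, 512))
predicted_intent = logits.argmax(dim=-1)

Concatenation-based late fusion is only the simplest starting point; early or hybrid fusion of the modalities would be equally valid ways to realize the multimodal approach the abstract compares against text-only baselines.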