Multimodal Dialogue Management for Multiparty Interaction with Infants
We present dialogue management routines for a system to engage in multiparty
agent-infant interaction. The ultimate purpose of this research is to help
infants learn a visual sign language through naturalistic and socially
contingent conversations, initiated by an artificial agent, during an
early-life critical period for language development (ages 6 to 12 months).
As a first step, we focus on creating and maintaining agent-infant engagement
that elicits appropriate and socially contingent responses from the baby. Our
system includes two agents, a physical robot and an animated virtual human. The
system's multimodal perception includes an eye-tracker (measures attention) and
a thermal infrared imaging camera (measures patterns of emotional arousal). A
dialogue policy is presented that selects individual actions and planned
multiparty sequences based on perceptual inputs about the baby's changing
internal states of emotional engagement. The present version of the system was
evaluated in interaction with 8 babies. All babies demonstrated spontaneous and
sustained engagement with the agents for several minutes, with patterns of
conversationally relevant and socially contingent behaviors. We further
performed a detailed case-study analysis with annotation of all agent and baby
behaviors. Results show that the baby's behaviors were generally relevant to
agent conversations and contained direct evidence for socially contingent
responses by the baby to specific linguistic samples produced by the avatar.
This work demonstrates the potential for language learning from agents in very
young babies and has especially broad implications regarding the use of
artificial agents with babies who have minimal language exposure in early life.
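
To make the perception-to-action loop concrete, the sketch below shows one way such a dialogue policy could be structured: fused estimates of attention (from the eye tracker) and arousal (from the thermal infrared camera) are mapped to the next agent action or hand-off between the robot and the avatar. This is a minimal illustrative sketch; all class names, thresholds, and actions are assumptions for exposition, not the policy reported in the paper.

```python
from dataclasses import dataclass
from enum import Enum, auto


class AgentAction(Enum):
    """Hypothetical action set for the two agents (robot and virtual human)."""
    ROBOT_WAVE = auto()                 # physical robot bids for attention
    ROBOT_LOOK_AT_AVATAR = auto()       # robot redirects gaze to hand off the turn
    AVATAR_SIGN_NURSERY_RHYME = auto()  # virtual human produces a signed sample
    IDLE_MONITOR = auto()               # wait and keep sensing


@dataclass
class PerceptualState:
    """Simplified fused percepts (names and scales are assumptions)."""
    attention_on_avatar: float  # e.g., eye-tracker dwell proportion in [0, 1]
    arousal: float              # e.g., normalized thermal-IR arousal index in [0, 1]


def select_action(state: PerceptualState) -> AgentAction:
    """Illustrative policy: map engagement estimates to the next agent move.

    Thresholds and the action mapping are placeholders chosen for clarity.
    """
    if state.arousal > 0.8:
        # Over-aroused: back off and let the infant settle.
        return AgentAction.IDLE_MONITOR
    if state.attention_on_avatar < 0.3:
        # Low attention on the avatar: robot bids for attention first.
        return AgentAction.ROBOT_WAVE
    if state.attention_on_avatar < 0.6:
        # Partial attention: robot cues the infant toward the avatar.
        return AgentAction.ROBOT_LOOK_AT_AVATAR
    # Engaged and calm: avatar delivers a linguistic sample.
    return AgentAction.AVATAR_SIGN_NURSERY_RHYME


if __name__ == "__main__":
    print(select_action(PerceptualState(attention_on_avatar=0.7, arousal=0.4)))
```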