A Framework of Personality Cues for Conversational Agents
Conversational agents (CAs), software systems that emulate conversation with humans through natural language, are reshaping our communication environment. As CAs are widely used in applications requiring human-like interaction, a key goal in information systems (IS) research and practice is the ability to create CAs that exhibit a particular personality. However, existing research on CA personality is scattered across different fields, and researchers and practitioners face difficulty in understanding the current state of the art in designing CA personality. To address this gap, we systematically analyze existing studies and develop a framework for imbuing CAs with personality cues and for organizing the underlying range of expressive variation across the Big Five personality traits. Our framework contributes to IS research by providing an overview of CA personality cues in verbal and non-verbal language, and it supports practitioners in designing CAs with a particular personality.
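The framework's core idea, organizing personality cues by Big Five trait and by verbal versus non-verbal channel, can be sketched as a small data structure. This is purely illustrative: the class names, fields, and example cues below are assumptions for the sketch, not the authors' actual framework contents.

```python
# Hypothetical sketch of a cue inventory organized by Big Five trait
# and channel (verbal vs. non-verbal). The example cues are illustrative.
from dataclasses import dataclass

@dataclass
class PersonalityCue:
    description: str   # what the CA does, e.g. "uses exclamation marks"
    channel: str       # "verbal" or "non-verbal"
    trait: str         # one of the Big Five traits
    polarity: int      # +1 expresses a high trait level, -1 a low one

BIG_FIVE = ["openness", "conscientiousness", "extraversion",
            "agreeableness", "neuroticism"]

cues = [
    PersonalityCue("frequent exclamation marks", "verbal", "extraversion", +1),
    PersonalityCue("long response latency", "non-verbal", "extraversion", -1),
    PersonalityCue("hedging phrases ('maybe', 'perhaps')", "verbal", "neuroticism", +1),
]

def cues_for(trait, channel=None):
    """Filter the cue inventory by trait and, optionally, by channel."""
    return [c for c in cues
            if c.trait == trait and (channel is None or c.channel == channel)]

print([c.description for c in cues_for("extraversion", "verbal")])
```

A designer could then select cues with a consistent polarity for a target trait profile when scripting a CA's behavior.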
Understanding the Predictability of Gesture Parameters from Speech and their Perceptual Importance
Gesture behavior is a natural part of human conversation. Much work has
focused on removing the need for tedious hand-animation to create embodied
conversational agents by designing speech-driven gesture generators. However,
these generators often work in a black-box manner, assuming a general
relationship between input speech and output motion. As their success remains
limited, we investigate in more detail how speech may relate to different
aspects of gesture motion. We determine a number of parameters characterizing
gesture, such as speed and gesture size, and explore their relationship to the
speech signal in a two-fold manner. First, we train multiple recurrent networks
to predict the gesture parameters from speech to understand how well gesture
attributes can be modeled from speech alone. We find that gesture parameters
can be partially predicted from speech, with some parameters, such as path
length, predicted more accurately than others, such as velocity. Second, we
design a perceptual study to assess the importance of each gesture parameter
for producing motion that people perceive as appropriate for the speech.
Results show that a degradation in any parameter was viewed negatively, but
changes to some parameters, such as hand shape, were more impactful than others. A video
summarization can be found at https://youtu.be/aw6-_5kmLjY.

Comment: To be published in the Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (IVA '20).
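The pipeline the abstract describes, a recurrent network mapping a sequence of per-frame speech features to scalar gesture parameters such as path length and velocity, can be sketched minimally. This is not the authors' model: the feature dimension, hidden size, Elman-style recurrence, and mean-pooling readout are all assumptions standing in for their trained recurrent networks.

```python
# Minimal sketch (not the paper's model): an Elman RNN over per-frame speech
# features, mean-pooled and projected to one scalar per gesture parameter.
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 26      # per-frame acoustic feature size (assumption)
HIDDEN = 32        # hidden state size (assumption)
PARAMS = ["path_length", "velocity", "gesture_size", "speed"]

# Randomly initialized weights stand in for trained ones.
W_in = rng.normal(0, 0.1, (HIDDEN, FEAT_DIM))
W_rec = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
W_out = rng.normal(0, 0.1, (len(PARAMS), HIDDEN))

def predict_gesture_params(speech_frames):
    """Run h_t = tanh(W_in x_t + W_rec h_{t-1}), pool over time, project."""
    h = np.zeros(HIDDEN)
    states = []
    for x in speech_frames:
        h = np.tanh(W_in @ x + W_rec @ h)
        states.append(h)
    pooled = np.mean(states, axis=0)           # one vector per utterance
    return dict(zip(PARAMS, W_out @ pooled))   # one scalar per parameter

speech = rng.normal(size=(100, FEAT_DIM))      # 100 frames of synthetic features
preds = predict_gesture_params(speech)
print({k: round(float(v), 3) for k, v in preds.items()})
```

The paper's finding that path length is predicted more accurately than velocity would, in such a setup, show up as per-parameter differences in held-out regression error.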