
    Extracting interpersonal stance from vocal signals

    No full text

    The role of emotions and other affective states in Human-Computer Interaction (HCI) is gaining importance. Introducing affect into computer applications typically makes these systems more efficient, effective, and enjoyable. This paper presents a model that extracts interpersonal stance from vocal signals. To achieve this, a dataset of 3840 sentences spoken by 20 semi-professional actors was built and used to train and test a model based on Support Vector Machines (SVMs). An analysis of the results indicates that there is much variation in how people express interpersonal stance, which makes it difficult to build a generic model. Instead, the model performs well at the individual level (with accuracy above 80%). The implications of these findings for HCI systems are discussed.

    MA3HMI 2018: Fourth International Workshop on Multimodal Analysis enabling Artificial Agents in Human-Agent Interaction, Boulder, CO, USA, October 16-20, 2018
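    The per-speaker setup the abstract describes (20 actors, 3840 sentences in total, one SVM evaluated per individual) can be sketched as follows. This is a minimal illustration, not the authors' pipeline: scikit-learn, the feature dimensionality, the number of stance classes, and the synthetic random features standing in for real vocal features are all assumptions introduced here.

    ```python
    # Hypothetical sketch of per-speaker SVM stance classification.
    # Synthetic features stand in for the paper's vocal features; the
    # class separation (the 2.0 offset) is invented for illustration.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    N_SPEAKERS = 20               # the paper records 20 semi-professional actors
    SENTENCES_PER_SPEAKER = 192   # 3840 sentences / 20 speakers
    N_FEATURES = 12               # placeholder acoustic feature count
    STANCES = 4                   # placeholder number of stance classes

    accuracies = []
    for speaker in range(N_SPEAKERS):
        # Synthetic data: each stance class gets its own feature offset,
        # mimicking speaker-specific expression of stance.
        y = rng.integers(0, STANCES, size=SENTENCES_PER_SPEAKER)
        X = rng.normal(size=(SENTENCES_PER_SPEAKER, N_FEATURES)) + 2.0 * y[:, None]
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.25, random_state=0
        )
        # One classifier per speaker, as in the paper's individual-level models.
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        clf.fit(X_tr, y_tr)
        accuracies.append(clf.score(X_te, y_te))

    mean_acc = float(np.mean(accuracies))
    print(f"mean per-speaker accuracy: {mean_acc:.2f}")
    ```

    Training one model per speaker sidesteps the cross-speaker variation the abstract identifies; a generic model would instead pool all speakers into a single training set.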