2,028 research outputs found

    Addressee Identification In Face-to-Face Meetings

    We present results on addressee identification in four-participant face-to-face meetings using Bayesian Network and Naive Bayes classifiers. First, we investigate how well the addressee of a dialogue act can be predicted from gaze, utterance and conversational context features. Then, we explore whether information about meeting context can aid the classifiers’ performance. Both classifiers perform best when conversational context and utterance features are combined with the speaker’s gaze information. The classifiers show little gain from information about meeting context.
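    The classification setup described above can be illustrated with a minimal Naive Bayes sketch over categorical features. This is not the paper's implementation; the feature names (gaze target, dialogue-act type, previous speaker), the assumed vocabulary size used for smoothing, and the toy data are all hypothetical.

```python
import math
from collections import Counter, defaultdict

def train_nb(examples):
    """examples: list of (feature_dict, addressee_label) pairs."""
    class_counts = Counter(label for _, label in examples)
    feat_counts = defaultdict(Counter)  # (feature_name, label) -> value counts
    for feats, label in examples:
        for name, value in feats.items():
            feat_counts[(name, label)][value] += 1
    return class_counts, feat_counts

def predict_nb(model, feats, alpha=1.0, vocab_size=10):
    """Pick the label maximising log P(label) + sum of log P(value | label)."""
    class_counts, feat_counts = model
    total = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label, count in class_counts.items():
        score = math.log(count / total)
        for name, value in feats.items():
            seen = feat_counts[(name, label)][value]
            # Laplace smoothing over a crudely assumed per-feature vocabulary size
            score += math.log((seen + alpha) / (count + alpha * vocab_size))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy data for a meeting with participants A..D: the addressee here tends
# to coincide with the speaker's gaze target.
data = [
    ({"gaze": "B", "act": "question", "prev_speaker": "C"}, "B"),
    ({"gaze": "C", "act": "question", "prev_speaker": "B"}, "C"),
    ({"gaze": "B", "act": "statement", "prev_speaker": "D"}, "B"),
    ({"gaze": "group", "act": "statement", "prev_speaker": "B"}, "group"),
]
model = train_nb(data)
print(predict_nb(model, {"gaze": "B", "act": "question", "prev_speaker": "C"}))  # -> B
```

    A Bayesian Network classifier, by contrast, would model dependencies between these features rather than assuming conditional independence.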

    A comparison of addressee detection methods for multiparty conversations

    Several algorithms have recently been proposed for recognizing addressees in a group conversational setting. These algorithms can rely on a variety of factors, including previous conversational roles, gaze and type of dialogue act. Both statistical supervised machine-learning algorithms and rule-based methods have been developed. In this paper, we compare several algorithms developed for different genres of multiparty dialogue, and propose a new synthesis algorithm that matches the performance of machine-learning algorithms while maintaining the transparency of semantically meaningful rule-based algorithms.
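    The appeal of transparent rule-based detectors can be seen in a minimal hand-written rule cascade. The rules below are hypothetical illustrations of the genre, not the synthesis algorithm proposed in the paper.

```python
def rule_based_addressee(gaze_target, dialogue_act, prev_speaker):
    """Transparent rule cascade for addressee detection (illustrative only)."""
    # Questions are usually addressed to the person the speaker looks at.
    if dialogue_act == "question" and gaze_target not in (None, "group"):
        return gaze_target
    # Responses are usually addressed to whoever spoke last.
    if dialogue_act == "response" and prev_speaker is not None:
        return prev_speaker
    # Otherwise assume the whole group is addressed.
    return "group"

print(rule_based_addressee("B", "question", "C"))   # -> B
print(rule_based_addressee(None, "response", "A"))  # -> A
```

    Each decision such a cascade makes can be traced back to a single named rule, which is exactly the interpretability that purely statistical classifiers lack.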

    Are you being addressed?: real-time addressee detection to support remote participants in hybrid meetings

    A meeting assistant agent for (remote) participants in hybrid meetings has been developed. Its task is to monitor the meeting conversation and notify the user when he is being addressed. This paper presents the experiments that have been performed to develop machine classifiers that decide “You are being addressed?”, where “You” refers to a fixed (remote) participant in a meeting. The experimental results back up the choices made regarding the selection of data, features and classification methods. We discuss variations of the addressee classification problem that have been considered in the literature and how suitable they are for addressee detection in a system that plays a role in a live meeting.

    Exploiting `Subjective' Annotations

    Many interesting phenomena in conversation can only be annotated as a subjective task, requiring interpretative judgements from annotators. This leads to data which is annotated with lower levels of agreement, not only due to errors in the annotation, but also due to differences in how annotators interpret conversations. This paper constitutes an attempt to find out how subjective annotations with a low level of agreement can profitably be used for machine-learning purposes. We analyse the (dis)agreements between annotators for two different cases in a multimodal annotated corpus and explicitly relate the results to the way machine-learning algorithms perform on the annotated data. Finally, we present two new concepts, namely `subjective entity' classifiers and `consensus objective' classifiers, and give recommendations for using subjective data in machine-learning applications.
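    Chance-corrected inter-annotator agreement of the kind discussed here is commonly measured with Cohen's kappa. A minimal sketch for two annotators' label sequences follows; the toy labels are assumptions, not data from the corpus.

```python
from collections import Counter

def cohens_kappa(ann1, ann2):
    """Cohen's kappa: observed agreement between two annotators, corrected
    for the agreement expected by chance from each annotator's label rates."""
    assert len(ann1) == len(ann2) and ann1
    n = len(ann1)
    observed = sum(a == b for a, b in zip(ann1, ann2)) / n
    c1, c2 = Counter(ann1), Counter(ann2)
    # Chance agreement: probability both annotators independently pick the same label.
    expected = sum(c1[l] * c2[l] for l in set(c1) | set(c2)) / (n * n)
    return (observed - expected) / (1 - expected)

# Two annotators labelling six dialogue acts as addressed to "A" or "B".
k = cohens_kappa(["A", "A", "B", "B", "A", "B"],
                 ["A", "B", "B", "B", "A", "A"])
print(round(k, 3))  # -> 0.333
```

    Low kappa values can reflect genuine interpretative differences rather than annotation errors, which is precisely the distinction the paper exploits.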

    A corpus for studying addressing behaviour in multi-party dialogues

    This paper describes a multi-modal corpus of hand-annotated meeting dialogues that was designed for studying addressing behaviour in face-to-face conversations. The corpus contains annotated dialogue acts, addressees, adjacency pairs and gaze direction. First, we describe the corpus design, presenting the meeting collection, annotation scheme and annotation tools. Then, we present the analysis of the reproducibility and stability of the annotation scheme.

    Virtual Meeting Rooms: From Observation to Simulation

    Much working time is spent in meetings and, as a consequence, meetings have become the subject of multidisciplinary research. Virtual Meeting Rooms (VMRs) are 3D virtual replicas of meeting rooms, where various modalities such as speech, gaze, distance, gestures and facial expressions can be controlled. This allows VMRs to be used to improve remote meeting participation, to visualize multimedia data and as an instrument for research into social interaction in meetings. This paper describes how these three uses can be realized in a VMR. We describe the process from observation through annotation to simulation and a model that describes the relations between the annotated features of verbal and non-verbal conversational behavior. As an example of social perception research in the VMR, we describe an experiment to assess human observers’ accuracy for head orientation.

    Twente Debate Corpus - A Multimodal Corpus for Head Movement Analysis

    This paper introduces a multimodal discussion corpus for the study of head movement and turn-taking patterns in debates. Given that participants acted either alone or in a pair, cooperation and competition and their nonverbal correlates can be analyzed. In addition to the video and audio of the recordings, the corpus contains automatically estimated head movements, and manual annotations of who is speaking and who is looking where. The corpus consists of over 2 hours of debates, in 6 groups with 18 participants in total. We describe the recording setup and present initial analyses of the recorded data. We found that the person who acted as single debater speaks more and also receives more attention than the other debaters, even when corrected for speaking time. We also found that a single debater was more likely to speak after a team debater. Future work will be aimed at further analysis of the relation between speaking and looking patterns, the outcome of the debate, and the perceived dominance of the debaters.

    Uncommon ground: the distribution of dialogue contexts

    Context in dialogue is at once regarded as a set of resources enabling successful interpretation and is itself altered by such interpretations. A key problem for models of dialogue, then, is to specify how the shared context evolves. However, these models have been developed mainly to account for the way context is built up through direct interaction between pairs of participants. In multi-party dialogue, patterns of direct interaction between participants are often more unevenly distributed. This thesis explores the effects of this characteristic on the development of shared contexts. A corpus analysis of ellipsis shows that side-participants can reach the same level of grounding as speaker and addressee. Such dialogues result in collective contexts that are not reducible to their component dyadic interactions. It is proposed that this is characteristic of dialogues in which a subgroup of the participants are organised into a party, who act as a unified aggregate to carry the conversation forward. Accordingly, the contextual increments arising from a dialogue move by one party member can affect the party as a whole. Grounding, like turn-taking, can therefore operate between parties rather than individuals. An experimental test of this idea is presented which provides evidence for the practical reality of parties. Two further experiments explore the impact of party membership on the accessibility of context. The results indicate that participants who, for a stretch of talk, fall inside and outside of the interacting parties, effect divergent contextual increments. This is evidence for the emergence of distinct dialogue contexts in the same conversation. Finally, it is argued that these findings present significant challenges for how formal models of dialogue deal with individual contributions. In particular, they point to the need for such models to index the resulting contextual increments to specific subsets of the participants.