4,637 research outputs found

    Specific facial signals associate with categories of social actions conveyed through questions

    The early recognition of fundamental social actions, like questions, is crucial for understanding the speaker’s intended message and planning a timely response in conversation. Questions themselves may express more than one social action category (e.g., an information request “What time is it?”, an invitation “Will you come to my party?”, or a criticism “Are you crazy?”). Although human language use occurs predominantly in a multimodal context, prior research on social actions has mainly focused on the verbal modality. This study breaks new ground by investigating how conversational facial signals may map onto the expression of different types of social actions conveyed through questions. The distribution, timing, and temporal organization of facial signals across social actions were analysed in a rich corpus of naturalistic, dyadic, face-to-face Dutch conversations. These social actions were: Information Requests, Understanding Checks, Self-Directed questions, Stance or Sentiment questions, Other-Initiated Repairs, Active Participation questions, questions for Structuring, Initiating or Maintaining Conversation, and Plans and Actions questions. This is the first study to reveal differences in the distribution and timing of facial signals across different types of social actions. The findings raise the possibility that facial signals facilitate social action recognition during language processing in multimodal face-to-face interaction.
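    For readers who want a concrete picture of the kind of distribution and timing analysis described above, the following Python sketch computes per-category counts and mean facial-signal onsets relative to question onset. The record layout and values are hypothetical illustrations; the study's actual data format and pipeline are not described here.

```python
# Minimal sketch of a signal-timing analysis per social action category.
# The record layout and values are assumptions, not the study's data format.
from collections import defaultdict
from statistics import mean

# (social action, facial signal, onset relative to question onset, in ms)
annotations = [
    ("Information Request", "eyebrow raise", -120),
    ("Information Request", "squint", 80),
    ("Other-Initiated Repair", "eyebrow raise", -300),
    ("Understanding Check", "head tilt", -50),
]

by_action = defaultdict(list)
for action, signal, rel_onset in annotations:
    by_action[(action, signal)].append(rel_onset)

# Distribution: how often each signal occurs per social action;
# timing: mean onset relative to the start of the question.
for (action, signal), onsets in sorted(by_action.items()):
    print(f"{action:25s} {signal:15s} n={len(onsets)} "
          f"mean onset={mean(onsets):+.0f} ms")
```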

    On the meaning potentials of pragmatic (micro-)gestures

    According to their traditional definition, gestures are visible bodily actions that are intentional and meaningful in a communicative context (Kendon 2004). As such, they can be attributed the following functions (Colletta et al. 2009): (i) a reference function (deictic or representational); (ii) a discourse-structuring function (e.g., beats or cohesive devices); (iii) an expressive function (oriented towards attitudes, mental states, stance or emotions); (iv) an interactive function (oriented towards the interlocutor and the regulation of speech).

    In contrast to representational gestures, the hypothesis is that non-representational gestures are visible bodily actions that are idiosyncratic, (mostly) unintentional, and serve pragmatic purposes in language interaction. As such, they play a role similar to that of pragmatic markers in speech (Aijmer 2013): they are metalinguistic indicators of the speaker’s mental processes and, at the same time, help the addressee build a meaningful holistic representation of the information conveyed. Particular attention will thus be paid here to non-representational spontaneous gestures, which act as emphasizing, mitigating or punctuating devices in language communication (called adaptors, beats, batons, or motor movements; see Ekman & Friesen 1969, McNeill 1992, Krauss et al. 2000).

    The following research questions will be addressed: How can we decide which nonverbal units must be accounted for to reach a better understanding of pragmatic competence in human-human interaction? To what extent is it possible (or even necessary) to integrate non-representational gestures into a consistent model for the annotation of multimodal communication?

    The present study is part of the CorpAGEst project (2013-2015), which aims to establish the gestural and verbal profile of very old people, looking at their pragmatic competence from a naturalistic perspective. Within this context, a multimodal corpus has been created, comprising 18 semi-directed, face-to-face interviews between an adult and a very old subject (9 subjects; 16.8 hrs; approx. 250,000 words). This corpus served as a basis for the annotation of nonverbal data (hand gestures, body gestures, and facial expressions). Hand gestures were decomposed into phases and annotated in terms of physical parameters (configuration, orientation, movement, and position) (Bressem 2008). Body gestures were annotated for the following articulators: head, shoulders, arms, trunk, legs, and feet. All potentially meaningful units were identified as strokes in the first step of the annotation process, including micro-movements (Ex. 1) and activities (Ex. 2). In line with the MUMIN project (Allwood et al. 2004), facial expressions were identified according to their location in the face (eyebrow, eye movement, gaze, mouth, lips) and then annotated in terms of physiological features. They were also attributed an emotion label recognized from the face (see Bolly, to appear in 2014).

    Preliminary results indicate that the use of nonverbal resources is highly idiosyncratic. For instance, a functional analysis of hand gestures showed that their distribution is not homogeneous across participants. In addition, an analysis of physiological patterning in one speaker’s face and gaze expressions revealed no clear physiological pattern specific to a given emotion. Some regularity has nevertheless been observed for the most frequently expressed emotions (e.g., surprise is mainly expressed by means of eyebrow raising, often combined with an exaggerated opening of the eyes). This multimodal, multi-level approach will give new insight into the use of (non)verbal pragmatic markers in relation to the participants’ emotional and attitudinal behavior in intergenerational interaction.
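    As a rough illustration of how such a multi-tier annotation scheme might be represented in software, the following Python sketch models a hand-gesture phase tier and a facial-expression tier. The field names and value sets are illustrative assumptions, not the actual CorpAGEst annotation scheme.

```python
# Hypothetical data model for a multi-tier annotation scheme like the one
# described above; names and value sets are illustrative, not CorpAGEst's.
# Requires Python 3.10+ for the "X | None" union syntax.
from dataclasses import dataclass
from enum import Enum

class Phase(Enum):          # gesture phases (cf. Bressem 2008)
    PREPARATION = "preparation"
    STROKE = "stroke"
    HOLD = "hold"
    RETRACTION = "retraction"

@dataclass
class HandGesture:
    phase: Phase
    configuration: str      # hand shape
    orientation: str        # palm/finger orientation
    movement: str           # direction and path
    position: str           # location in gesture space

@dataclass
class FacialExpression:
    location: str           # eyebrow, eye movement, gaze, mouth, lips
    features: list[str]     # physiological features, e.g. "raised"
    emotion: str | None     # emotion label recognized from the face

# Example: surprise expressed by eyebrow raising plus widened eyes.
surprise = FacialExpression(
    location="eyebrow",
    features=["raised", "eyes wide open"],
    emotion="surprise",
)
```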

    The evolution of (proto-)language: Focus on mechanisms

    D3.1 Instructional Designs for Real-time Feedback

    The main objective of METALOGUE is to produce a multimodal dialogue system that implements interactive behaviour that seems natural to users and is flexible enough to exploit the full potential of multimodal interaction. The METALOGUE system will be deployed in educational use-case scenarios, i.e. for training active citizens (Youth Parliament) and call centre employees. This deliverable describes the intended real-time feedback and reflection-in-action support for this training. Real-time feedback informs learners how well they are performing key skills and enables them to monitor their progress and thus reflect in action. This deliverable examines the theoretical considerations of reflection-in-action, what type of data is available and should be used, and the timing and type of real-time feedback, and finally concludes with an instructional design blueprint giving a global outline of a set of tasks with stepwise increasing complexity and the feedback proposed. The underlying research project is partly funded by the METALOGUE project. METALOGUE is a Seventh Framework Programme collaborative project funded by the European Commission, grant agreement number 611073 (http://www.metalogue.eu).
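    As a purely hypothetical illustration of how real-time feedback of this kind could be triggered from monitored skill metrics (this is not the METALOGUE design; the metric names, thresholds, and messages are invented for illustration), a simple rule table might look like this:

```python
# Hypothetical sketch of a real-time feedback rule table; metric names,
# thresholds, and messages are invented, not taken from METALOGUE.
FEEDBACK_RULES = {
    # metric: (threshold, comparison, message shown to the learner)
    "speech_rate_wpm": (180, "gt", "Slow down a little."),
    "gaze_at_audience_ratio": (0.5, "lt", "Look at your audience more."),
}

def feedback_for(metrics: dict[str, float]) -> list[str]:
    """Return feedback messages for metrics that cross their threshold."""
    messages = []
    for name, (threshold, cmp, message) in FEEDBACK_RULES.items():
        value = metrics.get(name)
        if value is None:
            continue
        if (cmp == "gt" and value > threshold) or \
           (cmp == "lt" and value < threshold):
            messages.append(message)
    return messages

print(feedback_for({"speech_rate_wpm": 195, "gaze_at_audience_ratio": 0.6}))
# -> ['Slow down a little.']
```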

    Introducing infants to referential events: A developmental study of maternal ostensive marking in French.

    It is well known that mothers give their infants lessons in conversational competence from an early age. This study considered how maternal gestures and prosody contribute to this developing competence. It examined how mothers use ostensive marking to point out common referents at different stages of development. The corpus consisted of longitudinal observations of four mother-infant dyads during free play (infants aged 0;4 to 1;1), at three stages of sensorimotor development (III, IV, and V). Four dimensions of ostensive marking were considered: (1) the span of the marked utterance (holistic vs. local), (2) the communication channel used (gestural vs. prosodic), (3) the type of gestural marker (oriented, iconic, conventional, beats), and (4) the type of prosodic marker (emphasis, prosodic cliché, reinforced nuclear stress, focal accent). Although there was no clear change in the patterns of specific types of gestural or prosodic markers, the results showed that mothers adapt their gestures to the infant's processing level. Between stages III and V, they move from holistic to local and from gestural to prosodic marking. Stage IV appears to be an excellent period for observing this transition.
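    The four coding dimensions lend themselves to a simple record structure. The Python sketch below mirrors the value sets listed in the abstract; the record structure itself is an illustrative assumption, not the authors' actual coding instrument.

```python
# Hypothetical record for the four coding dimensions of ostensive marking;
# the enumerated values follow the abstract, the structure is assumed.
# Requires Python 3.10+ for the "X | None" union syntax.
from dataclasses import dataclass
from typing import Literal

Span = Literal["holistic", "local"]
Channel = Literal["gestural", "prosodic"]
GesturalMarker = Literal["oriented", "iconic", "conventional", "beat"]
ProsodicMarker = Literal["emphasis", "prosodic cliché",
                         "reinforced nuclear stress", "focal accent"]

@dataclass
class OstensiveMark:
    stage: int                        # sensorimotor stage: 3, 4, or 5
    span: Span
    channel: Channel
    gestural: GesturalMarker | None   # set when channel == "gestural"
    prosodic: ProsodicMarker | None   # set when channel == "prosodic"

# Stage V example: local, prosodic marking realized as a focal accent.
mark = OstensiveMark(stage=5, span="local", channel="prosodic",
                     gestural=None, prosodic="focal accent")
```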