
    Does information-structural acoustic prosody change under different visibility conditions?

    Wagner P, Bryhadyr N, Schröer M, Ludusan B. Does information-structural acoustic prosody change under different visibility conditions? In: Proceedings of ICPhS. 2019.
    It is well-known that the effort invested in prosodic expression can be adjusted to the information structure of a message, but also to the characteristics of the transmission channel. To investigate whether visibly accessible cues to information structure or facial prosodic expression have a differentiated impact on acoustic prosody, we modified the visibility conditions in a spontaneous dyadic interaction task, i.e. a verbalized version of TicTacToe. The main hypothesis was that visibly accessible cues should lead to a decrease in prosodic effort. While we found that, as expected, information structure is expressed through a number of acoustic-prosodic cues, visible access to context information makes accents shorter, while access to an interlocutor's facial expression slightly increases the mean F0 of an accent.

    Pitch Accent Trajectories across Different Conditions of Visibility and Information Structure - Evidence from Spontaneous Dyadic Interaction

    Wagner P, Bryhadyr N, Schröer M. Pitch Accent Trajectories across Different Conditions of Visibility and Information Structure - Evidence from Spontaneous Dyadic Interaction. In: Proceedings of Interspeech. 2019.
    Previous research identified a differential contribution of information structure and the visibility of facial and contextual information to the acoustic-prosodic expression of pitch accents. However, it is unclear whether pitch accent shapes are affected by these conditions as well. To investigate whether varying context cues have a differentiated impact on pitch accent trajectories produced in conversational interaction, we modified the visibility conditions in a spontaneous dyadic interaction task, i.e. a verbalized version of TicTacToe. Besides varying visibility, the game task allows for measuring the impact of information structure on pitch accent trajectories, differentiating important and unpredictable game moves. Using GAMMs on four speaker groups (identified by a cluster analysis), we could isolate varying strategies of prosodic adaptation to contextual change. While only a few speaker groups showed a reaction to the availability of visible context cues (facial prosody or executed game moves), all groups differentiated the verbalization of unpredictable and predictable game moves with a group-specific trajectory adaptation. The importance of game moves resulted in differentiated adaptations in two out of four speaker groups. The detected strategic trajectory adaptations involved differences in boundary tones, adaptations of the global f0 level, or the shape of the corresponding pitch accent.

    Disfluencies in German adult- and infant-directed speech

    Bellinghausen C, Betz S, Zahner K, Sasdrich A, Schröer M, Schröder B. Disfluencies in German adult- and infant-directed speech. In: Proceedings of SEFOS: 1st International Seminar on the Foundations of Speech. Breathing, Pausing and The Voice. 2019: 44-46.

    Hesitation Processing Analysis Using Continuous Mouse-Tracking and Gamification

    Betz S, Székely E, Zarrieß S, Schröer M, Schade L, Wagner P. Hesitation Processing Analysis Using Continuous Mouse-Tracking and Gamification. In: Wendemuth A, Böck R, Siegert I, eds. Elektronische Sprachsignalverarbeitung 2020. Tagungsband der 31. Konferenz. Studientexte zur Sprachkommunikation. Vol 95. Dresden: TUD Press; 2020: 85-92.

    Investigating phonetic convergence of laughter in conversation

    Ludusan B, Schröer M, Wagner P. Investigating phonetic convergence of laughter in conversation. In: Interspeech 2022. ISCA; 2022: 1332-1336.

    A multimodal account of listener feedback in face-to-face interactions

    Rossi M, Schröer M, Ludusan B, Zellers M. A multimodal account of listener feedback in face-to-face interactions. In: Proceedings of the 20th International Congress of Phonetic Sciences. 2023: 4120-4124.
    In face-to-face interactions, the conversational feedback produced by the listener to signal attention and participation to the current speaker is multimodal: in the vocal channel, it consists of verbal expressions (e.g., "yes" or "exactly") and vocalizations without lexical content, such as non-lexical backchannels (e.g., "mhm") and laughter; in the visual channel, listener feedback includes movements of the head, such as nods or tilts. In the current research, we investigate the frequency and distribution (i.e., the location and the transition type with respect to the other interlocutor's turn) of lexical and non-lexical items, laughter and head movements, as well as the phonetic variation of vocal feedback, in face-to-face dialogues in German. We find that the feedback type influences the distribution and the variation of intensity values and voice quality and, for multimodal items, also influences the temporal alignment of the head movement with the vocal component.

    The co-use of laughter and head gestures across speech styles

    Ludusan B, Schröer M, Rossi M, Wagner P. The co-use of laughter and head gestures across speech styles. In: Interspeech 2023. ISCA; 2023: 3592-3596.

    Aspekte der Camus-Rezeption in Deutschland (West und Ost) nach 1945. Eine kritische Bilanz [Aspects of the reception of Camus in Germany (West and East) after 1945: a critical appraisal]
