Asymmetrical cognitive load imposed by processing native and non-native speech
Intonation affects information processing and comprehension. Previous research has found that some international teaching assistants (ITAs) fail to exploit English intonation, potentially posing processing difficulties for students who are native English speakers. However, researchers have also found that non-native listeners find it easier to process sentences produced by a non-native speaker with a shared language background, an effect known as the interlanguage speech intelligibility benefit (ISIB). How the classroom speech of native-speaker teaching assistants (NSTAs) and ITAs affects the processing, comprehension, and attitudes of listeners with different language backgrounds therefore needs further investigation. Using a dual-task paradigm, a comprehension questionnaire, and an attitudinal questionnaire, the present study investigates how the pronunciation and intonation of an NSTA and an ITA affect native English speakers' and Mandarin-speaking English learners' processing and comprehension of a lecture, and their attitudes towards the two instructors. The study found processing advantages when listeners shared the L1 of the speaker, but overall lecture comprehension and attitudes were unaffected. These findings support and extend prior research on ITAs' intonational patterns and the ISIB, and they have implications for research on teaching English pronunciation to non-native instructors.
Assessing the prosody of minimally to nonverbal children with autism
A procedure for assessing the basic prosodic perception and production abilities of minimally to nonverbal children and adolescents with autism spectrum disorder is described (AP: Assessment of Prosody). The procedure consists of three sections: an optional primer phase, a learning phase, and an assessment phase. It assesses both the perception of basic pitch accent structure distinctions (low versus high) and elicits expressive productions of these contrasts. The goal of the procedure is to evaluate the extent to which this population can perceive and produce prosodic distinctions. The overarching aim is to create a pre- and post-assessment that quantifies the prosodic competence and performance of minimally to nonverbal children and adolescents who are eligible for music-motor based intervention therapies (e.g. AMMT: Auditory Motor Mapping Therapy). Current and future versions of the assessment are discussed.
Dialogic Fluency - Why it Matters
Speech as an LSP: Many dialogues presented to language learners could be better described as 'interleaved mini-monologues'; their purpose is to provide examples of grammatical sentences in realistic settings. Real dialogues, on the other hand, are worked out 'live', with neither speaker knowing in detail where the conversation will lead. Speaker interaction is marked to a large extent by prosody, and even good communicators often sound disfluent if their half of the dialogue is judged in isolation.
Dialogic fluency: The objective of dialoguing L1 speakers, however, is to realise a social or personal goal, with language only one part of effective communication. Possibly the bulk of the communication devolves to prosody, shared knowledge, and body language. While this might not be a mainstream production goal for language learners, all users of English as an international language who are likely to come into contact with native speakers should be sensitised to native-speaker prosody.
Influence of live dialogue on speech production: Given that the aim of an L1-L1 dialogue is not to provide learners with sample sentences, but rather to use language as a key factor in a social encounter, learners need a tool that allows them to study the interaction of real dialogues. Of particular interest is the turn-taking behaviour of speakers, which is often flagged prosodically and produces utterances which, on the surface, seem disfluent, but which on further analysis are seen to have an interactive function. The production of such a tool is the aim of the Dynamic Speech Corpus (DSC).
Acoustic, Morphological, and Functional Aspects of `yeah/ja' in Dutch, English and German
We explore different forms and functions of one of the most common feedback expressions in Dutch, English, and German, namely 'yeah/ja', which is known for its multi-functionality and ambiguous usage in dialog. For example, it can be used as a yes-answer, as a pure continuer, or as a way to show agreement. In addition, 'yeah/ja' can occur in its single form, but it can also be combined with other particles to form multi-word expressions, especially in Dutch and German. We found substantial differences at the morpho-lexical level between the three related languages, which enhance the ambiguous character of 'yeah/ja'. An explorative analysis of the prosodic features of 'yeah/ja' showed that mainly a higher intensity is used to signal speaker incipiency across the inspected languages.
Continuous Interaction with a Virtual Human
Attentive Speaking and Active Listening require that a Virtual Human be capable of simultaneous perception/interpretation and production of communicative behavior. A Virtual Human should be able to signal its attitude and attention while it is listening to its interaction partner, attend to its interaction partner while it is speaking, and modify its communicative behavior on the fly based on what it perceives from its partner. This report presents the results of a four-week summer project that was part of eNTERFACE'10. The project resulted in progress on several aspects of continuous interaction, such as scheduling and interrupting multimodal behavior, automatic classification of listener responses, generation of response-eliciting behavior, and models for appropriate reactions to listener responses. A pilot user study was conducted with ten participants. In addition, the project yielded a number of deliverables that have been released for public access.