255 research outputs found
Offscreen and in the chair next to you: conversational agents speaking through actual human bodies
This paper demonstrates how to interact with a conversational agent that speaks through an actual human body, face-to-face and in person (i.e., offscreen). This is made possible by the cyranoid method: a technique in which a person speech shadows for a remote third party (i.e., receives their words via a covert audio-relay apparatus and repeats them aloud in real time). When a person shadows for an artificial conversational agent source, we call the resulting hybrid an "echoborg." We report a study in which people encountered conversational agents either through a human shadower face-to-face or via a text interface, under conditions where they assumed their interlocutor to be an actual person. Our results show that the perception of a conversational agent is dramatically altered when the agent is voiced by an actual, tangible person. We discuss the potential implications this methodology has for the development of conversational agents and for general person perception research.
How many words do you need to speak Arabic? An Arabic vocabulary size test
This study describes a vocabulary size test in Arabic used with 339 native-speaking learners at school and university in Saudi Arabia. Native speaker vocabulary size scores should provide targets for attainment for learners of Arabic and should inform the writers of course books and teaching materials, and the test itself should allow learners to monitor their progress towards the goal of fluency. Educated native speakers of Arabic possess a recognition vocabulary of about 25,000 words, a total which is large compared with equivalent test scores of native speakers of English. The results also suggest that acquisition increases in speed with age; this is tentatively explained by the highly regular system of morphological derivation which Arabic uses and which, it is thought, is acquired in adolescence. This again appears different from English, where the rate of acquisition appears to decline with age. While the test appears reliable and valid, there are issues surrounding the definition of a word in Arabic, and further research into how words are stored, retrieved and processed in Arabic is needed to inform the construction of further tests, which might, it is thought, profitably use a more encompassing definition of the lemma as the basis for testing.
Segmentation of British Sign Language (BSL): Mind the gap!
This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous signing, there are salient transitions between sign locations. We used the sign-spotting task to ask if and how BSL signers use these transitions in segmentation. A total of 96 real BSL signs were preceded by nonsense signs which were produced in either the target location or another location (with a small or large transition). Half of the transitions were within the same major body area (e.g., head) and half were across body areas (e.g., chest to hand). Deaf adult BSL users (a group of natives and early learners, and a group of late learners) spotted target signs best when there was a minimal transition and worst when there was a large transition. When location changes were present, both groups performed better when transitions were to a different body area than when they were within the same area. These findings suggest that transitions do not provide explicit sign-boundary cues in a modality-specific fashion. Instead, we argue that smaller transitions help recognition in a modality-general way by limiting lexical search to signs within location neighbourhoods, and that transitions across body areas also aid segmentation in a modality-general way, by providing a phonotactic cue to a sign boundary. We propose that sign segmentation is based on modality-general procedures which are core language-processing mechanisms.
Neurophysiological evidence for rapid processing of verbal and gestural information in understanding communicative actions
During everyday social interaction, gestures are a fundamental part of human communication. The communicative pragmatic role of hand gestures and their interaction with spoken language has been documented at the earliest stage of language development, in which two types of indexical gestures are most prominent: the pointing gesture for directing attention to objects and the give-me gesture for making requests. Here we study, in adult human participants, the neurophysiological signatures of gestural-linguistic acts of communicating the pragmatic intentions of naming and requesting by simultaneously presenting written words and gestures. Already at ~150 ms, brain responses diverged between naming and request actions expressed by word-gesture combination, whereas the same gestures presented in isolation elicited their earliest neurophysiological dissociations significantly later (at ~210 ms). There was an early enhancement of request-evoked brain activity as compared with naming, which was due to sources in the frontocentral cortex, consistent with access to action knowledge in request understanding. In addition, an enhanced N400-like response indicated late semantic integration of gesture-language interaction. The present study demonstrates that word-gesture combinations used to express communicative pragmatic intentions speed up the brain correlates of comprehension processes, compared with gesture-only understanding, thereby calling into question current serial linguistic models viewing pragmatic function decoding at the end of a language comprehension cascade. Instead, information about the social-interactive role of communicative acts is processed instantaneously.
- …