Transcription of child sign language: A focus on narrative
This paper describes some general difficulties in analysing child sign language data, with an emphasis on the process of transcription. The particular issue of capturing how signers encode simultaneity in narrative is discussed.
The challenges of viewpoint-taking when learning a sign language: Data from the 'frog story' in British Sign Language
Little is known about how hearing adults learn sign languages. Our objective in this study was to investigate how learners of British Sign Language (BSL) produce narratives, and we focused in particular on viewpoint-taking. Twenty-three intermediate-level learners of BSL and 10 deaf native/early signers produced a narrative in BSL using the wordless picture book Frog, where are you? (Mayer, 1969). We selected specific episodes from part of the book that provided rich opportunities for shifting between different characters and taking on different viewpoints. We coded for details of story content, the frequency and duration of the different viewpoints used, and the numbers of articulators that were used simultaneously. We found that even though learners' and deaf signers' narratives did not differ in overall duration, learners' narratives had less content. Learners used character viewpoint less frequently than deaf signers. Although learners spent just as long as deaf signers in character viewpoint, they spent longer than deaf signers in observer viewpoint. Together, these findings suggest that character viewpoint was harder than observer viewpoint for learners. Furthermore, learners were less skilled than deaf signers in using multiple articulators simultaneously. We conclude that challenges for learners of sign include taking character viewpoint when narrating a story and encoding information across multiple articulators simultaneously.
'Children are just lingual': The development of phonology in British Sign Language (BSL)
This paper explores three universal tendencies in spoken language acquisition (consonant and vowel harmony, cluster reduction and systemic simplification), using a corpus of 1018 signs from a single child exposed to British Sign Language from birth. Child signs were recorded from naturalistic deaf parent-deaf child interaction between the ages of 19 and 24 months. Child errors were analysed by handshape, movement and location segments, as well as by the accurate production of prosodic features, using an autosegmental phonology approach. Non-adult-like forms at this age were observed in 41% of handshapes, 45% of movements and 25% of locations, and 47% of signs were produced with non-adult-like prosodic features. Analysis of the results concludes that early child signing broadly follows proposed universal tendencies in language acquisition.
The first signs of language: Phonological development in British sign language
A total of 1018 signs in one deaf child's naturalistic interaction with her deaf mother, between the ages of 19 and 24 months, were analysed. This study summarises regular modification processes in the phonology of the child's signs: handshape, location, movement and prosody. Firstly, changes to signs were explained by the notion of phonological markedness. Secondly, the child managed her production of first signs through two universal processes: structural change and substitution. Constraints unique to the visual modality also caused sign-language-specific acquisition patterns, namely: more errors for handshape articulation in locations in peripheral vision, a high frequency of whole-sign repetitions, and feature-group rather than one-to-one phoneme substitutions as in spoken language development.
The Role of Multiple Articulatory Channels of Sign-Supported Speech Revealed by Visual Processing
Purpose
The use of sign-supported speech (SSS) in the education of deaf students has been recently discussed in relation to its usefulness with deaf children using cochlear implants. To clarify the benefits of SSS for comprehension, 2 eye-tracking experiments aimed to detect the extent to which signs are actively processed in this mode of communication.
Method
Participants were 36 deaf adolescents, including cochlear implant users and native deaf signers. Experiment 1 attempted to shift observers' foveal attention to the linguistic source in SSS from which most information is extracted (lip movements or signs) by magnifying the face area, thus modifying the perceptual accessibility of lip movements (magnified condition), and by constraining the visual field to either the face or the sign through a moving-window paradigm (gaze-contingent condition). Experiment 2 aimed to explore the reliance on signs in SSS by occasionally producing a mismatch between sign and speech. Participants were required to concentrate upon the orally transmitted message.
Results
In Experiment 1, analyses revealed a greater number of fixations toward the signs and a reduction in accuracy in the gaze-contingent condition across all participants. Fixations toward signs were also increased in the magnified condition. In Experiment 2, results indicated less accuracy in the mismatching condition across all participants. Participants looked more at the sign when it was inconsistent with speech.
Conclusions
All participants, even those with residual hearing, rely on signs when attending to SSS, either peripherally or through overt attention, depending on the perceptual conditions. (European Union, Grant Agreement 31674)
The ‘Role’ of the Community/Public Service Interpreter
This paper discusses the problematic nature of the concept of role as defined by professional sign language interpreters. The authors argue for a more rational approach that takes into account the expected behaviours of the monolingual participants in the interpreted interaction.
Teaching robots parametrized executable plans through spoken interaction
While operating in domestic environments, robots will necessarily face difficulties not envisioned by their developers at programming time. Moreover, the tasks to be performed by a robot will often have to be specialized and/or adapted to the needs of specific users and specific environments. Hence, learning how to operate by interacting with the user seems a key enabling feature to support the introduction of robots in everyday environments.

In this paper we contribute a novel approach for learning, through interaction with the user, task descriptions that are defined as a combination of primitive actions. The proposed approach makes a significant step forward by making task descriptions parametric with respect to domain-specific semantic categories. Moreover, by mapping the task representation into a task representation language, we are able to express complex execution paradigms and to revise the learned tasks in a high-level fashion. The approach is evaluated in multiple practical applications with a service robot.
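The core idea of this abstract can be illustrated with a minimal sketch: a learned task is a sequence of primitive actions whose parameter slots are typed by semantic categories and bound at execution time. All names below (primitives, slots, categories) are illustrative assumptions, not the authors' actual representation language.

```python
from dataclasses import dataclass

# Primitive actions the robot is assumed to already execute;
# here they just return a trace string for demonstration.
def goto(place):
    return f"goto({place})"

def grasp(obj):
    return f"grasp({obj})"

PRIMITIVES = {"goto": goto, "grasp": grasp}

@dataclass
class Step:
    action: str  # name of a primitive
    param: str   # a constant, or a slot such as "?object"

@dataclass
class TaskPlan:
    name: str
    categories: dict  # slot -> semantic category, e.g. {"?object": "Drink"}
    steps: list

    def execute(self, bindings):
        # Fill each slot from the bindings (constants pass through unchanged)
        # and run the corresponding primitive, collecting a trace.
        return [PRIMITIVES[s.action](bindings.get(s.param, s.param))
                for s in self.steps]

# A "bring" task learned once, parametric over any object of category Drink.
bring = TaskPlan(
    name="bring",
    categories={"?object": "Drink"},
    steps=[Step("goto", "kitchen"), Step("grasp", "?object"), Step("goto", "user")],
)

print(bring.execute({"?object": "coke"}))
# -> ['goto(kitchen)', 'grasp(coke)', 'goto(user)']
```

Because the plan is parametric, the same learned structure can be re-executed with a different binding (e.g. `{"?object": "juice"}`) without re-teaching the task.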