What can co-speech gestures in aphasia tell us about the relationship between language and gesture?: A single case study of a participant with Conduction Aphasia
Cross-linguistic evidence suggests that language typology influences how people gesture when using ‘manner-of-motion’ verbs (Kita 2000; Kita & Özyürek 2003) and that this is due to ‘online’ lexical and syntactic choices made at the time of speaking (Kita, Özyürek, Allen, Brown, Furman & Ishizuka, 2007). This paper relates these findings to the co-speech iconic gesture used by an English speaker with conduction aphasia (LT) and by five controls describing a Sylvester and Tweety cartoon. LT produced co-speech gestures with distinct patterns, which we relate to different aspects of her language impairment and to the lexical and syntactic choices she made during her narrative.
Grain levels in English path curvature descriptions and accompanying iconic gestures
This paper confirms that the English verb system (similar to the Finnish, Dutch and Bulgarian verb systems [22], [17]) represents path curvature at three different grain levels: neutral path curvature, global path curvature and local path curvature. We show that the three-grain-level hypothesis makes it possible to formulate constraints on English sentence structure and makes it possible to define constructions in English that refer to path curvature. We furthermore demonstrate in an experiment that the proposed English lexicalization pattern regarding path curvature, in tandem with the spatial information shown to English speakers, correctly predicts their packaging of grain levels in iconic gestures. We conclude that the data studied confirm Nikanne and Van der Zee’s [22] three-grain-level hypothesis in relation to English and Kita and Özyürek’s [11] Interface Hypothesis in relation to gesture production.
Phonetic variability and grammatical knowledge: an articulatory study of Korean place assimilation.
The study reported here uses articulatory data to investigate Korean place assimilation of coronal stops followed by labial or velar stops, both within words and across words. The results show that this place-assimilation process is highly variable, both within and across speakers, and is also sensitive to factors such as the place of articulation of the following consonant, the presence of a word boundary and, to some extent, speech rate. Gestures affected by the process are generally reduced categorically (deleted), while sporadic gradient reduction of gestures is also observed. We further compare the results for coronals to our previous findings on the assimilation of labials, discussing implications of the results for grammatical models of phonological/phonetic competence. The results suggest that speakers’ language-particular knowledge of place assimilation has to be relatively detailed and context-sensitive, and has to encode systematic regularities about its obligatory/variable application as well as its categorical/gradient realisation.
Detection of major ASL sign types in continuous signing for ASL recognition
In American Sign Language (ASL), as in other signed languages, different classes of signs (e.g., lexical signs, fingerspelled signs, and classifier constructions) have different internal structural properties. Continuous sign recognition accuracy can be improved through use of distinct recognition strategies, as well as different training datasets, for each class of signs. For these strategies to be applied, continuous signing video needs to be segmented into parts corresponding to particular classes of signs. In this paper we present a multiple-instance-learning-based segmentation system that accurately labels 91.27% of the video frames of 500 continuous utterances (including 7 different subjects) from the publicly accessible NCSLGR corpus (Neidle and Vogler, 2012). The system uses novel feature descriptors derived from both motion and shape statistics of the regions of high local motion. The system does not require a hand tracker.
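The supervision setting the abstract describes — labels available only at the level of whole utterances ("bags"), yet predictions needed per frame — is the classic multiple-instance-learning problem. A minimal max-pooling MIL sketch is shown below; the logistic frame scorer, the synthetic features, and the training scheme are illustrative assumptions, not the paper's actual system or descriptors:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mil(bags, bag_labels, dim, lr=0.1, epochs=200, rng=None):
    """Max-pooling MIL: a bag is treated as positive iff its
    highest-scoring frame is positive; only bag labels are observed."""
    rng = np.random.default_rng(0) if rng is None else rng
    w = rng.normal(scale=0.01, size=dim)
    b = 0.0
    for _ in range(epochs):
        for frames, y in zip(bags, bag_labels):
            scores = frames @ w + b
            k = int(np.argmax(scores))   # "witness" frame for this bag
            g = sigmoid(scores[k]) - y   # gradient of log-loss w.r.t. score
            w -= lr * g * frames[k]
            b -= lr * g
    return w, b

def label_frames(frames, w, b, threshold=0.5):
    """Per-frame labels from the frame scorer learned with bag labels only."""
    return (sigmoid(frames @ w + b) >= threshold).astype(int)
```

The max-pooling assumption (a positive bag contains at least one positive frame) is one common MIL formulation; it lets bag-level supervision train a scorer that is then applied frame by frame, which is the general shape of the segmentation task described.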
Deaf and hearing children's picture naming: Impact of age of acquisition and language modality on representational gesture
Stefanini, Bello, Caselli, Iverson, & Volterra (2009) reported that Italian 24–36-month-old children use a high proportion of representational gestures to accompany their spoken responses when labelling pictures. The two studies reported here used the same naming task with (1) typically developing 24–46-month-old hearing children acquiring English and (2) 24–63-month-old deaf children of deaf and hearing parents acquiring British Sign Language (BSL) and spoken English. In Study 1 children scored within the range of correct spoken responses previously reported, but produced very few representational gestures. However, when they did gesture, they expressed the same action meanings as reported in previous research. The action bias was also observed in deaf children of hearing parents in Study 2, who labelled pictures with signs, spoken words and gestures. The deaf group with deaf parents used BSL almost exclusively, with few additional gestures. The function of representational gestures in spoken and signed vocabulary development is considered in relation to differences between native and non-native sign language acquisition.
Challenges in development of the American Sign Language Lexicon Video Dataset (ASLLVD) corpus
The American Sign Language Lexicon Video Dataset (ASLLVD) consists of videos of >3,300 ASL signs in citation form, each produced by 1-6 native ASL signers, for a total of almost 9,800 tokens. This dataset, including multiple synchronized videos showing the signing from different angles, will be shared publicly once the linguistic annotations and verifications are complete. Linguistic annotations include gloss labels, sign start and end time codes, start and end handshape labels for both hands, morphological and articulatory classifications of sign type. For compound signs, the dataset includes annotations for each morpheme. To facilitate computer vision-based sign language recognition, the dataset also includes numeric ID labels for sign variants, video sequences in uncompressed-raw format, camera calibration sequences, and software for skin region extraction. We discuss here some of the challenges involved in the linguistic annotations and categorizations. We also report an example computer vision application that leverages the ASLLVD: the formulation employs a HandShapes Bayesian Network (HSBN), which models the transition probabilities between start and end handshapes in monomorphemic lexical signs. Further details and statistics for the ASLLVD dataset, as well as information about annotation conventions, are available from http://www.bu.edu/asllrp/lexicon
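The HSBN component mentioned above models transition probabilities between start and end handshapes. In much-simplified form, the core of such a model can be illustrated as a smoothed conditional frequency table over (start, end) handshape pairs; the handshape labels and the add-alpha smoothing here are assumptions for illustration, not the actual HSBN formulation:

```python
from collections import Counter, defaultdict

def fit_handshape_transitions(pairs, alpha=1.0):
    """Estimate P(end handshape | start handshape) from (start, end)
    training pairs, with add-alpha smoothing over the observed
    end-handshape inventory."""
    ends = sorted({e for _, e in pairs})
    counts = defaultdict(Counter)
    for s, e in pairs:
        counts[s][e] += 1
    probs = {}
    for s, c in counts.items():
        total = sum(c.values()) + alpha * len(ends)
        probs[s] = {e: (c[e] + alpha) / total for e in ends}
    return probs

# Hypothetical training pairs using common ASL handshape names ("5", "S", "B").
pairs = [("5", "S"), ("5", "S"), ("5", "B"), ("B", "B")]
probs = fit_handshape_transitions(pairs, alpha=1.0)
```

A full Bayesian network would additionally encode dependencies beyond this single conditional (e.g., between the two hands), but the conditional table conveys the basic idea of scoring how plausible an end handshape is given an observed start handshape.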
NEW shared & interconnected ASL resources: SignStream® 3 Software; DAI 2 for web access to linguistically annotated video corpora; and a sign bank
2017 marked the release of a new version of SignStream® software, designed to facilitate linguistic analysis of ASL video. SignStream® provides an intuitive interface for labeling and time-aligning manual and non-manual components of the signing. Version 3 has many new features. For example, it enables representation of morpho-phonological information, including display of handshapes. An expanding ASL video corpus, annotated through use of SignStream®, is shared publicly on the Web. This corpus (video plus annotations) is Web-accessible—browsable, searchable, and downloadable—thanks to a new, improved version of our Data Access Interface: DAI 2. DAI 2 also offers Web access to a brand new Sign Bank, containing about 10,000 examples of about 3,000 distinct signs, as produced by up to 9 different ASL signers. This Sign Bank is also directly accessible from within SignStream®, thereby boosting the efficiency and consistency of annotation; new items can also be added to the Sign Bank. Soon to be integrated into SignStream® 3 and DAI 2 are visualizations of computer-generated analyses of the video: graphical display of eyebrow height, eye aperture, an
How Do Gestures Influence Thinking and Speaking? The Gesture-for-Conceptualization Hypothesis.
Peer reviewed Postprint