5,013 research outputs found
What can co-speech gestures in aphasia tell us about the relationship between language and gesture?: A single case study of a participant with Conduction Aphasia
Cross-linguistic evidence suggests that language typology influences how people gesture when using "manner-of-motion" verbs (Kita 2000; Kita & Özyürek 2003), and that this is due to "online" lexical and syntactic choices made at the time of speaking (Kita, Özyürek, Allen, Brown, Furman & Ishizuka, 2007). This paper attempts to relate these findings to the co-speech iconic gesture used by an English speaker with conduction aphasia (LT) and five controls describing a Sylvester and Tweety cartoon. LT produced co-speech gestures showing distinct patterns, which we relate to different aspects of her language impairment and to the lexical and syntactic choices she made during her narrative.
Layers of generality and types of generalization in pattern activities
Pattern generalization is considered one of the prominent routes for introducing students to algebra. However, not all generalizations are algebraic. In using pattern generalization as a route to algebra, we, as teachers and educators, must therefore remain vigilant not to confound algebraic generalizations with other forms of dealing with the general. But how can we distinguish between algebraic and non-algebraic generalizations? On epistemological and semiotic grounds, in this article I suggest a characterization of algebraic generalizations. This characterization helps to bring about a typology of algebraic and arithmetic generalizations. The typology is illustrated with classroom examples.
Towards the Design of a Natural User Interface for Performing and Learning Musical Gestures
A large variety of musical instruments, whether acoustic or digital, are based on a keyboard scheme. Keyboard instruments can produce sounds through acoustic means, but they are increasingly used to control digital sound synthesis processes in contemporary music. Interestingly, across all these possible sonic outcomes, the input remains a musical gesture. In this paper we present the conceptualization of a Natural User Interface (NUI), named the Intangible Musical Instrument (IMI), aiming to support both the learning of expert musical gestures and the performance of music as a unified user experience. The IMI is designed to recognize metaphors of pianistic gestures, focusing on subtle uses of the fingers and upper body. Based on a typology of musical gestures, a gesture vocabulary has been created and hierarchized from basic to complex. These piano-like gestures are finally recognized and transformed into sounds.
Grain levels in English path curvature descriptions and accompanying iconic gestures
This paper confirms that the English verb system (like the Finnish, Dutch and Bulgarian verb systems [22], [17]) represents path curvature at three different grain levels: neutral path curvature, global path curvature and local path curvature. We show that the three-grain-level hypothesis makes it possible to formulate constraints on English sentence structure and to define constructions in English that refer to path curvature. We furthermore demonstrate in an experiment that the proposed English lexicalization pattern regarding path curvature, in tandem with the spatial information shown to English speakers, correctly predicts their packaging of grain levels in iconic gestures. We conclude that the data studied confirm Nikanne and Van der Zee's [22] three-grain-level hypothesis in relation to English and Kita and Özyürek's [11] Interface Hypothesis in relation to gesture production.
The challenges of viewpoint-taking when learning a sign language: Data from the 'frog story' in British Sign Language
Little is known about how hearing adults learn sign languages. Our objective in this study was to investigate how learners of British Sign Language (BSL) produce narratives, and we focused in particular on viewpoint-taking. Twenty-three intermediate-level learners of BSL and 10 deaf native/early signers produced a narrative in BSL using the wordless picture book Frog, where are you? (Mayer, 1969). We selected specific episodes from part of the book that provided rich opportunities for shifting between different characters and taking on different viewpoints. We coded for details of story content, the frequency and duration of the different viewpoints used, and the number of articulators used simultaneously. We found that even though learners' and deaf signers' narratives did not differ in overall duration, learners' narratives had less content. Learners used character viewpoint less frequently than deaf signers. Although learners spent just as long as deaf signers in character viewpoint, they spent longer than deaf signers in observer viewpoint. Together, these findings suggest that character viewpoint was harder than observer viewpoint for learners. Furthermore, learners were less skilled than deaf signers at using multiple articulators simultaneously. We conclude that the challenges for learners of sign include taking character viewpoint when narrating a story and encoding information across multiple articulators simultaneously.
French-English bilingual children's motion event communication shows crosslinguistic influence in speech but not gesture
Bilinguals sometimes show crosslinguistic influence from one language to the other while speaking (or gesturing). Adult bilinguals have also shown crosslinguistic influence in gesture as well as speech, suggesting an underlying conceptualization that is similar for both languages. The primary purpose of the present study is to test whether the same is true of simultaneous French-English bilingual children speaking and gesturing about motion. If so, they might show patterns different from both French and English monolinguals. Furthermore, we examined whether there were developmental changes between early and middle childhood. French-English bilingual and French and English monolingual children watched two cartoons and described them. In speech, the bilinguals differed from the English monolinguals, producing more lexicalizations of the Path of motion in number of tokens but not of types; they did not differ from the French monolinguals. In gesture, all children produced a majority of Path gestures. There were few age-related changes. We argue that in speech the bilinguals conceptualize their two languages differently but show some crosslinguistic influence due to processing. Gestures may not show this same pattern because they serve to highlight the important parts of the discourse.
Data-based analysis of speech and gesture: the Bielefeld Speech and Gesture Alignment corpus (SaGA) and its applications
Lücking A, Bergmann K, Hahn F, Kopp S, Rieser H. Data-based analysis of speech and gesture: the Bielefeld Speech and Gesture Alignment corpus (SaGA) and its applications. Journal on Multimodal User Interfaces. 2013;7(1-2):5-18.
Communicating face-to-face, interlocutors frequently produce multimodal meaning packages consisting of speech and accompanying gestures. We discuss a systematically annotated speech and gesture corpus consisting of 25 route-and-landmark-description dialogues, the Bielefeld Speech and Gesture Alignment corpus (SaGA), collected in experimental face-to-face settings. We first describe the primary and secondary data of the corpus and its reliability assessment. We then go into some of the projects carried out using SaGA, demonstrating the wide range of its usability: on the empirical side, there is work on gesture typology, on individual and contextual parameters influencing gesture production, and on gestures' functions for dialogue structure. Speech-gesture interfaces have been established by extending unification-based grammars. In addition, the development of a computational model of speech-gesture alignment and its implementation constitutes a research line we focus on.
- …