Gesture production and comprehension in children with specific language impairment
Children with specific language impairment (SLI) have difficulties with spoken language. However, some recent research suggests that these impairments reflect underlying cognitive limitations. Studying gesture may inform us, clinically and theoretically, about the nature of the association between language and cognition. A total of 20 children with SLI and 19 typically developing (TD) peers were assessed on a novel measure of gesture production. Children were also assessed for sentence comprehension errors in a speech-gesture integration task. Children with SLI performed comparably to their peers on gesture production but performed less well when comprehending integrated speech and gesture. Error patterns revealed a significant group interaction: children with SLI made more gesture-based errors, whilst TD children made semantically based ones. Children with SLI accessed and produced lexically encoded gestures despite having impaired spoken vocabulary, and this group also showed stronger associations between gesture and language than TD children. When comprehension breaks down in SLI, gesture may be relied on over speech, whilst TD children prefer spoken cues. The findings suggest that for children with SLI gesture still scaffolds language development, more so than for TD peers who have outgrown their earlier reliance on gesture. Future clinical implications may include standardized assessment of symbolic gesture and classroom-based gesture support for clinical groups.
A novel word-independent gesture-typing continuous authentication scheme for mobile devices
In this study, we present a new continuous authentication scheme for gesture-typing on mobile devices. Ours is the first scheme to authenticate gesture-typing interactions in a word-independent manner. The scheme relies on groupings of features extracted from a word gesture after it has been reduced to the parts common to all gestures. We show that movement sensors are also important in differentiating between users. We describe the feature extraction process, analyse our proposed feature set, and present the unique process of our authentication scheme. We collected our own gesture-typing dataset, including data recorded while sitting, standing and walking for realism. We test our features against state-of-the-art touch-screen interaction features and compare feature extraction times on real mobile devices. Our scheme authenticates users with an equal error rate (EER) of 3.58% for a single word gesture; the EER falls to 0.81% when 3 word gestures are used to authenticate.
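The equal error rate quoted above is the operating point at which the false-accept rate (impostor attempts accepted) equals the false-reject rate (genuine attempts rejected). A minimal sketch of estimating an EER from two score lists, using hypothetical scores rather than the paper's data or code:

```python
def equal_error_rate(genuine, impostor):
    """Estimate the EER: find the threshold where the false-accept rate
    (impostor scores at or above threshold) is closest to the
    false-reject rate (genuine scores below threshold).
    Higher scores mean "more likely the device owner"."""
    best_far, best_frr = 1.0, 0.0  # threshold below all scores: accept everyone
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        if abs(far - frr) < abs(best_far - best_frr):
            best_far, best_frr = far, frr
    # Report the midpoint of the two rates at the crossover point.
    return (best_far + best_frr) / 2.0

# Perfectly separable scores give an EER of 0.0.
print(equal_error_rate([0.9, 0.8, 0.7], [0.1, 0.2, 0.3]))
```

In practice EER is usually read off a full ROC or DET curve; this threshold sweep is just the simplest illustration of the quantity being reported.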
Gesture and speech integration: an exploratory study of a man with aphasia
Background: In order to fully comprehend a speaker’s intention in everyday communication, we integrate information from multiple sources including gesture and speech. There are no published studies that have explored the impact of aphasia on iconic co-speech gesture and speech integration.
Aims: To explore the impact of aphasia on co-speech gesture and speech integration in one participant with aphasia (SR) and 20 age-matched control participants.
Methods & Procedures: SR and 20 control participants watched video vignettes of people producing 21 verb phrases in 3 different conditions: verbal only (V), gesture only (G), and verbal and gesture combined (VG). Participants were required to select the corresponding picture from one of four alternatives: an integration target, a verbal-only match, a gesture-only match, and an unrelated foil. The probability of choosing the integration target in the VG condition beyond what is expected from the probabilities of choosing it in the V and G conditions was termed the multi-modal gain (MMG).
Outcomes & Results: SR obtained a significantly lower multi-modal gain score than the control participants (p < 0.05). Error analysis indicated that in the speech and gesture integration task SR relied on gesture to decode the message, whereas the control participants relied on speech. Further analysis of the speech-only and gesture-only tasks indicated that SR had intact gesture comprehension but impaired spoken word comprehension.
Conclusions & Implications: The results confirm the findings of Records (1994), who reported that impaired verbal comprehension leads to a greater reliance on gesture to decode messages. Moreover, multi-modal integration of information from speech and iconic gesture can be impaired in aphasia. The findings highlight the need for further exploration of the impact of aphasia on gesture and speech integration.
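The multi-modal gain described in the Methods can be formalized in more than one way. One plausible reading, an assumption on our part rather than the study's published formula, treats the expected VG accuracy as the independent combination of the V-only and G-only accuracies:

```python
def multimodal_gain(p_v, p_g, p_vg):
    """Gain from integrated speech + gesture beyond what either cue
    alone would predict. `expected` assumes the two cues act as
    independent chances of identifying the target (an illustrative
    assumption; the study's exact definition may differ)."""
    expected = 1.0 - (1.0 - p_v) * (1.0 - p_g)
    return p_vg - expected
```

Under this reading, a participant scoring 0.5 in both single-cue conditions would be expected to reach 0.75 in VG by chance combination alone; only performance above that counts as genuine integration.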
Classifying types of gesture and inferring intent
To infer intent from gesture, a rudimentary classification of gestures into five main classes is introduced. The classification is intended as a basis for incorporating the understanding of gesture into human-robot interaction (HRI). Some requirements for the operational classification of gesture by a robot interacting with humans are also suggested.