Gesture and speech integration: an exploratory study of a man with aphasia
Background: In order to fully comprehend a speaker’s intention in everyday communication, we integrate information from multiple sources including gesture and speech. There are no published studies that have explored the impact of aphasia on iconic co-speech gesture and speech integration.
Aims: To explore the impact of aphasia on co-speech gesture and speech integration in one participant with aphasia (SR) and 20 age-matched control participants.
Methods & Procedures: SR and 20 control participants watched video vignettes of people producing 21 verb phrases in 3 different conditions: verbal only (V), gesture only (G), and verbal and gesture combined (VG). Participants were required to select a corresponding picture from one of four alternatives: an integration target, a verbal-only match, a gesture-only match, and an unrelated foil. The probability of choosing the integration target in VG that goes beyond what is expected from the probabilities of choosing the integration target in V and G was referred to as multi-modal gain (MMG).
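The abstract does not state the formula for MMG; one plausible formalisation, assuming the benchmark is the probability of selecting the integration target if the verbal and gestural cues were used independently, is

\[
\mathrm{MMG} = P(\mathrm{VG}) - \bigl[P(\mathrm{V}) + P(\mathrm{G}) - P(\mathrm{V})\,P(\mathrm{G})\bigr],
\]

where \(P(\cdot)\) denotes the probability of selecting the integration target in the given condition; a positive MMG then indicates integration beyond what either modality provides on its own.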
Outcomes & Results: SR obtained a significantly lower multi-modal gain score than the control participants (p < 0.05). Error analysis indicated that in the speech and gesture integration tasks, SR relied on gesture to decode the message, whereas the control participants relied on speech. Further analysis of the speech only and gesture only tasks indicated that SR had intact gesture comprehension but impaired spoken word comprehension.
Conclusions & Implications: The results confirm findings by Records (1994), which reported that impaired verbal comprehension leads to a greater reliance on gesture to decode messages. Moreover, multi-modal integration of information from speech and iconic gesture can be impaired in aphasia. The findings highlight the need for further exploration of the impact of aphasia on gesture and speech integration.
A fast algorithm for vision-based hand gesture recognition for robot control
We propose a fast algorithm for automatically recognizing a limited set of gestures from hand images for a robot control application. Hand gesture recognition is a challenging problem in its general form. We consider a fixed set of manual commands and a reasonably structured environment, and develop a simple, yet effective, procedure for gesture recognition. Our approach contains steps for segmenting the hand region, locating the fingers, and finally classifying the gesture. The algorithm is invariant to translation, rotation, and scale of the hand. We demonstrate the effectiveness of the technique on real imagery.
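The abstract does not include the procedure itself; a minimal Python/OpenCV sketch of one such segment → locate-fingers → classify pipeline, using skin-colour thresholding and convexity defects (both assumptions for illustration, not the paper's published method), might look like this:

```python
import cv2
import numpy as np

def classify_gesture(bgr_image):
    """Classify a hand gesture in a BGR frame by counting extended fingers."""
    # 1. Segment the hand region with a simple HSV skin-colour threshold.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # OpenCV >= 4 returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return "no hand"
    hand = max(contours, key=cv2.contourArea)

    # 2. Locate fingers via convexity defects of the hand contour: deep
    #    defects correspond to the valleys between extended fingers.
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    gaps = 0
    if defects is not None:
        for start, end, far, depth in defects[:, 0]:
            if depth > 10000:   # depth is fixed-point (1/256-pixel units)
                gaps += 1

    # 3. Classify by finger count. Counting defects rather than absolute
    #    pixel positions keeps the rule invariant to translation, rotation,
    #    and scale of the hand, mirroring the property claimed above.
    return "fist" if gaps == 0 else f"{gaps + 1}-finger gesture"
```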
Wearable Capacitive-based Wrist-worn Gesture Sensing System
Gesture control plays an increasingly significant role in modern human-machine interaction. This paper presents a method of gesture recognition that uses flexible capacitive pressure sensors attached to the user's wrist, as an alternative to computer-vision approaches and finger-mounted sensing. The method is based on the pressure variations around the wrist as the hand gesture changes, and flexible, ultrathin capacitive pressure sensors are deployed to capture these variations. Embedding the sensors on a flexible substrate and reading out their capacitance requires a reliable microcontroller-based approach for measuring small capacitance changes. This paper addresses these challenges, then collects and processes the measured capacitance values through a LabVIEW program that reconstructs the gesture on a computer. Compared to conventional approaches, the wrist-worn sensing method offers a low-cost, lightweight, wearable prototype on the user's body. The experimental results show the potential and benefits of this approach and confirm that accuracy and the number of recognizable gestures can be improved by increasing the number of sensors.
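As a rough illustration of the reconstruction step, the sketch below matches a frame of per-sensor capacitance readings against calibrated gesture templates; the sensor count, template values, and nearest-neighbour rule are assumptions for illustration (the paper itself processes the readings in LabVIEW):

```python
import numpy as np

# One calibration vector (in picofarads) per known gesture, one entry per
# capacitive sensor placed around the wrist. Values are hypothetical.
TEMPLATES = {
    "fist":      np.array([1.42, 1.10, 0.95, 1.30]),
    "open_hand": np.array([1.05, 1.25, 1.18, 0.98]),
    "pinch":     np.array([1.20, 0.92, 1.33, 1.15]),
}

def classify(reading: np.ndarray) -> str:
    """Return the template gesture closest (Euclidean distance) to the reading."""
    return min(TEMPLATES, key=lambda g: np.linalg.norm(reading - TEMPLATES[g]))

if __name__ == "__main__":
    sample = np.array([1.40, 1.12, 0.97, 1.28])  # simulated sensor frame
    print(classify(sample))                       # -> "fist"
```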
Gesture Typing on Virtual Tabletop: Effect of Input Dimensions on Performance
The association of tabletop interaction with gesture typing presents interaction potential for situationally or physically impaired users. In this work, we use depth cameras to create touch surfaces on regular tabletops. We describe our prototype system and report on a supervised learning approach to fingertip touch classification. We follow with a gesture typing study that compares our system with a control tablet scenario and explores the influence of the input size and aspect ratio of the virtual surface on text input performance. We show that novice users perform with the same error rate, at half the input rate, with our system as compared to the control condition; that an input size between A5 and A4 presents the best tradeoff between performance and user preference; and that users' indirect tracking ability seems to be the overall performance-limiting factor.
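The abstract does not describe the classifier itself; a minimal sketch of a supervised touch/hover classifier over hypothetical per-frame depth features (the feature names, synthetic data, and random-forest choice are all assumptions, not the paper's published pipeline) could look like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: [fingertip height above table (mm),
# contact-blob area (px), vertical velocity (mm/frame)].
# Label 1 = touch, 0 = hover.
touch = np.column_stack([rng.normal(2, 1, 500),
                         rng.normal(40, 8, 500),
                         rng.normal(-1, 0.5, 500)])
hover = np.column_stack([rng.normal(15, 5, 500),
                         rng.normal(10, 4, 500),
                         rng.normal(0, 1, 500)])
X = np.vstack([touch, hover])
y = np.concatenate([np.ones(500), np.zeros(500)])

# Train a random forest on held-out splits and report accuracy.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```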