    Investigating the dual function of gesture in blind and visually impaired children. (Poster)

    Co-speech gesture research explores the role of gesture in communication, i.e. whether gestures are intended for the listener/audience (e.g. Mol et al., 2009; Alibali et al., 2001; Holler & Beattie, 2003) or support the process of speech production (Kita & Davies, 2009; Hostetter et al., 2007). To investigate the role of gesture in communication, we turn to blind and visually impaired speakers, whose opportunities to learn gestures visually are limited (cf. Iverson & Goldin-Meadow, 1998, 2001). The present study provides insight into the nature and occurrence of co-speech gestures in the spontaneous speech of blind, severely visually impaired, and sighted individuals. Participants were asked to read a short story (either in print or in Braille) and to re-tell it to the interviewer. Care was taken to establish an environment in which the participants would feel safe and would not refrain from gesturing for fear of hurting themselves or others. We predicted that if blind speakers gestured less than their visually impaired peers, this would suggest that gesture is to some extent acquired through visual instruction. However, following Iverson et al. (2000) and Iverson and Goldin-Meadow (1998), we hypothesized that gesture is present in the language of the blind participants despite the absence of visual gestural stimuli during the language-learning process, but with differences in gesture form, type, and function. The study explores and categorizes these differences, with regard to how sensory references are visible in the gestures of participants with varying degrees of sight impairment. Regardless of these dissimilarities, the presence of gesture in both the blind and the visually impaired individuals points towards a dual function of co-speech gestures, i.e. as a device for both the speaker and their interlocutor.

    HAND GESTURE RECOGNITION

    Sign language is the most important means of communication between hearing people and people with hearing or speech impairments, so building a bridge between two people who want to communicate is a necessary task. Many algorithms have been developed in recent years to help people who do not know sign language, but very few achieve good results. The difficult parts of hand gesture recognition are segmenting the hand from the background and identifying the gesture itself. This paper describes several possible approaches to segmentation using RGB color spaces and models, and identifies the algorithm that performs this task with the highest accuracy. Experiments were conducted on different gestures and the accuracy of each approach was recorded. The algorithms were implemented in the MATLAB programming language.
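    As an illustration of the kind of RGB-based segmentation the paper surveys, the sketch below applies a widely used per-pixel skin-colour threshold directly in RGB space. The specific thresholds (a Kovac-style uniform-daylight rule) and the use of Python/NumPy in place of the paper's MATLAB implementation are assumptions for illustration, not the paper's own algorithm.

        # Minimal sketch: per-pixel RGB skin-colour thresholding for hand
        # segmentation. The thresholds follow a common Kovac-style rule and
        # are an illustrative assumption, not taken from the paper.
        import numpy as np

        def segment_hand_rgb(image: np.ndarray) -> np.ndarray:
            """Return a boolean mask of likely skin pixels in an RGB image (H, W, 3)."""
            r = image[..., 0].astype(int)
            g = image[..., 1].astype(int)
            b = image[..., 2].astype(int)
            return (
                (r > 95) & (g > 40) & (b > 20)              # bright enough in each channel
                & (np.maximum(np.maximum(r, g), b)
                   - np.minimum(np.minimum(r, g), b) > 15)  # enough colour spread
                & (np.abs(r - g) > 15) & (r > g) & (r > b)  # red dominates, as in skin tones
            )

    In practice such a mask would be cleaned with morphological operations before the segmented hand region is passed on to gesture classification.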

    Imitation, mirror neurons and autism

    Various deficits in the cognitive functioning of people with autism have been documented in recent years, but these provide only partial explanations for the condition. We focus instead on an imitative disturbance involving difficulties both in copying actions and in inhibiting more stereotyped mimicking, such as echolalia. A candidate for the neural basis of this disturbance may be found in a recently discovered class of neurons in frontal cortex, 'mirror neurons' (MNs). These neurons show activity in relation both to specific actions performed by self and matching actions performed by others, providing a potential bridge between minds. MN systems exist in primates without imitative and 'theory of mind' abilities, and we suggest that, for them to have become utilized for social cognitive functions, sophisticated cortical neuronal systems have evolved in which MNs function as key elements. Early developmental failures of MN systems are likely to result in a consequent cascade of developmental impairments characterised by the clinical syndrome of autism.

    Affective Medicine: a review of Affective Computing efforts in Medical Informatics

    Background: Affective computing (AC) is concerned with emotional interactions performed with and through computers. It is defined as “computing that relates to, arises from, or deliberately influences emotions”. AC enables investigation and understanding of the relation between human emotions and health, as well as the application of assistive and useful technologies in the medical domain. Objectives: 1) To review the general state of the art in AC and its applications in medicine, and 2) to establish synergies between the research communities of AC and medical informatics. Methods: Aspects of the human affective state as a determinant of human health are discussed, coupled with an illustration of significant AC research and the related literature. Moreover, affective communication channels are described and their range of application fields is explored through illustrative examples. Results: The conferences, European research projects, and research publications presented here illustrate the recent increase of interest in AC by the medical community. Tele-home healthcare, ambient intelligence (AmI), ubiquitous monitoring, e-learning, and virtual communities with emotionally expressive characters for elderly or impaired people are a few of the areas where the potential of AC has been realized and applications have emerged. Conclusions: A number of gaps can potentially be overcome through the synergy of AC and medical informatics. The application of AC technologies parallels the advancement of the existing state of the art and the introduction of new methods. The body of work and the projects reviewed in this paper point to an ambitious and promising synergetic future for the field of affective medicine.

    Syntax
