
    Eye-to-face Gaze in Stuttered Versus Fluent Speech

    The present study investigated the effects of viewing audio-visual presentations of stuttered versus fluent speech samples on the ocular reactions of participants. Ten adults (5 males, 5 females) aged 18-55, with no history of speech, language, or hearing disorders, participated in the study. Participants were shown three 30-second audio-visual recordings of stuttered speech and three 30-second audio-visual recordings of fluent speech, with a three-second break (black screen) between videos. All three individuals who stutter were rated as ‘severe’ (SSI-3; Riley, 1994), exhibiting high levels of struggle with overt stuttering behaviors such as repetitions, prolongations, and silent postural fixations on speech sounds, in addition to tension-filled secondary behaviors such as head jerks, lip protrusion, and facial grimaces. During the stuttered and fluent conditions, viewers’ ocular behaviors, including pupillary movement, fixation time, eye-blink, and relative change in pupil diameter, were recorded using the Arrington ViewPoint Eye-Tracker infrared camera and the system’s data analysis software (e.g., Wong, Cronin-Golomb, & Neargarder, 2005) via a 2.8 GHz Dell Optiplex GX270 computer. For all ocular measures except fixation time, there were significant (p < .05) differences for stuttered relative to fluent speech: the number of pupillary movements, blinks, and relative change in pupil diameter increased, while time fixated decreased, when viewing stuttered relative to fluent samples. Although the difference was not significant, participants fixated, or directed their attention, for less time during stuttered than fluent conditions, suggesting decreased attention overall during stuttered speech samples. Because eye-blink (as a measure of the startle reflex) and pupil dilation are resistant to voluntary control or entirely under autonomic nervous system control, the significant increases in both for stuttered relative to fluent speech indicate a visceral reaction to stuttering
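
    The abstract does not name the statistical test behind the reported p < .05 differences; the sketch below is a hypothetical Python illustration of a paired, per-participant comparison of one ocular measure between the stuttered and fluent viewing conditions. The blink counts are invented for the example and the paired t-test is an assumption, not the study's stated analysis.

```python
import numpy as np
from scipy import stats

def compare_conditions(stuttered: np.ndarray, fluent: np.ndarray, alpha: float = 0.05) -> dict:
    """Paired comparison of one ocular measure (e.g., blinks per clip)
    for the same participants across the two viewing conditions."""
    t, p = stats.ttest_rel(stuttered, fluent)  # paired t-test (assumed; not stated in the abstract)
    return {
        "mean_diff": float(np.mean(stuttered - fluent)),
        "t": float(t),
        "p": float(p),
        "significant": bool(p < alpha),
    }

# Illustrative blink counts for ten participants (values invented for the example):
stuttered_blinks = np.array([14, 11, 17, 12, 15, 13, 16, 18, 12, 14])
fluent_blinks    = np.array([ 9,  8, 12, 10, 11,  9, 13, 12,  8, 10])
print(compare_conditions(stuttered_blinks, fluent_blinks))
```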

    Aerospace Medicine and Biology: A continuing bibliography with indexes, supplement 165, March 1977

    This bibliography lists 198 reports, articles, and other documents introduced into the NASA scientific and technical information system in February 1977

    Aerospace Medicine and Biology: A continuing bibliography with indexes (supplement 141)

    This special bibliography lists 267 reports, articles, and other documents introduced into the NASA scientific and technical information system in April 1975

    Multi-Sensory Emotion Recognition with Speech and Facial Expression

    Emotion plays an important role in human beings’ daily lives. Understanding emotions and recognizing how to react to others’ feelings are fundamental to engaging in successful social interactions. Emotion recognition is not only significant in daily life but also a hot topic in academic research, as new techniques such as emotion recognition from speech context reveal how emotions relate to the content being uttered. The demand for and importance of emotion recognition have increased greatly in recent years in many applications, such as video games, human-computer interaction, cognitive computing, and affective computing. Emotion recognition can draw on many sources, including text, speech, hand and body gestures, and facial expression. Presently, most emotion recognition methods use only one of these sources. Human emotion changes from moment to moment, and relying on a single source may not reflect it correctly. This research is motivated by the desire to understand and evaluate human emotion through multiple channels, such as speech and facial expressions. In this dissertation, multi-sensory emotion recognition is explored. The proposed framework can recognize emotion from speech, from facial expression, or from both. The system has three main parts: the facial emotion recognizer, the speech emotion recognizer, and the information fusion. The information fusion part takes the results from the speech and facial emotion recognizers, integrates them with a novel weighted method, and produces the final emotion decision. Experiments show that the weighted fusion method improves accuracy by an average of 3.66% compared to fusion without weighting, and the recognition rate improves by up to 18.27% and 5.66% compared to speech emotion recognition and facial expression recognition alone, respectively. By improving emotion recognition accuracy, the proposed multi-sensory system can help improve the naturalness of human-computer interaction
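
    The dissertation's exact weighting scheme is not given in the abstract; the following is a minimal Python sketch of decision-level weighted fusion in general, with an assumed label set and illustrative weights rather than the author's actual values.

```python
import numpy as np

EMOTIONS = ["angry", "happy", "neutral", "sad"]  # hypothetical label set

def weighted_fusion(p_speech: np.ndarray, p_face: np.ndarray,
                    w_speech: float = 0.4, w_face: float = 0.6) -> str:
    """Combine per-class probabilities from the speech and facial recognizers
    and return the emotion with the highest fused score."""
    fused = w_speech * p_speech + w_face * p_face
    return EMOTIONS[int(np.argmax(fused))]

# Usage with illustrative recognizer outputs (probabilities over EMOTIONS):
p_speech = np.array([0.10, 0.55, 0.25, 0.10])
p_face   = np.array([0.05, 0.40, 0.45, 0.10])
print(weighted_fusion(p_speech, p_face))  # -> "happy" with these example weights
```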

    Brainstem dysfunction protects against syncope in multiple sclerosis

    BACKGROUND: The aim of this study was to investigate the correlation between autonomic dysfunction in multiple sclerosis (MS) and brainstem dysfunction evaluated with the vestibular evoked myogenic potentials (VEMP) score and conventional MRI. ----- METHODS: Forty-five patients with a diagnosis of clinically isolated syndrome (CIS) suggestive of MS were enrolled. VEMP, heart rate and blood pressure responses to the Valsalva maneuver, heart rate response to deep breathing, and a pain-provoked head-up tilt table test, as well as brain and spinal cord MRI, were performed. ----- RESULTS: There was no difference in the VEMP score between patients with and without signs of sympathetic or parasympathetic dysfunction. However, patients with syncope had a significantly lower VEMP score compared to patients without syncope (p<0.01). Patients with orthostatic hypotension (OH) showed a trend toward a higher VEMP score compared to patients without OH (p=0.06). There was no difference in the presence of brainstem or cervical spinal cord lesions between patients with or without any of the studied autonomic abnormalities. A model consisting of a VEMP score of ≤5 and normal MRI of the midbrain and cervical spinal cord had a sensitivity and specificity of 83% for identifying patients with MS who may develop syncope. ----- CONCLUSIONS: Pathophysiological mechanisms underlying functional and structural disorders of the autonomic nervous system in MS differ significantly. While preserved brainstem function is needed for the development of syncope, structural disorders such as OH could be associated with brainstem dysfunction
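
    A minimal sketch of the reported decision rule (VEMP score ≤ 5 plus normal midbrain and cervical spinal cord MRI). The thresholds follow the abstract, but the function name and boolean inputs are assumptions introduced purely for illustration.

```python
def syncope_risk(vemp_score: int, midbrain_mri_normal: bool,
                 cervical_cord_mri_normal: bool) -> bool:
    """Return True if a patient matches the profile the study associates with
    possible syncope (reported sensitivity and specificity of 83%)."""
    return vemp_score <= 5 and midbrain_mri_normal and cervical_cord_mri_normal

# Usage with hypothetical patients:
print(syncope_risk(4, True, True))   # True: matches the reported risk profile
print(syncope_risk(7, True, False))  # False: VEMP score too high, abnormal cord MRI
```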

    Impaired socio-emotional processing in a developmental music disorder

    Some individuals show a congenital deficit in music processing despite normal peripheral auditory processing, cognitive functioning, and music exposure. This condition, termed congenital amusia, is typically characterized in terms of its profile of musical and pitch difficulties. Here, we examine whether amusia also affects socio-emotional processing, probing the auditory and visual domains. Thirteen adults with amusia and 11 controls completed two experiments. In Experiment 1, participants judged emotions in emotional speech prosody, nonverbal vocalizations (e.g., crying), and (silent) facial expressions. Target emotions were: amusement, anger, disgust, fear, pleasure, relief, and sadness. Compared to controls, amusics were impaired for all stimulus types, and the magnitude of their impairment was similar for auditory and visual emotions. In Experiment 2, participants listened to spontaneous and posed laughs and either inferred the authenticity of the speaker’s state or judged how contagious the laughs were. Amusics showed decreased sensitivity to laughter authenticity but normal contagion responses. Across the experiments, mixed-effects models revealed that the acoustic features of vocal signals predicted socio-emotional evaluations in both groups, but the profile of predictive acoustic features differed in amusia. These findings suggest that a developmental music disorder can affect socio-emotional cognition in subtle ways, an impairment not restricted to auditory information
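
    The abstract does not list the acoustic predictors or model formula; the sketch below shows, under assumed column names, the general kind of mixed-effects analysis described (acoustic features predicting socio-emotional ratings with a random intercept per participant), using statsmodels.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_rating_model(df: pd.DataFrame):
    """Fit a linear mixed-effects model of trial-level emotion ratings.
    Assumed columns (hypothetical, not the paper's actual variables):
    rating, f0_mean, duration, spectral_centroid, group, participant."""
    model = smf.mixedlm(
        "rating ~ f0_mean + duration + spectral_centroid + group",
        data=df,
        groups=df["participant"],  # random intercept per participant
    )
    return model.fit()

# Usage, given a prepared trial-level DataFrame:
# result = fit_rating_model(trial_data)
# print(result.summary())
```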

    Sound, mind and emotion - research and aspects

    Sound, mind and emotion. Report no. 8 (seminars on 1 January, 11 April, and 25 May 2008). Contents: Patrik Juslin: Sound of music - Seven ways in which the brain can evoke emotions from sounds; Ulf Rosenhall: Auditory problems - not only an issue of impaired hearing; Sören Nielzén, Olle Olsson, Johan Källstrand & Sara Nehlstedt: The role of psychoacoustics for research on neuro-psychiatric states - Theoretical basis of the S-Detect method; Gerhard Andersson: Tinnitus and Hypersensitivity to Sounds; Kerstin Persson Waye: "It sounds like a buzzing wasp in my head" - children's perspective on the sound environment in pre-schools; Björn Lyxell, Erik Borg & Inga-Stina Olsson: Cognitive skills and perceived effort in active and passive listening in a naturalistic sound environment; Åke Iwar: Sound, Catastrophe and Trauma; Kerstin Bergh Johannesson: Sounds as triggers - How traumatic memories can be processed by Eye Movement Desensitization and Reprocessing

    Update On Hearing Loss

    Update on Hearing Loss encompasses both the theoretical background on the different forms of hearing loss and a detailed knowledge on state-of-the-art treatment for hearing loss, written for clinicians by specialists and researchers. Realizing the complexity of hearing loss has highlighted the importance of interdisciplinary research. Therefore, all the authors contributing to this book were chosen from many different specialties of medicine, including surgery, psychology, and neuroscience, and came from diverse areas of expertise, such as neurology, otolaryngology, psychiatry, and clinical and experimental audiology

    An examination of speech characteristics under conditions of affective reactivity and variable cognitive load as distinguishing feigned and genuine schizophrenia

    Proper assessment of schizophrenia is complicated by the need for clinicians to be cognizant of the possibility of malingering, i.e., the intentional production of false or grossly exaggerated symptoms motivated by external incentives. Current standardized schizophrenia malingering detection methods rely on endorsement of improbable or exaggerated, mainly positive, symptoms. However, these detection methods may be vulnerable to manipulation by sophisticated malingerers, particularly those coached on response-style assessment strategies. This paper explored supplemental variables for schizophrenia malingering detection, using a simulation study design to compare schizophrenia patients, a community sample instructed to feign schizophrenia symptoms, and an honest-responder control group on behavioral speech characteristics indicative of thought disorganization (i.e., referential disturbances) and negative symptoms (i.e., alogia and flat affect) under experimentally manipulated conditions of affective reactivity and cognitive load. Results indicated that the feigning group was distinguishable from the schizophrenia group by the magnitude of speech disorganization under affective reactivity, owing to feigners’ inability to mimic the schizophrenia group’s referential failures, and by the magnitude of flat affect under affective reactivity and cognitive load, owing to feigners’ excessively impaired use of formant inflection (i.e., vocal inflection related to tongue movement). The feigning and schizophrenia groups were also distinguishable by feigners’ excessive impairment in cognitive task performance, observed both in group comparisons and in differential patterns of change in cognitive task accuracy across cognitive load conditions
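
    The paper's feature-extraction pipeline is not described in the abstract; as a purely hypothetical illustration, "formant inflection" could be quantified as the variability of an already-extracted second-formant (F2) track, as in this sketch.

```python
import numpy as np

def formant_inflection(f2_track_hz: np.ndarray) -> float:
    """Standard deviation of an F2 track in Hz (NaNs from unvoiced frames
    ignored); lower values would correspond to flatter vocal affect."""
    return float(np.nanstd(f2_track_hz))

# Usage with invented tracks: a monotone speaker vs. a more inflected one.
flat_track      = np.array([1480, 1490, 1485, 1500, np.nan, 1495, 1488])
inflected_track = np.array([1300, 1650, 1420, 1800, np.nan, 1550, 1700])
print(formant_inflection(flat_track), formant_inflection(inflected_track))
```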