
    Aerospace Medicine and Biology: A continuing bibliography with indexes (supplement 314)

    This bibliography lists 139 reports, articles, and other documents introduced into the NASA scientific and technical information system in August 1988.

    Multimodal Polynomial Fusion for Detecting Driver Distraction

    Distracted driving is deadly, claiming 3,477 lives in the U.S. in 2015 alone. Although there has been a considerable amount of research on modeling the distracted behavior of drivers under various conditions, accurate automatic detection using multiple modalities, and especially the contribution of the speech modality to improved accuracy, has received little attention. This paper introduces a new multimodal dataset for distracted driving behavior and discusses automatic distraction detection using features from three modalities: facial expression, speech, and car signals. Detailed multimodal feature analysis shows that adding more modalities monotonically increases the predictive accuracy of the model. Finally, a simple and effective multimodal fusion technique using a polynomial fusion layer shows superior distraction detection results compared to the baseline SVM and neural network models. (Comment: INTERSPEECH 201)
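
The paper's exact polynomial fusion formulation is not spelled out in the abstract, so the sketch below is only a rough illustration of the general idea: project each modality (facial expression, speech, car signals) into a shared space and mix first-order terms with pairwise, second-order products before classification. The use of PyTorch, the layer sizes, the activation, and all names are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a second-order polynomial fusion layer for three
# modalities; dimensions, names, and activation are illustrative assumptions.
import torch
import torch.nn as nn

class PolynomialFusion(nn.Module):
    def __init__(self, dim_face, dim_speech, dim_car, hidden=64):
        super().__init__()
        # Project each modality's feature vector into a shared hidden space.
        self.proj_face = nn.Linear(dim_face, hidden)
        self.proj_speech = nn.Linear(dim_speech, hidden)
        self.proj_car = nn.Linear(dim_car, hidden)
        # Classify over first-order terms plus pairwise products.
        self.classifier = nn.Linear(hidden * 6, 1)

    def forward(self, face, speech, car):
        f = torch.tanh(self.proj_face(face))
        s = torch.tanh(self.proj_speech(speech))
        c = torch.tanh(self.proj_car(car))
        # Concatenate unimodal terms and element-wise cross-modal products.
        fused = torch.cat([f, s, c, f * s, f * c, s * c], dim=-1)
        return self.classifier(fused)  # logit for "distracted"

# Example forward pass with random features for a batch of 8 frames.
model = PolynomialFusion(dim_face=128, dim_speech=40, dim_car=10)
logits = model(torch.randn(8, 128), torch.randn(8, 40), torch.randn(8, 10))
```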

    On the analysis of EEG power, frequency and asymmetry in Parkinson's disease during emotion processing

    Objective: While Parkinson’s disease (PD) has traditionally been described as a movement disorder, there is growing evidence of disruption in emotion information processing associated with the disease. The aim of this study was to investigate whether there are specific electroencephalographic (EEG) characteristics that discriminate PD patients and normal controls during emotion information processing. Method: EEG recordings from 14 scalp sites were collected from 20 PD patients and 30 age-matched normal controls. Multimodal (audio-visual) stimuli were presented to evoke specific targeted emotional states such as happiness, sadness, fear, anger, surprise and disgust. Absolute and relative power, frequency and asymmetry measures derived from spectrally analyzed EEGs were subjected to repeated-measures ANOVA for group comparisons, as well as to discriminant function analysis to examine their utility as classification indices. In addition, subjective ratings were obtained for the emotional stimuli used. Results: Behaviorally, PD patients showed no impairments in emotion recognition as measured by subjective ratings. Compared with normal controls, PD patients evidenced smaller overall relative delta, theta, alpha and beta power, and, at bilateral anterior regions, smaller absolute theta, alpha, and beta power and higher mean total spectrum frequency across different emotional states. Inter-hemispheric theta, alpha, and beta power asymmetry index differences were noted, with controls exhibiting greater right- than left-hemisphere activation, whereas patients showed reduced intra-hemispheric alpha power asymmetry bilaterally at all regions. Discriminant analysis correctly classified 95.0% of the patients and controls during emotional stimuli. Conclusion: These distributed spectral powers in different frequency bands might provide meaningful information about emotional processing in PD patients.
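
The abstract names the spectral measures but not their exact formulas, so the following is a hedged sketch using common conventions only: Welch power spectral density, standard band edges, and a log-ratio asymmetry index. None of these choices are taken from the study itself.

```python
# Illustrative band-power and asymmetry computations for one EEG channel /
# electrode pair; band edges and formulas are common conventions, not the
# study's exact methodology.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs):
    """Absolute and relative band power from a 1-D EEG signal sampled at fs Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
    df = freqs[1] - freqs[0]
    total = psd[(freqs >= 1) & (freqs <= 30)].sum() * df
    absolute, relative = {}, {}
    for name, (lo, hi) in BANDS.items():
        band = psd[(freqs >= lo) & (freqs < hi)].sum() * df
        absolute[name], relative[name] = band, band / total
    return absolute, relative

def asymmetry_index(power_right, power_left):
    # One common convention: ln(right) - ln(left); positive values indicate
    # relatively greater right-hemisphere power.
    return np.log(power_right) - np.log(power_left)
```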

    The importance of "scaffolding" in clinical approach to deafness across the lifespan

    Throughout this thesis, the concept of scaffolding is used as a common thread (fil rouge) running through the chapters. What I mean by a “scaffolding approach” is an integrated, multidisciplinary clinical and research methodology for hearing impairments that considers the person as a whole; an approach that must be continuously adapted and harmonized with the individual, according to their progress, their limits and resources, and in consideration of their audiological, cognitive, emotional, personal, and social characteristics. The following studies of our research group are presented: a study (2020) designed to assess the effects of parent training (PT) on enhancing children’s communication development (chapter two), and two studies (2016; 2020) concerning variables influencing the comprehension of emotions and core executive functions in deaf children with cochlear implants (chapters three and four). Chapter five presents our Mind-Active Communication program: its main topics and aims, the multidisciplinary organization of group and individual sessions, and the materials and methodology used. Finally, a preliminary evaluation exploring the effects of this multidisciplinary rehabilitative program on quality of life, psychological wellbeing, and hearing abilities in a sample of elderly cochlear implant recipients is reported.

    Inter-hemispheric EEG coherence analysis in Parkinson's disease: Assessing brain activity during emotion processing

    Parkinson’s disease (PD) is not only characterized by its prominent motor symptoms but is also associated with disturbances in cognitive and emotional functioning. The objective of the present study was to investigate the influence of emotion processing on inter-hemispheric electroencephalography (EEG) coherence in PD. Multimodal emotional stimuli (happiness, sadness, fear, anger, surprise, and disgust) were presented to 20 PD patients and 30 age-, education level-, and gender-matched healthy controls (HC) while EEG was recorded. Inter-hemispheric coherence was computed from seven homologous EEG electrode pairs (AF3–AF4, F7–F8, F3–F4, FC5–FC6, T7–T8, P7–P8, and O1–O2) for the delta, theta, alpha, beta, and gamma frequency bands. In addition, subjective ratings were obtained for a representative set of the emotional stimuli. Inter-hemispherically, PD patients showed significantly lower coherence in the theta, alpha, beta, and gamma frequency bands than HC during emotion processing. No significant changes were found in delta-band coherence. We also found that PD patients were more impaired in recognizing negative emotions (sadness, fear, anger, and disgust) than relatively positive emotions (happiness and surprise). Behaviorally, PD patients did not show impairment in emotion recognition as measured by subjective ratings. These findings suggest that PD patients may have an impairment of inter-hemispheric functional connectivity (i.e., a decline in cortical connectivity) during emotion processing. This study may raise awareness of the value of EEG-based emotional response studies in clinical practice for uncovering potential neurophysiological abnormalities.
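
As a rough illustration of the analysis described above, the sketch below computes magnitude-squared coherence for the seven homologous electrode pairs and averages it within each frequency band. The electrode pairs and band names come from the abstract; the scipy-based implementation, band edges, and window length are assumptions.

```python
# Hedged sketch: inter-hemispheric coherence per homologous pair and band.
import numpy as np
from scipy.signal import coherence

PAIRS = [("AF3", "AF4"), ("F7", "F8"), ("F3", "F4"),
         ("FC5", "FC6"), ("T7", "T8"), ("P7", "P8"), ("O1", "O2")]
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def interhemispheric_coherence(eeg, fs):
    """eeg: dict mapping channel name -> 1-D signal array sampled at fs Hz."""
    results = {}
    for left, right in PAIRS:
        f, cxy = coherence(eeg[left], eeg[right], fs=fs, nperseg=int(2 * fs))
        results[(left, right)] = {
            band: float(cxy[(f >= lo) & (f < hi)].mean())
            for band, (lo, hi) in BANDS.items()
        }
    return results
```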

    Real-world listening effort in adult cochlear implant users

    Cochlear implants (CIs) are a treatment to provide a sense of hearing to individuals with severe-to-profound sensorineural hearing loss. Even when optimal levels of intelligibility are achieved after cochlear implantation, many CI users complain about the effort required to understand speech in everyday life contexts. This sustained mental exertion, commonly known as “listening effort”, can negatively affect their lives, especially regarding communication, participation, and long-term cognitive health. This thesis aimed to evaluate the listening effort experienced by CI recipients in real-world sound scenarios. The research focused on social listening situations that are particularly common in everyday life, such as having conversations in a busy café or communicating through video calls. Additionally, some situations that prevailed during the COVID-19 pandemic were also examined (e.g., listening to someone who is wearing a facemask). Multimodal measures of listening effort were employed throughout the research project to obtain a comprehensive assessment. Nonetheless, the primary focus was on measures that objectively quantify the cognitive demands of listening through a CI. To that end, we used a combination of physiological measures: functional near-infrared spectroscopy (fNIRS) brain imaging and simultaneous pupillometry, both of which are compatible with CIs and capable of providing insights into the neural underpinnings of effortful listening. We also proposed a novel approach to quantify “listening efficiency”, an integrated behavioural measure that reflects both intelligibility and listening effort. We successfully applied these assessments to 168 CI users and 75 age-matched normally hearing (NH) controls who were recruited throughout the project. We found that CI users experienced high levels of listening effort, even when their intelligibility was optimal under highly favourable listening conditions. Objective measures revealed that CI listeners exhibited significantly lower listening efficiency than NH controls when listening to speech under moderate levels of cafeteria background noise and when attending online video calls. Physiologically, they showed elevated levels of arousal, as revealed by larger and more prolonged pupil dilations relative to baseline compared with NH controls, suggesting high cognitive load and an increased need for recovery. The importance of visual cues was evident; the presence of video and captions benefited CI recipients by considerably improving their listening efficiency during online communication. These results were consistent with their subjective ratings of effort, both in the experiments and in daily life. These findings provide objective evidence of the cognitive burden endured by CI listeners in everyday life. In addition, the objective assessments proposed here proved feasible for quantifying the performance and cognitive demands of listening through a CI. In particular, listening efficiency showed sensitivity to differences in task demands and between groups, even when intelligibility remained near perfect. We argue that listening efficiency holds potential to become a CI outcome measure.
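
The abstract does not define how “listening efficiency” is computed, so the snippet below is purely a hypothetical illustration of one way intelligibility and response speed could be folded into a single behavioural score (a rate-correct-style measure). The function name and formula are assumptions, not the thesis's actual metric.

```python
# Hypothetical rate-correct-style "listening efficiency" score: correct
# responses per second of processing time. Not the thesis's actual definition.
import numpy as np

def listening_efficiency(correct, response_times):
    """correct: per-trial booleans; response_times: per-trial durations in seconds."""
    correct = np.asarray(correct, dtype=float)
    response_times = np.asarray(response_times, dtype=float)
    return correct.sum() / response_times.sum()

# Example: 18 of 20 keywords correct with ~1.4 s response time per trial.
score = listening_efficiency([True] * 18 + [False] * 2, [1.4] * 20)
```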

    About Face: Seeing the Talker Improves Spoken Word Recognition But Increases Listening Effort

    It is widely accepted that seeing a talker improves a listener’s ability to understand what a talker is saying in background noise (e.g., Erber, 1969; Sumby & Pollack, 1954). The literature is mixed, however, regarding the influence of the visual modality on the listening effort required to recognize speech (e.g., Fraser, Gagné, Alepins, & Dubois, 2010; Sommers & Phelps, 2016). Here, we present data showing that even when the visual modality robustly benefits recognition, processing audiovisual speech can still result in greater cognitive load than processing speech in the auditory modality alone. Using a dual-task paradigm, we show that the costs associated with audiovisual speech processing are more pronounced in easy listening conditions, in which speech can be recognized at high rates in the auditory modality alone; indeed, effort did not differ between audiovisual and audio-only conditions when the background noise was presented at a more difficult level. Further, we show that though these effects replicate with different stimuli and participants, they do not emerge when effort is assessed with a recall paradigm rather than a dual-task paradigm. Together, these results suggest that the widely cited audiovisual recognition benefit may come at a cost under more favorable listening conditions, and add to the growing body of research suggesting that various measures of effort may not be tapping into the same underlying construct (Strand et al., 2018).
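
For readers unfamiliar with the dual-task logic used here, the sketch below shows one conventional way effort is indexed in such paradigms: the proportional slowing of secondary-task responses under dual-task relative to single-task conditions. The cost formula and the illustrative numbers are assumptions, not data from the study.

```python
# Hedged sketch of a proportional dual-task cost; larger values indicate more
# listening effort. Formula and numbers are illustrative, not from the paper.
import numpy as np

def dual_task_cost(rt_single, rt_dual):
    """Proportional slowing of secondary-task response times (seconds)."""
    rt_single = np.asarray(rt_single, dtype=float)
    rt_dual = np.asarray(rt_dual, dtype=float)
    return (rt_dual.mean() - rt_single.mean()) / rt_single.mean()

# Illustrative comparison: a larger cost for audiovisual than audio-only trials
# would mirror the pattern reported above under easy listening conditions.
cost_av = dual_task_cost([0.45, 0.50, 0.48], [0.62, 0.66, 0.60])
cost_ao = dual_task_cost([0.45, 0.50, 0.48], [0.55, 0.57, 0.53])
```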