
    Audiovisual integration of emotional signals from others' social interactions

    Audiovisual perception of emotions has typically been examined using displays of a solitary character (e.g., the face-voice and/or body-sound of one actor). However, in real life humans often face more complex multisensory social situations, involving more than one person. Here we ask whether the audiovisual facilitation in emotion recognition previously found in simpler social situations extends to more complex, ecologically valid situations. Stimuli consisting of the biological motion and voice of two interacting agents were used in two experiments. In Experiment 1, participants were presented with visual, auditory, auditory filtered/noisy, and audiovisual congruent and incongruent clips. We asked participants to judge whether the two agents were interacting happily or angrily. In Experiment 2, another group of participants repeated the same task as in Experiment 1 while trying to ignore either the visual or the auditory information. The findings from both experiments indicate that when the reliability of the auditory cue was decreased, participants weighted the visual cue more heavily in their emotional judgments. This in turn translated into increased emotion-recognition accuracy for the multisensory condition. Our findings thus point to a common mechanism of multisensory integration of emotional signals irrespective of social stimulus complexity.
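    The reliability-based re-weighting described above is consistent with the standard maximum-likelihood model of cue combination, in which each cue is weighted by its inverse variance. Below is a minimal sketch of that model; the numbers are hypothetical and are not taken from the study.

```python
import numpy as np

def mle_combine(mu_a, var_a, mu_v, var_v):
    """Combine auditory and visual estimates, weighting each cue by its
    reliability (inverse variance), as in maximum-likelihood cue integration."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    w_v = 1 - w_a
    mu_av = w_a * mu_a + w_v * mu_v
    var_av = 1 / (1 / var_a + 1 / var_v)  # fused estimate is never less reliable
    return mu_av, var_av, w_a, w_v

# Clear audio: both cues weighted equally (hypothetical values).
print(mle_combine(mu_a=0.6, var_a=1.0, mu_v=0.4, var_v=1.0))
# Filtered/noisy audio: auditory variance rises, so the visual weight w_v rises.
print(mle_combine(mu_a=0.6, var_a=4.0, mu_v=0.4, var_v=1.0))
```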

    Audiovisual speech perception in cochlear implant patients

    Hearing with a cochlear implant (CI) is very different from a normal-hearing (NH) experience, as the CI can provide only limited auditory input. Nevertheless, the central auditory system is capable of learning how to interpret such limited auditory input such that it can extract meaningful information within a few months after implant switch-on. The capacity of the auditory cortex to adapt to new auditory stimuli is an example of intra-modal plasticity — changes within a sensory cortical region as a result of altered statistics of the respective sensory input. However, hearing deprivation before implantation and restoration of hearing capacities after implantation can also induce cross-modal plasticity — changes within a sensory cortical region as a result of altered statistics of a different sensory input. Thereby, a preserved cortical region can, for example, support a deprived cortical region, as in the case of CI users, who have been shown to exhibit cross-modal visual-cortex activation for purely auditory stimuli. Before implantation, during the period of hearing deprivation, CI users typically rely on additional visual cues such as lip movements for understanding speech. Therefore, it has been suggested that CI users show a pronounced binding of the auditory and visual systems, which may allow them to integrate auditory and visual speech information more efficiently. The projects included in this thesis investigate auditory, and particularly audiovisual, speech processing in CI users. Four event-related potential (ERP) studies approach the matter from different perspectives, each with a distinct focus. The first project investigates how audiovisually presented syllables are processed by CI users with bilateral hearing loss compared to NH controls. Previous ERP studies employing non-linguistic stimuli, and studies using different neuroimaging techniques, found distinct audiovisual interactions in CI users. However, the precise timecourse of cross-modal visual-cortex recruitment and enhanced audiovisual interaction for speech-related stimuli is unknown. Our ERP study fills this gap by presenting differences in the timecourse of audiovisual interactions, as well as in cortical source configurations, between CI users and NH controls. The second study focuses on auditory processing in single-sided deaf (SSD) CI users. SSD CI patients experience a maximally asymmetric hearing condition, as they have a CI on one ear and a contralateral NH ear. Despite the intact ear, several behavioural studies have demonstrated a variety of beneficial effects of restoring binaural hearing, but only a few ERP studies investigate auditory processing in SSD CI users. Our study investigates whether the side of implantation affects auditory processing and whether auditory processing via the NH ear of SSD CI users works similarly to that in NH controls. Given the distinct hearing conditions of SSD CI users, the question arises whether there are any quantifiable differences between CI users with unilateral and with bilateral hearing loss. In general, ERP studies on SSD CI users are rather scarce, and there is no study on audiovisual processing in particular; nor are there reports on the lip-reading abilities of SSD CI users. Therefore, in the third project we extend the first study by including SSD CI users as a third experimental group.
The study discusses differences and similarities among CI users with bilateral hearing loss, CI users with unilateral hearing loss, and NH controls, and provides — for the first time — insights into audiovisual interactions in SSD CI users. The fourth project investigates the influence of background noise on audiovisual interactions in CI users and whether a noise-reduction algorithm can modulate these interactions. It is known that, in environments with competing background noise, listeners generally rely more strongly on visual cues for understanding speech, and that such situations are particularly difficult for CI users. As shown in previous auditory behavioural studies, the recently introduced noise-reduction algorithm "ForwardFocus" can be a useful aid in such cases. However, whether the algorithm is also beneficial in audiovisual conditions, and whether its use has a measurable effect on cortical processing, has not yet been investigated. In this ERP study, we address these questions with an auditory and audiovisual syllable discrimination task. Taken together, the projects included in this thesis contribute to a better understanding of auditory, and especially audiovisual, speech processing in CI users, revealing distinct processing strategies employed to overcome the limited input provided by a CI. The results have clinical implications, as they suggest that clinical hearing assessments, which are currently purely auditory, should be extended to audiovisual assessments. Furthermore, they imply that rehabilitation programmes that include audiovisual training methods may benefit all CI user groups in quickly achieving the best possible implantation outcome.

    Aspects of Joint Attention in Autism Spectrum Disorder: Links to Sensory Processing, Social Competence, Maternal Attention, and Contextual Factors

    Background. Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by deficits in social interaction and communication, and by restricted and repetitive behaviors (American Psychiatric Association, 2013). Given the heterogeneity of ASD, it is important to understand individual differences within the disorder that are related to cognitive and language development, and how such differences may be related to differences in caregiver behavior or aspects of the social environment. Joint attention is an important component of early social communication and is considered to be a "core deficit" of ASD (Kasari, Freeman, Paparella, Wong, Kwon, & Gulsrud, 2005). Individual differences in joint attention during infancy have been shown to relate to language and cognitive development (Mundy, Block, Delgado, Pomares, Van Hecke, & Parlade, 2007; Nichols, Martin, & Fox, 2005). Therefore, joint attention serves an essential role in the study of child behavior within ASD across development. The present study consists of two manuscripts that explored how joint attention in children with ASD related to sensory responsiveness and social competence (Study 1), and how child joint attention related to mother attention and contextual factors (Study 2). Specifically, Study 1 investigated relations among children's sensory responses, dyadic orienting, joint attention, and their subsequent social competence with peers. Participants were 38 children (18 children with ASD and 20 developmentally matched children with typical development) between the ages of 2.75 and 6.5 years. Observational coding was conducted to assess children's joint attention and dyadic orienting in a structured social communication task. Children's sensory responses and social competence were measured with parent report. Group differences were observed in children's joint attention, sensory responses, multisensory dyadic orienting, and social competence, with the ASD group showing significantly greater social impairment and more atypical sensory responses than their typically developing peers. Atypical sensory responses were negatively associated with individual differences on social competence subscales. Interaction effects were observed between diagnostic group and sensory responses, with diagnostic group moderating the relation between sensory responses and both joint attention and social competence abilities. Study 2 investigated relations between child joint attention and mother attention during three social contexts (competing demands, teaching, and free play) among 44 children with ASD between the ages of 2.5 and 5.6 years, and their mothers. Observational coding was conducted to assess children's joint attention and mothers' dyadic orienting. Children's expressive and receptive language was measured by teacher report. The rate of children's joint attention and of mothers' dyadic orienting differed depending on the context of the interaction. Children's joint attention, expressive and receptive language, age, ASD severity, and mothers' dyadic orienting were related, and these relations differed by context. Child initiating joint attention (IJA) was also related to mother attention, and this relation was moderated by the child's expressive and receptive language. A temporal contingency analysis revealed a bidirectional association between child IJA and mother attention: child IJA predicted subsequent mother attention, and mother attention predicted subsequent child IJA.
When the sample was split by children's language ability (i.e., minimally verbal and verbal groups), there were group-by-receptive-language and group-by-expressive-language interactions on the contingency between child IJA and subsequent mother attention. Conclusion. The results from Study 1 and Study 2 suggest that individual differences in children with ASD, including their sensory responses and social competence, as well as mother attention and contextual factors, are related to children's joint attention. When addressing theory and interventions for children with ASD, it is important to consider children's language and sensory sensitivities, the demands of the interactive context, and factors related to the mother's attention and her approach to her child.
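    The moderation effects reported in both studies are conventionally tested with an interaction term in a regression model. The sketch below illustrates such a test; the file and column names (`joint_attention`, `sensory`, `group`) are hypothetical, and this is not the authors' analysis script.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per child; hypothetical columns:
#   joint_attention - observational joint-attention score
#   sensory         - parent-reported sensory responsiveness
#   group           - 'ASD' or 'TD' diagnostic group
df = pd.read_csv("joint_attention.csv")  # hypothetical file

# Moderation is the group x sensory interaction: a significant interaction
# term means the sensory-responses/joint-attention relation differs by group.
model = smf.ols("joint_attention ~ sensory * C(group)", data=df).fit()
print(model.summary())
```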

    A possible neurophysiological correlate of audiovisual binding and unbinding in speech perception

    Audiovisual (AV) speech integration of auditory and visual streams generally results in fusion into a single percept. One classical example is the McGurk effect, in which incongruent auditory and visual speech signals may lead to a fused percept different from either the visual or the auditory input. In a previous set of experiments, we showed that if a McGurk stimulus is preceded by an incongruent AV context (composed of incongruent auditory and visual speech materials), the amount of McGurk fusion is markedly decreased. We interpreted this result in the framework of a two-stage "binding and fusion" model of AV speech perception, with an early AV binding stage controlling the fusion/decision process and likely to produce "unbinding", with less fusion, if the context is incoherent. In order to provide further electrophysiological evidence for this binding/unbinding stage, early auditory evoked N1/P2 responses were compared here during auditory, congruent AV, and incongruent AV speech perception, following either a coherent or an incoherent AV context. Following the coherent context, in line with previous electroencephalographic/magnetoencephalographic studies, visual information in the congruent AV condition was found to modify auditory evoked potentials, with a latency decrease of P2 responses compared to the auditory condition. Importantly, both P2 amplitude and latency in the congruent AV condition increased from the coherent to the incoherent context. Although potential contamination by visual responses from the visual cortex cannot be discarded, our results may provide a neurophysiological correlate of an early binding/unbinding process operating on AV interactions.
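    For readers unfamiliar with these measures, N1/P2 peak amplitude and latency are typically extracted from the averaged evoked response within fixed time windows. A minimal sketch using MNE-Python follows; the file name and time windows are assumptions, not details from the study.

```python
import mne

# Load pre-computed epochs for one condition (hypothetical file name).
epochs = mne.read_epochs("sub01_AV_congruent-epo.fif")
evoked = epochs.average()

# Assumed windows: N1 as a negative peak around 70-150 ms,
# P2 as a positive peak around 150-250 ms after sound onset.
n1_ch, n1_lat, n1_amp = evoked.get_peak(tmin=0.07, tmax=0.15, mode="neg",
                                        return_amplitude=True)
p2_ch, p2_lat, p2_amp = evoked.get_peak(tmin=0.15, tmax=0.25, mode="pos",
                                        return_amplitude=True)
print(f"N1: {n1_lat * 1e3:.0f} ms at {n1_ch}; P2: {p2_lat * 1e3:.0f} ms at {p2_ch}")
```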

    Electrophysiological differences and similarities in audiovisual speech processing in CI users with unilateral and bilateral hearing loss.

    Hearing with a cochlear implant (CI) is limited compared to natural hearing. Although CI users may develop compensatory strategies, it is currently unknown whether these extend from auditory to visual functions, and whether such strategies vary between different CI user groups. To better understand the experience-dependent contributions to multisensory plasticity in audiovisual speech perception, the current event-related potential (ERP) study presented syllables in auditory, visual, and audiovisual conditions to CI users with unilateral or bilateral hearing loss, as well as to normal-hearing (NH) controls. Behavioural results revealed shorter audiovisual response times compared to unisensory conditions for all groups. Multisensory integration was confirmed by electrical neuroimaging, including topographic and ERP source analysis, showing a visual modulation of the auditory-cortex response at N1 and P2 latency. However, CI users with bilateral hearing loss showed a distinct pattern of N1 topography, indicating a stronger visual impact on auditory speech processing compared to CI users with unilateral hearing loss and NH listeners. Furthermore, both CI user groups showed a delayed auditory-cortex activation, an additional recruitment of the visual cortex, and better lip-reading ability compared to NH listeners. In sum, these results extend previous findings by showing distinct multisensory processes not only between NH listeners and CI users in general, but even between CI users with unilateral and bilateral hearing loss. However, the comparable enhancement of lip-reading ability and visual-cortex activation in both CI user groups suggests that these visual improvements emerge regardless of the hearing status of the contralateral ear.
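    Topographic differences such as the distinct N1 pattern reported here are commonly quantified with reference-free measures like global field power (GFP) and global map dissimilarity (GMD). The numpy sketch below implements both; the scalp maps are random placeholders, not data from the study.

```python
import numpy as np

def gfp(v):
    """Global field power: the spatial standard deviation of an
    average-referenced scalp map (one value per electrode)."""
    v = v - v.mean()
    return np.sqrt(np.mean(v ** 2))

def gmd(v1, v2):
    """Global map dissimilarity between two maps after average-referencing
    and GFP normalisation: 0 = identical topography, 2 = inverted topography,
    independent of overall map strength."""
    u1 = (v1 - v1.mean()) / gfp(v1)
    u2 = (v2 - v2.mean()) / gfp(v2)
    return np.sqrt(np.mean((u1 - u2) ** 2))

# Hypothetical N1 maps (64 electrodes) for two groups.
rng = np.random.default_rng(0)
map_bilateral, map_unilateral = rng.standard_normal((2, 64))
print(f"GMD between group maps: {gmd(map_bilateral, map_unilateral):.2f}")
```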

    The timecourse of multisensory speech processing in unilaterally stimulated cochlear implant users revealed by ERPs.

    A cochlear implant (CI) is an auditory prosthesis that can partially restore auditory function in patients with severe to profound hearing loss. However, this bionic device provides only limited auditory information, and CI patients may compensate for this limitation by means of a stronger interaction between the auditory and visual systems. To better understand the electrophysiological correlates of audiovisual speech perception, the present study used electroencephalography (EEG) and a redundant target paradigm. Postlingually deafened CI users and normal-hearing (NH) listeners were compared in auditory, visual, and audiovisual speech conditions. The behavioural results revealed multisensory integration for both groups, as indicated by shortened response times for the audiovisual as compared to the two unisensory conditions. The analysis of the N1 and P2 event-related potentials (ERPs), including topographic and source analyses, confirmed a multisensory effect for both groups and showed a cortical auditory response that was modulated by the simultaneous processing of the visual stimulus. Nevertheless, the CI users in particular revealed a distinct pattern of N1 topography, pointing to a strong visual impact on auditory speech processing. Apart from these condition effects, the results revealed ERP differences between CI users and NH listeners, not only in N1/P2 ERP topographies, but also in the cortical source configuration. When compared to the NH listeners, the CI users showed an additional activation in the visual cortex at N1 latency, which was positively correlated with CI experience, and a delayed auditory-cortex activation with a reversed, rightward functional lateralisation. In sum, our behavioural and ERP findings demonstrate a clear audiovisual benefit for both groups, and a CI-specific alteration in cortical activation at N1 latency when auditory and visual input is combined. These cortical alterations may reflect a compensatory strategy to overcome the limited CI input, which allows CI users to improve their lip-reading skills and to approximate the behavioural performance of NH listeners in audiovisual speech conditions. Our results are clinically relevant, as they highlight the importance of assessing CI outcomes not only in auditory-only, but also in audiovisual speech conditions.
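    In redundant-target paradigms like this one, the audiovisual response-time gain is commonly tested against Miller's race-model inequality, P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t); violations of the bound indicate genuine integration rather than mere statistical facilitation. A minimal sketch with simulated, hypothetical response times:

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative probability P(RT <= t) at each time point in t."""
    return np.mean(np.asarray(rts)[:, None] <= t, axis=0)

# Hypothetical response times in seconds for the three conditions.
rng = np.random.default_rng(1)
rt_a = rng.normal(0.50, 0.08, 200)   # auditory-only
rt_v = rng.normal(0.52, 0.08, 200)   # visual-only
rt_av = rng.normal(0.42, 0.07, 200)  # audiovisual (redundant target)

t = np.linspace(0.2, 0.8, 61)
bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)  # race-model bound
violated = ecdf(rt_av, t) > bound
print("Race model violated at t =", np.round(t[violated], 2))
```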

    Sensorimotor representation learning for an "active self" in robots: A model survey

    Safe human-robot interactions require robots to be able to learn how to behave appropriately in spaces populated by people, and thus to cope with the challenges posed by our dynamic and unstructured environment, rather than being given a rigid set of rules for operation. In humans, these capabilities are thought to be related to our ability to perceive our body in space, sensing the location of our limbs during movement, being aware of other objects and agents, and controlling our body parts to interact with them intentionally. Toward the next generation of robots with such bio-inspired capacities, in this paper we first review the developmental processes of the mechanisms underlying these abilities: the sensory representations of body schema, peripersonal space, and the active self in humans. Second, we provide a survey of robotics models of these sensory representations and of robotics models of the self, and we compare these models with their human counterparts. Finally, we analyse what is missing from these robotics models and propose a theoretical computational framework, which aims to allow the emergence of a sense of self in artificial agents by developing sensory representations through self-exploration.
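    The proposed framework, in which a sense of self emerges from sensory representations acquired through self-exploration, is often operationalised with a forward model trained on motor babbling: the agent learns to predict the sensory consequences of its own actions, and prediction error then distinguishes self-caused from externally caused signals. The sketch below is a deliberately simple linear instance of that idea, not the architecture proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown "body" dynamics the agent must discover: s' = A s + B a + noise.
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.5], [1.0]])

# Motor babbling: issue random actions and record (state, action, next state).
s = np.zeros(2)
X, Y = [], []
for _ in range(500):
    a = rng.uniform(-1.0, 1.0, 1)
    s_next = A_true @ s + B_true @ a + 0.01 * rng.standard_normal(2)
    X.append(np.concatenate([s, a]))
    Y.append(s_next)
    s = s_next
X, Y = np.asarray(X), np.asarray(Y)

# Fit a linear forward model by least squares: it predicts the sensory
# outcome of the agent's own actions from the current state and action.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Low one-step prediction error tags sensory events as self-caused.
err = np.linalg.norm(Y - X @ W, axis=1).mean()
print(f"mean one-step prediction error: {err:.4f}")
```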