128 research outputs found

    Auditory Neuroscience: Unravelling How the Brain Gives Sound Meaning

    The brain must be able to assign sounds in the world to behaviourally meaningful categories. A new study has revealed that sensory pathways represent category information, but that selectivity for sound classes emerges first in the frontal cortex.

    Non-auditory processing in the central auditory pathway

    Multisensory responses have been observed throughout the central auditory pathway, yet the origin, function and perceptual consequences of cross-modal integration remain unresolved. Recent studies have applied modern neuroanatomical and functional perturbation techniques to dissect the circuits that might enable multisensory information to sculpt the processing of sound. These highlight in particular the role that subcortical pathways might play in relaying multisensory information to, and between, sensory cortical fields. We also examine the consequences of integrating non-auditory information into auditory processing, identifying key areas where this may be critical for successful listening, and potential roles for visual information in augmenting auditory scene analysis and for non-auditory information in facilitating coordinate-frame transformations.

    Erratum: Relative sound localisation abilities in human listeners [J. Acoust. Soc. Am. 138, 674–686 (2015)]


    Physiology of Higher Central Auditory Processing and Plasticity

    Binaural cue processing requires central auditory function, as damage to the auditory cortex and other cortical regions impairs sound localization. Sound localization cues are initially extracted by brainstem nuclei, but how the cerebral cortex supports spatial sound perception remains unclear. This chapter reviews the evidence that spatial encoding within and beyond the auditory cortex supports sound localization, including the integration of information across sound frequencies and localization cues. In particular, this chapter discusses the role of brain regions across the cerebral cortex that may be specialized for extracting and transforming the spatial aspects of sounds, a network that extends from sensory to parietal and prefrontal cortices. The chapter considers how the encoding of spatial information changes with attention and how spatial processing fits within the broader context of auditory scene analysis by cortical networks. The importance of neural plasticity in binaural processing is outlined, including a discussion of how changes in the mapping of localization cues to spatial position allow listeners to adapt to changes in auditory input throughout life and after hearing loss. The chapter ends by summarizing some of the open questions about the central processing of binaural cues and how they may be answered.

    Training enhances the ability of listeners to exploit visual information for auditory scene analysis

    The ability to use temporal relationships between cross-modal cues facilitates perception and behavior. Previously we observed that temporally correlated changes in the size of a visual stimulus and the intensity of an auditory stimulus influenced the ability of listeners to perform an auditory selective attention task (Maddox, Atilgan, Bizley, & Lee, 2015). Participants detected timbral changes in a target sound while ignoring those in a simultaneously presented masker. When the visual stimulus was temporally coherent with the target sound, performance was significantly better than when the visual stimulus was temporally coherent with the masker, despite the visual stimulus conveying no task-relevant information. Here, we trained observers to detect audiovisual temporal coherence and asked whether this changed the way in which they were able to exploit visual information in the auditory selective attention task. We observed that after training, participants were able to benefit from temporal coherence between the visual stimulus and both the target and masker streams, relative to the condition in which the visual stimulus was coherent with neither sound. However, we did not observe such changes in a second group that was trained to discriminate modulation rate differences between temporally coherent audiovisual streams, although they did show an improvement in their overall performance. A control group did not change their performance between pre-test and post-test and did not change how they exploited visual information. These results provide insights into how cross-modal experience may optimize multisensory integration.
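
    The coherence manipulation at the heart of this paradigm lends itself to a brief illustration. The sketch below is not the stimulus code from the study; it is a minimal illustration in Python, under made-up assumptions (a 440 Hz target tone, a 554 Hz masker, roughly 7 Hz low-pass amplitude envelopes, and a disc whose radius tracks one envelope), of how sharing an envelope between a sound's intensity and a visual stimulus's size makes the pair temporally coherent, while independent envelopes make them incoherent.

```python
import numpy as np

def low_pass_envelope(duration_s, fs, cutoff_hz, rng):
    """Slowly varying random envelope in [0, 1], made by low-pass
    filtering white noise in the frequency domain."""
    n = int(duration_s * fs)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum[freqs > cutoff_hz] = 0.0                      # keep only slow fluctuations
    env = np.fft.irfft(spectrum, n)
    return (env - env.min()) / (env.max() - env.min())     # normalise to [0, 1]

fs = 44_100                                  # audio sample rate (Hz); illustrative value
rng = np.random.default_rng(0)

# Independent slow envelopes for the target and masker streams.
env_target = low_pass_envelope(2.0, fs, cutoff_hz=7.0, rng=rng)
env_masker = low_pass_envelope(2.0, fs, cutoff_hz=7.0, rng=rng)

t = np.arange(env_target.size) / fs
target_sound = env_target * np.sin(2 * np.pi * 440.0 * t)  # intensity follows env_target
masker_sound = env_masker * np.sin(2 * np.pi * 554.4 * t)  # intensity follows env_masker

# Visual stimulus: a disc whose radius follows one of the envelopes.
# Coherent-with-target condition: the radius tracks the target's envelope;
# the coherent-with-masker condition would substitute env_masker here.
disc_radius = 0.5 + 0.5 * env_target
```

    In this toy construction, temporal coherence is simply a shared envelope between one sound and the visual stimulus; which stream shares that envelope is the variable the selective attention task manipulates.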

    Where are multisensory signals combined for perceptual decision-making?

    Multisensory integration is observed in many subcortical and cortical locations, including primary and non-primary sensory cortex as well as higher cortical areas such as frontal and parietal cortex. During unisensory perceptual tasks, many of these same brain areas show neural signatures associated with decision-making. It is unclear whether multisensory representations in sensory cortex directly inform decision-making in a multisensory task, or if cross-modal signals are only combined after the accumulation of unisensory evidence at a final decision-making stage in higher cortical areas. Manipulations of neuronal activity are required to establish causal roles for given brain regions in multisensory perceptual decision-making, and those performed so far indicate that distributed networks underlie multisensory decision-making. Understanding multisensory integration requires a synthesis of small-scale, pathway-specific and large-scale, network-level manipulations.
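
    The distinction posed here, between fusing cross-modal signals early and combining only the outputs of separate unisensory accumulators, can be caricatured with a toy simulation. The Python sketch below is not a model from the literature reviewed; it assumes Gaussian momentary evidence with made-up drift, noise and bound values, equal weighting of the two modalities, and a "late" rule that waits for both unisensory decisions, purely to make the two architectures concrete.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_steps = 2000, 300
drift, noise_sd, bound = 0.01, 0.1, 1.0      # made-up values for illustration only

# Momentary evidence from each modality on every trial.
ev_a = drift + noise_sd * rng.standard_normal((n_trials, n_steps))   # auditory
ev_v = drift + noise_sd * rng.standard_normal((n_trials, n_steps))   # visual

def first_crossing(momentary, bound):
    """Time step at which the accumulated evidence first reaches the bound
    (returns n_steps for paths that never cross)."""
    crossed = np.cumsum(momentary, axis=1) >= bound
    return np.where(crossed.any(axis=1), crossed.argmax(axis=1), momentary.shape[1])

# Early combination: fuse the streams sample by sample, then accumulate
# the fused evidence in a single decision variable.
rt_early = first_crossing(ev_a + ev_v, bound)

# Late combination: each modality accumulates separately to its own bound,
# and the response waits for both unisensory decisions.
rt_late = np.maximum(first_crossing(ev_a, bound), first_crossing(ev_v, bound))

print(rt_early.mean(), rt_late.mean())       # early fusion reaches the bound sooner on average
```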

    Defining Auditory-Visual Objects: Behavioral Tests and Physiological Mechanisms

    Crossmodal integration is a term applicable to many phenomena in which one sensory modality influences task performance or perception in another sensory modality. We distinguish the term binding as one that should be reserved specifically for the process that underpins perceptual object formation. To unambiguously differentiate binding from other types of integration, behavioral and neural studies must investigate perception of a feature orthogonal to the features that link the auditory and visual stimuli. We argue that supporting true perceptual binding (as opposed to other processes such as decision-making) is one role for cross-sensory influences in early sensory cortex. These early multisensory interactions may therefore form a physiological substrate for the bottom-up grouping of auditory and visual stimuli into auditory-visual (AV) objects.

    An Object-Based Interpretation of Audiovisual Processing

    Visual cues help listeners follow conversation in a complex acoustic environment. Many audiovisual research studies focus on how sensory cues are combined to optimize perception, either in terms of minimizing the uncertainty in the sensory estimate or maximizing intelligibility, particularly in speech understanding. From an auditory perception perspective, a fundamental question that has not been fully addressed is how visual information aids the ability to select and focus on one auditory object in the presence of competing sounds in a busy auditory scene. In this chapter, audiovisual integration is presented from an object-based attention viewpoint. In particular, it is argued that a stricter delineation of the concepts of multisensory integration versus binding would facilitate a deeper understanding of the nature of how information is combined across senses. Furthermore, using an object-based theoretical framework to distinguish binding as a distinct form of multisensory integration generates testable hypotheses with behavioral predictions that can account for different aspects of multisensory interactions. In this chapter, classic multisensory illusion paradigms are revisited and discussed in the context of multisensory binding. The chapter also describes multisensory experiments that focus on addressing how visual stimuli help listeners parse complex auditory scenes. Finally, it concludes with a discussion of the potential mechanisms by which audiovisual processing might resolve competition between concurrent sounds in order to solve the cocktail party problem.

    What can we learn from inactivation studies? Lessons from auditory cortex

    Wide variation in the outcome of auditory cortex inactivation has been an impediment to clear conclusions regarding the roles of the auditory cortex in behaviour. Inactivation methods differ in their efficacy and specificity. The likelihood of observing a behavioural deficit is additionally influenced by factors such as the species being used, task design and reward. A synthesis of previous results suggests that auditory cortex involvement is critical for tasks that require integrating across multiple stimulus features, and less likely to be critical for simple feature discriminations. New methods of neural silencing provide opportunities for spatially and temporally precise manipulation of activity, allowing perturbation of individual subfields and specific circuits.

    Audiovisual integration in macaque face patch neurons

    Primate social communication depends on the perceptual integration of visual and auditory cues, reflected in the multimodal mixing of sensory signals in certain cortical areas. The macaque cortical face patch network, identified through visual, face-selective responses measured with fMRI, is assumed to contribute to visual social interactions. However, whether face patch neurons are also influenced by acoustic information, such as the auditory component of a natural vocalization, remains unknown. Here, we recorded single-unit activity in the anterior fundus (AF) face patch, in the superior temporal sulcus, and the anterior medial (AM) face patch, on the undersurface of the temporal lobe, in macaques presented with audiovisual, visual-only, and auditory-only renditions of natural movies of macaques vocalizing. The results revealed that 76% of neurons in face patch AF were significantly influenced by the auditory component of the movie, most often through enhancement of visual responses but sometimes in response to the auditory stimulus alone. By contrast, few neurons in face patch AM exhibited significant auditory responses or modulation. Control experiments in AF used an animated macaque avatar to demonstrate, first, that the structural elements of the face were often essential for audiovisual modulation and, second, that the temporal modulation of the acoustic stimulus was more important than its frequency spectrum. Together, these results identify a striking contrast between two face patches and specifically identify AF as playing a potential role in the integration of audiovisual cues during natural modes of social communication.
    • …