27 research outputs found

    Brain responses in humans reveal ideal observer-like sensitivity to complex acoustic patterns

    This study was funded by a Deafness Research UK fellowship and Wellcome Trust Project Grant 093292/Z/10/Z (to M.C.)

    Neural Correlates of Auditory Figure-Ground Segregation Based on Temporal Coherence

    To make sense of natural acoustic environments, listeners must parse complex mixtures of sounds that vary in frequency, space, and time. Emerging work suggests that, in addition to the well-studied spectral cues for segregation, sensitivity to temporal coherence—the coincidence of sound elements in and across time—is also critical for the perceptual organization of acoustic scenes. Here, we examine pre-attentive, stimulus-driven neural processes underlying auditory figure-ground segregation using stimuli that capture the challenges of listening in complex scenes where segregation cannot be achieved based on spectral cues alone. Signals ("stochastic figure-ground": SFG) comprised a sequence of brief broadband chords containing random pure tone components that vary from 1 chord to another. Occasional tone repetitions across chords are perceived as "figures" popping out of a stochastic "ground." Magnetoencephalography (MEG) measurement in naïve, distracted, human subjects revealed robust evoked responses, commencing from about 150 ms after figure onset, that reflect the emergence of the "figure" from the randomly varying "ground." Neural sources underlying this bottom-up driven figure-ground segregation were localized to the planum temporale and the intraparietal sulcus, demonstrating that the latter, outside the "classic" auditory system, is also involved in the early stages of auditory scene analysis.
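The SFG construction described above (random chords with an embedded set of repeating tone components) can be sketched as a minimal signal generator. All parameter names and values here are illustrative assumptions, not the study's actual stimulus parameters:

```python
import numpy as np

def sfg_stimulus(n_chords=40, chord_dur=0.05, fs=16000,
                 n_background=10, figure_size=4, figure_onset=20, seed=0):
    """Sketch of a stochastic figure-ground (SFG) signal: a sequence of brief
    chords of random pure tones; from `figure_onset` onward, a fixed set of
    `figure_size` frequencies repeats in every chord (the 'figure')."""
    rng = np.random.default_rng(seed)
    freq_pool = np.geomspace(200, 7000, 60)       # candidate component frequencies
    figure_freqs = rng.choice(freq_pool, figure_size, replace=False)
    t = np.arange(int(chord_dur * fs)) / fs
    chords = []
    for i in range(n_chords):
        freqs = rng.choice(freq_pool, n_background, replace=False)
        if i >= figure_onset:                     # repeated components "pop out"
            freqs = np.concatenate([freqs, figure_freqs])
        chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
        chords.append(chord / len(freqs))         # rough amplitude normalisation
    return np.concatenate(chords)

sig = sfg_stimulus()
```

Because the figure frequencies are drawn from the same pool as the background, no single chord distinguishes figure from ground; only their repetition across chords (temporal coherence) does.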

    Functional neuroanatomy of speech signal decoding in primary progressive aphasias

    This work was supported by the Alzheimer’s Society (AS-PG-16-007), the National Institute for Health Research University College London Hospitals Biomedical Research Centre (CBRC 161), the UCL Leonard Wolfson Experimental Neurology Centre (PR/ylr/18575), and the Economic and Social Research Council (ES/K006711/1). Individual authors were supported by the Medical Research Council (PhD Studentship to CJDH; MRC Clinician Scientist Fellowship to JDR), the Wolfson Foundation (Clinical Research Fellowship to CRM), the National Brain Appeal–Frontotemporal Dementia Research Fund (CNC), Alzheimer’s Research UK (ARTSRF2010-3 to SJC), and the Wellcome Trust (091673/Z/10/Z to JDW).

    Auditory Pattern Detection

    The work presented in this doctoral thesis uses behavioural methods and neuroimaging to investigate how human listeners detect patterns and statistical regularities in complex sound sequences. Temporal pattern analysis is essential to sensory processing, especially listening, since most auditory signals only have meaning as sequences over time. Previous evidence suggests that the brain is sensitive to the statistics of sensory stimulation; however, the process through which this sensitivity arises is largely unknown. This dissertation is organised as follows: Chapter 1 reviews fundamental principles of auditory scene analysis and existing models of regularity processing to constrain the scientific questions being addressed. Chapter 2 introduces the two neuroimaging techniques used in this work, magnetoencephalography (MEG) and functional Magnetic Resonance Imaging (fMRI). Chapters 3-6 are experimental sections. In Chapter 3, a novel stimulus is presented that allows probing listeners’ sensitivity to the emergence and disappearance of complex acoustic patterns. Pattern detection performance is evaluated behaviourally and systematically compared with the predictions of an ideal observer model. Chapters 4 and 5 describe the brain responses measured during processing of those complex regularities using MEG and fMRI, respectively. Chapter 6 presents an extension of the main behavioural task to the visual domain, which allows pattern detection to be compared in audition and vision. Chapter 7 concludes with a general discussion of the experimental results and provides directions for future research. Overall, the results are consistent with predictive coding accounts of perceptual inference and provide novel neurophysiological evidence for the brain's exquisite sensitivity to stimulus context and its capacity to encode high-order structure in sensory signals.
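The detection problem studied in the thesis can be illustrated with a toy change detector (deliberately simpler than the thesis's ideal observer model, and using made-up parameters): a repeating pattern reveals itself through components shared across consecutive observations, so counting that overlap and thresholding it flags the pattern's emergence.

```python
import numpy as np

def detect_pattern_onset(chords, threshold=3):
    """Toy detector: return the index of the first chord that shares at least
    `threshold` frequency components with its predecessor. Repeated components
    across consecutive chords are the signature of an emerging pattern."""
    for i in range(1, len(chords)):
        if len(set(chords[i]) & set(chords[i - 1])) >= threshold:
            return i
    return None

# Build a sequence of 10 random chords followed by 10 chords that all
# contain a fixed 4-component "figure" (indices into a frequency alphabet).
rng = np.random.default_rng(1)
pool = list(range(60))
figure = [3, 17, 42, 55]
seq = [rng.choice(pool, 8, replace=False).tolist() for _ in range(10)]
seq += [rng.choice(pool, 8, replace=False).tolist() + figure for _ in range(10)]
onset = detect_pattern_onset(seq)
```

An ideal observer would instead accumulate evidence statistically over the whole sequence, which is why human performance is benchmarked against it rather than against a one-step heuristic like this.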

    The cumulative effects of predictability on synaptic gain in the auditory processing stream

    Stimulus predictability can lead to substantial modulations of brain activity, such as shifts in sustained magnetic field amplitude, measured with magnetoencephalography (MEG). Here, we provide a mechanistic explanation of these effects using MEG data acquired from healthy human volunteers (N=13, 7 female). In a source-level analysis of induced responses, we established the effects of orthogonal predictability manipulations of rapid tone-pip sequences (namely, sequence regularity and alphabet size) along the auditory processing stream. In auditory cortex, regular sequences with smaller alphabets induced greater gamma activity. Furthermore, sequence regularity shifted induced activity in frontal regions towards higher frequencies. To model these effects in terms of the underlying neurophysiology, we used dynamic causal modelling for cross-spectral density and estimated slow fluctuations in neural (postsynaptic) gain. Using the model-based parameters, we accurately explain the sensor-level sustained field amplitude, demonstrating that slow changes in synaptic efficacy, combined with sustained sensory input, can result in profound and sustained effects on neural responses to predictable sensory streams.
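The two orthogonal predictability manipulations described above (sequence regularity and alphabet size) can be sketched as a sequence generator; the function names, cycle length, and sequence lengths below are illustrative assumptions, not the study's actual design:

```python
import numpy as np

def pip_sequence(alphabet_size, regular, n_pips=60, cycle=10, seed=0):
    """Sketch of a tone-pip sequence as frequency indices drawn from an
    alphabet of `alphabet_size` frequencies: either a repeating regular
    cycle (predictable) or a fully random draw (unpredictable)."""
    rng = np.random.default_rng(seed)
    alphabet = rng.permutation(alphabet_size)
    if regular:
        # Predictable: a fixed cycle of distinct pips repeats throughout.
        cyc = rng.choice(alphabet, min(cycle, alphabet_size), replace=False)
        reps = int(np.ceil(n_pips / len(cyc)))
        return np.tile(cyc, reps)[:n_pips]
    # Unpredictable: each pip drawn independently from the alphabet.
    return rng.choice(alphabet, n_pips, replace=True)

reg = pip_sequence(alphabet_size=5, regular=True)    # regular, small alphabet
rnd = pip_sequence(alphabet_size=20, regular=False)  # random, large alphabet
```

Crossing the two factors (regular/random × small/large alphabet) yields the four conditions needed to separate the effect of regularity from that of alphabet size.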
