
    ELAN: A Software Package for Analysis and Visualization of MEG, EEG, and LFP Signals

    The recent surge in computational power has led to extensive methodological developments and advanced signal processing techniques that play a pivotal role in neuroscience. In particular, the field of brain signal analysis has witnessed a strong trend towards multidimensional analysis of large data sets, for example, single-trial time-frequency analysis of high spatiotemporal resolution recordings. Here, we describe the freely available ELAN software package, which provides a wide range of signal analysis tools for electrophysiological data, including scalp electroencephalography (EEG), magnetoencephalography (MEG), intracranial EEG, and local field potentials (LFPs). The ELAN toolbox is based on 25 years of methodological developments at the Brain Dynamics and Cognition Laboratory in Lyon and has been used in many papers, including the very first studies of time-frequency analysis of EEG data exploring evoked and induced oscillatory activities in humans. This paper provides an overview of the concepts and functionalities of ELAN, highlights its specificities, and describes its complementarity and interoperability with other toolboxes.
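
    To illustrate the kind of single-trial time-frequency analysis mentioned above (a generic sketch in Python, not ELAN's own code or interface), the following computes a Morlet-wavelet power map for one simulated trial; the sampling rate, frequency grid, and simulated signal are assumptions chosen for demonstration.

```python
# Illustrative single-trial time-frequency analysis via complex Morlet wavelets.
# This is a generic sketch, not ELAN code; sampling rate, frequencies, and the
# simulated "trial" below are assumptions for demonstration only.
import numpy as np

fs = 500.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 3.0, 1.0 / fs)  # 3 s of data
# Simulated LFP-like trial: a 40 Hz burst between 1.2 and 1.8 s plus noise
trial = np.random.randn(t.size) * 0.5
burst = (t > 1.2) & (t < 1.8)
trial[burst] += np.sin(2 * np.pi * 40 * t[burst])

freqs = np.arange(5, 100, 2.5)   # frequency grid in Hz (assumed)
n_cycles = 7                     # wavelet width: time/frequency resolution trade-off

power = np.empty((freqs.size, t.size))
for i, f in enumerate(freqs):
    sigma_t = n_cycles / (2 * np.pi * f)          # temporal std of the Gaussian envelope
    wt = np.arange(-5 * sigma_t, 5 * sigma_t, 1.0 / fs)
    wavelet = np.exp(2j * np.pi * f * wt) * np.exp(-wt**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit-energy normalisation
    analytic = np.convolve(trial, wavelet, mode="same")
    power[i] = np.abs(analytic) ** 2              # single-trial power keeps induced activity

print("time-frequency map shape (freqs x samples):", power.shape)
```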

    Pitch-Responsive Cortical Regions in Congenital Amusia

    Congenital amusia is a lifelong deficit in music perception thought to reflect an underlying impairment in the perception and memory of pitch. The neural basis of amusic impairments is actively debated. Some prior studies have suggested that amusia stems from impaired connectivity between auditory and frontal cortex. However, it remains possible that impairments in pitch coding within auditory cortex also contribute to the disorder, in part because prior studies have not measured responses from the cortical regions most implicated in pitch perception in normal individuals. We addressed this question by measuring fMRI responses in 11 subjects with amusia and 11 age- and education-matched controls to a stimulus contrast that reliably identifies pitch-responsive regions in normal individuals: harmonic tones versus frequency-matched noise. Our findings demonstrate that amusic individuals with a substantial pitch perception deficit exhibit clusters of pitch-responsive voxels that are comparable in extent, selectivity, and anatomical location to those of control participants. We discuss possible explanations for why amusics might be impaired at perceiving pitch relations despite exhibiting normal fMRI responses to pitch in their auditory cortex: (1) individual neurons within the pitch-responsive region might exhibit abnormal tuning or temporal coding not detectable with fMRI, (2) anatomical tracts that link pitch-responsive regions to other brain areas (e.g., frontal cortex) might be altered, and (3) cortical regions outside of pitch-responsive cortex might be abnormal. The ability to identify pitch-responsive regions in individual amusic subjects will make it possible to ask more precise questions about their role in amusia in future work. SIGNIFICANCE STATEMENT: The neural causes of congenital amusia, a lifelong deficit in pitch and music perception, are not fully understood. We tested the hypothesis that amusia is due to abnormalities in brain regions that respond selectively to sounds with a pitch in normal listeners. Surprisingly, amusic individuals exhibited pitch-responsive regions that were similar to normal-hearing controls in extent, selectivity, and anatomical location. We discuss how our results inform current debates on the neural basis of amusia and how the ability to identify pitch-responsive regions in amusic subjects will make it possible to ask more precise questions about their role in amusic deficits.
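
    The contrast logic described above can be sketched as a voxel-wise paired test of responses to harmonic tones versus frequency-matched noise. The sketch below uses simulated data and an arbitrary uncorrected threshold; it illustrates the idea, not the authors' fMRI pipeline.

```python
# Minimal sketch of the contrast described above (harmonic tones vs
# frequency-matched noise), not the authors' fMRI pipeline. Data shapes,
# threshold, and the random "responses" are assumptions for illustration.
import numpy as np
from scipy import stats

n_blocks, n_voxels = 20, 5000                       # assumed design: 20 blocks per condition
rng = np.random.default_rng(0)
tones = rng.normal(0.0, 1.0, size=(n_blocks, n_voxels))
noise = rng.normal(0.0, 1.0, size=(n_blocks, n_voxels))
tones[:, :100] += 0.8                               # pretend the first 100 voxels are pitch-responsive

# Paired t-test per voxel: tones > noise identifies candidate pitch-responsive voxels
t_vals, p_vals = stats.ttest_rel(tones, noise, axis=0)
pitch_responsive = (t_vals > 0) & (p_vals < 0.001)  # uncorrected threshold, illustration only
print("voxels passing the tones > noise contrast:", int(pitch_responsive.sum()))
```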

    Dynamics of oddball sound processing: Trial-by-trial modeling of ECoG signals

    Recent computational models of perception conceptualize auditory oddball responses as signatures of a (Bayesian) learning process, in line with the influential view of the mismatch negativity (MMN) as a prediction error signal. Novel MMN experimental paradigms have put an emphasis on neurophysiological effects of manipulating regularity and predictability in sound sequences. This raises the question of the contextual adaptation of the learning process itself, which on the computational side speaks to the mechanisms of gain-modulated (or precision-weighted) prediction error. In this study using electrocorticographic (ECoG) signals, we manipulated the predictability of oddball sound sequences with two objectives: (i) uncovering the computational process underlying trial-by-trial variations of the cortical responses. The fluctuations between trials, generally ignored by approaches based on averaged evoked responses, should reflect the learning involved. We used a general linear model (GLM) and Bayesian Model Reduction (BMR) to assess the respective contributions of experimental manipulations and learning mechanisms under probabilistic assumptions. (ii) Validating and expanding on previous findings, obtained with simultaneous EEG-MEG recordings, regarding the effect of changes in predictability. Our trial-by-trial analysis revealed only a few stimulus-responsive sensors, but the measured effects appear to be consistent over subjects in both time and space. In time, they occur at the typical latency of the MMN (between 100 and 250 ms post-stimulus). In space, we found a dissociation between time-independent effects in more anterior temporal locations and time-dependent (learning) effects in more posterior locations. However, we could not observe any clear and reliable effect of our predictability manipulation on the above learning process. Overall, these findings clearly demonstrate the potential of trial-to-trial modeling to unravel perceptual learning processes and their neurophysiological counterparts.
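
    The trial-by-trial modelling idea can be illustrated with a toy example: a simple beta-Bernoulli learner converts an oddball sequence into a per-trial surprise regressor, which then enters a GLM for single-trial response amplitudes. This is a simplified stand-in for the paper's model space and Bayesian Model Reduction; all quantities below are simulated.

```python
# Toy sketch of trial-by-trial modelling: a beta-Bernoulli learner turns an
# oddball sequence into a per-trial surprise regressor, which is entered into a
# GLM for single-trial amplitudes. All quantities below are simulated.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 400
deviant = rng.random(n_trials) < 0.15            # assumed 15% deviant probability

# Beta-Bernoulli learner: predictive probability of a deviant before each trial
a, b = 1.0, 1.0                                  # flat prior
surprise = np.empty(n_trials)
for k, d in enumerate(deviant):
    p_dev = a / (a + b)                          # predictive probability of a deviant
    surprise[k] = -np.log(p_dev if d else 1 - p_dev)   # Shannon surprise of the outcome
    a, b = a + d, b + (1 - d)                    # posterior update

# Simulated single-trial ECoG amplitudes that partly track surprise
amp = 2.0 * surprise + rng.normal(0, 1.5, n_trials)

# GLM: amplitude ~ intercept + deviant (categorical) + surprise (learning regressor)
X = np.column_stack([np.ones(n_trials), deviant.astype(float), surprise])
beta, *_ = np.linalg.lstsq(X, amp, rcond=None)
print("GLM betas [intercept, deviant, surprise]:", np.round(beta, 2))
```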

    Approaches to the cortical analysis of auditory objects

    We describe work that addresses the cortical basis for the analysis of auditory objects using ‘generic’ sounds that do not correspond to any particular events or sources (like vowels or voices) that have semantic association. The experiments involve the manipulation of synthetic sounds to produce systematic changes of stimulus features, such as spectral envelope. Conventional analyses of normal functional imaging data demonstrate that the analysis of spectral envelope and perceived timbral change involves a network consisting of planum temporale (PT) bilaterally and the right superior temporal sulcus (STS). Further analysis of imaging data using dynamic causal modelling (DCM) and Bayesian model selection was carried out in the right hemisphere areas to determine the effective connectivity between these auditory areas. Specifically, the objective was to determine whether the analysis of spectral envelope in the network proceeds in a serial fashion (that is, from Heschl's gyrus (HG) to PT to STS) or in a parallel fashion (that is, PT and STS receive input from HG simultaneously). Two families of models, serial and parallel (16 models in total), representing different hypotheses about the connectivity between HG, PT, and STS were specified. The models within a family differ with respect to the pathway that is modulated by the analysis of spectral envelope. Once the models are specified, a Bayesian model selection procedure is used to select the ‘optimal’ model among them. The data strongly support a particular serial model containing modulation of the HG to PT effective connectivity during spectral envelope variation. Parallel work in neurological subjects addresses the effect of lesions to different parts of this network. We have recently studied in detail subjects with ‘dystimbria’: an alteration in the perceived quality of auditory objects distinct from pitch or loudness change. The subjects have lesions of the normal network described above, with normal perception of pitch strength but abnormal perception of spectral envelope change.
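
    Family-wise Bayesian model selection of the kind described above can be sketched as follows: given log-evidences for the 16 models (here made-up placeholders rather than values from DCM inversion), a softmax over log-evidences yields posterior model probabilities, which are summed within the serial and parallel families.

```python
# Toy illustration of family-wise Bayesian model selection over serial vs
# parallel DCMs (16 models, as in the abstract). The log-evidence values here
# are placeholders; in practice they would come from DCM inversion.
import numpy as np

rng = np.random.default_rng(2)
log_ev = rng.normal(0.0, 1.0, 16)     # placeholder log-evidences for 16 models
log_ev[3] += 6.0                      # pretend model 4 (a serial model) wins clearly

families = np.array(["serial"] * 8 + ["parallel"] * 8)

# Fixed-effects posterior over models: softmax of log-evidences (uniform prior)
post = np.exp(log_ev - log_ev.max())
post /= post.sum()

for fam in ("serial", "parallel"):
    print(f"P({fam} family | data) = {post[families == fam].sum():.3f}")
print("best single model:", int(np.argmax(post)) + 1, f"(p = {post.max():.3f})")
```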

    Changes in Early Cortical Visual Processing Predict Enhanced Reactivity in Deaf Individuals

    Individuals with profound deafness rely critically on vision to interact with their environment. Improvement of visual performance as a consequence of auditory deprivation is assumed to result from cross-modal changes occurring in late stages of visual processing. Here we measured reaction times and event-related potentials (ERPs) in profoundly deaf adults and hearing controls during a speeded visual detection task, to assess to what extent the enhanced reactivity of deaf individuals could reflect plastic changes in the early cortical processing of the stimulus. We found that deaf subjects were faster than hearing controls at detecting the visual targets, regardless of their location in the visual field (peripheral or peri-foveal). This behavioural facilitation was associated with ERP changes starting from the first detectable response in the striate cortex (C1 component) at about 80 ms after stimulus onset, and in the P1 complex (100–150 ms). In addition, we found that P1 peak amplitudes predicted the response times in deaf subjects, whereas in hearing individuals visual reactivity and ERP amplitudes correlated only at later stages of processing. These findings show that long-term auditory deprivation can profoundly alter visual processing from the earliest cortical stages. Furthermore, our results provide the first evidence of a co-variation between modified brain activity (cortical plasticity) and behavioural enhancement in this sensory-deprived population.
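
    The reported link between P1 amplitude and reaction times can be sketched as extracting a peak amplitude per subject in the 100–150 ms window and correlating it with reaction times. Epoch layout, sensor choice, and the data below are simulated assumptions, not the study's recordings.

```python
# Sketch of the analysis logic: extract a P1 peak amplitude per subject in a
# 100-150 ms window and correlate it with reaction times. Epoch shape, sampling
# rate, and the simulated data are assumptions for illustration only.
import numpy as np
from scipy import stats

fs = 500.0
times = np.arange(-0.1, 0.4, 1.0 / fs)             # epoch from -100 to 400 ms
n_subjects = 20
rng = np.random.default_rng(3)

# Simulated subject-average ERPs at one occipital sensor and mean reaction times
erps = rng.normal(0, 0.5, (n_subjects, times.size))
p1_gain = rng.uniform(1.0, 3.0, n_subjects)
erps += p1_gain[:, None] * np.exp(-((times - 0.12) ** 2) / (2 * 0.015**2))
rts = 350 - 20 * p1_gain + rng.normal(0, 10, n_subjects)   # larger P1 -> faster RT

win = (times >= 0.100) & (times <= 0.150)           # P1 window (100-150 ms)
p1_amp = erps[:, win].max(axis=1)                   # peak amplitude per subject

r, p = stats.pearsonr(p1_amp, rts)
print(f"P1 amplitude vs RT: r = {r:.2f}, p = {p:.3f}")
```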

    The Timbre Perception Test (TPT): A new interactive musical assessment tool to measure timbre perception ability

    To date, tests that measure individual differences in the ability to perceive musical timbre are scarce in the published literature. The lack of such a tool limits research on how timbre, a primary attribute of sound, is perceived and processed among individuals. The current paper describes the development of the Timbre Perception Test (TPT), in which participants use a slider to reproduce heard auditory stimuli that vary along three important dimensions of timbre: envelope, spectral flux, and spectral centroid. With a sample of 95 participants, the TPT was calibrated and validated against measures of related abilities and examined for its reliability. The results indicate that a short version (8 minutes) of the TPT has good explanatory support from a factor analysis model, acceptable internal reliability (α = .69, ωt = .70), good test–retest reliability (r = .79), and substantial correlations with self-reported general musical sophistication (ρ = .63) and pitch discrimination (ρ = .56), as well as somewhat lower correlations with duration discrimination (ρ = .27) and musical instrument discrimination abilities (ρ = .33). Overall, the TPT represents a robust tool to measure an individual’s timbre perception ability. Furthermore, the use of sliders to perform a reproductive task has proven to be an effective approach in threshold testing. The current version of the TPT is openly available for research purposes.
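
    The reliability and validity statistics reported above (Cronbach's α, test–retest r, Spearman ρ) can be reproduced in outline on simulated item scores; the item structure and data below are assumptions, not the TPT dataset.

```python
# Outline of the reliability/validity statistics on simulated item scores; the
# data and item structure are assumptions, not the TPT dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_subj, n_items = 95, 24
ability = rng.normal(0, 1, n_subj)
items = ability[:, None] + rng.normal(0, 1.2, (n_subj, n_items))   # simulated item scores

def cronbach_alpha(x):
    """Cronbach's alpha for a subjects x items score matrix."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

total = items.mean(axis=1)
retest = total + rng.normal(0, 0.4, n_subj)              # simulated retest scores
sophistication = ability + rng.normal(0, 1.0, n_subj)    # simulated self-report measure

print("alpha =", round(cronbach_alpha(items), 2))
print("test-retest r =", round(stats.pearsonr(total, retest)[0], 2))
print("rho with musical sophistication =", round(stats.spearmanr(total, sophistication)[0], 2))
```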

    Enhanced Syllable Discrimination Thresholds in Musicians

    Speech processing inherently relies on the perception of specific, rapidly changing spectral and temporal acoustic features. Advanced acoustic perception is also integral to musical expertise, and accordingly several studies have demonstrated a significant relationship between musical training and superior processing of various aspects of speech. Speech and music appear to overlap in spectral and temporal features; however, it remains unclear which of these acoustic features, crucial for speech processing, are most closely associated with musical training. The present study examined the perceptual acuity of musicians to the acoustic components of speech necessary for intra-phonemic discrimination of synthetic syllables. We compared musicians and non-musicians on discrimination thresholds of three synthetic speech syllable continua that varied in their spectral and temporal discrimination demands, specifically voice onset time (VOT) and amplitude envelope cues in the temporal domain. Musicians demonstrated superior discrimination only for syllables that required resolution of temporal cues. Furthermore, performance on the temporal syllable continua positively correlated with the length and intensity of musical training. These findings support one potential mechanism by which musical training may selectively enhance speech perception, namely by reinforcing temporal acuity and/or perception of amplitude rise time, and have implications for the translation of musical training to long-term linguistic abilities.
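
    One generic way to quantify sensitivity along a synthetic VOT continuum is to fit a logistic psychometric function to the proportion of voiceless responses and read off the category boundary and slope. The sketch below uses simulated responses and is not the study's exact discrimination procedure.

```python
# Generic illustration of characterising perception along a synthetic VOT
# continuum by fitting a logistic psychometric function; not the study's exact
# procedure, and the response data below are simulated.
import numpy as np
from scipy.optimize import curve_fit

vot = np.linspace(0, 60, 9)                      # VOT steps in ms (assumed continuum)
rng = np.random.default_rng(5)

def logistic(x, boundary, slope):
    return 1.0 / (1.0 + np.exp(-(x - boundary) / slope))

# Simulated proportion of voiceless responses per continuum step
p_true = logistic(vot, boundary=30.0, slope=4.0)
n_reps = 20
p_obs = rng.binomial(n_reps, p_true) / n_reps

(boundary, slope), _ = curve_fit(logistic, vot, p_obs, p0=[30.0, 5.0])
# A steeper function (smaller slope value) implies finer resolution of the VOT cue
print(f"category boundary = {boundary:.1f} ms, slope = {slope:.1f} ms")
```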

    Efficient Coding and Statistically Optimal Weighting of Covariance among Acoustic Attributes in Novel Sounds

    To the extent that sensorineural systems are efficient, redundancy should be extracted to optimize transmission of information, but perceptual evidence for this has been limited. Stilp and colleagues recently reported efficient coding of robust correlation (r = .97) among complex acoustic attributes (attack/decay, spectral shape) in novel sounds. Discrimination of sounds orthogonal to the correlation was initially inferior but later comparable to that of sounds obeying the correlation. These effects were attenuated for less-correlated stimuli (r = .54) for reasons that are unclear. Here, statistical properties of correlation among acoustic attributes essential for perceptual organization are investigated. Overall, simple strength of the principal correlation is inadequate to predict listener performance. Initial superiority of discrimination for statistically consistent sound pairs was relatively insensitive to decreased physical acoustic/psychoacoustic range of evidence supporting the correlation, and to more frequent presentations of the same orthogonal test pairs. However, increased range supporting an orthogonal dimension has substantial effects upon perceptual organization. Connectionist simulations and eigenvalues from closed-form calculations of principal component analysis (PCA) reveal that perceptual organization is near-optimally weighted to shared versus unshared covariance in experienced sound distributions. Implications of reduced perceptual dimensionality for speech perception and plausible neural substrates are discussed.
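
    The closed-form PCA check described above can be sketched directly: the eigenvalues of the covariance of two attributes correlated at r = .97 versus r = .54 quantify how much variance falls on the shared (principal) dimension versus the orthogonal one. Sample sizes and attribute values below are illustrative assumptions.

```python
# Sketch of the PCA check: eigenvalues of the covariance of two correlated
# acoustic attributes (e.g., attack/decay and spectral shape) quantify shared
# vs unshared covariance. Sample sizes and values are illustrative assumptions.
import numpy as np

def pca_eigenvalues(r, n=10000, seed=0):
    """Eigenvalues of the sample covariance of two attributes correlated at r."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, r], [r, 1.0]])
    x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    return np.linalg.eigvalsh(np.cov(x, rowvar=False))[::-1]   # descending order

for r in (0.97, 0.54):
    evals = pca_eigenvalues(r)
    share = evals[0] / evals.sum()
    print(f"r = {r}: eigenvalues = {np.round(evals, 2)}, "
          f"variance on principal dimension = {share:.2f}")
```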