
    Kernel convolution model for decoding sounds from time-varying neural responses

    In this study we present a kernel-based convolution model that characterizes neural responses to natural sounds by decoding their time-varying acoustic features. The model decodes natural sounds from high-dimensional neural recordings, such as magnetoencephalography (MEG), which tracks the timing and location of human cortical signalling noninvasively across multiple channels. We used MEG responses recorded from subjects listening to acoustically different environmental sounds. By decoding the stimulus frequencies from the responses, our model distinguished between two sounds it had never encountered before with 70% accuracy. Convolution models typically decode the frequencies present at a given time point in the sound signal by using the neural responses from that time point up to some fixed duration afterwards. Using our model, we evaluated several such fixed durations (time-lags) of the neural responses and observed auditory MEG responses to be most sensitive to the spectral content of the sounds at time-lags of 250 ms to 500 ms. The proposed model should be useful for determining which aspects of natural sounds are represented by high-dimensional neural responses and may reveal novel properties of neural signals.
    Comment: 4 pages. Accepted at the IEEE International Workshop on Pattern Recognition in Neuroimaging, Stanford, June 201
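    The time-lagged decoding scheme described in the abstract can be sketched with synthetic data. This is a hedged illustration only: the linear forward model, the lag values, and the ridge penalty are assumptions for the sketch, not the paper's actual implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T, C, F = 600, 10, 5        # time points, MEG sensors, stimulus frequency features
    LAGS = [3, 4, 5]            # sample offsets standing in for a 250-500 ms window

    def simulate_response(stim, A):
        """Response as a lagged linear mix of stimulus features plus sensor noise."""
        resp = np.zeros((len(stim), C))
        for j, lag in enumerate(LAGS):
            resp[lag:] += stim[:len(stim) - lag] @ A[j]
        return resp + 0.1 * rng.normal(size=resp.shape)

    def lagged_design(resp):
        """One row per time point: all sensors at each lag after that time point."""
        n = len(resp)
        X = np.zeros((n, C * len(LAGS)))
        for j, lag in enumerate(LAGS):
            X[:n - lag, j * C:(j + 1) * C] = resp[lag:]
        return X

    # Fit a ridge decoder (closed form: W = (X'X + aI)^-1 X'S) on one training sound.
    A = rng.normal(size=(len(LAGS), F, C))
    train = rng.normal(size=(T, F))
    X = lagged_design(simulate_response(train, A))
    W = np.linalg.solve(X.T @ X + 1.0 * np.eye(X.shape[1]), X.T @ train)

    # Decode an unseen sound, then pick whichever of two candidate sounds
    # correlates better with the decoded feature trajectory.
    s0, s1 = rng.normal(size=(T, F)), rng.normal(size=(T, F))
    decoded = lagged_design(simulate_response(s0, A)) @ W
    score = lambda s: np.corrcoef(decoded.ravel(), s.ravel())[0, 1]
    print(score(s0) > score(s1))  # expect True: decoded features should match s0
    ```

    The choice of correlating decoded features against each candidate mirrors the two-alternative identification task reported in the abstract; any real analysis would cross-validate the lag window rather than fix it.
    
    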

    Discrimination of Timbre in Early Auditory Responses of the Human Brain

    How differences in timbre are represented in the neural response, and by which brain mechanisms, has not been well addressed. Here we employed phasing and clipping of tones to produce auditory stimuli differing in timbre, reflecting its multidimensional nature. We investigated the auditory response, as well as sensory gating, using magnetoencephalography (MEG). Thirty-five healthy subjects without hearing deficits participated in the experiments. Pairs of tones, either identical or different in timbre, were presented in a conditioning (S1) – testing (S2) paradigm with an interval of 500 ms. The magnitudes of the auditory M50 and M100 responses varied with timbre in both hemispheres. This result suggests that timbre, at least as manipulated by phasing and clipping, is discriminated during early auditory processing. An effect of S1 on the response to the second stimulus in a pair occurred in the M100 of the left hemisphere, whereas only in the right hemisphere did both the M50 and M100 responses to S2 reflect whether the two stimuli in a pair were the same. Both M50 and M100 magnitudes also differed with presentation order (S1 vs. S2) for both same and different conditions in both hemispheres. Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, they reveal that auditory sensory gating is determined not by the stimulus that directly evokes the response, but by whether the two stimuli in a pair are identical in timbre.

    Selective auditory attention within naturalistic scenes modulates reactivity to speech sounds

    Rapid recognition and categorization of sounds are essential for humans and animals alike, both for understanding and reacting to our surroundings and for daily communication and social interaction. For humans, perception of speech sounds is of crucial importance. In real life, this task is complicated by the presence of a multitude of meaningful non-speech sounds. The present behavioural, magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) study set out to address how attention to speech versus attention to natural non-speech sounds within complex auditory scenes influences cortical processing. The stimuli were superimpositions of spoken words and environmental sounds, with parametric variation of the speech-to-environmental sound intensity ratio. The participants' task was to detect a repetition in either the speech or the environmental sound. We found that specifically when participants attended to speech within the superimposed stimuli, higher speech-to-environmental sound ratios resulted in shorter sustained MEG responses, stronger BOLD fMRI signals (especially in the left supratemporal auditory cortex), and improved behavioural performance. No such effects of speech-to-environmental sound ratio were observed when participants attended to the environmental sound part within the exact same stimuli. These findings suggest stronger saliency of speech compared with other meaningful sounds during processing of natural auditory scenes, likely linked to speech-specific top-down and bottom-up mechanisms activated during speech perception that are needed for tracking speech in real-life-like auditory environments.

    How reliable are the functional connectivity networks of MEG in resting states?

    We investigated the reliability of nodal network metrics of functional connectivity (FC) networks of magnetoencephalography (MEG) covering the whole brain at the sensor level in the eyes-closed (EC) and eyes-open (EO) resting states. Mutual information (MI) was employed as a measure of FC between sensors in the theta, alpha, beta, and gamma frequency bands of MEG signals. MI matrices were assessed with three nodal network metrics: nodal degree (Dnodal), nodal efficiency (Enodal), and betweenness centrality (normBC). Intraclass correlation (ICC) values were calculated as a measure of reliability. We observed that the test-retest reliabilities of the resting states ranged from poor to good depending on the bands and metrics used for defining nodal centrality. Changes in the dominant alpha-band FC network were the salient features of the state-related FC changes. The FC networks in the EO resting state showed greater reliability when assessed by the Dnodal (maximum mean ICC = 0.655) and Enodal (maximum mean ICC = 0.604) metrics. The gamma-band FC network was less reliable than the theta, alpha, and beta networks across the nodal network metrics. However, the sensor-wise ICC values for the nodal centrality metrics were not uniformly distributed; that is, some sensors had high reliability. This study provides a sense of how the nodal centralities of human resting-state MEG are distributed at the sensor level and how reliable they are. It also provides a fundamental scientific background for continued examination of the resting state of human MEG.
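    Two of the nodal centrality metrics named above, nodal degree and nodal efficiency, can be sketched on a toy binarized connectivity matrix. This is a dependency-free illustration under assumed definitions (degree as edge count; nodal efficiency as the mean inverse shortest-path length to all other nodes), not the study's pipeline, which computed these on MI matrices across MEG sensors.

    ```python
    from collections import deque

    def bfs_dists(adj, src):
        """Hop distances from src over an unweighted adjacency matrix."""
        n = len(adj)
        dist = [None] * n
        dist[src] = 0
        q = deque([src])
        while q:
            u = q.popleft()
            for v in range(n):
                if adj[u][v] and dist[v] is None:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    def nodal_degree(adj, i):
        """Number of edges attached to node i."""
        return sum(adj[i])

    def nodal_efficiency(adj, i):
        """Mean inverse shortest-path length from node i to every other node."""
        dist = bfs_dists(adj, i)
        n = len(adj)
        return sum(1.0 / d for j, d in enumerate(dist) if j != i and d) / (n - 1)

    # Toy 4-sensor network: chain 0-1-2-3 plus a shortcut edge 0-2.
    adj = [[0, 1, 1, 0],
           [1, 0, 1, 0],
           [1, 1, 0, 1],
           [0, 0, 1, 0]]
    print(nodal_degree(adj, 2))                # 3: node 2 touches every other node
    print(round(nodal_efficiency(adj, 2), 3))  # 1.0: all others are one hop away
    ```

    Betweenness centrality (normBC) follows the same per-node pattern but requires counting shortest paths through each node, so it is omitted here for brevity.
    
    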

    Functional Cortical Hubs in the Eyes-Closed Resting Human Brain from an Electrophysiological Perspective Using Magnetoencephalography

    It is not clear whether specific brain areas act as hubs in the eyes-closed (EC) resting state, which is an unconstrained state free from any passive or active tasks. Here, we used electrophysiological magnetoencephalography (MEG) signals to study functional cortical hubs in 88 participants. We identified several multispectral cortical hubs. Although cortical hubs vary slightly with different applied measures and frequency bands, the most consistent hubs were observed in the medial and posterior cingulate cortex, the left dorsolateral superior frontal cortex, and the left pole of the middle temporal cortex. Hubs were characterized as connector nodes integrating EC resting state functional networks. Hubs in the gamma band were more likely to include midline structures. Our results confirm the existence of multispectral cortical cores in EC resting state functional networks based on MEG and imply the existence of optimized functional networks in the resting brain.
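    The characterization of hubs as connector nodes is commonly operationalized with the participation coefficient, P_i = 1 - sum_m (k_im / k_i)^2, which is high when a node's edges are spread across modules (this is the "P" of the z-P measure reported later in this work). A minimal sketch, where the toy adjacency matrix and module partition are illustrative assumptions:

    ```python
    def participation(adj, modules, i):
        """Participation coefficient of node i given a module label per node."""
        k = sum(adj[i])
        if k == 0:
            return 0.0
        per_mod = {}
        for j, a in enumerate(adj[i]):
            if a:
                per_mod[modules[j]] = per_mod.get(modules[j], 0) + a
        return 1.0 - sum((km / k) ** 2 for km in per_mod.values())

    # Node 0 links into both modules; node 3 links only within its own module.
    adj = [[0, 1, 1, 0],
           [1, 0, 0, 0],
           [1, 0, 0, 1],
           [0, 0, 1, 0]]
    modules = [0, 0, 1, 1]
    print(participation(adj, modules, 0))  # 0.5: edges split evenly across modules
    print(participation(adj, modules, 3))  # 0.0: all edges stay in one module
    ```

    A node with high participation (a connector) integrates across networks even when its raw degree is modest, which is why participation complements the pure centrality metrics.
    
    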

    Musical expectations enhance auditory cortical processing in musicians: a magnetoencephalography study

    The present study investigated the influence of musical expectations on auditory representations in musicians and non-musicians using magnetoencephalography (MEG). Neuroscientific studies have demonstrated that musical syntax is processed in the inferior frontal gyri, eliciting an early right anterior negativity (ERAN), and anatomical evidence has shown that interconnections occur between the frontal cortex and the belt and parabelt regions of the auditory cortex (AC). Therefore, we anticipated that musical expectations would mediate neural activities in the AC via an efferent pathway. To test this hypothesis, we measured the auditory-evoked fields (AEFs) of seven musicians and seven non-musicians while they were listening to a five-chord progression in which the expectancy of the third chord was manipulated (highly expected, less expected, and unexpected). The results revealed that highly expected chords elicited shorter N1m (negative AEF at approximately 100 ms) and P2m (positive AEF at approximately 200 ms) latencies and larger P2m amplitudes in the AC than less-expected and unexpected chords. The relations between P2m amplitudes/latencies and harmonic expectations were similar between the groups; however, the effects were more pronounced in musicians than in non-musicians. These findings suggest that auditory cortical processing is enhanced by musical knowledge and long-term training in a top-down manner, which is reflected in shortened N1m and P2m latencies and enhanced P2m amplitudes in the AC. (C) 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

    Hubs with high efficiency.

    Hubs based on the aggregated ranking percent of each node across 88 participants and their topological maps projected onto a cortical surface at the theta (A and B), alpha (C and D), beta (E and F), and gamma (G and H) bands, obtained from Enodal estimation. The ranked distribution of aggregated ranking percent included only nonzero-percent nodes, and the numbers in the topological maps denote the top 5 hub locations. Abbreviated notations for each node can be found in Table 1 (http://www.plosone.org/article/info:doi/10.1371/journal.pone.0068192#pone-0068192-t001), and '_L' and '_R' denote the left and right hemispheres, respectively, at each node. The horizontal axes in A, C, E, and G indicate percentage (%).

    Ranked distribution of Dnodal (A, B), Enodal (C, D), normBC (E, F), and z-P (G, H).

    Shown are the top 20 hubs based on the aggregated ranking percent of each node across 88 participants and their topological maps projected onto a cortical surface, derived from the Dnodal (A and B), Enodal (C and D), normBC (E and F), and z-P (G and H) measures irrespective of frequency band. The most consistent hubs, F1, T2P, MCIN, and PCIN, are marked in each topological map. Horizontal axes in A, C, E, and G indicate percentage (%). The size of the filled circles is proportional to the corresponding percent, and the color indicates each frequency band (theta: red, alpha: green, beta: yellow, gamma: blue).