
    Speech rhythms and multiplexed oscillatory sensory coding in the human brain

    Cortical oscillations are likely candidates for segmentation and coding of continuous speech. Here, we monitored continuous speech processing with magnetoencephalography (MEG) to unravel the principles of speech segmentation and coding. We demonstrate that speech entrains the phase of low-frequency (delta, theta) and the amplitude of high-frequency (gamma) oscillations in the auditory cortex. Phase entrainment is stronger in the right auditory cortex, and amplitude entrainment is stronger in the left. Furthermore, edges in the speech envelope phase-reset auditory cortex oscillations, thereby enhancing their entrainment to speech. This mechanism adapts to the changing physical features of the speech envelope and enables efficient, stimulus-specific speech sampling. Finally, we show that within the auditory cortex, coupling between delta, theta, and gamma oscillations increases following speech edges. Importantly, all couplings (i.e., between brain and speech and within the cortex) attenuate for backward-presented speech, suggesting top-down control. We conclude that segmentation and coding of speech rely on a nested hierarchy of entrained cortical oscillations.
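
    A minimal sketch of one standard way to quantify the phase entrainment described above: band-pass both the speech envelope and a neural trace, extract instantaneous phase with the Hilbert transform, and compute a phase-locking value. The signal names, band limits, and sampling rate are illustrative assumptions, not the article's pipeline.

```python
# Sketch of theta-band phase entrainment between a speech envelope and a
# neural signal; all parameters are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_phase(x, fs, lo, hi, order=4):
    """Instantaneous phase of x in the [lo, hi] Hz band."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, x)))

def phase_locking(x, y, fs, lo=4.0, hi=8.0):
    """Phase-locking value between two signals in one band (here theta)."""
    dphi = band_phase(x, fs, lo, hi) - band_phase(y, fs, lo, hi)
    return np.abs(np.mean(np.exp(1j * dphi)))  # 0 = no locking, 1 = perfect

# Toy usage: a synthetic "envelope" and a noisy, entrained "MEG" trace.
fs = 250
t = np.arange(0, 10, 1 / fs)
envelope = np.sin(2 * np.pi * 5 * t)                     # 5 Hz, theta range
meg = np.sin(2 * np.pi * 5 * t + 0.3) + 0.5 * np.random.randn(t.size)
print(phase_locking(envelope, meg, fs))                  # close to 1
```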

    Connecting the Brain to Itself through an Emulation.

    Pilot clinical trials of human patients implanted with devices that can chronically record from and stimulate ensembles of hundreds to thousands of individual neurons offer the possibility of expanding the substrate of cognition. Parallel trains of firing-rate activity can be delivered in real time to an array of intermediate external modules that in turn can trigger parallel trains of stimulation back into the brain. These modules may be built in software, VLSI firmware, or biological tissue, such as in vitro culture preparations or in vivo ectopic construct organoids. Arrays of modules can be constructed as early-stage whole-brain emulators, following canonical intra- and inter-regional circuits. By using machine learning algorithms and classic tasks known to activate quasi-orthogonal functional connectivity patterns, bedside testing can rapidly identify ensemble tuning properties and in turn cycle through a sequence of external module architectures to explore which can causally alter perception and behavior. Whole-brain emulation both (1) serves to augment human neural function, compensating for disease and injury as an auxiliary parallel system, and (2) has its independent operation bootstrapped by a human-in-the-loop to identify optimal micro- and macro-architectures, update synaptic weights, and entrain behaviors. In this manner, closed-loop brain-computer interface pilot clinical trials can advance strong artificial intelligence development and forge new therapies to restore independence in children and adults with neurological conditions.
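
    The closed-loop architecture the abstract describes (record an ensemble, transform it through a swappable external module, stimulate back) can be made concrete as below. This is purely illustrative: the linear module, array sizes, and function names are assumptions, not any trial's actual implementation.

```python
# Illustrative sketch of a record -> external module -> stimulate loop;
# the linear module and all sizes are stand-in assumptions.
import numpy as np

class ExternalModule:
    """One intermediate module: maps recorded rates to stimulation levels."""
    def __init__(self, n_in, n_out, rng):
        self.w = rng.normal(scale=0.1, size=(n_out, n_in))  # stand-in weights

    def step(self, rates):
        return np.clip(self.w @ rates, 0.0, 1.0)  # bounded stimulation command

def closed_loop(read_rates, stimulate, module, n_steps=10):
    """Each cycle: record an ensemble, transform it, stimulate back."""
    for _ in range(n_steps):
        rates = read_rates()          # parallel trains of firing-rate activity
        command = module.step(rates)  # external emulation stage
        stimulate(command)            # parallel trains of stimulation

rng = np.random.default_rng(0)
module = ExternalModule(n_in=256, n_out=256, rng=rng)
closed_loop(lambda: rng.poisson(5.0, 256).astype(float),
            lambda cmd: None,        # stub: a real loop would drive hardware
            module)
```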

    The temporal pattern of impulses in primary afferents analogously encodes touch and hearing information

    An open question in neuroscience is how the temporal relations between individual impulses in primary afferents contribute to conveying sensory information. We investigated this question in touch and hearing while looking for a shared coding scheme. In both systems, we artificially induced temporally diverse afferent impulse trains and probed the evoked perceptions in human subjects using psychophysical techniques. First, we investigated whether the temporal structure of a fixed number of impulses conveys information about the magnitude of tactile intensity. We found that clustering the impulses into periodic bursts elicited graded increases of intensity as a function of burst impulse count, even though fewer afferents were recruited throughout the longer bursts. The interval between successive bursts of peripheral neural activity (the burst-gap) has been demonstrated in our lab to be the most prominent temporal feature for coding skin vibration frequency, as opposed to either spike rate or periodicity. Second, given the similarities between the tactile and auditory systems, we explored the auditory system for an equivalent neural coding strategy. Using brief acoustic pulses, we showed that the burst-gap is a temporal code for pitch perception shared between the modalities. Following this evidence of parallels in temporal frequency processing, we next assessed the perceptual frequency equivalence between the two modalities in cross-sensory frequency discrimination experiments, using auditory and tactile pulse stimuli with simple and complex temporal features. Identical temporal stimulation patterns in tactile and auditory afferents produced equivalent perceived frequencies, suggesting an analogous temporal frequency computation mechanism. The new insight that tactile intensity can be encoded by clustering fixed-charge electric pulses into bursts suggests a novel approach to conveying varying contact forces to neural interface users, requiring no modulation of either stimulation current or base pulse frequency. Increasing control over the temporal patterning of pulses in cochlear implant users might improve pitch perception and speech comprehension. The perceptual correspondence between touch and hearing not only suggests the possibility of establishing cross-modal comparison standards for robust psychophysical investigations, but also supports the plausibility of cross-sensory substitution devices.
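
    A minimal sketch of the burst-patterned pulse trains described above: a fixed set of pulses grouped into periodic bursts, with the silent interval between bursts (the burst-gap) as the candidate frequency code. All timing parameters are illustrative assumptions, not the study's stimulation settings.

```python
# Sketch of a burst-gap-coded pulse train; timing values are assumptions.
import numpy as np

def burst_train(n_bursts, pulses_per_burst, intra_gap_s, burst_gap_s):
    """Pulse onset times (s): bursts of equally spaced pulses separated
    by a silent burst-gap."""
    times = []
    t = 0.0
    for _ in range(n_bursts):
        for k in range(pulses_per_burst):
            times.append(t + k * intra_gap_s)
        # advance past this burst's duration plus the silent gap
        t += (pulses_per_burst - 1) * intra_gap_s + burst_gap_s
    return np.array(times)

# Under a burst-gap code, perceived frequency tracks 1 / burst_gap_s
# regardless of how many pulses each burst contains.
train = burst_train(n_bursts=5, pulses_per_burst=3,
                    intra_gap_s=0.002, burst_gap_s=0.020)
print(np.round(train, 3))
```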

    Global timing: a conceptual framework to investigate the neural basis of rhythm perception in humans and non-human species

    Timing cues are an essential feature of music. To understand how the brain gives rise to our experience of music, we must appreciate how acoustical temporal patterns are integrated over a range of several seconds in order to extract global timing. In music perception, global timing comprises three distinct but often interacting percepts: temporal grouping, beat, and tempo. What directions may we take to further elucidate where and how the global timing of music is processed in the brain? The present perspective addresses this question and describes our current understanding of the neural basis of global timing perception.
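
    As one concrete reading of integrating temporal patterns over several seconds, here is a minimal sketch of extracting a single global-timing percept (tempo) from an onset envelope by autocorrelation. The method and parameters are assumptions for illustration, not taken from the article.

```python
# Sketch of tempo extraction from an onset envelope; illustrative only.
import numpy as np

def tempo_bpm(onset_env, fs, lo_bpm=40, hi_bpm=200):
    """Dominant inter-onset period via autocorrelation, returned in BPM."""
    x = onset_env - onset_env.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]  # lags >= 0
    lags = np.arange(x.size) / fs                      # lag in seconds
    valid = (lags >= 60.0 / hi_bpm) & (lags <= 60.0 / lo_bpm)
    best_lag = lags[valid][np.argmax(ac[valid])]
    return 60.0 / best_lag

# Toy usage: impulses every 0.5 s (120 BPM) on a 100 Hz onset envelope.
fs = 100
env = np.zeros(10 * fs)
env[::fs // 2] = 1.0
print(tempo_bpm(env, fs))  # ~120
```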

    Role of homeostasis in learning sparse representations

    Neurons in the input layer of primary visual cortex in primates develop edge-like receptive fields. One approach to understanding the emergence of this response is to posit that neural activity has to efficiently represent sensory data with respect to the statistics of natural scenes. Furthermore, it is believed that such efficient coding is achieved through competition across neurons that generates a sparse representation, that is, one in which a relatively small number of neurons are simultaneously active. Indeed, different models of sparse coding, coupled with Hebbian learning and homeostasis, have been proposed that successfully match the observed emergent response. However, the specific role of homeostasis in learning such sparse representations is still largely unknown. By quantitatively assessing the efficiency of the neural representation during learning, we derive a cooperative homeostasis mechanism that optimally tunes the competition between neurons within the sparse coding algorithm. We apply this homeostasis while learning small patches taken from natural images and compare its efficiency with state-of-the-art algorithms. Results show that while different sparse coding algorithms give similar coding results, homeostasis provides an optimal balance for the representation of natural images within the population of neurons: competition in sparse coding is optimized when it is fair. By contributing to the optimization of statistical competition across neurons, homeostasis is crucial in providing a more efficient solution to the emergence of independent components.
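
    A minimal sketch of the idea that homeostasis keeps the competition fair: a matching-pursuit encoder in which per-neuron gains bias atom selection so that all atoms are chosen about equally often. This is not the paper's exact algorithm; the sizes, learning rates, and gain update are illustrative assumptions.

```python
# Sketch of sparse coding (matching pursuit) with homeostatic gains that
# equalize atom usage; all hyperparameters are stand-in assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_atoms, dim, eta, eta_h = 64, 16 * 16, 0.05, 0.01
D = rng.normal(size=(n_atoms, dim))
D /= np.linalg.norm(D, axis=1, keepdims=True)    # unit-norm dictionary
gain = np.ones(n_atoms)                          # homeostatic gains
usage = np.full(n_atoms, 1.0 / n_atoms)          # running selection frequency

def encode(x, n_active=5):
    """Matching pursuit; gains bias the competition, not the coefficients."""
    residual, idx, coef = x.copy(), [], []
    for _ in range(n_active):
        c = D @ residual
        i = int(np.argmax(gain * np.abs(c)))     # fair competition via gains
        idx.append(i)
        coef.append(c[i])
        residual -= c[i] * D[i]
    return idx, coef, residual

for step in range(2000):                         # toy stand-in patch stream
    x = rng.normal(size=dim)
    idx, coef, residual = encode(x)
    for i, a in zip(idx, coef):                  # Hebbian dictionary update
        D[i] += eta * a * residual
        D[i] /= np.linalg.norm(D[i])
    fired = np.zeros(n_atoms)
    fired[idx] = 1.0
    usage += eta_h * (fired - usage)             # track selection frequency
    gain *= np.exp(-eta_h * (usage - 1.0 / n_atoms))  # boost underused atoms
```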