
    Neuronal Mechanisms and Transformations Encoding Time-Varying Signals

    Sensation in natural environments requires the analysis of time-varying signals. While previous work has uncovered how a signal’s temporal rate is represented by neurons in sensory cortex, in this issue of Neuron, new evidence from Gao et al. (2016) provides insights into the underlying mechanisms.

    Auditory and visual sequence learning in humans and monkeys using an artificial grammar learning paradigm

    Language flexibly supports the human ability to communicate using different sensory modalities, such as writing and reading in the visual modality and speaking and listening in the auditory domain. Although it has been argued that nonhuman primate communication abilities are inherently multisensory, direct behavioural comparisons between human and nonhuman primates are scant. Artificial grammar learning (AGL) tasks and statistical learning experiments can be used to emulate ordering relationships between words in a sentence. However, previous comparative work using such paradigms has primarily investigated sequence learning within a single sensory modality. We used an AGL paradigm to evaluate how humans and macaque monkeys learn and respond to identically structured sequences of either auditory or visual stimuli. In the auditory and visual experiments, we found that both species were sensitive to the ordering relationships between elements in the sequences. Moreover, humans and monkeys produced largely similar response patterns to the visual and auditory sequences, indicating that the sequences are processed in comparable ways across the sensory modalities. These results provide evidence that human sequence processing abilities stem from an evolutionarily conserved capacity that appears to operate comparably across the sensory modalities in both human and nonhuman primates. The findings set the stage for future neurobiological studies to investigate the multisensory nature of these sequencing operations in nonhuman primates and how they compare to related processes in humans.
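    The ordering regularities in such AGL paradigms are typically defined by a small finite-state grammar: legal sequences are generated by walking the grammar's transitions, and "violation" sequences break a transition. As a rough illustration (the states, nonsense-word elements, and transitions below are invented placeholders, not the grammar used in the study), legal sequences can be generated and checked like this:

```python
import random

# Hypothetical finite-state grammar: each state maps to a list of
# (next_state, emitted_element) choices. Placeholder elements only.
GRAMMAR = {
    "START": [("A", "biff"), ("A", "hep")],
    "A":     [("B", "jux"), ("B", "rud")],
    "B":     [("END", "dup"), ("A", "cav")],
}

def generate_sequence(max_len=8, rng=random):
    """Walk the grammar from START toward END, emitting one element per transition."""
    state, seq = "START", []
    while state != "END" and len(seq) < max_len:
        state, element = rng.choice(GRAMMAR[state])
        seq.append(element)
    return seq

def is_legal(seq):
    """True if the sequence is a possible path through the grammar (a violation
    sequence fails at the first illegal transition)."""
    states = {"START"}
    for element in seq:
        states = {nxt for s in states if s in GRAMMAR
                  for nxt, e in GRAMMAR[s] if e == element}
        if not states:
            return False
    return True
```

    Exposure sequences would be drawn from `generate_sequence`, while test items pair legal sequences with ones where `is_legal` fails at a specific position.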

    Sequence learning modulates neural responses and oscillatory coupling in human and monkey auditory cortex

    Learning complex ordering relationships between sensory events in a sequence is fundamental for animal perception and human communication. While it is known that rhythmic sensory events can entrain brain oscillations at different frequencies, how learning and prior experience with sequencing relationships affect neocortical oscillations and neuronal responses is poorly understood. We used an implicit sequence learning paradigm (an “artificial grammar”) in which humans and monkeys were exposed to sequences of nonsense words with regularities in the ordering relationships between the words. We then recorded neural responses directly from the auditory cortex in both species in response to novel legal sequences or ones violating specific ordering relationships. Neural oscillations in both monkeys and humans in response to the nonsense word sequences show strikingly similar hierarchically nested low-frequency phase and high-gamma amplitude coupling, establishing this form of oscillatory coupling—previously associated with speech processing in the human auditory cortex—as an evolutionarily conserved biological process. Moreover, learned ordering relationships modulate the observed form of neural oscillatory coupling in both species, with temporally distinct neural oscillatory effects that appear to coordinate neuronal responses in the monkeys. This study identifies the conserved auditory cortical neural signatures involved in monitoring learned sequencing operations, evident as modulations of transient coupling and neuronal responses to temporally structured sensory input.
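    One common way to quantify this kind of low-frequency-phase to high-gamma-amplitude coupling (a standard mean-vector-length modulation index, not necessarily the exact method used in the study) is to band-pass the signal in each band, extract phase and amplitude via the Hilbert transform, and measure how strongly the amplitude depends on the phase. A minimal sketch, assuming NumPy/SciPy and illustrative band limits:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase band-pass filter between lo and hi Hz."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_mvl(x, fs, phase_band=(4, 8), amp_band=(70, 150)):
    """Mean-vector-length phase-amplitude coupling (0 = none, up to 1).

    Band limits here are illustrative choices, not the study's bands.
    """
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return float(np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp))

# Synthetic demo: a 6 Hz rhythm whose phase modulates 100 Hz amplitude.
fs = 1000
t = np.arange(0, 10, 1 / fs)
slow = np.sin(2 * np.pi * 6 * t)
coupled = slow + (1 + slow) / 2 * np.sin(2 * np.pi * 100 * t)
```

    On the synthetic `coupled` signal the index is large, while for uncoupled noise it stays near zero, which is the contrast such analyses exploit between legal and violation sequences.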

    Unified ethical principles and an animal research ‘Helsinki’ declaration as foundations for international collaboration

    Ethical frameworks are the foundation for any research with humans or nonhuman animals. Human research is guided by overarching international ethical principles, such as those defined in the Helsinki Declaration by the World Medical Association. However, for nonhuman animal research, because there are several sets of ethical principles and national frameworks, it is commonly thought that there is substantial variability in animal research approaches internationally and a lack of an animal research ‘Helsinki Declaration’, or the basis for one. We first review several prominent sets of ethical principles, including the 3Rs, 3Ss, 3Vs, 4Fs and 6Ps. Then, using the 3Rs principles originally proposed by Russell & Burch, we critically assess them, asking if they can themselves be Replaced, Reduced or Refined. We find that the 3Rs principles have survived several replacement challenges, and that the different sets of principles (3Ss, 3Vs, 4Fs and 6Ps) are complementary, represent a natural refinement of the 3Rs, and are ripe for integration into a unified set of principles, as proposed here. We also review international frameworks and documents, many of which incorporate the 3Rs, including the Basel Declaration on animal research. Finally, we propose that the available animal research guidance documents across countries can be consolidated to provide a structure similar to that of the Helsinki Declaration, potentially as part of an amended Basel Declaration on animal research. In summary, we observe substantially greater agreement on, and the possibility for unification of, the sets of ethical principles and documents that can guide animal research internationally.

    Mapping effective connectivity in the human brain with concurrent intracranial electrical stimulation and BOLD-fMRI

    BACKGROUND: Understanding brain function requires knowledge of how one brain region causally influences another. This information is difficult to obtain directly in the human brain, and is instead typically inferred from resting-state fMRI. NEW METHOD: Here, we demonstrate the safety and scientific promise of a novel and complementary approach: concurrent electrical stimulation and fMRI (es-fMRI) at 3 T in awake neurosurgical patients with implanted depth electrodes. RESULTS: We document the results of safety testing, the experimental setup, and stimulation parameters that safely and reliably evoke activation in distal structures through stimulation of amygdala, cingulate, or prefrontal cortex. We compare connectivity inferred from the evoked patterns of activation with that estimated from standard resting-state fMRI in the same patients: while connectivity patterns obtained with each approach are correlated, each method produces unique results. Response patterns were stable over the course of 11 min of es-fMRI runs. COMPARISON WITH EXISTING METHOD: es-fMRI in awake humans yields unique information about effective connectivity, complementing resting-state fMRI. Although our stimulations were below the level of inducing any apparent behavioral or perceptual effects, a next step would be to use es-fMRI to modulate task performance. This would reveal the acute network-level changes induced by the stimulation that mediate the behavioral and cognitive effects seen with brain stimulation. CONCLUSIONS: es-fMRI provides a novel and safe approach for mapping effective connectivity in the human brain in a clinical setting, and will inform treatments for psychiatric and neurodegenerative disorders that use deep brain stimulation.

    A taxonomy for vocal learning

    Funding: ONR grant no. N00014-18-1-2062 and the MASTS pooling initiative (The Marine Alliance for Science and Technology for Scotland). MASTS is funded by the Scottish Funding Council (grant no. HR09011) and contributing institutions.

    Humans and songbirds learn to sing or speak by listening to acoustic models, forming auditory templates, and then learning to produce vocalizations that match the templates. These taxa have evolved specialized telencephalic pathways to accomplish this complex form of vocal learning, which has been reported for very few other taxa. By contrast, the acoustic structure of most animal vocalizations is produced by species-specific vocal motor programmes in the brainstem that do not require auditory feedback. However, many mammals and birds can learn to fine-tune the acoustic features of inherited vocal motor patterns based upon listening to conspecifics or noise. These limited forms of vocal learning range from rapid alteration based on real-time auditory feedback to long-term changes of vocal repertoire, and they may involve different mechanisms than complex vocal learning. Limited vocal learning can involve the brainstem, mid-brain and/or telencephalic networks. Understanding complex vocal learning, which underpins human speech, requires careful analysis of which species are capable of which forms of vocal learning. Selecting multiple animal models for comparing the neural pathways that generate these different forms of learning will provide a richer view of the evolution of complex vocal learning and the neural mechanisms that make it possible. This article is part of the theme issue ‘What can animal communication teach us about human language?’

    Numbers in the Blind's “Eye”

    Background: Although the blind lack visual experience with numerosities, recent evidence shows that they perform similarly to sighted persons on numerical comparison or parity judgement tasks. In particular, on tasks presented in the auditory modality, the blind surprisingly show the same effect that appears in sighted persons, demonstrating that numbers are represented through a spatial code, i.e. the Spatial-Numerical Association of Response Codes (SNARC) effect. But, if this is the case, how is this numerical spatial representation processed in the brain of the blind? Principal Findings: Here we report that, although blind and sighted people have similarly organized numerical representations, the attentional shifts generated by numbers have different electrophysiological correlates (sensorial N100 in the sighted and cognitive P300 in the blind). Conclusions: These results highlight possible differences in the use of spatial representations acquired through modalities other than vision in the blind population.
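    The SNARC effect itself is usually quantified by regressing, for each number, the right-hand minus left-hand response-time difference onto number magnitude; a reliably negative slope indicates that small numbers are answered faster with the left hand and large numbers with the right. A toy sketch with fabricated, purely illustrative response times (not data from the study):

```python
import numpy as np

def snarc_slope(digits, rt_left, rt_right):
    """Slope of (RT_right - RT_left) regressed on digit magnitude.

    A reliably negative slope across participants is the classic
    SNARC signature.
    """
    drt = np.asarray(rt_right, float) - np.asarray(rt_left, float)
    slope, _intercept = np.polyfit(digits, drt, 1)
    return float(slope)

# Fabricated mean RTs (ms) per digit, for illustration only.
digits = [1, 2, 3, 4, 6, 7, 8, 9]
rt_left = [510, 515, 520, 528, 540, 548, 552, 560]   # faster for small numbers
rt_right = [555, 550, 543, 538, 527, 521, 516, 510]  # faster for large numbers
```

    With these illustrative values the slope is clearly negative; the same computation applies regardless of whether the digits are presented visually or auditorily.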

    Hemispheric Asymmetries in Speech Perception: Sense, Nonsense and Modulations

    Background: The well-established left hemisphere specialisation for language processing has long been claimed to be based on a low-level auditory specialization for specific acoustic features in speech, particularly regarding ‘rapid temporal processing’. Methodology: A novel analysis/synthesis technique was used to construct a variety of sounds based on simple sentences which could be manipulated in spectro-temporal complexity, and in whether they were intelligible or not. All sounds consisted of two noise-excited spectral prominences (based on the lower two formants in the original speech) which could be static or could vary in frequency and/or amplitude independently. Dynamically varying both acoustic features based on the same sentence led to intelligible speech, but when either or both acoustic features were static, the stimuli were not intelligible. Using the frequency dynamics from one sentence with the amplitude dynamics of another led to unintelligible sounds of comparable spectro-temporal complexity to the intelligible ones. Positron emission tomography (PET) was used to compare which brain regions were active when participants listened to the different sounds. Conclusions: Neural activity to spectral and amplitude modulations sufficient to support speech intelligibility (without actually being intelligible) was seen bilaterally, with a right temporal lobe dominance. A left-dominant response was seen only to intelligible sounds. It thus appears that the left hemisphere specialisation for speech is based on the linguistic properties of utterances, not on particular acoustic features.