
    Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture

    Auditory selective attention is vital in natural soundscapes, but it is unclear how attentional focus on the primary dimension of auditory representation - acoustic frequency - might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation both in myeloarchitectonically-estimated auditory core and across the majority of tonotopically-mapped non-primary auditory cortex. The attentionally-driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when the best frequency is attended versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate a spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically-mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization.

    Late development of cue integration is linked to sensory fusion in cortex

    Adults optimize perceptual judgements by integrating different types of sensory information [1, 2]. This engages specialized neural circuits that fuse signals from the same [3–5] or different [6] modalities. Whereas young children can use sensory cues independently, adult-like precision gains from cue combination only emerge around ages 10 to 11 years [7–9]. Why does it take so long to make best use of sensory information? Existing data cannot distinguish whether this (1) reflects surprisingly late changes in sensory processing (sensory integration mechanisms in the brain are still developing) or (2) depends on post-perceptual changes (integration in sensory cortex is adult-like, but higher-level decision processes do not access the information) [10]. We tested visual depth cue integration in the developing brain to distinguish these possibilities. We presented children aged 6–12 years with displays depicting depth from binocular disparity and relative motion and made measurements using psychophysics, retinotopic mapping, and pattern-classification fMRI. Older children (>10.5 years) showed clear evidence for sensory fusion in V3B, a visual area thought to integrate depth cues in the adult brain [3–5]. By contrast, in younger children (<10.5 years), there was no evidence for sensory fusion in any visual area. This significant age difference was paired with a shift in perceptual performance around ages 10 to 11 years and could not be explained by motion artifacts, visual attention, or signal quality differences. Thus, whereas many basic visual processes mature early in childhood [11, 12], the brain circuits that fuse cues take a very long time to develop.
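
The "optimal integration" benchmark used in this literature is the maximum-likelihood (inverse-variance-weighted) combination of two cues. A minimal sketch of that standard model follows; the numbers and function name are illustrative, not taken from the study.

```python
# Maximum-likelihood cue combination: the standard model of adult-like
# integration. Each cue i yields an estimate s_i with noise sigma_i;
# the optimal combined estimate weights each cue by its reliability
# (inverse variance) and is more precise than either cue alone.

def combine_cues(s1, sigma1, s2, sigma2):
    """Reliability-weighted average of two cue estimates."""
    w1 = 1.0 / sigma1**2
    w2 = 1.0 / sigma2**2
    s_hat = (w1 * s1 + w2 * s2) / (w1 + w2)
    sigma_hat = (1.0 / (w1 + w2)) ** 0.5
    return s_hat, sigma_hat

# Illustrative values: disparity cue (sigma = 2.0), motion cue (sigma = 1.0)
s_hat, sigma_hat = combine_cues(10.0, 2.0, 12.0, 1.0)
# The combined estimate is pulled toward the more reliable cue, and
# sigma_hat is smaller than the better single-cue sigma.
```

The developmental question in the abstract is precisely whether children's precision shows this predicted reduction in `sigma_hat` relative to the best single cue.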

    The Human Homologue of Macaque Area V6A

    In macaque monkeys, V6A is a visuomotor area located in the anterior bank of the parieto-occipital sulcus (POs), dorsal and anterior to the retinotopically-organized extrastriate area V6 (Galletti et al., 1996). Unlike V6, V6A represents both contra- and ipsilateral visual fields and is broadly retinotopically organized (Galletti et al., 1999b). The contralateral lower visual field is over-represented in V6A. The central 20°–30° of the visual field are mainly represented dorsally (V6Ad) and the periphery ventrally (V6Av), at the border with V6. Both sectors of area V6A contain arm movement-related cells, active during spatially-directed reaching movements (Gamberini et al., 2011). In humans, we previously mapped the retinotopic organization of area V6 (Pitzalis et al., 2006). Here, using phase-encoded fMRI, cortical surface-based analysis and wide-field retinotopic mapping, we define a new cortical region that borders V6 anteriorly and shows a clear over-representation of the contralateral lower visual field and of the periphery. As in macaque V6A, eccentricity increases moving ventrally within the area. The new region contains a non-mirror-image representation of the visual field. Functional mapping reveals that, as in macaque V6A, the new region, but not the nearby area V6, responds during finger pointing and reaching movements. Based on similarity in position, retinotopic properties, functional organization and relationship with the neighbouring extrastriate visual areas, we propose that the new cortical region is the human homologue of macaque area V6A.
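
In phase-encoded retinotopic mapping, a periodic stimulus (e.g. a rotating wedge or expanding ring) evokes a periodic BOLD response whose phase at the stimulus frequency encodes each voxel's preferred visual-field position. A minimal sketch of the phase estimation, with simulated rather than real fMRI data:

```python
import numpy as np

def response_phase(timeseries, n_cycles):
    """Phase lag (radians) of the response at the stimulus frequency.

    The stimulus completes n_cycles periods over the run; the phase of
    the Fourier component at that frequency maps to the voxel's
    preferred eccentricity or polar angle.
    """
    spectrum = np.fft.rfft(timeseries)
    return -np.angle(spectrum[n_cycles])

# Simulate one voxel's noise-free response with a known phase lag
n_vols, n_cycles, true_phase = 128, 8, 1.0
t = np.arange(n_vols)
ts = np.cos(2 * np.pi * n_cycles * t / n_vols - true_phase)
est = response_phase(ts, n_cycles)  # recovers the simulated lag
```

In practice the recovered phase per vertex is displayed on a flattened cortical surface, and reversals in the phase gradient delimit area borders.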

    Does congenital deafness affect the structural and functional architecture of primary visual cortex?

    Deafness results in greater reliance on the remaining senses. It is unknown whether the cortical architecture of the intact senses is optimized to compensate for lost input. Here we performed wide-field population receptive field (pRF) mapping of primary visual cortex (V1) with functional magnetic resonance imaging (fMRI) in hearing and congenitally deaf participants, all of whom had learnt sign language after the age of 10 years. We found larger pRFs encoding the peripheral visual field of deaf compared to hearing participants. This was likely driven by larger facilitatory center zones of the pRF profile concentrated in the near and far periphery in the deaf group. pRF density was comparable between groups, indicating pRFs overlapped more in the deaf group. This could suggest that a coarse coding strategy underlies enhanced peripheral visual skills in deaf people. Cortical thickness was also decreased in V1 in the deaf group. These findings suggest deafness causes structural and functional plasticity at the earliest stages of visual cortex.
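
A pRF with a "facilitatory center zone" is commonly modeled as a center-surround profile, e.g. a difference of Gaussians (DoG): an excitatory center minus a weaker, broader surround. The sketch below illustrates that standard model with made-up parameter values; it is not the paper's fitting code.

```python
import numpy as np

def dog_prf(x, y, x0, y0, sigma_c, sigma_s, amp_s=0.5):
    """Difference-of-Gaussians pRF profile.

    (x0, y0): pRF center in visual-field coordinates (degrees);
    sigma_c, sigma_s: widths of the facilitatory center and the
    broader suppressive surround; amp_s: relative surround strength.
    """
    d2 = (x - x0) ** 2 + (y - y0) ** 2
    center = np.exp(-d2 / (2 * sigma_c**2))
    surround = amp_s * np.exp(-d2 / (2 * sigma_s**2))
    return center - surround

# Evaluate on a grid of visual-field positions (degrees)
xx, yy = np.meshgrid(np.linspace(-10, 10, 101), np.linspace(-10, 10, 101))
prf = dog_prf(xx, yy, x0=5.0, y0=0.0, sigma_c=1.5, sigma_s=4.0)
# Positive peak at the pRF center, negative flanks from the surround
```

Under this model, the group difference reported above corresponds to a larger `sigma_c` for peripheral pRFs in the deaf group.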

    Constraining the electric charges of some astronomical bodies in Reissner-Nordström spacetimes and generic r^-2-type power-law potentials from orbital motions

    We put model-independent, dynamical constraints on the net electric charge Q of some astronomical and astrophysical objects by assuming that their exterior spacetimes are described by the Reissner-Nordström metric, which induces an additional potential U_RN \propto Q^2 r^-2. Our results extend to other hypothetical power-law interactions inducing extra potentials U_pert \propto r^-2 as well (abridged). Comment: LaTeX2e, 16 pages, 3 figures, no tables, 128 references. Version matching the one in press in General Relativity and Gravitation (GRG). arXiv admin note: substantial text overlap with arXiv:1112.351
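
For reference, the charge-dependent term can be written out explicitly. This is the standard textbook form of the Reissner-Nordström metric function in SI units, not an expression quoted from the paper:

```latex
% Reissner-Nordström metric function (standard form, SI units):
\[
f(r) \;=\; 1 \;-\; \frac{2GM}{c^{2}r} \;+\; \frac{G\,Q^{2}}{4\pi\varepsilon_{0}\,c^{4}\,r^{2}},
\]
% so that, writing f = 1 + 2\Phi/c^2, the charge contributes an extra
% repulsive radial potential falling off as the inverse square:
\[
U_{\mathrm{RN}}(r) \;=\; \frac{G\,Q^{2}}{8\pi\varepsilon_{0}\,c^{2}\,r^{2}} \;\propto\; \frac{Q^{2}}{r^{2}}.
\]
```

Any other interaction producing an r^-2 extra potential perturbs orbital elements in the same way, which is why the constraints carry over to generic U_pert \propto r^-2 models.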

    Establishing a time-line of word recognition: evidence from eye movements and event-related potentials

    The average duration of eye fixations in reading places constraints on the time available for lexical processing. Data from event-related potential (ERP) studies of word recognition can illuminate stages of processing within a single fixation on a word. In the present study, high- and low-frequency regular and exception words were used as targets in an eye movement reading experiment and a high-density electrode ERP lexical decision experiment. Effects of lexicality (words vs. pseudowords vs. consonant strings), word frequency (high vs. low frequency) and word regularity (regular vs. exception spelling-sound correspondence) were examined. Results suggest a very early time-course for these aspects of lexical processing within the context of a single eye fixation.

    Parietal and superior frontal visuospatial maps activated by pointing and saccades

    A recent study from our laboratory demonstrated that parietal cortex contains a map of visual space related to saccades and spatial attention and identified this area as the likely human homologue of the lateral intraparietal area (LIP). A human homologue for the parietal reach region (PRR), thought to preferentially encode planned hand movements, has also been recently proposed. Both of these areas, originally identified in the macaque monkey, have been shown to encode space in eye-centered coordinates. Functional magnetic resonance imaging (fMRI) of humans was used to test the hypothesis that the putative human PRR contains a retinotopic map recruited by finger pointing but not saccades, and to test more generally for differences in the visuospatial maps recruited by pointing and saccades. We identified multiple maps in both posterior parietal cortex and superior frontal cortex recruited for eye and hand movements, including maps not observed in previous mapping studies. Pointing and saccade maps were generally consistent within single subjects. We have developed new group analysis methods for phase-encoded data, which revealed subtle differences between pointing and saccades, including hemispheric asymmetries, but we did not find evidence of pointing-specific maps of visual space.

    Statistical Shape Modeling of Unfolded Retinotopic Maps for a Visual Areas Probabilistic Atlas

    This paper proposes a statistical model of the functional landmarks delimiting low-level visual areas, which are highly variable across individuals. Low-level visual areas are first precisely delineated by fMRI retinotopic mapping, which provides detailed information about the correspondence between the visual field and its cortical representation. The model is then built by learning the variability within a given training set. It relies on an appropriate data representation and on the definition of a coordinate system intrinsic to each visual map, which allows a consistent training set to be built, to which a principal components analysis (PCA) is finally applied. Our approach constitutes a first step toward a functional landmark-based probabilistic atlas of low-level visual areas.
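
The learning step described above is the classic statistical shape model recipe: represent each subject's landmarks as one coordinate vector, subtract the mean shape, and take principal modes of the centered data. A minimal sketch with random placeholder data standing in for the real training set:

```python
import numpy as np

# Placeholder training set: one row per subject, landmarks flattened
# to (x1, y1, x2, y2, ...) in a common intrinsic coordinate system.
rng = np.random.default_rng(0)
n_subjects, n_landmarks = 20, 30
shapes = rng.normal(size=(n_subjects, 2 * n_landmarks))

# Center the data on the mean shape
mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape

# Principal modes of shape variation from the SVD of the centered data
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
modes = Vt                       # rows: orthonormal modes of variation
variances = s**2 / (n_subjects - 1)   # variance explained by each mode

# A plausible new shape: mean plus a weighted sum of the leading modes
b = np.array([1.0, -0.5])
new_shape = mean_shape + b @ modes[:2]
```

Constraining the weights `b` to a few standard deviations of each mode keeps generated shapes within the variability seen in the training set, which is what makes such a model usable as a probabilistic atlas prior.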

    Neuroanatomy, circuitry and plasticity of word reading

    Neuroimaging has provided a basis for identifying the brain areas active during the reading of words and sentences. When combined with high-density electrical recording from the scalp, it is possible to obtain information on the time course of activation of these brain areas and to compare it with the temporal structure of reading derived from studies of eye movements. The paper summarizes results in these areas and suggests how acquisition and practice of the skill might alter the circuitry involved.