
    Word contexts enhance the neural representation of individual letters in early visual cortex

    Visual context facilitates perception, but how this is neurally implemented remains unclear. One example of contextual facilitation is found in reading, where letters are more easily identified when embedded in a word. Bottom-up models explain this word advantage as a post-perceptual decision bias, while top-down models propose that word contexts enhance perception itself. Here, we arbitrate between these accounts by presenting words and nonwords and probing the representational fidelity of individual letters using functional magnetic resonance imaging. In line with top-down models, we find that word contexts enhance letter representations in early visual cortex. Moreover, we observe increased coupling between letter information in visual cortex and brain activity in key areas of the reading network, suggesting these areas may be the source of the enhancement. Our results provide evidence for top-down representational enhancement in word recognition, demonstrating that word contexts can modulate perceptual processing even in the earliest visual regions.

    Neural codes for one’s own position and direction in a real-world “vista” environment

    Humans, like animals, rely on accurate knowledge of their spatial position and facing direction to stay oriented in the surrounding space. Although previous neuroimaging studies demonstrated that scene-selective regions (the parahippocampal place area or PPA, the occipital place area or OPA, and the retrosplenial complex or RSC) and the hippocampus (HC) are implicated in coding position and facing direction within small- (room-sized) and large-scale navigational environments, little is known about how these regions represent these spatial quantities in a large open-field environment. Here, we used functional magnetic resonance imaging (fMRI) in humans to explore the neural codes of this navigationally relevant information while participants viewed images that varied in position and facing direction within a familiar, real-world circular square. We observed neural adaptation for repeated directions in the HC, even though no navigational task was required. Further, we found that the amount of knowledge of the environment interacted with PPA selectivity in encoding positions: individuals who needed more time to memorize positions in the square during a preliminary training task showed less neural attenuation in this scene-selective region. We also observed adaptation effects reflecting the real distances between consecutive positions in scene-selective regions, but not in the HC. When examining multi-voxel patterns of activity, we observed that scene-responsive regions and the HC encoded both kinds of spatial information, and that RSC classification accuracy for positions was higher in individuals scoring higher on a self-report questionnaire of spatial abilities.
Our findings provide new insight into how the human brain represents a real, large-scale “vista” space, demonstrating the presence of neural codes for position and direction in both scene-selective and hippocampal regions, and revealing the existence, in the former regions, of a map-like spatial representation reflecting real-world distances between consecutive positions.

    Predictive learning, prediction errors, and attention: evidence from event-related potentials and eye tracking

    Prediction error (“surprise”) affects the rate of learning: we learn more rapidly about cues for which we initially make incorrect predictions than about cues for which our initial predictions are correct. The current studies employ electrophysiological measures to reveal early attentional differentiation of events that differ in their previous involvement in errors of predictive judgment. Error-related events attract more attention, as evidenced by features of event-related scalp potentials previously implicated in selective visual attention (selection negativity, augmented anterior N1). The earliest differences detected occurred around 120 msec after stimulus onset, and distributed source localization (LORETA) indicated that inferior temporal regions were one source of these earliest differences. In addition, stimuli associated with the production of prediction errors showed longer dwell times in an eye-tracking procedure. Our data support the view that early attentional processes play a role in human associative learning.

    Enhancing the performance of the fuzzy system approach to prediction


    Electroencephalographic field influence on calcium momentum waves

    Macroscopic EEG fields can be an explicit top-down neocortical mechanism that directly drives bottom-up processes underlying memory, attention, and other neuronal processes. The top-down mechanism considered is macrocolumnar EEG firing in neocortex, as described by a statistical mechanics of neocortical interactions (SMNI) and developed as a magnetic vector potential $\mathbf{A}$. The bottom-up process considered is $\mathrm{Ca}^{2+}$ waves, prominent in synaptic and extracellular processes and considered to greatly influence neuronal firings. Here, the complementary effects are considered, i.e., the influence of $\mathbf{A}$ on the $\mathrm{Ca}^{2+}$ momentum $\mathbf{p}$. The canonical momentum of a charged particle in an electromagnetic field, $\mathbf{\Pi} = \mathbf{p} + q\mathbf{A}$ (SI units), is calculated, where the charge of $\mathrm{Ca}^{2+}$ is $q = -2e$ and $e$ is the magnitude of the charge of an electron. Calculations demonstrate that the macroscopic EEG $\mathbf{A}$ can be quite influential on the momentum $\mathbf{p}$ of $\mathrm{Ca}^{2+}$ ions, in both classical and quantum mechanics. Molecular scales of $\mathrm{Ca}^{2+}$ wave dynamics are coupled with $\mathbf{A}$ fields developed at macroscopic regional scales, as measured by coherent neuronal firing activity in scalp EEG. The project has three main aspects: fitting $\mathbf{A}$ models to EEG data, as reported here; building tripartite models to develop $\mathbf{A}$ models; and studying long coherence times of $\mathrm{Ca}^{2+}$ waves in the presence of $\mathbf{A}$ due to coherent neuronal firings measured by scalp EEG. The SMNI model supports a mechanism wherein the $\mathbf{p} + q\mathbf{A}$ interaction at tripartite synapses, via a dynamic centering mechanism (DCM) that controls background synaptic activity, acts to maintain short-term memory (STM) during states of selective attention. Comment: Final draft; http://ingber.com/smni14_eeg_ca.pdf may be updated more frequently.
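    The central quantity in this abstract is the canonical momentum $\mathbf{\Pi} = \mathbf{p} + q\mathbf{A}$. As a minimal one-dimensional sketch (not the paper's actual calculation), the relation can be written as follows; the momentum and field magnitudes are assumed purely for illustration:

    ```python
    # Sketch of the canonical-momentum relation quoted in the abstract:
    # Pi = p + q*A (SI units), with q = -2e for the Ca2+ ion as stated there.
    # Numerical magnitudes below are illustrative assumptions, not figures from the paper.

    E_CHARGE = 1.602176634e-19  # elementary charge e, in coulombs (CODATA value)

    def canonical_momentum(p: float, a: float, q: float = -2 * E_CHARGE) -> float:
        """Return Pi = p + q*A for a charged particle in a vector potential (1-D, SI units)."""
        return p + q * a

    # Hypothetical magnitudes, purely for illustration:
    p_ion = 1e-30    # kg*m/s, assumed kinetic momentum of a Ca2+ ion
    a_field = 1e-4   # kg*m/(s*C), assumed vector-potential magnitude
    pi_total = canonical_momentum(p_ion, a_field)  # Pi = p + q*A
    ```

    Under these assumed magnitudes the $q\mathbf{A}$ term dominates $\mathbf{p}$, which is the kind of comparison on which the abstract's claim of EEG influence on ion momentum rests.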

    EEG theta and Mu oscillations during perception of human and robot actions.

    The perception of others' actions supports important skills such as communication, intention understanding, and empathy. Are mechanisms of action processing in the human brain specifically tuned to process biological agents? Humanoid robots can perform recognizable actions, but can look and move differently from humans, and as such can be used in experiments to address such questions. Here, we recorded EEG as participants viewed actions performed by three agents. In the Human condition, the agent had biological appearance and motion. The other two conditions featured a state-of-the-art robot in two different appearances: Android, which had biological appearance but mechanical motion, and Robot, which had mechanical appearance and motion. We explored whether sensorimotor mu (8-13 Hz) and frontal theta (4-8 Hz) activity exhibited selectivity for biological entities, in particular for whether the visual appearance and/or the motion of the observed agent was biological. Sensorimotor mu suppression has been linked to the motor simulation aspect of action processing (and the human mirror neuron system, MNS), and frontal theta to semantic and memory-related aspects. For all three agents, action observation induced significant attenuation in the power of mu oscillations, with no difference between agents. Thus, mu suppression, considered an index of MNS activity, does not appear to be selective for biological agents. Observation of the Robot resulted in greater frontal theta activity compared to the Android and the Human, whereas the latter two did not differ from each other. Frontal theta thus appears to be sensitive to visual appearance, suggesting that agents that are not sufficiently biological in appearance may impose greater memory processing demands on the observer. Studies combining robotics and neuroscience such as this one can allow us to explore the neural basis of action processing on the one hand, and inform the design of social robots on the other.