
    Fronto-parietal brain responses to visuotactile congruence in an anatomical reference frame

    Spatially and temporally congruent visuotactile stimulation of a fake hand together with one’s real hand may result in an illusory self-attribution of the fake hand. Although this illusion relies on a representation of the two touched body parts in external space, there is tentative evidence that, for the illusion to occur, the seen and felt touches also need to be congruent in an anatomical reference frame. We used functional magnetic resonance imaging and a somatotopical, virtual reality-based setup to isolate the neuronal basis of such a comparison. Participants’ index or little finger was synchronously touched with the index or little finger of a virtual hand, under congruent or incongruent orientations of the real and virtual hands. The left ventral premotor cortex responded significantly more strongly to visuotactile co-stimulation of the same versus different fingers of the virtual and real hand. Conversely, the left anterior intraparietal sulcus responded significantly more strongly to co-stimulation of different versus same fingers. Both responses were independent of hand orientation congruence and of spatial congruence of the visuotactile stimuli. Our results suggest that fronto-parietal areas previously associated with multisensory processing within peripersonal space and with tactile remapping evaluate the congruence of visuotactile stimulation on the body according to an anatomical reference frame.

    On the Origin of the Functional Architecture of the Cortex

    The basic structure of receptive fields and functional maps in primary visual cortex is established without exposure to normal sensory experience and before the onset of the critical period. How the brain wires these circuits in the early stages of development remains unknown. Possible explanations include activity-dependent mechanisms driven by spontaneous activity in the retina and thalamus, and molecular guidance orchestrating thalamo-cortical connections on a fine spatial scale. Here I propose an alternative hypothesis: the blueprint for receptive fields, feature maps, and their inter-relationships may reside in the layout of the retinal ganglion cell mosaics along with a simple statistical connectivity scheme dictating the wiring between thalamus and cortex. The model is shown to account for a number of experimental findings, including the relationship between retinotopy, orientation maps, spatial frequency maps and cytochrome oxidase patches. The theory's simplicity, explanatory power, and predictive power make it a serious candidate for the origin of the functional architecture of primary visual cortex.
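The mosaic-based wiring idea in this abstract can be caricatured in a few lines. Everything below (the lattice spacing, jitter, offset, and the nearest-neighbor wiring rule) is a hypothetical toy illustration, not the paper's model: each model cortical cell wires to its nearest ON-center and nearest OFF-center retinal ganglion cell, and its orientation preference is read off the angle of the resulting ON-OFF dipole, so a map emerges from the mosaic layout alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two jittered square lattices stand in for ON- and OFF-center
# retinal ganglion cell mosaics (toy parameters, chosen for illustration).
def mosaic(n, jitter, offset, rng):
    xs, ys = np.meshgrid(np.arange(n), np.arange(n))
    pts = np.stack([xs, ys], axis=-1).reshape(-1, 2).astype(float)
    return pts + offset + rng.normal(0.0, jitter, pts.shape)

on_cells = mosaic(20, 0.15, 0.0, rng)
off_cells = mosaic(20, 0.15, 0.35, rng)   # slightly shifted lattice

# Statistical connectivity scheme (simplified): a cortical cell connects to
# its nearest ON and nearest OFF cell; orientation preference is the angle
# of the ON-OFF dipole, folded into the 0-180 degree range.
def orientation_pref(cortical_xy):
    on = on_cells[np.argmin(np.linalg.norm(on_cells - cortical_xy, axis=1))]
    off = off_cells[np.argmin(np.linalg.norm(off_cells - cortical_xy, axis=1))]
    d = on - off
    return np.degrees(np.arctan2(d[1], d[0])) % 180.0

# Sampling a grid of cortical positions yields an orientation-preference map
prefs = np.array([[orientation_pref(np.array([x, y]))
                   for x in np.linspace(2, 17, 30)]
                  for y in np.linspace(2, 17, 30)])
```

Because nearby cortical positions usually pick the same ON-OFF pair, preferences vary smoothly across the grid, which is the qualitative point of the hypothesis.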

    Modeling Bottom-Up and Top-Down Attention with a Neurodynamic Model of V1

    Previous studies in this line suggested that lateral interactions of V1 cells are responsible, among other visual effects, for bottom-up visual attention (alternatively named visual salience or saliency). Our objective is to mimic these connections in the visual system with a neurodynamic network of firing-rate neurons. Early subcortical processes (i.e., retinal and thalamic) are functionally simulated. An implementation of the cortical magnification function is included to define the retinotopic projections towards V1, processing neuronal activity for each distinct view during scene observation. Novel computational definitions of top-down inhibition (in terms of inhibition of return and selection mechanisms) are also proposed to predict attention in Free-Viewing and Visual Search conditions. Results show that our model outperforms other biologically-inspired models of saliency prediction and better predicts visual saccade sequences during free viewing. We also show how temporal and spatial characteristics of inhibition of return can improve the prediction of saccades, as well as how distinct search strategies (in terms of feature-selective or category-specific inhibition) predict attention in distinct image contexts.
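A minimal sketch of the two mechanisms the abstract names, firing-rate dynamics with lateral interactions and inhibition of return (IoR), might look as follows. The kernel shape, the constants, and the Gaussian IoR rule are illustrative assumptions, not the published model: a 1-D sheet of firing-rate units settles on a salience profile, the peak is "fixated", and an IoR trace suppresses it so the next saccade moves to the next-most-salient location.

```python
import numpy as np

# Firing-rate dynamics: tau * dr/dt = -r + relu(W @ r + input)
def step(r, drive, W, dt=0.1, tau=1.0):
    return r + (dt / tau) * (-r + np.maximum(0.0, W @ r + drive))

n = 50
x = np.arange(n)
d = np.abs(x[:, None] - x[None, :])
# Lateral kernel: short-range excitation, broader inhibition
# (amplitudes kept small enough that the dynamics stay stable)
W = 0.4 * np.exp(-d**2 / 2.0) - 0.3 * np.exp(-d**2 / 18.0)

drive = np.zeros(n)
drive[12], drive[30] = 1.0, 0.8   # two salient locations, one stronger
ior = np.zeros(n)                 # inhibition-of-return trace

fixations = []
for _ in range(2):                # two simulated "saccades"
    r = np.zeros(n)
    for _ in range(200):          # let the network settle
        r = step(r, drive - ior, W)
    target = int(np.argmax(r))    # fixate the salience peak
    fixations.append(target)
    ior += 2.0 * np.exp(-(x - target)**2 / 8.0)   # suppress the attended spot
```

With IoR active, the first fixation lands on the stronger stimulus and the second on the weaker one; without the `ior` term the model would fixate the same peak forever, which is the design motivation for inhibition of return in scanpath models.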

    How far neuroscience is from understanding brains

    The cellular biology of brains is relatively well-understood, but neuroscientists have not yet generated a theory explaining how brains work. Explanations of how neurons collectively operate to produce what brains can do are tentative and incomplete. Without prior assumptions about the brain mechanisms, I attempt here to identify major obstacles to progress in neuroscientific understanding of brains and central nervous systems. Most of the obstacles to our understanding are conceptual. Neuroscience lacks concepts and models rooted in experimental results explaining how neurons interact at all scales. The cerebral cortex is thought to control awake activities, which contrasts with recent experimental results. There is ambiguity distinguishing task-related brain activities from spontaneous activities and organized intrinsic activities. Brains are regarded as driven by external and internal stimuli in contrast to their considerable autonomy. Experimental results are explained by sensory inputs, behavior, and psychological concepts. Time and space are regarded as mutually independent variables for spiking, post-synaptic events, and other measured variables, in contrast to experimental results. Dynamical systems theory and models describing evolution of variables with time as the independent variable are insufficient to account for central nervous system activities. Spatial dynamics may be a practical solution. The general hypothesis that measurements of changes in fundamental brain variables, action potentials, transmitter releases, post-synaptic transmembrane currents, etc., propagating in central nervous systems reveal how they work, carries no additional assumptions. Combinations of current techniques could reveal many aspects of spatial dynamics of spiking, post-synaptic processing, and plasticity in insects and rodents to start with. But problems defining baseline and reference conditions hinder interpretations of the results. 
Furthermore, the fact that pooling and averaging of data destroy their underlying dynamics implies that single-trial designs and statistics are necessary.

    The tectum/superior colliculus as the vertebrate solution for spatial sensory integration and action

    The superior colliculus, or tectum in the case of non-mammalian vertebrates, is a part of the brain that registers events in the surrounding space, often through vision and hearing, but also through electrosensation, infrared detection, and other sensory modalities in diverse vertebrate lineages. This information is used to form maps of the surrounding space and the positions of different salient stimuli in relation to the individual. The sensory maps are arranged in layers with visual input in the uppermost layer, other senses in deeper positions, and a spatially aligned motor map in the deepest layer. Here, we will review the organization and intrinsic function of the tectum/superior colliculus and the information that is processed within tectal circuits. We will also discuss tectal/superior colliculus outputs that are conveyed directly to downstream motor circuits or via the thalamus to cortical areas to control various aspects of behavior. The tectum/superior colliculus is evolutionarily conserved among all vertebrates, but tailored to the sensory specialties of each lineage, and its roles have shifted with the emergence of the cerebral cortex in mammals. We will illustrate both the conserved and divergent properties of the tectum/superior colliculus through vertebrate evolution by comparing tectal processing in lampreys belonging to the oldest group of extant vertebrates, larval zebrafish, rodents, and other vertebrates including primates.

    The haptic perception of spatial orientations

    This review examines the isotropy of the perception of spatial orientations in the haptic system. It shows the existence of an oblique effect (i.e., a better perception of vertical and horizontal orientations than oblique orientations) in a spatial plane intrinsic to the haptic system, determined by gravitational cues and cognitive resources and defined in a subjective frame of reference. Similar results are observed from infancy to adulthood. In 3D space, the haptic processing of orientations is also anisotropic and seems to use both egocentric and allocentric cues. Taken together, these results reveal that the haptic oblique effect occurs when the sensorimotor traces associated with exploratory movement are represented more abstractly at a cognitive level.

    Unified developmental model of maps, complex cells and surround modulation in the primary visual cortex

    For human and animal vision, the perception of local visual features can depend on the spatial arrangement of the surrounding visual stimuli. In the earliest stages of visual processing this phenomenon is called surround modulation, where the response of visually selective neurons is influenced by the response of neighboring neurons. Surround modulation has been implicated in numerous important perceptual phenomena, such as contour integration and figure-ground segregation. In cats, one of the major potential neural substrates for surround modulation is the set of lateral connections between cortical neurons in layer 2/3, which typically contains "complex" cells that appear to combine responses from "simple" cells in layer 4C. Interestingly, these lateral connections have also been implicated in the development of functional maps in primary visual cortex, such as smooth, well-organized maps for the preference of oriented lines. Together, this evidence suggests a common underlying substrate, the lateral interactions in layer 2/3, as the driving force behind the development of orientation maps for both simple and complex cells, and at the same time the expression of surround modulation in adult animals. However, previously these phenomena have been studied largely in isolation, and we are not aware of a computational model that can account for all of them simultaneously and show how they are related. In this thesis we resolve this problem by building a single, unified computational model that can explain the development of orientation maps, the development of simple and complex cells, and surround modulation. First we build a simple, single-layer model of orientation map development based on ALISSOM, which has more realistic single cell properties (such as contrast gain control and contrast invariant orientation tuning) than its predecessor.
Then we extend this model by adding layer 2/3, and show how the model can explain the development of orientation maps of both simple and complex cells. As the last step towards a developmental model of surround modulation, we replace the Mexican-hat-like lateral connectivity in layer 2/3 of the model with a more realistic configuration based on long-range excitation and short-range inhibitory cells, extending a simpler model by Judith Law. The resulting unified model of V1 explains how orientation maps of simple and complex cells can develop, while individual neurons in the developed model express realistic orientation tuning and various surround modulation properties. In doing so, we not only offer a consistent explanation behind all these phenomena, but also create a very rich model of V1 in which the interactions between various V1 properties can be studied. The model allows us to formulate several novel predictions that relate the variation of single cell properties to their location in the orientation preference maps in V1, and we show how these predictions can be tested experimentally. Overall, this model represents a synthesis of a wide body of experimental evidence, forming a compact hypothesis for much of the development and behavior of neurons in the visual cortex.
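The normalized-Hebbian principle underlying LISSOM-family models can be shown in a deliberately tiny caricature (a hypothetical illustration, not the thesis model; the Gabor parameters, learning rate, and iteration count are arbitrary choices): a single model unit repeatedly exposed to gratings of one orientation develops a matching, orientation-selective receptive field through Hebbian updates kept bounded by divisive weight normalization.

```python
import numpy as np

rng = np.random.default_rng(1)

size = 15
yy, xx = np.mgrid[:size, :size] - size // 2

def oriented_input(theta, phase):
    # Gabor-like pattern: isotropic Gaussian envelope times an oriented grating
    u = xx * np.cos(theta) + yy * np.sin(theta)
    return np.exp(-(xx**2 + yy**2) / 30.0) * np.cos(0.8 * u + phase)

w = rng.random((size, size))
w /= np.linalg.norm(w)

for _ in range(2000):
    # Horizontal gratings with random phase as the "visual experience"
    pattern = oriented_input(0.0, rng.uniform(0, 2 * np.pi))
    act = max(0.0, float(np.sum(w * pattern)))   # half-rectified response
    w += 0.01 * act * pattern                    # Hebbian update
    w /= np.linalg.norm(w)                       # divisive weight normalization

def resp(theta):
    # Tuning measure: best response over a set of grating phases
    return max(float(np.sum(w * oriented_input(theta, p)))
               for p in np.linspace(0, 2 * np.pi, 8))
```

After training, `resp(0.0)` exceeds `resp(np.pi / 2)`: the receptive field has come to match the experienced orientation. Full LISSOM-style models apply the same update to many laterally connected units, which is what organizes the individual preferences into smooth maps.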