
    Parallel and convergent processing in grid cell, head-direction cell, boundary cell, and place cell networks.

    The brain is able to construct internal representations that correspond to external spatial coordinates. Such brain maps of the external spatial topography may support a number of cognitive functions, including navigation and memory. The neuronal building blocks of brain maps are place cells, which are found throughout the hippocampus of rodents and, in a lower proportion, primates. Place cells typically fire in one or a few restricted areas of space, and the size of each firing area ranges, along the dorsoventral axis of the hippocampus, from 30 cm to at least several meters. The sensory processing streams that give rise to hippocampal place cells are not fully understood, but substantial progress has been made in characterizing the entorhinal cortex, which is the gateway between neocortical areas and the hippocampus. Entorhinal neurons have diverse spatial firing characteristics, and the different entorhinal cell types converge in the hippocampus to give rise to a single, spatially modulated cell type: the place cell. We therefore suggest that parallel information processing in different classes of cells, as is typically observed at lower levels of sensory processing, continues up into higher-level association cortices, including those that provide the inputs to the hippocampus. WIREs Cogn Sci 2014, 5:207-219. doi: 10.1002/wcs.1272
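    As a concrete illustration of the firing-field description above, the minimal Python sketch below models a place cell's spatial firing rate as a Gaussian tuning curve; the peak rate, positions, and the dorsoventral range of field widths are hypothetical values chosen only to show how field size could scale from roughly 30 cm up to several meters.

```python
import numpy as np

def place_cell_rate(pos, centre, field_width, peak_rate=20.0):
    """Toy Gaussian place field: firing rate (Hz) as a function of 2D position (metres).
    Parameters are illustrative, not fitted to data."""
    d2 = np.sum((np.asarray(pos, dtype=float) - np.asarray(centre, dtype=float)) ** 2)
    return peak_rate * np.exp(-d2 / (2.0 * field_width ** 2))

# Hypothetical field widths along the dorsoventral axis: ~0.3 m (dorsal) to ~5 m (ventral).
query = (1.0, 0.5)
for width in (0.3, 1.0, 5.0):
    rate = place_cell_rate(query, centre=(0.0, 0.0), field_width=width)
    print(f"field width {width:.1f} m -> {rate:5.2f} Hz at position {query}")
```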

    Visual pathways from the perspective of cost functions and multi-task deep neural networks

    Vision research has been shaped by the seminal insight that we can understand the higher-tier visual cortex from the perspective of multiple functional pathways with different goals. In this paper, we try to give a computational account of the functional organization of this system by reasoning from the perspective of multi-task deep neural networks. Machine learning has shown that tasks become easier to solve when they are decomposed into subtasks with their own cost function. We hypothesize that the visual system optimizes multiple cost functions of unrelated tasks, and that this causes the emergence of a ventral pathway dedicated to vision for perception and a dorsal pathway dedicated to vision for action. To evaluate the functional organization in multi-task deep neural networks, we propose a method that measures the contribution of a unit towards each task, applying it to two networks that have been trained on either two related or two unrelated tasks, using an identical stimulus set. Results show that the network trained on the unrelated tasks shows a decreasing degree of feature representation sharing towards higher-tier layers, while the network trained on related tasks uniformly shows a high degree of sharing. We conjecture that the method we propose can be used to analyze the anatomical and functional organization of the visual system and beyond. We predict that the degree to which tasks are related is a good descriptor of the degree to which they share downstream cortical units.
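    The abstract does not spell out the contribution measure, so the sketch below shows one plausible reading under stated assumptions rather than the authors' method: in a toy two-head network (the hypothetical `TwoTaskNet` below), each hidden unit's contribution to a task is estimated by ablating that unit and recording the increase in that task's loss, and a per-layer sharing index is taken as the correlation between the two tasks' contribution profiles.

```python
import torch
import torch.nn as nn

class TwoTaskNet(nn.Module):
    """Minimal two-task network: a shared hidden layer feeding two task-specific heads."""
    def __init__(self, n_in=10, n_hidden=32):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.head_a = nn.Linear(n_hidden, 1)   # task A readout
        self.head_b = nn.Linear(n_hidden, 1)   # task B readout

    def forward(self, x, mask=None):
        h = self.trunk(x)
        if mask is not None:          # zero out selected hidden units (ablation)
            h = h * mask
        return self.head_a(h), self.head_b(h)

def unit_contributions(net, x, y_a, y_b, loss_fn=nn.MSELoss()):
    """Contribution of each hidden unit to each task = loss increase when that unit is ablated."""
    n_hidden = net.head_a.in_features
    contrib = torch.zeros(n_hidden, 2)
    with torch.no_grad():
        base_a, base_b = net(x)
        base = (loss_fn(base_a, y_a).item(), loss_fn(base_b, y_b).item())
        for i in range(n_hidden):
            mask = torch.ones(n_hidden)
            mask[i] = 0.0
            out_a, out_b = net(x, mask=mask)
            contrib[i, 0] = loss_fn(out_a, y_a).item() - base[0]
            contrib[i, 1] = loss_fn(out_b, y_b).item() - base[1]
    return contrib  # rows: units, columns: tasks

# Toy usage with random data (purely illustrative).
x = torch.randn(128, 10)
y_a, y_b = torch.randn(128, 1), torch.randn(128, 1)
net = TwoTaskNet()
c = unit_contributions(net, x, y_a, y_b)
# Simple sharing measure: correlation between the two tasks' contribution profiles.
sharing = torch.corrcoef(c.T)[0, 1].item()
print(f"feature-sharing index for this layer: {sharing:.3f}")
```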

    The Influence of the Dorsal Pathway on Enhanced Visual Processing

    Overall, our visual experience is such a seamless one that, unless specifically told, we might never know that what we see is actually the visual system taking the very simple input provided by cells in the retina and constructing an image based on rules, calculations, and algorithms that neuroscientists have yet to fully uncover. This is an incredible feat given the plethora of visual stimuli within our environment and the fact that this information is used to inform and plan actions; and, as if that wasn't enough, the visual system also has the capacity to selectively enhance certain aspects of visual processing if need be. The research contained within this dissertation investigates how the dorsal visual pathway enhances both decision-making processes and the processing of visual stimuli presented near the hand. Our findings suggest that the formation of object representations in the dorsal pathway can include both ventral (colour, contrast) and dorsal (speed) stream features (chapters two and three), which in turn greatly speeds decision-making processes within the dorsal pathway. In addition, contrast and speed are integrated automatically, but purely ventral stream features, such as colour, require top-down attention to facilitate enhanced processing speeds (chapter three). In chapter four we find that visual processing near the hand is enhanced in a novel way: when the hand is nearby, orientation tuning is sharpened in a manner not consistent with either oculomotor-driven spatial or feature-based attention. In addition, response variability is reduced when the hand is nearby, raising the possibility that enhanced processing near the hand may be driven by feedback from frontoparietal reaching and grasping regions. The research within this dissertation provides important new information regarding how the dorsal pathway can speed visual processing, and offers insight into the next stage in understanding how we use vision for action.
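    The sharpened orientation tuning reported near the hand can be pictured with a toy tuning-curve model; the preferred orientation, peak rate, and the two tuning widths below are hypothetical values used only to contrast a broader "far from hand" curve with a narrower "near hand" one.

```python
import numpy as np

def orientation_tuning(theta_deg, pref_deg, width_deg, peak=30.0):
    """Toy Gaussian orientation tuning curve (rate in Hz); parameters are hypothetical."""
    d = (theta_deg - pref_deg + 90.0) % 180.0 - 90.0   # wrap orientation difference to [-90, 90)
    return peak * np.exp(-0.5 * (d / width_deg) ** 2)

thetas = np.arange(0, 180, 15)
far_from_hand = orientation_tuning(thetas, pref_deg=45.0, width_deg=30.0)  # broader tuning
near_hand = orientation_tuning(thetas, pref_deg=45.0, width_deg=18.0)      # sharpened tuning
for t, f, n in zip(thetas, far_from_hand, near_hand):
    print(f"{t:3d} deg  far={f:5.1f} Hz  near={n:5.1f} Hz")
```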

    Toward a self-organizing pre-symbolic neural model representing sensorimotor primitives

    The acquisition of symbolic and linguistic representations of sensorimotor behavior is a cognitive process performed by an agent when it is executing and/or observing its own and others' actions. According to Piaget's theory of cognitive development, these representations develop during the sensorimotor stage and the pre-operational stage. We propose a model that relates the conceptualization of higher-level information from visual stimuli to the development of the ventral/dorsal visual streams. This model employs a neural network architecture incorporating a predictive sensory module based on an RNNPB (Recurrent Neural Network with Parametric Biases) and a horizontal product model. We exemplify this model through a robot passively observing an object to learn its features and movements. During the learning process of observing sensorimotor primitives, i.e., observing a set of arm-movement trajectories and the features of the objects they are oriented toward, the pre-symbolic representation is self-organized in the parametric units. These representational units act as bifurcation parameters, guiding the robot to recognize and predict various learned sensorimotor primitives. The pre-symbolic representation also accounts for the learning of sensorimotor primitives in a latent learning context.
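    The sketch below illustrates the parametric-bias idea in a minimal form, not the authors' exact architecture: a small vector of bias units is appended to the input at every time step, and one such vector per observed sequence is learned jointly with the network weights, so that each sensorimotor primitive self-organizes its own bias values. All sizes, the random data, and the training loop are assumptions for illustration.

```python
import torch
import torch.nn as nn

class RNNPB(nn.Module):
    """Minimal sketch of a recurrent network with parametric bias (PB) units."""
    def __init__(self, n_in, n_hidden, n_pb, n_sequences):
        super().__init__()
        self.rnn = nn.RNN(n_in + n_pb, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_in)                 # predict the next input frame
        self.pb = nn.Parameter(torch.zeros(n_sequences, n_pb))   # one learned PB vector per sequence

    def forward(self, x, seq_idx):
        # x: (batch, time, n_in); seq_idx: (batch,) index of each sequence's PB vector
        pb = self.pb[seq_idx].unsqueeze(1).expand(-1, x.size(1), -1)
        h, _ = self.rnn(torch.cat([x, pb], dim=-1))
        return self.readout(h)

# Toy training loop: predict x[t+1] from x[t] for two hypothetical primitives.
model = RNNPB(n_in=4, n_hidden=32, n_pb=2, n_sequences=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
data = torch.randn(2, 20, 4)            # 2 sequences, 20 time steps, 4 features (random stand-in)
idx = torch.tensor([0, 1])
for _ in range(200):
    opt.zero_grad()
    pred = model(data[:, :-1], idx)
    loss = nn.functional.mse_loss(pred, data[:, 1:])
    loss.backward()
    opt.step()
print("learned parametric-bias vectors:\n", model.pb.data)
```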

    Contributions of cortical feedback to sensory processing in primary visual cortex

    Closing the structure-function divide is more challenging in the brain than in any other organ (Lichtman and Denk, 2011). For example, in early visual cortex, feedback projections to V1 can be quantified (e.g., Budd, 1998), but our understanding of feedback function is comparatively rudimentary (Muckli and Petro, 2013). Focusing on the function of feedback, we discuss how textbook descriptions mask the complexity of V1 responses, and how feedback and local activity reflect not only sensory processing but also internal brain states.

    Bio-Inspired Computer Vision: Towards a Synergistic Approach of Artificial and Biological Vision

    Studies in biological vision have always been a great source of inspiration for the design of computer vision algorithms. In the past, several successful methods were designed with varying degrees of correspondence with biological vision studies, ranging from purely functional inspiration to methods that utilise models that were primarily developed for explaining biological observations. Even though it seems well recognised that computational models of biological vision can help in the design of computer vision algorithms, it is a non-trivial exercise for a computer vision researcher to mine relevant information from the biological vision literature, as very few studies in biology are organised at a task level. In this paper we aim to bridge this gap by providing a computer vision task-centric presentation of models primarily originating in biological vision studies. Not only do we revisit some of the main features of biological vision and discuss the foundations of existing computational studies modelling biological vision, but we also consider three classical computer vision tasks from a biological perspective: image sensing, segmentation and optical flow. Using this task-centric approach, we discuss well-known biological functional principles and compare them with approaches taken by computer vision. Based on this comparative analysis of computer and biological vision, we present some recent models in biological vision and highlight a few models that we think are promising for future investigations in computer vision. To this end, this paper provides new insights and a starting point for investigators interested in the design of biology-based computer vision algorithms, and paves the way for much-needed interaction between the two communities, leading to the development of synergistic models of artificial and biological vision.
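    As one example of the kind of biological functional principle such work compares with computer vision practice, the minimal sketch below applies a retina-inspired ON-centre difference-of-Gaussians filter to a synthetic image for the image-sensing task; the filter scales and the test image are arbitrary choices for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def centre_surround(image, sigma_centre=1.0, sigma_surround=3.0):
    """Retina-inspired ON-centre difference-of-Gaussians response (illustrative scales)."""
    return gaussian_filter(image, sigma_centre) - gaussian_filter(image, sigma_surround)

# Toy usage on a synthetic image containing a bright square; the DoG output
# emphasises its edges, much as retinal ganglion cells emphasise local contrast.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
response = centre_surround(img)
print("max ON-centre response:", float(response.max()))
```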