    Grounding Mental Representations in a Virtual Multi-Level Functional Framework

    According to the associative theory of learning, reactive behaviors described by stimulus-response pairs result in the progressive wiring of a plastic brain. In contrast, flexible behaviors are supposedly driven by neurologically grounded mental states that involve computations on informational contents. These theories appear complementary, but are generally opposed to each other. The former is favored by neuroscientists who explore the low-level biological processes supporting cognition, and the latter by cognitive psychologists who look for higher-level structures. This situation can be clarified through an analysis that independently defines abstract neurological and informational functionalities, and then relates them through a virtual interface. This framework is validated through a model of the first stage of Piaget's cognitive development theory, whose reported end-of-stage experiments demonstrate the emergence of mental representations of object displacements. The neural correlates grounding this emergence are given in the isomorphic format of an associative memory. As a child's exploration of the world progresses, their mental models will eventually include representations of space, time and causality. Only then will epistemological concepts, such as beliefs, give rise to higher-level mental representations in a possibly richer propositional format. This raises the question of which additional neurological functionalities, if any, would be required in order to include these extensions into a comprehensive grounded model. We relay previously expressed views, which in summary hypothesize that the ability to learn has evolved from associative reflexes and memories, to suggest that the functionality of associative memories could well provide sufficient means for grounding cognitive capacities.
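
    The abstract appeals to an associative memory as the neural correlate grounding mental representations. As a minimal illustration of that general mechanism, the sketch below implements Hebbian outer-product storage and iterative recall in a Hopfield-style network; it is not the authors' model, and all names and sizes are assumptions.

    ```python
    # Minimal sketch of a Hebbian associative memory (Hopfield-style recall).
    # Illustrative only; not the model described in the paper.
    import numpy as np

    def train(patterns):
        """Hebbian outer-product learning of binary (+1/-1) patterns."""
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0)          # no self-connections
        return W / len(patterns)

    def recall(W, cue, steps=10):
        """Iteratively complete a noisy cue into the nearest stored pattern."""
        s = cue.copy()
        for _ in range(steps):
            s = np.sign(W @ s)
            s[s == 0] = 1
        return s

    rng = np.random.default_rng(0)
    stored = rng.choice([-1, 1], size=(3, 64))                      # stored "experiences"
    noisy = stored[0] * rng.choice([1, -1], size=64, p=[0.9, 0.1])  # corrupted cue
    print(np.array_equal(recall(train(stored), noisy), stored[0]))  # usually True
    ```

    The point of the sketch is only that a purely associative mechanism can complete partial or degraded inputs into stored patterns, which is the functional capacity the abstract argues could ground representational content.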

    A computational model for grid maps in neural populations.

    Grid cells in the entorhinal cortex, together with head direction, place, speed and border cells, are major contributors to the organization of spatial representations in the brain. In this work we introduce a novel theoretical and algorithmic framework able to explain the optimality of hexagonal grid-like response patterns. We show that this pattern results from minimal-variance encoding by neurons, together with maximal robustness to neuronal noise and a minimal number of encoding neurons. The novelty lies in formulating the encoding problem with neurons treated as an overcomplete basis (a frame) in which position information is encoded. Using the language of modern frame theory, specifically that of tight and equiangular frames, we provide new insights into the optimality of hexagonal grid receptive fields. The proposed model is based on the well-accepted and tested hypothesis of Hebbian learning, providing a simplified cortical-based framework that does not require the presence of velocity-driven oscillations (oscillatory model) or translational symmetries in the synaptic connections (attractor model). We moreover demonstrate that the proposed encoding mechanism naturally explains axis alignment of neighboring grid cells and maps shifts, rotations and scaling of the stimuli onto the shape of grid cells' receptive fields, giving a straightforward explanation of the experimental evidence of grid-cell remapping under transformations of environmental cues.
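
    The core idea of treating a neural population as a frame can be illustrated with the smallest hexagonally arranged example: three unit vectors at 120 degrees form a tight, equiangular frame in the plane, so an overcomplete set of "neuronal" coefficients encodes a 2-D position and recovers it exactly. This is a minimal sketch under that assumption, not the paper's algorithm.

    ```python
    # Minimal sketch: encoding a 2-D position with a tight, equiangular frame
    # (three unit vectors at 0, 120 and 240 degrees). Illustrative only.
    import numpy as np

    angles = np.deg2rad([0.0, 120.0, 240.0])
    F = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # frame vectors, shape (3, 2)

    def encode(x):
        """Overcomplete code: one coefficient c_k = <x, f_k> per 'neuron'."""
        return F @ x

    def decode(c):
        """Tight-frame reconstruction: x = (2/3) * sum_k c_k f_k (frame bound 3/2)."""
        return (2.0 / 3.0) * (F.T @ c)

    x = np.array([0.7, -0.2])            # a position in the environment
    c = encode(x)
    print(np.allclose(decode(c), x))     # True: exact recovery from the redundant code
    ```

    The redundancy of the code (three coefficients for a two-dimensional position) is what gives robustness to noise in individual units, which is the trade-off the abstract describes for hexagonal grid-like receptive fields.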