19 research outputs found

    Slowness and Sparseness Lead to Place, Head-Direction, and Spatial-View Cells

    We present a model for the self-organized formation of place cells, head-direction cells, and spatial-view cells in the hippocampal formation based on unsupervised learning on quasi-natural visual stimuli. The model comprises a hierarchy of Slow Feature Analysis (SFA) nodes, which were recently shown to reproduce many properties of complex cells in the early visual system [1]. The system extracts a distributed grid-like representation of position and orientation, which is transcoded into a localized place-field, head-direction, or view representation by sparse coding. The type of cells that develops depends solely on the relevant input statistics, i.e., the movement pattern of the simulated animal. The numerical simulations are complemented by a mathematical analysis that allows us to accurately predict the output of the top SFA layer.
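    A minimal sketch of this two-stage idea is given below, assuming a single linear SFA step followed by FastICA as a stand-in for the sparse-coding stage; the toy signal, array shapes, and component counts are illustrative and not taken from the original model, which uses a hierarchy of nonlinear SFA nodes.

```python
# Minimal sketch: linear Slow Feature Analysis followed by a sparse recoding
# (FastICA stands in for the sparse-coding step). Assumes a signal array `x`
# of shape (timesteps, channels); all names and parameters are illustrative.
import numpy as np
from sklearn.decomposition import FastICA

def linear_sfa(x, n_slow):
    """Return the n_slow slowest linear features of the time series x."""
    x = x - x.mean(axis=0)                     # center the data
    cov = np.cov(x, rowvar=False)              # whiten via PCA
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > 1e-10
    W = eigvec[:, keep] / np.sqrt(eigval[keep])
    z = x @ W                                  # whitened signal
    dz = np.diff(z, axis=0)                    # temporal derivative
    dval, dvec = np.linalg.eigh(np.cov(dz, rowvar=False))
    return z @ dvec[:, :n_slow]                # smallest eigenvalues = slowest

# toy usage: recover slow sinusoids from a noisy signal, then a sparse readout
t = np.linspace(0, 20 * np.pi, 5000)
x = np.column_stack([np.sin(t) + 0.1 * np.random.randn(t.size),
                     np.cos(t) + 0.1 * np.random.randn(t.size),
                     np.random.randn(t.size)])
slow_features = linear_sfa(x, n_slow=2)
sparse_code = FastICA(n_components=2, random_state=0).fit_transform(slow_features)
```

    FastICA is used here only because maximizing statistical independence is one common way to obtain a sparse, localized recoding of the slow features; it is a stand-in, not the specific sparse-coding rule of the paper.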

    From Grids to Places

    Hafting et al. (2005) described grid cells in the dorsocaudal region of the medial entorhinal cortex (dMEC). These cells show a strikingly regular grid-like firing pattern as a function of the position of a rat in an enclosure. Since the dMEC projects to the hippocampal areas containing the well-known place cells, the question arises whether and how the localized responses of the latter can emerge from the output of grid cells. Here, we show that, starting with simulated grid cells, a simple linear transformation maximizing sparseness leads to a localized representation similar to place fields.
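    As a hedged illustration of this transformation, the sketch below builds idealized grid-cell rate maps as sums of three cosine gratings 60 degrees apart and applies FastICA as the sparseness-maximizing linear step; the spacings, phases, and component counts are invented for the demo and are not the parameters used in the paper.

```python
# Illustrative grid-to-place demo: idealized grid-cell rate maps, then a
# sparseness-maximizing linear transform (FastICA) yielding localized fields.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_grid_cells, grid_size = 100, 40
xs, ys = np.meshgrid(np.linspace(0, 1, grid_size), np.linspace(0, 1, grid_size))
pos = np.column_stack([xs.ravel(), ys.ravel()])        # (positions, 2)

def grid_map(spacing, orientation, phase):
    """Idealized grid-cell rate map: sum of three plane waves 60 deg apart."""
    angles = orientation + np.array([0.0, np.pi / 3, 2 * np.pi / 3])
    k = (4 * np.pi / (np.sqrt(3) * spacing)) * np.column_stack(
        [np.cos(angles), np.sin(angles)])              # wave vectors, (3, 2)
    return np.cos((pos - phase) @ k.T).sum(axis=1)

# population of grid cells with random spacing, orientation and spatial phase
rates = np.column_stack([
    grid_map(rng.uniform(0.2, 0.5), rng.uniform(0, np.pi / 3), rng.uniform(0, 1, 2))
    for _ in range(n_grid_cells)])                     # (positions, cells)

# sparse linear readout: each independent component is a candidate place field
place_fields = FastICA(n_components=20, random_state=0).fit_transform(rates)
place_maps = place_fields.reshape(grid_size, grid_size, -1)
```

    Plotting individual slices of place_maps typically shows single-peaked, localized fields, which is the qualitative effect described in the abstract.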

    Identification of High-level Object Manipulation Operations from Multimodal Input

    Barchunova A, Franzius M, Pardowitz M, Ritter H. Identification of High-level Object Manipulation Operations from Multimodal Input. Presented at the IASTED International Conferences on Automation, Control, and Information Technology.

    Bio-inspired visual self-localization in real world scenarios using Slow Feature Analysis.

    We present a biologically motivated model for visual self-localization which extracts a spatial representation of the environment directly from high-dimensional image data by employing a single unsupervised learning rule. The resulting representation encodes the position of the camera as slowly varying features while being invariant to its orientation, resembling place cells in a rodent's hippocampus. Using an omnidirectional mirror makes it possible to manipulate the image statistics by adding simulated rotational movement for improved orientation invariance. We apply the model in indoor and outdoor experiments and, for the first time, compare its performance against two state-of-the-art visual SLAM methods. The results show that the proposed straightforward model enables precise self-localization with accuracies in the range of 13-33 cm, demonstrating its competitiveness with the established SLAM methods in the tested scenarios.
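    For readers wondering how localization accuracies like the 13-33 cm above can be computed from such a representation, here is a small sketch, assuming the slow features and ground-truth camera positions are already available as arrays; the regressor, train/test split, and synthetic data are placeholders rather than the evaluation protocol of the paper.

```python
# Hedged sketch of evaluating an SFA-based spatial representation: map slow
# features of held-out frames to metric coordinates with a simple linear
# readout and report the mean position error in centimeters.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def localization_error_cm(slow_features, positions_cm):
    """Mean Euclidean error (cm) of a linear readout from slow features."""
    f_train, f_test, p_train, p_test = train_test_split(
        slow_features, positions_cm, test_size=0.3, random_state=0)
    readout = Ridge(alpha=1.0).fit(f_train, p_train)
    errors = np.linalg.norm(readout.predict(f_test) - p_test, axis=1)
    return errors.mean()

# toy usage with synthetic data standing in for real slow features
rng = np.random.default_rng(0)
true_pos = rng.uniform(0, 500, size=(2000, 2))             # positions in cm
features = np.column_stack([true_pos, rng.normal(size=(2000, 3))])
print(f"mean error: {localization_error_cm(features, true_pos):.1f} cm")
```

    A ridge regression is chosen here only as the simplest linear readout; the comparison against the SLAM methods in the paper may rely on a different estimator.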

    Multimodal Segmentation of Object Manipulation Sequences with Product Models

    Barchunova A, Haschke R, Franzius M, Ritter H. Multimodal Segmentation of Object Manipulation Sequences with Product Models. Presented at the International Conference on Multimodal Interaction, Alicante.

    Reinforcement learning on complex visual stimuli


    Slowness and sparseness for unsupervised learning of spatial and object codes from naturalistic data

    This thesis introduces a hierarchical model for unsupervised learning from naturalistic video sequences. The model is based on the learning principles of slowness and sparseness, for which different approaches and implementations are discussed. A variety of neuron classes in the hippocampal formation of rodents and primates codes for different aspects of the space surrounding the animal, including place cells, head direction cells, spatial view cells, and grid cells. In the main part of this thesis, video sequences from a virtual-reality environment are used for training the hierarchical model. The model reproduces the behavior of most known hippocampal neuron types coding for space. The type of representation generated by the model is mostly determined by the movement statistics of the simulated animal. The model approach is not limited to spatial coding: an application to invariant object recognition is described, in which artificial clusters of spheres or rendered fish are presented to the model. The resulting representations allow a simple readout of the identity of the presented object as well as of its position and viewing angle.
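    A minimal sketch of the independent readout mentioned at the end of the abstract, assuming the top-layer model responses are available as a feature matrix: object identity is read out with a classifier, position and viewing angle with a regressor. The feature and label arrays below are random placeholders, so the printed scores are illustrative only.

```python
# Hedged sketch of reading out object identity and pose from model features.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
n_frames, n_features = 1000, 16
features = rng.normal(size=(n_frames, n_features))      # stand-in for model output
identity = rng.integers(0, 3, size=n_frames)            # which object was shown
pose = np.column_stack([rng.uniform(0, 1, size=(n_frames, 2)),   # x, y position
                        rng.uniform(0, 360, size=n_frames)])     # viewing angle

# independent linear readouts for identity and pose
id_readout = LogisticRegression(max_iter=1000).fit(features, identity)
pose_readout = LinearRegression().fit(features, pose)

print("identity accuracy:", id_readout.score(features, identity))
print("pose R^2:", pose_readout.score(features, pose))
```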