
    Bio-inspired visual self-localization in real world scenarios using Slow Feature Analysis.

    We present a biologically motivated model for visual self-localization that extracts a spatial representation of the environment directly from high-dimensional image data using a single unsupervised learning rule. The resulting representation encodes the position of the camera as slowly varying features while being invariant to its orientation, resembling place cells in a rodent's hippocampus. An omnidirectional mirror allows us to manipulate the image statistics by adding simulated rotational movement for improved orientation invariance. We apply the model in indoor and outdoor experiments and, for the first time, compare its performance against two state-of-the-art visual SLAM methods. The results show that the proposed straightforward model enables precise self-localization with accuracies in the range of 13-33 cm, demonstrating its competitiveness with the established SLAM methods in the tested scenarios.
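    The core technique named in the abstract, Slow Feature Analysis, can be sketched in its linear form: whiten the input time series, then find the unit-variance output directions whose temporal derivatives have the smallest variance. The snippet below is a generic illustration of that idea, not the authors' pipeline (which operates on high-dimensional omnidirectional image data):

    ```python
    import numpy as np

    def linear_sfa(X, n_components=2):
        """Minimal linear Slow Feature Analysis.

        X: array of shape (T, D), a time series of D-dimensional samples.
        Returns the n_components slowest-varying output signals, shape (T, n).
        """
        # Center the data.
        Xc = X - X.mean(axis=0)
        # Whiten via eigendecomposition of the covariance matrix.
        cov = Xc.T @ Xc / len(Xc)
        vals, vecs = np.linalg.eigh(cov)
        keep = vals > 1e-10  # drop directions with negligible variance
        W = vecs[:, keep] / np.sqrt(vals[keep])
        Z = Xc @ W
        # Covariance of the temporal derivatives of the whitened signals.
        dZ = np.diff(Z, axis=0)
        dcov = dZ.T @ dZ / len(dZ)
        dvals, dvecs = np.linalg.eigh(dcov)
        # Smallest derivative-variance directions = slowest features.
        return Z @ dvecs[:, :n_components]
    ```

    On a toy mixture of a slow and a fast sine wave, the first output recovers the slow source up to sign and scale, which is the property the article exploits to encode camera position.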

    Data sets for the real world experiments in the PLOS ONE article "Bio-inspired visual self-localization in real world scenarios using Slow Feature Analysis"

    Data sets for the real world experiments in the PLOS ONE article "Bio-inspired visual self-localization in real world scenarios using Slow Feature Analysis".

    Omnidirectional and perspective images and corresponding ground truth coordinates.

    The parts of the zip archive can be joined into a single file by executing the following command:

    # Linux
    cat real_world_experiments_part.zip* > real_world_experiments.zip

    # Windows
    copy /b real_world_experiments_part.zip.00+real_world_experiments_part.zip.01+real_world_experiments_part.zip.02+real_world_experiments_part.zip.03 real_world_experiments.zip

    Data set for the simulator experiment in the PLOS ONE article "Bio-inspired visual self-localization in real world scenarios using Slow Feature Analysis"

    Data set for the simulator experiment in the PLOS ONE article "Bio-inspired visual self-localization in real world scenarios using Slow Feature Analysis".

    Images 'panorama_0.png' to 'panorama_628.png' are panoramic images rendered on an equidistant grid in a simulator environment.

    Sequences for the training and test sets were created artificially by sampling successive image/coordinate pairs from the grid.

    The files 'train_sequence.csv' and 'test_sequence.csv' contain the image file names and corresponding coordinates for the respective sets.
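    A sequence file of this kind could be read with a few lines of Python. Note that the exact column layout is an assumption here (image filename followed by x and y coordinates); the dataset description above does not specify it, so the indices may need adjusting against the actual files:

    ```python
    import csv

    def load_sequence(path):
        """Read a sequence CSV into a list of (filename, (x, y)) pairs.

        Assumed row layout: image filename, x coordinate, y coordinate.
        Adjust the column indices if the actual files differ.
        """
        pairs = []
        with open(path, newline="") as f:
            for row in csv.reader(f):
                if not row:
                    continue  # skip blank lines
                name, x, y = row[0], float(row[1]), float(row[2])
                pairs.append((name, (x, y)))
        return pairs
    ```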