Self-organizing visual maps

By Robert Sim and Gregory Dudek

Abstract

This paper deals with automatically learning the spatial distribution of a set of images. That is, given a sequence of images acquired from well-separated locations, how can they be arranged to best explain their genesis? The solution to this problem can be viewed as an instance of robot mapping, although it can also be used in other contexts. We examine the problem where only limited prior odometric information is available, employing a feature-based method derived from a probabilistic pose estimation framework. Initially, a set of visual features is selected from the images and correspondences are found across the ensemble. The images are then localized by first assembling the small subset of images for which odometric confidence is high, then sequentially inserting the remaining images, localizing each against the previous estimates and taking advantage of any priors that are available. We present experimental results validating the approach and demonstrating metrically and topologically accurate results over two large image ensembles. Finally, we discuss the results, their relationship to the autonomous exploration of an unknown environment, and their utility for robot localization and navigation.
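The insertion strategy the abstract describes can be illustrated with a minimal sketch: seed the map with images whose odometric confidence is high, then insert the remaining images one at a time, localizing each against landmarks already in the map. Everything here is an illustrative assumption, not the authors' implementation: feature matching is stubbed out as known landmark correspondences, the pose model is pure 2-D translation (so least-squares localization reduces to a mean offset), and the data layout and confidence threshold are invented for the example.

```python
import numpy as np

def localize(observations, landmarks):
    """Translation-only least-squares pose that best explains the observed
    landmark positions given their known map positions. (With a pure
    translation model, the optimal pose is simply the mean offset.)"""
    obs = np.asarray(observations, dtype=float)
    lm = np.asarray(landmarks, dtype=float)
    return (lm - obs).mean(axis=0)

def integrate(image, pose, landmark_map):
    """Add this image's feature observations to the landmark map,
    expressed in the global frame via the image's estimated pose."""
    for lid, obs in image["features"].items():
        landmark_map.setdefault(lid, []).append(pose + np.asarray(obs, float))

def build_map(images, confidence_threshold=0.8):
    """images: list of dicts with 'id', 'odometry_pose', 'confidence', and
    'features' mapping landmark id -> observed (x, y) in the camera frame.
    Returns a dict of estimated poses keyed by image id."""
    # Seed with the high-confidence subset, trusting odometry outright.
    seed = [im for im in images if im["confidence"] >= confidence_threshold]
    rest = [im for im in images if im["confidence"] < confidence_threshold]
    poses = {im["id"]: np.asarray(im["odometry_pose"], float) for im in seed}
    landmark_map = {}
    for im in seed:
        integrate(im, poses[im["id"]], landmark_map)
    # Sequentially insert the remaining images, localizing each against
    # the landmarks estimated so far (correspondences assumed known).
    for im in rest:
        common = [lid for lid in im["features"] if lid in landmark_map]
        obs = [im["features"][lid] for lid in common]
        lm = [np.mean(landmark_map[lid], axis=0) for lid in common]
        poses[im["id"]] = localize(obs, lm)
        integrate(im, poses[im["id"]], landmark_map)
    return poses
```

As a usage example, two well-localized images observing overlapping landmarks are enough to place a third image with poor odometry: its pose is recovered purely from the shared feature correspondences.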

Year: 2004
OAI identifier: oai:CiteSeerX.psu:10.1.1.135.931
Provided by: CiteSeerX
