1,525 research outputs found

    Omnidirectional Vision Based Topological Navigation

    Goedemé T., Van Gool L., ''Omnidirectional vision based topological navigation'', in Mobile Robots Navigation, Barrera Alejandra, ed., pp. 172-196, InTech, March 2010.

    Vision-Based Navigation III: Pose and Motion from Omnidirectional Optical Flow and a Digital Terrain Map

    An algorithm for pose and motion estimation using corresponding features in omnidirectional images and a digital terrain map is proposed. In a previous paper, such an algorithm was developed for a regular camera. Using a Digital Terrain (or Digital Elevation) Map (DTM/DEM) as a global reference enables recovering the absolute position and orientation of the camera. To do this, the DTM is used to formulate a constraint between corresponding features in two consecutive frames. In this paper, these constraints are extended to handle non-central projection, as is the case with many omnidirectional systems. The use of omnidirectional data is shown to improve the robustness and accuracy of the navigation algorithm. The feasibility of this algorithm is established through lab experimentation with two kinds of omnidirectional acquisition systems: the first a polydioptric camera, the second a catadioptric camera.
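    As a rough illustration of the DTM constraint described in the abstract, the sketch below intersects a viewing ray from the first frame with the terrain and requires the recovered world point to lie on the corresponding ray of the second frame. All names are hypothetical, and it assumes a central projection for simplicity, whereas the paper extends the constraint to non-central systems.

```python
import numpy as np

def ray_dtm_intersection(origin, direction, dtm, step=1.0, max_range=500.0):
    """March a viewing ray from the camera until it crosses the terrain.

    dtm(x, y) is assumed to return the terrain elevation at planar
    coordinates (x, y); origin and direction live in the same world frame.
    """
    t = 0.0
    while t < max_range:
        p = origin + t * direction
        if p[2] <= dtm(p[0], p[1]):   # ray has dropped below the terrain
            return p
        t += step
    return None                        # ray never hit the terrain

def frame_to_frame_residual(pose2, point_world, ray2_measured):
    """Constraint between consecutive frames: the world point recovered by
    intersecting a frame-1 feature ray with the DTM must lie along the
    corresponding viewing ray measured in frame 2."""
    R, t = pose2                       # rotation and translation of frame 2
    ray_predicted = R.T @ (point_world - t)
    ray_predicted /= np.linalg.norm(ray_predicted)
    return np.cross(ray_predicted, ray2_measured)  # zero when the rays align
```

    Stacking such residuals over many feature correspondences and minimising them with a nonlinear least-squares solver would recover the absolute pose; for a non-central omnidirectional system, each pixel would additionally carry its own ray origin.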

    Face tracking using a hyperbolic catadioptric omnidirectional system

    In the first part of this paper, we present a brief review of catadioptric omnidirectional systems. The special case of the hyperbolic omnidirectional system is analysed in depth. The literature shows that a hyperboloidal mirror has two clear advantages over alternative geometries. Firstly, a hyperboloidal mirror has a single projection centre [1]. Secondly, the image resolution is uniformly distributed along the mirror’s radius [2]. In the second part of this paper we show empirical results for the detection and tracking of faces in the omnidirectional images using the Viola-Jones method. Both panoramic and perspective projections, extracted from the omnidirectional image, were used for that purpose. The omnidirectional image size was 480×480 pixels in greyscale. The tracking method used regions of interest (ROIs) set as the result of the detection of faces in a panoramic projection of the image. To avoid losing or duplicating detections, the panoramic projection was extended horizontally; duplications were eliminated based on the ROIs established by previous detections. After a confirmed detection, faces were tracked in perspective projections (called virtual cameras), each one associated with a particular face. The zoom, pan and tilt of each virtual camera were determined by the ROIs previously computed on the panoramic image. The results show that, with a careful combination of the two projections, good frame rates can be achieved while tracking faces reliably.
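    The pipeline of panoramic unwarping, horizontal extension, and Viola-Jones detection can be sketched with OpenCV as below. The mirror centre, radii and overlap width are placeholder values, and the Haar cascade is OpenCV's stock frontal-face model standing in for whatever classifier the paper trained.

```python
import cv2
import numpy as np

def omni_to_panorama(omni, center, r_min, r_max, width=1024, height=256):
    """Unwarp a catadioptric image into a panoramic strip by polar resampling."""
    theta = np.linspace(0, 2 * np.pi, width, endpoint=False)
    radius = np.linspace(r_min, r_max, height)
    map_x = (center[0] + np.outer(radius, np.cos(theta))).astype(np.float32)
    map_y = (center[1] + np.outer(radius, np.sin(theta))).astype(np.float32)
    return cv2.remap(omni, map_x, map_y, cv2.INTER_LINEAR)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

omni = cv2.imread("omni_frame.png", cv2.IMREAD_GRAYSCALE)  # 480x480 frame
pano = omni_to_panorama(omni, center=(240, 240), r_min=60, r_max=230)

# Extend the panorama horizontally so a face straddling the 0/360-degree
# seam is still seen whole; duplicates in the overlap are pruned afterwards
# against the ROIs of earlier detections.
overlap = 64
pano_ext = np.hstack([pano, pano[:, :overlap]])
faces = detector.detectMultiScale(pano_ext, scaleFactor=1.1, minNeighbors=4)
```

    Each confirmed detection would then seed a perspective "virtual camera" whose pan, tilt and zoom follow the face's ROI on the panorama.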

    Incremental topological mapping using omnidirectional vision

    This paper presents an algorithm that builds topological maps, using omnidirectional vision as the only sensor modality. Local features are extracted from images obtained in sequence, and are used both to cluster the images into nodes and to detect links between the nodes. The algorithm is incremental, reducing the computational requirements of the corresponding batch algorithm. Experimental results in a complex, indoor environment show that the algorithm produces topologically correct maps, closing loops without suffering from perceptual aliasing or false links. Robustness to lighting variations was further demonstrated by building correct maps from combined multiple datasets collected over a period of two months.
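    A minimal sketch of the incremental clustering idea is given below, with ORB features and brute-force matching standing in for the paper's local features. Each new image is compared against every existing node; a strong match to a past node closes a loop, a weak best match spawns a new node, and the transition between consecutive nodes records a link. The threshold is an arbitrary placeholder.

```python
import cv2

class TopologicalMapper:
    """Incremental topological mapping sketch: cluster incoming images into
    nodes by local-feature similarity and link consecutively visited nodes."""

    def __init__(self, match_threshold=40):
        self.orb = cv2.ORB_create()
        self.matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        self.nodes = []          # one representative descriptor set per node
        self.edges = set()       # links between nodes
        self.current = None
        self.match_threshold = match_threshold

    def add_image(self, image):
        _, desc = self.orb.detectAndCompute(image, None)
        # Compare against every existing node; matching an old node far
        # back in the sequence is exactly a loop closure.
        best, best_score = None, 0
        for i, node_desc in enumerate(self.nodes):
            score = len(self.matcher.match(desc, node_desc))
            if score > best_score:
                best, best_score = i, score
        if best_score < self.match_threshold:     # novel place: new node
            self.nodes.append(desc)
            best = len(self.nodes) - 1
        if self.current is not None and best != self.current:
            self.edges.add((self.current, best))  # traversed link
        self.current = best
```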

    Affordable robot mapping using omnidirectional vision

    © 2021 EPSRC UK-Robotics and Autonomous Systems (UK-RAS) Network. This is an open access conference paper distributed under the terms of the Creative Commons Attribution License (CC BY), https://creativecommons.org/licenses/by/4.0/. Mapping is a fundamental requirement for robot navigation. In this paper, we introduce a novel visual mapping method that relies solely on a single omnidirectional camera. We present a metric that allows us to generate a map from the input image by using a visual-sonar approach. Combining the visual sonars with the robot's odometry enables us to determine a relating equation and subsequently generate a map that is suitable for robot navigation. Results based on visual map comparison indicate that our approach is comparable with established solutions based on RGB-D cameras or laser-based sensors. Evaluation of its accuracy against these established methods is ongoing.
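    The visual-sonar idea can be sketched as casting radial rays from the mirror centre and reporting, per bearing, the radius at which the free floor ends. The sketch below is an illustration under assumed names: floor_mask_fn is a hypothetical per-pixel floor classifier (the paper's metric relating image radii to metric ranges is not reproduced here).

```python
import numpy as np

def visual_sonar(omni, center, floor_mask_fn, n_rays=72, max_r=230):
    """Cast radial 'sonar' rays from the mirror centre of an omnidirectional
    image and return, for each bearing, the radius of the first pixel that
    is not classified as free floor."""
    h, w = omni.shape[:2]
    ranges = np.full(n_rays, float(max_r))
    for i in range(n_rays):
        theta = 2 * np.pi * i / n_rays
        for r in range(20, max_r):            # skip the robot's own body
            x = int(center[0] + r * np.cos(theta))
            y = int(center[1] + r * np.sin(theta))
            if not (0 <= x < w and 0 <= y < h):
                break                          # ray left the image
            if not floor_mask_fn(omni[y, x]):  # obstacle boundary reached
                ranges[i] = r
                break
    return ranges
```

    Accumulating such range scans along the odometry trajectory, in the manner of laser scans, would then yield the navigation map.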

    Visual servoing of a car-like vehicle - an application of omnidirectional vision

    In this paper, we develop the switching controller presented by Lee et al. for the pose control of a car-like vehicle, to allow the use of an omnidirectional vision sensor. To this end we incorporate an extension to a hypothesis on the navigation behaviour of the desert ant, Cataglyphis bicolor, which leads to a correspondence-free, landmark-based vision technique. The method we present allows positioning to a learnt location based on feature bearing-angle and range discrepancies between the robot's current view of the environment and that at the learnt location. We present simulations and experimental results, the latter obtained using our outdoor mobile platform.
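    A heavily simplified sketch of such bearing-and-range homing is given below. It is not the paper's controller: landmarks are assumed to be paired by their ordering rather than by explicit correspondence, and the weighting of the two discrepancy terms is an arbitrary choice for illustration.

```python
import numpy as np

def home_vector(current, learnt):
    """Correspondence-free homing sketch: each landmark contributes a small
    correction that shrinks the bearing and range discrepancies between the
    current view and the view stored at the learnt location.

    current, learnt: iterables of (bearing_rad, range_m), one per landmark,
    paired by position in the list rather than by explicit matching.
    """
    v = np.zeros(2)
    for (b_c, r_c), (b_l, r_l) in zip(current, learnt):
        # A range discrepancy pushes the robot along the landmark direction;
        # a bearing discrepancy pushes it perpendicular to that direction.
        direction = np.array([np.cos(b_c), np.sin(b_c)])
        normal = np.array([-np.sin(b_c), np.cos(b_c)])
        v += (r_c - r_l) * direction + r_c * (b_c - b_l) * normal
    return v / max(len(current), 1)   # averaged drive direction to the goal
```

    In a car-like vehicle this drive direction would feed the switching pose controller rather than being followed holonomically.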

    Adaptive Markov Random Fields for Omnidirectional Vision

    Images obtained with catadioptric sensors contain significant deformations which prevent the direct use of classical image processing techniques. Thus, Markov Random Fields (MRFs), whose usefulness is well established for projective image processing, cannot be applied directly to catadioptric images because the classical pixel neighborhood is inadequate. In this paper, we propose to define a new neighborhood for MRFs by using the equivalence theorem developed for central catadioptric sensors. We show the importance of this adaptation for a motion detection application.
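    One way to realise such an adapted neighborhood, sketched below under assumed parameter names, is to lift pixels to the unit sphere with the unified central catadioptric model (which the equivalence theorem guarantees for single-viewpoint sensors such as hyperboloidal mirrors) and take each site's nearest neighbors by angular distance instead of the usual 4-connected grid. The brute-force search is purely illustrative.

```python
import numpy as np

def lift_to_sphere(u, v, xi, f, u0, v0):
    """Lift an image point to the unit sphere under the unified central
    catadioptric model: mirror parameter xi, focal length f, principal
    point (u0, v0)."""
    x, y = (u - u0) / f, (v - v0) / f
    r2 = x * x + y * y
    eta = (xi + np.sqrt(1 + (1 - xi * xi) * r2)) / (1 + r2)
    s = np.array([eta * x, eta * y, eta - xi])
    return s / np.linalg.norm(s)

def spherical_neighbors(pixels, idx, k=4, **model):
    """Adapted MRF neighborhood: the k pixels closest on the sphere,
    replacing the 4-connected grid neighbors of projective images."""
    pts = np.array([lift_to_sphere(u, v, **model) for u, v in pixels])
    ang = np.arccos(np.clip(pts @ pts[idx], -1.0, 1.0))  # angular distances
    return [int(j) for j in np.argsort(ang) if j != idx][:k]
```

    Near the image centre these spherical neighbors spread over many pixels, while near the rim they tighten, which is exactly the geometry-aware behavior the grid neighborhood lacks.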