
    Combining Edge and One-Point RANSAC Algorithm to Estimate Visual Odometry

    In recent years, classical structure-from-motion based SLAM has achieved significant results. Omnidirectional camera-based motion estimation has attracted researchers' interest due to the larger field of view. This paper proposes a method to estimate the 2D motion of a vehicle and to build a map using an EKF based on edge matching and one-point RANSAC. Edge-matching-based azimuth rotation estimation is used as pseudo prior information when the EKF predicts the state vector. To reduce the number of parameters required for motion estimation and reconstruction, the vehicle is assumed to move under a nonholonomic, car-like motion model. The experiments were carried out using an electric vehicle with an omnidirectional camera mounted on the roof. To evaluate the motion estimation, the vehicle positions were compared with GPS information and superimposed onto aerial images collected via the Google Maps API. The experimental results showed that an EKF-based method without the prior rotation information gives an error about 1.9 times larger than that of the proposed method.
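    The abstract does not give implementation details, but the role of the edge-based azimuth prior can be illustrated with a minimal sketch of an EKF prediction step under a car-like, circular-arc motion model. All names and the exact model form below are assumptions for illustration, not the authors' code.

```python
import numpy as np

def ekf_predict(state, P, v, dtheta, dt, Q):
    """Predict a planar vehicle pose under a nonholonomic, car-like (circular-arc) model.

    state  : [x, y, theta] planar pose
    P      : 3x3 state covariance
    v      : forward speed
    dtheta : azimuth rotation prior, e.g. obtained from edge matching
    Q      : 3x3 process noise
    """
    x, y, theta = state
    # Circular-arc displacement over one time step, steered by the rotation prior
    x_new = x + v * dt * np.cos(theta + dtheta / 2.0)
    y_new = y + v * dt * np.sin(theta + dtheta / 2.0)
    theta_new = theta + dtheta

    # Jacobian of the motion model with respect to the state
    F = np.array([
        [1.0, 0.0, -v * dt * np.sin(theta + dtheta / 2.0)],
        [0.0, 1.0,  v * dt * np.cos(theta + dtheta / 2.0)],
        [0.0, 0.0,  1.0],
    ])
    P_new = F @ P @ F.T + Q
    return np.array([x_new, y_new, theta_new]), P_new
```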

    Long-term experiments with an adaptive spherical view representation for navigation in changing environments

    Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In this work, we introduce a method to update the reference views in a hybrid metric-topological map so that a mobile robot can continue to localize itself in a changing environment. The updating mechanism, based on the multi-store model of human memory, incorporates a spherical metric representation of the observed visual features for each node in the map, which enables the robot to estimate its heading and navigate using multi-view geometry, as well as to represent the local 3D geometry of the environment. A series of experiments demonstrates the persistent performance of the proposed system in real changing environments, including an analysis of its long-term stability.
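    The multi-store updating mechanism is described only at a high level; the sketch below shows one plausible reading, in which features of a reference view are rehearsed when re-observed, newly seen features enter a short-term store, and features that repeatedly fail to match are forgotten. Thresholds and names are assumptions, not the authors' parameters.

```python
from dataclasses import dataclass, field

@dataclass
class ReferenceView:
    """Reference view of a map node: feature descriptors plus rehearsal bookkeeping."""
    features: dict                                  # feature id -> descriptor
    recall: dict = field(default_factory=dict)      # feature id -> times re-observed
    missed: dict = field(default_factory=dict)      # feature id -> consecutive misses

STM_THRESHOLD = 3   # re-observations needed before a feature counts as long-term
FORGET_LIMIT = 5    # consecutive misses after which even long-term features decay

def update_view(view, matched_ids, new_features):
    """Rehearse matched features, admit new ones, and forget stale ones."""
    for fid in matched_ids:                         # rehearsal: strengthen matched features
        view.recall[fid] = view.recall.get(fid, 0) + 1
        view.missed[fid] = 0
    for fid, desc in new_features.items():          # candidates enter the short-term store
        view.features.setdefault(fid, desc)
        view.recall.setdefault(fid, 1)
        view.missed.setdefault(fid, 0)
    for fid in list(view.features):                 # decay: drop rarely or no longer seen features
        if fid in matched_ids or fid in new_features:
            continue
        view.missed[fid] = view.missed.get(fid, 0) + 1
        if view.recall.get(fid, 0) < STM_THRESHOLD or view.missed[fid] > FORGET_LIMIT:
            view.features.pop(fid)
            view.recall.pop(fid, None)
            view.missed.pop(fid, None)
    return view
```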

    An adaptive spherical view representation for navigation in changing environments

    Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In previous work we introduced a method to update the reference views in a topological map so that a mobile robot could continue to localize itself in a changing environment using omni-directional vision. In this work we extend this long-term updating mechanism to incorporate a spherical metric representation of the observed visual features for each node in the topological map. Using multi-view geometry we are then able to estimate the heading of the robot in order to enable navigation between the nodes of the map, and to simultaneously adapt the spherical view representation in response to environmental changes. The results demonstrate the persistent performance of the proposed system in a long-term experiment.
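    The abstract does not detail the geometry. Under the simplifying assumption that the motion between a node's reference view and the current view is dominated by a rotation about the vertical axis, a heading change can be sketched as the circular mean of azimuth differences of matched bearing vectors; this is an illustrative simplification, not the paper's full multi-view geometry.

```python
import numpy as np

def estimate_heading(bearings_cur, bearings_ref):
    """Heading change between two spherical views from matched unit bearing vectors.

    bearings_cur, bearings_ref : (N, 3) arrays of matched bearing vectors (rows correspond).
    Assumes the motion is dominated by a rotation about the vertical (z) axis.
    """
    az_cur = np.arctan2(bearings_cur[:, 1], bearings_cur[:, 0])   # azimuth of each bearing
    az_ref = np.arctan2(bearings_ref[:, 1], bearings_ref[:, 0])
    d = az_cur - az_ref
    # Circular mean handles wrap-around at +/- pi
    return np.arctan2(np.sin(d).mean(), np.cos(d).mean())
```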

    Under vehicle perception for high level safety measures using a catadioptric camera system

    In recent years, under-vehicle surveillance and the classification of vehicles have become indispensable tasks for security measures in certain areas such as shopping centers, government buildings, army camps, etc. The main challenge in achieving this task is to monitor the underframes of the vehicles. In this paper, we present a novel solution to achieve this aim. Our solution consists of three main parts: monitoring, detection and classification. In the first part we design a new catadioptric camera system in which the perspective camera points downwards to the catadioptric mirror mounted on the body of a mobile robot. Thanks to the catadioptric mirror, scenes opposite the direction of the camera's optical axis can be viewed. In the second part we use Speeded-Up Robust Features (SURF) in an object recognition algorithm. In the third part, the Fast Appearance-Based Mapping algorithm (FAB-MAP) is exploited for the classification of the vehicles. The proposed technique is implemented in a laboratory environment.
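    The SURF-based recognition step can be illustrated with a short OpenCV sketch that matches features between a stored undercarriage template and a newly captured catadioptric image. File paths, the Hessian threshold and the ratio test are illustrative assumptions; SURF itself requires the opencv-contrib-python package.

```python
import cv2

def match_undercarriage(template_path, query_path, min_matches=10):
    """Match SURF features between a template undercarriage image and a query image.

    Returns (recognized, good_matches), where recognized is True when enough
    ratio-test matches survive. A rough sketch of the detection stage, not the
    authors' pipeline.
    """
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    img1 = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)

    # Lowe ratio-test matching with a brute-force matcher
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.7 * n.distance]
    return len(good) >= min_matches, good
```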

    Information-based view initialization in visual SLAM with a single omnidirectional camera

    © 2015 Elsevier B.V. All rights reserved. This paper presents a novel mechanism for initializing new views within the map-building process of an EKF-based visual SLAM (Simultaneous Localization and Mapping) approach using omnidirectional images. In the presence of non-linearities, the EKF is very likely to compromise the final estimation. In particular, the omnidirectional observation model induces non-linear errors and thus becomes a potential source of uncertainty. To deal with this issue we propose a novel mechanism for view initialization which accounts for information gains and losses more efficiently. The main outcome of this contribution is a reduction of the map uncertainty and thus a higher consistency of the final estimation. Its basis is a Gaussian Process used to infer an information distribution model from sensor data. This model represents feature-point existence probabilities, and the analysis of their information content leads to the proposed view initialization scheme. To demonstrate the suitability and effectiveness of the approach we present a series of real-data experiments conducted with a robot equipped with a camera sensor and a map model based solely on omnidirectional views. The results reveal a beneficial reduction not only in the uncertainty but also in the error of the pose and map estimates.
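    The paper's exact Gaussian Process formulation is not given in the abstract. The sketch below shows, under assumed inputs, how a GP classifier over the azimuth of an omnidirectional view could model feature existence probabilities and turn them into an entropy-style information measure from which a view initialization decision might be derived; all names and parameters are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def fit_existence_model(bearing_angles, observed):
    """Fit a GP over view azimuth, estimating where feature points are likely to exist.

    bearing_angles : (N,) azimuths (rad) at which feature detection was attempted
    observed       : (N,) 1 if a feature was actually detected there, else 0
                     (both classes must be present for the classifier to fit)
    """
    gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=0.5))
    gp.fit(bearing_angles.reshape(-1, 1), observed)

    grid = np.linspace(-np.pi, np.pi, 360).reshape(-1, 1)
    p = gp.predict_proba(grid)[:, 1]                 # existence probability per direction
    # Shannon entropy as a simple information measure; a new view could be
    # initialized where the expected information gain is highest
    entropy = -(p * np.log2(p + 1e-12) + (1 - p) * np.log2(1 - p + 1e-12))
    return grid.ravel(), p, entropy
```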