
    Edge Potential Functions (EPF) and Genetic Algorithms (GA) for Edge-Based Matching of Visual Objects

    Edges are known to be a semantically rich representation of the contents of a digital image. Nevertheless, their use in practical applications is sometimes limited by computation and complexity constraints. In this paper, a new approach is presented that addresses the problem of matching visual objects in digital images by combining the concept of Edge Potential Functions (EPF) with a powerful matching tool based on Genetic Algorithms (GA). EPFs can be easily calculated starting from an edge map and provide a kind of attractive pattern for a matching contour, which is conveniently exploited by GAs. Several tests were performed in the framework of different image matching applications. The results achieved clearly outline the potential of the proposed method as compared to state-of-the-art methodologies. (c) 2007 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
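
    The abstract does not include an implementation, so the sketch below is only a rough illustration of the general idea: build an attraction field from an edge map and let a genetic algorithm search for the contour placement that maximizes the field response. The exponential, distance-based potential, the (tx, ty, scale) genome, and all parameters (alpha, population size, mutation scale) are assumptions made for clarity, not the authors' actual formulation.

```python
# Illustrative sketch only; not the authors' implementation.
import numpy as np
from scipy.ndimage import distance_transform_edt

def edge_potential(edge_map, alpha=0.1):
    """Attraction field from a binary edge map: high near edges, decaying with distance."""
    dist = distance_transform_edt(edge_map == 0)   # distance to the nearest edge pixel
    return np.exp(-alpha * dist)                   # assumed exponential decay

def contour_fitness(potential, contour_xy, tx, ty, scale):
    """Average potential sampled along a translated/scaled template contour."""
    pts = np.round(contour_xy * scale + [tx, ty]).astype(int)
    h, w = potential.shape
    pts = pts[(pts[:, 0] >= 0) & (pts[:, 0] < w) & (pts[:, 1] >= 0) & (pts[:, 1] < h)]
    return potential[pts[:, 1], pts[:, 0]].mean() if len(pts) else 0.0

def ga_match(potential, contour_xy, pop=50, gens=100, rng=np.random.default_rng(0)):
    """Tiny GA over (tx, ty, scale): keep the best half, mutate it, repeat."""
    h, w = potential.shape
    genomes = np.column_stack([rng.uniform(0, w, pop), rng.uniform(0, h, pop),
                               rng.uniform(0.5, 2.0, pop)])
    for _ in range(gens):
        fit = np.array([contour_fitness(potential, contour_xy, *g) for g in genomes])
        parents = genomes[np.argsort(fit)[-pop // 2:]]                    # elitist selection
        children = parents + rng.normal(0, [5.0, 5.0, 0.05], parents.shape)  # Gaussian mutation
        children[:, 2] = np.clip(children[:, 2], 0.1, 4.0)                # keep scale positive
        genomes = np.vstack([parents, children])
    fit = np.array([contour_fitness(potential, contour_xy, *g) for g in genomes])
    return genomes[fit.argmax()], fit.max()
```

    In use, `edge_potential` would be applied to the edge map of the target image and `ga_match` given the template contour's (x, y) points; the returned genome is the best translation and scale found under this simplified model.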

    Confidence Estimation in Image-Based Localization

    Image-based localization aims at estimating the camera position and orientation, briefly referred to as the camera pose, from a given image. Estimating the camera pose is needed in several applications, such as augmented reality, odometry and self-driving cars. A main challenge is to develop an algorithm that works in large, varying environments, such as buildings or whole cities. During the past decade several algorithms have tackled this challenge and, despite promising results, the task is far from solved. Several applications, however, need a reliable pose estimate; in odometry applications, for example, the camera pose is used to correct the drift error accumulated by inertial sensor measurements. It is therefore important to assess the confidence of the estimated pose and to discriminate between correct and incorrect poses within a prefixed error threshold. A common approach is to use the number of inliers produced in the RANSAC loop to evaluate how good an estimate is; in particular, this count is used to choose the best pose for a given image from a set of candidates. This metric, however, is not very robust, especially for indoor scenes, which present many repetitive patterns, such as long textureless walls or similar objects. Although other metrics have been proposed, they aim at improving the accuracy of the algorithm by grading candidate poses for the same query image; they can thus recognize the best pose within a given set but cannot be used to grade the overall confidence of the final pose. In this thesis, we formalize confidence estimation as a binary classification problem and investigate how to quantify the confidence of an estimated camera pose. In contrast to previous work, this new research question is posed after the whole visual localization pipeline and allows poses from different query images to be compared. In addition to the number of inliers, other factors, such as the spatial distribution of the inliers, are considered. A neural network is then used to produce a novel, robust metric able to evaluate the confidence across different query images. The proposed method is benchmarked on InLoc, a challenging dataset for indoor pose estimation. It is also shown that the proposed confidence metric is independent of the dataset used for training and can be applied to different datasets and pipelines.
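
    As a minimal sketch of the idea of treating pose confidence as binary classification over inlier statistics: the features below (inlier count, inlier ratio, spatial spread of the inliers) and the small network are assumptions chosen for illustration; they are not the thesis' actual feature set, architecture, or the InLoc pipeline.

```python
# Sketch under assumptions; illustrative only.
import numpy as np
import torch
import torch.nn as nn

def pose_features(inlier_xy, num_inliers, num_matches):
    """Hand-crafted features: inlier count, inlier ratio, spatial spread of inliers."""
    ratio = num_inliers / max(num_matches, 1)
    spread = inlier_xy.std(axis=0).mean() if len(inlier_xy) > 1 else 0.0
    return np.array([num_inliers, ratio, spread], dtype=np.float32)

class ConfidenceNet(nn.Module):
    """Small MLP mapping pose features to P(pose error below a fixed threshold)."""
    def __init__(self, in_dim=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(),
                                 nn.Linear(16, 1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x)

def train(model, feats, labels, epochs=50, lr=1e-3):
    """feats: (N, 3) float tensor; labels: (N,) float tensor of 0/1 correctness flags."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(feats).squeeze(1), labels)
        loss.backward()
        opt.step()
    return model
```

    The binary labels would come from comparing each training pose against ground truth under the chosen error threshold; at test time the network's output is read as a confidence score that can be compared across different query images.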