
    Overviews of Optimization Techniques for Geometric Estimation

    We summarize techniques for optimal geometric estimation from noisy observations in computer vision applications. We first discuss the interpretation of optimality and point out that geometric estimation differs from standard statistical estimation. We also describe our noise modeling and a theoretical accuracy limit called the KCR lower bound. Then, we formulate estimation techniques based on minimization of a given cost function: least squares (LS), maximum likelihood (ML), which includes reprojection error minimization as a special case, and Sampson error minimization. We describe bundle adjustment and the FNS scheme for numerically solving them, as well as the hyperaccurate correction that improves the accuracy of ML. Next, we formulate estimation techniques not based on minimization of any cost function: iterative reweight, renormalization, and hyper-renormalization. Finally, we show numerical examples demonstrating that hyper-renormalization has higher accuracy than ML, which has widely been regarded as the most accurate method of all. We conclude that hyper-renormalization is robust to noise and is currently the best method.
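    To make the cost functions above concrete, here is a minimal NumPy sketch (the function name and homogeneous-coordinate convention are ours) of the Sampson error of one point correspondence under a fundamental matrix, the first-order approximation to the reprojection error that the abstract contrasts with full ML.

```python
import numpy as np

def sampson_error(F, x, xp):
    """Sampson (first-order geometric) error of one correspondence.

    F  : 3x3 fundamental matrix
    x  : homogeneous point in image 1, shape (3,)
    xp : homogeneous point in image 2, shape (3,)
    """
    Fx = F @ x          # epipolar line in image 2 induced by x
    Ftxp = F.T @ xp     # epipolar line in image 1 induced by xp
    residual = xp @ F @ x
    # Gradient-normalized algebraic error.
    return residual**2 / (Fx[0]**2 + Fx[1]**2 + Ftxp[0]**2 + Ftxp[1]**2)
```

    Summing this quantity over all correspondences and minimizing it over F is the Sampson error minimization the abstract lists alongside LS and reprojection error minimization.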

    A new approach to robust fundamental matrix estimation using an analytic objective function and adjusted gradient projection

    In this paper we propose a new approach to the challenging problem of robust fundamental matrix estimation from corrupted correspondences. Compared with traditional robust methods, the proposed approach achieves enhanced estimation accuracy and stability. These gains are attributable mainly to two novelties. First, a new, more easily solvable analytic objective function is proposed that accounts for both the presence of correspondence outliers and computational convenience. Second, an adjusted gradient projection method is developed to provide a more stable solver for the robust estimation. Experimental results show that the proposed approach performs better than the traditional robust methods RANSAC, MSAC, LMEDS and MLESAC, in particular when the correspondences are seriously corrupted.
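    The paper's own objective and solver are not reproduced in the abstract; below is a sketch of the kind of classical baseline it compares against, using OpenCV's standard findFundamentalMat API on synthetic correspondences with injected outliers (the scene, outlier fraction, and thresholds here are all illustrative).

```python
import cv2
import numpy as np

# Synthetic scene: 3D points seen by two cameras, with 20% of the
# matches replaced by random junk to play the role of outliers.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 3)) + [0.0, 0.0, 5.0]
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)

def project(X, t):
    """Pinhole projection with identity rotation and translation t."""
    x = (K @ (X + t).T).T
    return (x[:, :2] / x[:, 2:]).astype(np.float32)

pts1 = project(X, np.zeros(3))
pts2 = project(X, np.array([0.5, 0.0, 0.0]))
pts2[:40] = rng.uniform(0, 640, (40, 2))  # corrupted correspondences

# RANSAC baseline (swap in cv2.FM_LMEDS for the LMedS baseline).
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
print("estimated inlier ratio:", float(mask.mean()))
```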

    A statistical rationalisation of Hartley's normalised eight-point algorithm

    © 2003 IEEE. The eight-point algorithm of Hartley occupies an important place in computer vision, notably as a means of providing an initial value of the fundamental matrix for use in iterative estimation methods. In this paper, a novel explanation is given for the improvement in performance of the eight-point algorithm that results from using normalised data. A first step is singling out a cost function that the normalised algorithm acts to minimise. The cost function is then shown to be statistically better founded than the cost function associated with the non-normalised algorithm. This augments the original argument that improved performance is due to the better conditioning of a pivotal matrix. Experimental results are given that support the adopted approach. This work continues a wider effort to place a variety of estimation techniques within a coherent framework.
    Wojciech Chojnacki, Michael J. Brooks, Anton van den Hengel, Darren Gawley
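    The normalisation in question is Hartley's data conditioning step. A minimal sketch (function name ours) of the transform that the statistical argument builds on: translate the points to their centroid and scale them so the mean distance from the origin is sqrt(2).

```python
import numpy as np

def hartley_normalize(pts):
    """Hartley's normalizing transform for an Nx2 array of pixels.

    Returns the Nx3 homogeneous normalized points and the 3x3
    similarity T that produced them, so estimates computed in the
    normalized frame can be mapped back to pixel coordinates.
    """
    centroid = pts.mean(axis=0)
    mean_dist = np.linalg.norm(pts - centroid, axis=1).mean()
    s = np.sqrt(2.0) / mean_dist
    T = np.array([[s, 0.0, -s * centroid[0]],
                  [0.0, s, -s * centroid[1]],
                  [0.0, 0.0, 1.0]])
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ pts_h.T).T, T
```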

    Stable Camera Motion Estimation Using Convex Programming

    We study the inverse problem of estimating $n$ locations $t_1, \dots, t_n$ (up to global scale, translation and negation) in $\mathbb{R}^d$ from noisy measurements of a subset of the (unsigned) pairwise lines that connect them, that is, from noisy measurements of $\pm (t_i - t_j)/\|t_i - t_j\|$ for some pairs $(i, j)$ (where the signs are unknown). This problem is at the core of the structure from motion (SfM) problem in computer vision, where the $t_i$'s represent camera locations in $\mathbb{R}^3$. The noiseless version of the problem, with exact line measurements, has been considered previously under the general title of parallel rigidity theory, mainly in order to characterize the conditions for unique realization of locations. For noisy pairwise line measurements, current methods tend to produce spurious solutions that are clustered around a few locations. This sensitivity of the location estimates is a well-known problem in SfM, especially for large, irregular collections of images. In this paper we introduce a semidefinite programming (SDP) formulation, specially tailored to overcome the clustering phenomenon. We further identify the implications of parallel rigidity theory for the location estimation problem to be well-posed, and prove exact (in the noiseless case) and stable location recovery results. We also formulate an alternating direction method to solve the resulting semidefinite program, and provide a distributed version of our formulation for large numbers of locations. Specifically for the camera location estimation problem, we formulate a pairwise line estimation method based on robust camera orientation and subspace estimation. Lastly, we demonstrate the utility of our algorithm through experiments on real images.
    Comment: 40 pages, 12 figures, 6 tables; notation and some unclear parts updated, some typos corrected
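    For concreteness, a small NumPy sketch (names ours) of the measurement model the abstract defines: the unsigned unit line between each measured pair, with the unobservable sign fixed by an arbitrary canonicalization.

```python
import numpy as np

def unsigned_pairwise_lines(t, pairs):
    """Noiseless measurements ±(t_i - t_j)/||t_i - t_j||.

    t     : n x d float array of locations
    pairs : iterable of (i, j) index pairs with t_i != t_j
    """
    lines = {}
    for i, j in pairs:
        g = t[i] - t[j]
        g = g / np.linalg.norm(g)
        # The sign is unobservable; fix it so the first nonzero
        # component is positive.
        k = np.flatnonzero(g)[0]
        lines[(i, j)] = g if g[k] > 0 else -g
    return lines
```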

    Revisiting Hartley's normalized eight-point algorithm

    Copyright © 2003 IEEE. Hartley's eight-point algorithm has maintained an important place in computer vision, notably as a means of providing an initial value of the fundamental matrix for use in iterative estimation methods. In this paper, a novel explanation is given for the improvement in performance of the eight-point algorithm that results from using normalized data. It is first established that the normalized algorithm acts to minimize a specific cost function. It is then shown that this cost function is statistically better founded than the cost function associated with the non-normalized algorithm. This augments the original argument that improved performance is due to the better conditioning of a pivotal matrix. Experimental results are given that support the adopted approach. This work continues a wider effort to place a variety of estimation techniques within a coherent framework.
    Wojciech Chojnacki, Michael J. Brooks, Anton van den Hengel and Darren Gawley
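    For reference, a compact sketch of the normalized eight-point pipeline this analysis concerns, assuming points already conditioned with a helper like the hartley_normalize sketch given earlier in this list (N >= 8 correspondences).

```python
import numpy as np

def eight_point(x1, x2):
    """Fundamental matrix from N >= 8 homogeneous, already
    normalized correspondences x1, x2 (each Nx3), so x2' F x1 = 0.
    """
    # One row per correspondence: the constraint is linear in F.
    A = np.stack([np.kron(p2, p1) for p1, p2 in zip(x1, x2)])
    # Take the right singular vector with the smallest singular
    # value as the stacked entries of F.
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint.
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0
    F = U @ np.diag(s) @ Vt
    # Map back to pixel coordinates with the normalizing transforms:
    # F_pixels = T2.T @ F @ T1, where T1, T2 conditioned x1, x2.
    return F
```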