
    Overviews of Optimization Techniques for Geometric Estimation

    We summarize techniques for optimal geometric estimation from noisy observations in computer vision applications. We first discuss the interpretation of optimality and point out that geometric estimation differs from standard statistical estimation. We also describe our noise modeling and a theoretical accuracy limit called the KCR lower bound. Then, we formulate estimation techniques based on minimization of a given cost function: least squares (LS), maximum likelihood (ML), which includes reprojection error minimization as a special case, and Sampson error minimization. We describe bundle adjustment and the FNS scheme for solving them numerically, and the hyperaccurate correction that improves the accuracy of ML. Next, we formulate estimation techniques not based on minimization of any cost function: iterative reweight, renormalization, and hyper-renormalization. Finally, we show numerical examples demonstrating that hyper-renormalization has higher accuracy than ML, which has been widely regarded as the most accurate method of all. We conclude that hyper-renormalization is robust to noise and is currently the best method.
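    As a minimal illustration of the algebraic least-squares (LS) step mentioned above, the following sketch fits a line to homogeneous data vectors by minimizing the sum of squared algebraic residuals over unit parameter vectors. The function name and the line-fitting setting are illustrative choices, not taken from the paper; LS here stands in for the general geometric-estimation formulation.

```python
import numpy as np

def algebraic_ls(xi):
    """Least-squares estimate: minimize sum_i (xi_i . theta)^2 over unit theta.

    The minimizer is the unit eigenvector of M = sum_i xi_i xi_i^T
    associated with the smallest eigenvalue.
    """
    M = xi.T @ xi
    _, V = np.linalg.eigh(M)       # eigenvalues returned in ascending order
    return V[:, 0]

# Fit a line a*x + b*y + c = 0 to noise-free points on y = 2x + 1,
# using homogeneous data vectors xi_i = (x_i, y_i, 1).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0
xi = np.stack([x, y, np.ones_like(x)], axis=1)
theta = algebraic_ls(xi)
theta = theta / (-theta[1])        # normalize so the line reads y = theta[0]*x + theta[2]
```

    On noise-free data the smallest eigenvalue is zero and the recovered line is exact; with noisy data LS is biased, which is the motivation for the ML and hyper-renormalization methods the abstract compares.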

    Optimal Computation of 3-D Rotation under Inhomogeneous Anisotropic Noise

    We present a new method for optimally computing the 3-D rotation between two sets of 3-D data. Unlike 2-D data, the noise in 3-D data is inherently inhomogeneous and anisotropic, reflecting the characteristics of the 3-D sensing used. To cope with this, Ohta and Kanatani introduced a technique called "renormalization". Following them, we represent a 3-D rotation in terms of a quaternion and compute an exact maximum likelihood solution using the FNS scheme of Chojnacki et al. As an example, we consider 3-D data obtained by stereo vision and optimally compute the 3-D rotation by analyzing the noise characteristics of stereo reconstruction. We show that the widely used method is not suitable for 3-D data. We confirm that the renormalization of Ohta and Kanatani indeed computes an almost optimal solution and that, although the difference is small, the proposed method computes an even better solution.
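    For context, the classical isotropic least-squares baseline that this paper improves on can be sketched with Horn's quaternion method: the optimal rotation under isotropic, homogeneous noise is read off the largest-eigenvalue eigenvector of a 4x4 matrix built from the point correlations. This is a baseline sketch, not the paper's anisotropic maximum-likelihood method; function names are illustrative.

```python
import numpy as np

def rotation_from_quaternion(q):
    # q = (w, x, y, z), unit norm
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def horn_rotation(A, B):
    """Isotropic LS rotation R with B_i ~ R A_i (Horn's quaternion method).

    A, B : (N, 3) corresponding point sets, assumed centered.
    """
    S = A.T @ B                    # 3x3 correlation matrix
    Sxx, Sxy, Sxz = S[0]; Syx, Syy, Syz = S[1]; Szx, Szy, Szz = S[2]
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz]])
    _, V = np.linalg.eigh(N)
    return rotation_from_quaternion(V[:, -1])   # largest-eigenvalue eigenvector

# Recover a 90-degree rotation about the z-axis from noise-free data.
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
A = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1],
              [-1, 0, 0], [0, -1, 0], [0, 0, -1]])
B = A @ R_true.T
R_est = horn_rotation(A, B)
```

    The paper's point is that when the per-point noise covariances differ (as in stereo reconstruction), this uniform-weight solution is no longer optimal and an FNS-based ML iteration does better.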

    Tracking Extended Objects in Noisy Point Clouds with Application in Telepresence Systems

    We discuss the theory and application of extended object tracking. This task is challenging because sensor noise prevents a correct association of the measurements to their sources on the object, the shape itself may be unknown a priori, and, due to occlusion effects, only parts of the object are visible at a given time. We propose an approach for tracking the parameters of arbitrary objects that provides new solutions to the above challenges and marks a significant advance over the state of the art.

    Estimation of nonlinear errors-in-variables models for computer vision applications

    Abstract: In an errors-in-variables (EIV) model, all the measurements are corrupted by noise. The class of EIV models with constraints separable into the product of two nonlinear functions, one solely in the variables and one solely in the parameters, is general enough to represent most computer vision problems. We show that the estimation of such nonlinear EIV models can be reduced to iteratively estimating a linear model having a point-dependent, i.e., heteroscedastic, noise process. Particular cases of the proposed heteroscedastic errors-in-variables (HEIV) estimator are related to other techniques described in the vision literature: the Sampson method, renormalization, and the fundamental numerical scheme. In a wide variety of tasks, the HEIV estimator exhibits the same or superior performance relative to these techniques and has a weaker dependence on the quality of the initial solution than the Levenberg-Marquardt method, the standard approach to estimating nonlinear models.
    Index Terms: Nonlinear least squares, heteroscedastic regression, camera calibration, 3-D rigid motion, uncalibrated vision.
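    The core idea, reducing a nonlinear EIV problem to a sequence of linear fits with point-dependent noise, can be sketched with a simplified iteratively reweighted eigenvector scheme: each pass weights point i by the inverse variance of its residual and re-solves a smallest-eigenvector problem. This is an illustrative simplification (with hypothetical names), not the full HEIV estimator with its bias-corrected generalized eigenproblem.

```python
import numpy as np

def reweighted_fit(xi, C, iters=20):
    """Iteratively reweighted fit of theta with xi_i . theta ~ 0.

    xi : (N, d) carrier vectors; C : (N, d, d) per-point covariances of xi_i.
    Each pass weights point i by 1 / (theta^T C_i theta), the inverse variance
    of the residual xi_i . theta under the current estimate.
    """
    _, V = np.linalg.eigh(xi.T @ xi)          # unweighted LS initialization
    theta = V[:, 0]
    for _ in range(iters):
        var = np.einsum('i,nij,j->n', theta, C, theta)   # theta^T C_n theta
        M = np.einsum('n,ni,nj->ij', 1.0 / var, xi, xi)  # weighted scatter
        _, V = np.linalg.eigh(M)
        theta = V[:, 0]
    return theta / np.linalg.norm(theta)

# Line fit with point-dependent (heteroscedastic) noise levels;
# noise-free coordinates are used so the check is exact.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0
xi = np.stack([x, y, np.ones_like(x)], axis=1)
sigma = np.array([0.1, 0.5, 0.2, 1.0, 0.3])   # per-point noise scales
C = np.array([np.diag([s**2, s**2, 0.0]) for s in sigma])
theta = reweighted_fit(xi, C)
theta = theta / (-theta[1])   # normalize: line reads y = theta[0]*x + theta[2]
```

    The heteroscedastic weights are what distinguish this from ordinary LS: points measured with large covariance contribute little, which is the behavior the HEIV analysis makes rigorous.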

    Probabilistic Feature-Based Registration for Interventional Medicine

    The need to compute accurate spatial alignment between multiple representations of patient anatomy is a problem that is fundamental to many applications in computer-integrated interventional medicine. One class of methods for computing such alignments is feature-based registration, which aligns geometric information of the shapes being registered, such as salient landmarks or models of shape surfaces. A popular algorithm for surface-based registration is the Iterative Closest Point (ICP) algorithm, which treats one shape as a cloud of points that is registered to a second shape by iterating between point-correspondence and point-registration phases until convergence. In this dissertation, a class of "most likely point" variants of the ICP algorithm is developed that offers several advantages over ICP, such as higher registration accuracy and the ability to confidently assess the quality of a registration outcome. The proposed algorithms are based on a probabilistic interpretation of the registration problem, wherein the point-correspondence and point-registration phases optimize the probability of shape alignment based on feature uncertainty models rather than minimizing the Euclidean distance between the shapes as in ICP. This probabilistic framework is used to model anisotropic errors in the shape measurements and to provide a natural context for incorporating oriented-point data, such as shape surface normals, into the registration problem. The proposed algorithms are evaluated through a range of simulation-, phantom-, and clinical-based studies, which demonstrate significant improvement in registration outcomes relative to ICP and state-of-the-art methods.
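    The classic point-to-point ICP baseline that these "most likely point" variants generalize can be sketched in a few lines: alternate brute-force closest-point matching with a closed-form rigid registration (Kabsch/SVD). This is a baseline sketch under simplifying assumptions (no noise, no occlusion, no uncertainty model), not the dissertation's probabilistic algorithms; names are illustrative.

```python
import numpy as np

def kabsch(A, B):
    """Rigid (R, t) minimizing sum ||R A_i + t - B_i||^2 (Kabsch/SVD)."""
    a, b = A.mean(0), B.mean(0)
    U, _, Vt = np.linalg.svd((A - a).T @ (B - b))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, b - R @ a

def icp(src, dst, iters=25):
    """Classic point-to-point ICP: alternate nearest-neighbour correspondence
    and closed-form rigid registration."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        # brute-force closest-point correspondence
        idx = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1).argmin(1)
        R, t = kabsch(src, dst[idx])
    return R, t

# Small check: recover a gentle known rigid motion on well-separated points.
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]], dtype=float)
c, s = np.cos(0.1), np.sin(0.1)                # ~6 degrees about z
R_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])
t_true = np.array([0.05, -0.04, 0.03])
dst = src @ R_true.T + t_true
R_est, t_est = icp(src, dst)
```

    The dissertation's variants replace the Euclidean nearest-neighbour and least-squares steps with most-likely-point counterparts driven by feature uncertainty models, which is what lets them handle anisotropic errors and oriented-point data.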