    Automatic Detection of Circular Objects by Ellipse Growing

    We present a new method for automatically detecting circular objects in images: an osculating circle of an elliptic arc is detected by a Hough transform and then iteratively deformed into an ellipse, removing outlier pixels and searching for a separate edge. The voting space is restricted to one and two dimensions for efficiency, and special weighting schemes are introduced to enhance accuracy. We demonstrate the effectiveness of our method on real images. Finally, we apply the method to the calibration of a turntable for 3-D object shape reconstruction.
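    The ellipse-growing pipeline itself is not reproduced here, but its starting point, circle detection by Hough voting, is easy to illustrate. Below is a minimal sketch using OpenCV's standard gradient-based circle Hough rather than the authors' dimension-restricted voting scheme; the image path and all parameter values are placeholders.

        import cv2
        import numpy as np

        # Baseline gradient-based circle Hough (NOT the paper's restricted
        # 1-D/2-D voting scheme; only the conventional detector that an
        # osculating-circle stage would build upon).
        img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
        img = cv2.medianBlur(img, 5)                          # suppress speckle noise

        circles = cv2.HoughCircles(
            img, cv2.HOUGH_GRADIENT,
            dp=1.5,        # inverse ratio of accumulator resolution
            minDist=40,    # minimum distance between detected centres
            param1=120,    # upper Canny edge threshold used internally
            param2=60,     # accumulator threshold: higher = fewer detections
            minRadius=10, maxRadius=200,
        )

        if circles is not None:
            for x, y, r in np.round(circles[0]).astype(int):
                print(f"circle at ({x}, {y}), radius {r}")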

    From FNS to HEIV: A link between two vision parameter estimation methods

    Copyright © 2004 IEEE. Problems requiring accurate determination of parameters from image-based quantities arise often in computer vision. Two recent, independently developed frameworks for estimating such parameters are the FNS and HEIV schemes. Here, it is shown that FNS and a core version of HEIV are essentially equivalent, solving a common underlying equation via different means. The analysis is driven by the search for a nondegenerate form of a certain generalized eigenvalue problem and effectively leads to a new derivation of the relevant case of the HEIV algorithm. This work may be seen as an extension of previous efforts to rationalize and interrelate a spectrum of estimators, including the renormalization method of Kanatani and the normalized eight-point method of Hartley.
    Wojciech Chojnacki, Michael J. Brooks, Anton van den Hengel and Darren Gawley
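    The common underlying equation has the form X(θ)θ = λθ, with FNS seeking a θ for which the eigenvalue is (near) zero. Below is a schematic NumPy rendering of the FNS iteration as described in the literature; it assumes the symmetric data matrices A_i and covariance-derived matrices B_i have already been formed, and is a sketch rather than reference code.

        import numpy as np

        def fns(A, B, theta0, n_iter=50, tol=1e-12):
            """Schematic FNS iteration: seek theta with X(theta) theta ~ 0, where
            X(theta) = sum_i A_i/(t'B_i t) - sum_i (t'A_i t)/(t'B_i t)^2 * B_i.
            A, B: lists of symmetric d x d matrices (assumed precomputed);
            theta0: initial unit estimate, e.g. from algebraic least squares."""
            theta = theta0 / np.linalg.norm(theta0)
            for _ in range(n_iter):
                X = np.zeros((len(theta), len(theta)))
                for Ai, Bi in zip(A, B):
                    a = theta @ Ai @ theta
                    b = theta @ Bi @ theta
                    X += Ai / b - (a / b**2) * Bi
                w, V = np.linalg.eigh(X)
                new = V[:, np.argmin(np.abs(w))]  # eigenvector, eigenvalue nearest 0
                if new @ theta < 0:
                    new = -new                    # resolve sign ambiguity
                if np.linalg.norm(new - theta) < tol:
                    return new
                theta = new
            return theta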

    A statistical rationalisation of Hartley's normalised eight-point algorithm

    ©2003 IEEE. The eight-point algorithm of Hartley occupies an important place in computer vision, notably as a means of providing an initial value of the fundamental matrix for use in iterative estimation methods. In this paper, a novel explanation is given for the improvement in performance of the eight-point algorithm that results from using normalised data. A first step is singling out a cost function that the normalised algorithm acts to minimise. The cost function is then shown to be statistically better founded than the cost function associated with the non-normalised algorithm. This augments the original argument that improved performance is due to the better conditioning of a pivotal matrix. Experimental results are given that support the adopted approach. This work continues a wider effort to place a variety of estimation techniques within a coherent framework.
    Wojciech Chojnacki, Michael J. Brooks, Anton van den Hengel and Darren Gawley
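    For orientation, here is a compact sketch of the normalised eight-point algorithm as it is commonly stated (the paper analyses why normalisation helps rather than prescribing code); the array shapes and names are illustrative.

        import numpy as np

        def normalize(pts):
            """Hartley normalisation: centroid to origin, mean distance sqrt(2).
            pts: (n, 2) array. Returns normalised points and the 3x3 transform T."""
            c = pts.mean(axis=0)
            s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
            T = np.array([[s, 0, -s * c[0]],
                          [0, s, -s * c[1]],
                          [0, 0, 1.0]])
            ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
            return ph[:, :2], T

        def eight_point(x1, x2):
            """Normalised eight-point estimate of the fundamental matrix F,
            so that x2' F x1 = 0. x1, x2: (n, 2) correspondences, n >= 8."""
            n1, T1 = normalize(x1)
            n2, T2 = normalize(x2)
            u, v = n1[:, 0], n1[:, 1]
            up, vp = n2[:, 0], n2[:, 1]
            # Each row encodes the epipolar constraint for one correspondence.
            A = np.column_stack([up*u, up*v, up, vp*u, vp*v, vp,
                                 u, v, np.ones_like(u)])
            _, _, Vt = np.linalg.svd(A)
            F = Vt[-1].reshape(3, 3)
            # Enforce rank 2 (closest singular matrix in Frobenius norm).
            U, S, Vt = np.linalg.svd(F)
            F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
            return T2.T @ F @ T1   # undo the normalising transforms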

    Revisiting Hartley's normalized eight-point algorithm

    Copyright © 2003 IEEE. Hartley's eight-point algorithm has maintained an important place in computer vision, notably as a means of providing an initial value of the fundamental matrix for use in iterative estimation methods. In this paper, a novel explanation is given for the improvement in performance of the eight-point algorithm that results from using normalized data. It is first established that the normalized algorithm acts to minimize a specific cost function. It is then shown that this cost function is statistically better founded than the cost function associated with the non-normalized algorithm. This augments the original argument that improved performance is due to the better conditioning of a pivotal matrix. Experimental results are given that support the adopted approach. This work continues a wider effort to place a variety of estimation techniques within a coherent framework.
    Wojciech Chojnacki, Michael J. Brooks, Anton van den Hengel and Darren Gawley
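    For reference, the normalization in question maps each point set so that its centroid sits at the origin and the mean distance from the origin is sqrt(2); this is restated here from Hartley's original prescription, not quoted from this paper:

        T = \begin{pmatrix} s & 0 & -s\bar{x} \\ 0 & s & -s\bar{y} \\ 0 & 0 & 1 \end{pmatrix},
        \qquad
        s = \frac{\sqrt{2}}{\frac{1}{n}\sum_{i=1}^{n} \left\| (x_i, y_i) - (\bar{x}, \bar{y}) \right\|},
        \qquad
        F = T_2^{\top} \hat{F}\, T_1,

    where T_1, T_2 are the transforms for the two images and \hat{F} is the estimate obtained from the normalized data.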

    Evaluation and Selection of Models for Motion Segmentation

    We present a theoretically optimal linear algorithm for 3-D reconstruction from point correspondences over two views, and a similarly constructed optimal linear algorithm for 3-D reconstruction from optical flow. We then compare the performance of the two algorithms in simulations and real-image experiments using the same data. This is the first impartial comparison of its kind, in the sense that both algorithms are optimal, extracting the information contained in the data to the maximum possible degree. We observe that the finite-motion solution is always superior to the optical-flow solution and conclude that the finite-motion algorithm should be used for 3-D reconstruction.
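    The optimally weighted linear estimator itself is not reproduced here, but the plain linear triangulation step that two-view reconstruction methods build on can be sketched as follows; the projection matrices and point format are assumptions of the sketch, not of the paper.

        import numpy as np

        def triangulate_dlt(P1, P2, x1, x2):
            """Plain linear (DLT) triangulation of one point from two views.
            P1, P2: 3x4 projection matrices; x1, x2: (u, v) image points.
            This is the unweighted baseline, not the paper's optimal scheme."""
            A = np.stack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]              # null vector of A: homogeneous 3-D point
            return X[:3] / X[3]     # dehomogenise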

    Implementing a multi-model estimation method

    This work is part of a general attempt to understand parametric adaptation in visual perception. The key idea is to analyze how multi-model parametric estimation may be used as a first step towards categorization. More generally, the goal is to formalize how the notion of ``objects'' or ``events'' in an application may be reduced to a choice within a hierarchy of parametric models used to estimate the underlying data categorization. These mechanisms are to be linked with what occurs in the cerebral cortex, where object recognition corresponds to a parametric neuronal estimation (see for instance Page 2000 for a discussion and Freedman et al. 2001 for an example regarding the primate visual cortex). We thus hope to contribute an algorithmic element related to the ``grandmother neuron'' model. We revisit the problem of parameter estimation in computer vision, presented here as a simple optimization problem, considering (i) non-linear implicit measurement equations and parameter constraints, (ii) robust estimation in the presence of outliers, and (iii) multi-model comparisons. Here, (1) a projection algorithm based on generalizations of square-root decompositions allows an efficient and numerically stable local solution of a set of non-linear equations, and (2) a robust estimation module for a hierarchy of non-linear models has been designed and validated; the robust step is illustrated below. Going a step further, the software architecture of the estimation module is discussed with a view to integration in reactive software environments or in applications with time constraints.
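    Of the three ingredients, the robust estimation step (ii) is the simplest to illustrate in isolation. The sketch below is a generic iteratively reweighted least-squares line fit with Huber weights, unrelated to the authors' actual software.

        import numpy as np

        def irls_line(x, y, k=1.345, n_iter=20):
            """Robust fit of y ~ a*x + b by iteratively reweighted least
            squares with Huber weights; k is the threshold in robust-sigma
            units (1.345 gives ~95% efficiency under Gaussian noise)."""
            X = np.column_stack([x, np.ones_like(x)])
            w = np.ones_like(x, dtype=float)
            for _ in range(n_iter):
                sw = np.sqrt(w)
                beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
                r = y - X @ beta
                sigma = 1.4826 * np.median(np.abs(r)) + 1e-12    # MAD scale
                u = np.abs(r) / (k * sigma)
                w = np.minimum(1.0, 1.0 / np.maximum(u, 1e-12))  # Huber weights
            return beta   # (slope, intercept)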

    Uncertainty Modeling and Geometric Inference

    We investigate the meaning of "statistical methods" for geometric inference based on image feature points. Tracing the origin of feature uncertainty back to image processing operations, we discuss the implications of asymptotic analysis with reference to "geometric fitting" and "geometric model selection". We point out that a correspondence exists between standard statistical analysis and the geometric inference problem. We also compare the capability of the "geometric AIC" and the "geometric MDL" in detecting degeneracy. Next, we review recent progress in geometric fitting techniques for linear constraints, describing the "FNS method", the "HEIV method", the "renormalization method", and other related techniques. Finally, we discuss the "Neyman-Scott problem" and "semiparametric models" in relation to geometric inference. We conclude that applying statistical methods requires careful consideration of the nature of the problem in question.
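    For the two criteria being compared, Kanatani's geometric AIC and geometric MDL are commonly written as follows (N data points, a model manifold of dimension d, p parameters, noise level ε, reference length L, and residual Ĵ); this is a restatement rather than a quotation, and the paper's own conventions take precedence:

        \mathrm{G\text{-}AIC} = \hat{J} + 2(Nd + p)\,\varepsilon^2,
        \qquad
        \mathrm{G\text{-}MDL} = \hat{J} - (Nd + p)\,\varepsilon^2 \log \left( \frac{\varepsilon}{L} \right)^2 .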

    Rationalising the renormalisation method of Kanatani

    The original publication can be found at www.springerlink.com. The renormalisation technique of Kanatani is intended to iteratively minimise a cost function of a certain form while avoiding the systematic bias inherent in the common minimisation method due to Sampson. Within the computer vision community, the technique has generally proven difficult to absorb. This work presents an alternative derivation of the technique and places it in the context of other approaches. We first show that the minimiser of the cost function must satisfy a special variational equation. A Newton-like, fundamental numerical scheme is presented with the property that its theoretical limit coincides with the minimiser. Standard statistical techniques are then employed to derive afresh several renormalisation schemes. The fundamental scheme proves pivotal in rationalising the renormalisation and other schemes, and enables us to show that the renormalisation schemes do not have the desired minimiser as their theoretical limit. The various minimisation schemes are finally subjected to a comparative performance analysis under controlled conditions.
    Wojciech Chojnacki, Michael J. Brooks and Anton van den Hengel
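    The cost function in question is the approximated maximum likelihood cost that also underlies the FNS and HEIV entries above; in the notation common to these papers (θ the parameter vector, A_i and B_i symmetric data-derived matrices) it reads:

        J_{\mathrm{AML}}(\theta) = \sum_{i=1}^{n} \frac{\theta^{\top} A_i\, \theta}{\theta^{\top} B_i\, \theta} .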