
    Recursive Motion and Structure Estimation with Complete Error Characterization

    We present an algorithm that performs recursive estimation of ego-motion and ambient structure from a stream of monocular perspective images of a number of feature points. The algorithm is based on an Extended Kalman Filter (EKF) that integrates over time the instantaneous motion and structure measurements computed by a two-perspective-views step. Key features of our filter are (1) global observability of the model, and (2) complete on-line characterization of the uncertainty of the measurements provided by the two-views step. The filter is thus guaranteed to be well behaved regardless of the particular motion undergone by the observer: regions of motion space that do not allow recovery of structure (e.g. pure rotation) may be crossed while maintaining good estimates of structure and motion; whenever reliable measurements are available, they are exploited. The algorithm works well for arbitrary motions with minimal smoothness assumptions and no ad hoc tuning. Simulations are presented that illustrate these characteristics.
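    The abstract's core machinery is the standard EKF predict/update cycle, with the two-view measurement covariance fed in at every step. Below is a minimal generic sketch of that cycle; the state, dynamics `f`, measurement model `h`, and their Jacobians are placeholder assumptions for illustration, not the paper's actual motion-and-structure model.

```python
import numpy as np

def ekf_step(x, P, f, F, h, H, Q, R, z):
    """One Extended Kalman Filter predict/update cycle.

    x, P : current state estimate and its covariance
    f, F : process model and its Jacobian (functions of x)
    h, H : measurement model and its Jacobian (functions of x)
    Q, R : process and measurement noise covariances
    z    : current measurement
    """
    # Predict: propagate the state and covariance through the dynamics.
    x_pred = f(x)
    F_k = F(x)
    P_pred = F_k @ P @ F_k.T + Q

    # Update: fuse the measurement, weighted by its uncertainty R.
    H_k = H(x_pred)
    S = H_k @ P_pred @ H_k.T + R           # innovation covariance
    K = P_pred @ H_k.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```

    The abstract's feature (2), on-line characterization of the two-view measurement uncertainty, corresponds to supplying a fresh `R` at each step rather than a fixed tuning constant.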

    Self-Calibration of Cameras with Euclidean Image Plane in Case of Two Views and Known Relative Rotation Angle

    The internal calibration of a pinhole camera is given by five parameters that are combined into an upper-triangular 3×3 calibration matrix. If the skew parameter is zero and the aspect ratio is equal to one, then the camera is said to have a Euclidean image plane. In this paper, we propose a non-iterative self-calibration algorithm for a camera with Euclidean image plane in the case where the remaining three internal parameters (the focal length and the principal point coordinates) are fixed but unknown. The algorithm requires a set of N ≥ 7 point correspondences in two views and also the measured relative rotation angle between the views. We show that the problem generically has six solutions (including complex ones). The algorithm has been implemented and tested both on synthetic data and on a publicly available real dataset. The experiments demonstrate that the method is correct, numerically stable and robust.
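    As a small sketch of the parameterization the abstract describes: with zero skew and unit aspect ratio, the five-parameter calibration matrix collapses to three unknowns. The helper name below is an illustrative assumption, not part of the paper.

```python
import numpy as np

def euclidean_K(f, u0, v0):
    """Upper-triangular calibration matrix for a camera with a
    'Euclidean image plane': skew is zero and the aspect ratio is one,
    so the only unknowns are the focal length f and the principal
    point (u0, v0)."""
    return np.array([[f,   0.0, u0],
                     [0.0, f,   v0],
                     [0.0, 0.0, 1.0]])
```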

    Precise real-time outlier removal from motion vector fields for 3D reconstruction


    A Camera Self-Calibration Method Based on Plane Lattice and Orthogonality

    Calibration from orthogonal lines is one of the basic approaches to camera calibration, but it requires the orthogonal lines to be detected accurately, and detection errors propagate into the results. This paper proposes a novel camera self-calibration technique using plane lattices and virtual orthogonal lines. Rigorous analytical relations among the feature point coordinates of the plane lattice, the corresponding image point coordinates, the intrinsic parameters, and the relative pose are derived from the homography matrix of the central projection. Given the slope of a virtual line in the lattice plane that is neither parallel nor orthogonal to the lattice directions, the slope of its orthogonal line can be calculated. From at least three photographs, vanishing points in two groups of orthogonal directions can be solved for using the homography matrix, so the camera intrinsic parameters are obtained linearly. The method has a simple principle and a calibration pattern that is easy to manufacture; it involves no image matching and places no requirements on the camera motion. Simulation experiments and real data show that this algorithm is feasible and provides high accuracy and robustness.
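    The vanishing-point step can be sketched directly: a direction of slope m in the lattice plane is the plane's point at infinity (1, m, 0), and mapping it through the plane-to-image homography H gives the image vanishing point of all lines with that slope. The function names are illustrative assumptions; this is a sketch of the geometry, not the paper's implementation.

```python
import numpy as np

def vanishing_point(H, slope):
    """Image vanishing point of all lattice-plane lines with the given
    slope: the plane's point at infinity (1, slope, 0) mapped through
    the plane-to-image homography H (assumed projective, so the third
    coordinate is nonzero)."""
    v = H @ np.array([1.0, slope, 0.0])
    return v / v[2]

def orthogonal_slope(slope):
    """Slope of the direction orthogonal to a line of the given slope,
    measured in the lattice plane's Euclidean frame."""
    return -1.0 / slope
```

    Each orthogonal pair of vanishing points (v1, v2) then yields one linear constraint v1ᵀ ω v2 = 0 on the image of the absolute conic ω = (K Kᵀ)⁻¹, and three views give enough constraints to recover K linearly.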

    Euclidean Structure from Uncalibrated Images


    Observability/Identifiability of Rigid Motion under Perspective Projection

    The "visual motion" problem consists of estimating the motion of an object viewed under projection. In this paper we address the feasibility of such a problem. We will show that the model which defines the visual motion problem for feature points in Euclidean 3D space lacks both linear and local (weak) observability. The locally observable manifold is covered with three levels of Lie differentiation. Indeed, by imposing metric constraints on the state space, it is possible to reduce the set of indistinguishable states. We will then analyze a model for visual motion estimation in terms of identification of an Exterior Differential System, with the parameters living on a topological manifold, called the "essential manifold", which includes the aforementioned metric constraints explicitly in its definition. We will show that rigid motion is globally observable/identifiable under perspective projection with zero levels of Lie differentiation under some general position conditions. Such conditions hold when the viewer does not move on a quadric surface containing all the visible points.

    Recursive Motion Estimation on the Essential Manifold

    Visual motion estimation can be regarded as estimation of the state of a system of difference equations with unknown inputs defined on a manifold. Such a system happens to be "linear", but it is defined on a space (the so-called "essential manifold") which is not a linear (vector) space. In this paper we will introduce a novel perspective for viewing the motion estimation problem which results in three original schemes for solving it. The first consists in "flattening the space" and solving a nonlinear estimation problem on the flat (Euclidean) space. The second approach consists in viewing the system as embedded in a larger Euclidean space (the smallest of the embedding spaces), and solving at each step a linear estimation problem on a linear space, followed by a "projection" onto the manifold (see fig. 5). A third "algebraic" formulation of motion estimation is inspired by the structure of the problem in local coordinates (flattened space), and consists in a double iteration for solving an "adaptive fixed-point" problem (see fig. 6). Each one of these three schemes outputs motion estimates together with the joint second-order statistics of the estimation error, which can be used by any structure-from-motion module which incorporates motion error [20, 23] in order to estimate 3D scene structure. The original contribution of this paper involves both the problem formulation, which gives new insight into the differential geometric structure of visual motion estimation, and the ideas generating the three schemes. These are viewed within a unified framework. All the schemes have a strong theoretical motivation and exhibit accuracy, speed of convergence, real-time operation and flexibility which are superior to other existing schemes [1, 20, 23]. Simulations are presented for real and synthetic image sequences to compare the three schemes against each other and highlight the peculiarities of each one.
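    The "projection onto the manifold" step in the second scheme can be illustrated with the standard SVD-based projection onto the (unit-scale) essential manifold: an essential matrix has singular values (s, s, 0), so the nearest such matrix in Frobenius norm is obtained by replacing the singular values of the linear estimate with (1, 1, 0). This is a generic sketch of that well-known projection, not the paper's specific embedding.

```python
import numpy as np

def project_to_essential(M):
    """Project an arbitrary 3x3 matrix onto the normalized essential
    manifold: compute the SVD and replace the singular values with
    (1, 1, 0), which characterize a unit-scale essential matrix."""
    U, _, Vt = np.linalg.svd(M)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt
```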

    Revisiting Hartley's normalized eight-point algorithm

    Copyright © 2003 IEEE
    Hartley's eight-point algorithm has maintained an important place in computer vision, notably as a means of providing an initial value of the fundamental matrix for use in iterative estimation methods. In this paper, a novel explanation is given for the improvement in performance of the eight-point algorithm that results from using normalized data. It is first established that the normalized algorithm acts to minimize a specific cost function. It is then shown that this cost function is statistically better founded than the cost function associated with the non-normalized algorithm. This augments the original argument that improved performance is due to the better conditioning of a pivotal matrix. Experimental results are given that support the adopted approach. This work continues a wider effort to place a variety of estimation techniques within a coherent framework.
    Wojciech Chojnacki, Michael J. Brooks, Anton van den Hengel and Darren Gawley
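    The normalization step under discussion is Hartley's well-known similarity transform: translate the image points so their centroid is at the origin, then scale so the average distance from the origin is √2. A minimal sketch (the function name is an illustrative assumption):

```python
import numpy as np

def normalize_points(pts):
    """Hartley's normalizing transform: translate the points so their
    centroid is at the origin, then scale isotropically so the mean
    distance from the origin is sqrt(2).  Returns the transformed
    points and the 3x3 similarity T that achieves it."""
    pts = np.asarray(pts, dtype=float)
    centroid = pts.mean(axis=0)
    mean_dist = np.linalg.norm(pts - centroid, axis=1).mean()
    s = np.sqrt(2.0) / mean_dist
    T = np.array([[s,   0.0, -s * centroid[0]],
                  [0.0, s,   -s * centroid[1]],
                  [0.0, 0.0, 1.0]])
    homog = np.column_stack([pts, np.ones(len(pts))])
    return (homog @ T.T)[:, :2], T
```

    In the eight-point algorithm, both views' points are normalized this way, the fundamental matrix F̂ is estimated from the transformed correspondences, and the result is denormalized as F = T′ᵀ F̂ T.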