
    Camera Self-Calibration Using the Kruppa Equations and the SVD of the Fundamental Matrix: The Case of Varying Intrinsic Parameters

    Estimation of the camera intrinsic calibration parameters is a prerequisite to a wide variety of vision tasks related to motion and stereo analysis. A major breakthrough related to the intrinsic calibration problem was the introduction in the early nineties of the autocalibration paradigm, according to which calibration is achieved not with the aid of a calibration pattern but by observing a number of image features in a set of successive images. Until recently, however, most research efforts have been focused on applying the autocalibration paradigm to estimating constant intrinsic calibration parameters. Therefore, such approaches are inapplicable to cases where the intrinsic parameters undergo continuous changes due to focusing and/or zooming. In this paper, our previous work for autocalibration in the case of constant camera intrinsic parameters is extended and a novel autocalibration method capable of handling variable intrinsic parameters is proposed. The method relies upon the Singular Value Decomposition of the fundamental matrix, which leads to a particularly simple form of the Kruppa equations. In contrast to the classical formulation that yields an over-determined system of constraints, a purely algebraic derivation is proposed here which provides a straightforward answer to the problem of determining which constraints to employ among the set of available ones. Additionally, the new formulation does not employ the epipoles, which are known to be difficult to estimate accurately. The intrinsic calibration parameters are recovered from the developed constraints through a nonlinear minimization scheme that explicitly takes into consideration the uncertainty associated with the estimates of the employed fundamental matrices. Detailed experimental results using both simulated and real image sequences demonstrate the feasibility of the approach.
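    A minimal numpy sketch of the kind of SVD-based Kruppa constraints described above: two independent, epipole-free residuals per fundamental matrix, allowing different intrinsics in the two views. The constraint-selection details and the uncertainty weighting used in the paper's nonlinear minimization are not reproduced here, and the function and variable names are illustrative.

```python
import numpy as np

def kruppa_residuals(F, K1, K2):
    """Two independent SVD-based Kruppa constraints for one fundamental matrix.

    F  : 3x3 fundamental matrix with the convention x2^T F x1 = 0.
    K1 : 3x3 intrinsic matrix of view 1 (may differ from K2 when the
         intrinsics vary between views).
    K2 : 3x3 intrinsic matrix of view 2.
    Both residuals vanish when K1 and K2 are consistent with F.
    """
    U, s, Vt = np.linalg.svd(F)
    s1, s2 = s[0], s[1]                 # the third singular value is (ideally) zero
    u1, u2 = U[:, 0], U[:, 1]
    v1, v2 = Vt[0, :], Vt[1, :]

    w1 = K1 @ K1.T                      # DIAC of view 1 (pairs with the v's)
    w2 = K2 @ K2.T                      # DIAC of view 2 (pairs with the u's)

    # The SVD form of the Kruppa equations states that these three quantities
    # are proportional:
    #   s1^2 (v1' w1 v1) : u2' w2 u2
    #   s1 s2 (v1' w1 v2) : -u1' w2 u2
    #   s2^2 (v2' w1 v2) : u1' w2 u1
    # Cross-multiplying gives two algebraic residuals, with no epipoles involved.
    r1 = (s1 * s1 * (v1 @ w1 @ v1) * (-(u1 @ w2 @ u2))
          - s1 * s2 * (v1 @ w1 @ v2) * (u2 @ w2 @ u2))
    r2 = (s1 * s2 * (v1 @ w1 @ v2) * (u1 @ w2 @ u1)
          - s2 * s2 * (v2 @ w1 @ v2) * (-(u1 @ w2 @ u2)))
    return np.array([r1, r2])
```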

    Kinematic analysis of a slider-crank mechanism by means of a vision system developed using 2 pre-calibrated commercial cameras.

    There are two main objectives in this study. The first is to develop a vision system consisting of 2 inexpensive commercial cameras. In general, self-calibration methods reconstruct a scene from uncalibrated images only up to a scale. In this thesis, however, the reconstruction is performed such that the actual values of the distances in the scene are obtained. For this purpose, it is assumed that the extrinsic parameters of the cameras are known, so only the intrinsic parameters need to be determined. To calculate the intrinsic parameters, two methods are used, one based on the simplified Kruppa equations and the other on the equal eigenvalue theorem. The results obtained via these two methods are compared with the results obtained by using a calibration pattern. A triangulation process is then performed to calculate several known distances in the scene, using the method that gives the better results for the intrinsic parameters. The actual distances and the distances estimated by the vision system are then presented and compared. The second objective of this study is to perform a kinematic analysis of a slider-crank mechanism by using the developed vision system. The position, velocity and acceleration analyses of the slider-crank mechanism are carried out by using several markers attached to the moving links of the mechanism. The positions of the markers are calculated by using the vision system, and these data are then used to determine the joint variables, joint velocities and joint accelerations of the slider crank. The results obtained via an encoder attached to the input link of the mechanism are compared with the results obtained via the developed vision system. The effects of the locations of the markers and of the number of markers used on the accuracy of the results are also investigated. M.S. - Master of Science.
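    The triangulation step mentioned above can be realized with standard linear (DLT) triangulation once the intrinsic and extrinsic parameters are available; because the extrinsics are known, the recovered distances are in actual metric units. The sketch below is a generic illustration of that step, not the thesis's exact implementation; the function names are illustrative.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2 : 3x4 projection matrices K[R|t] built from the calibrated
             intrinsics and the known extrinsics.
    x1, x2 : matched pixel coordinates (u, v) in each image.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                     # Euclidean 3D point

def distance(P1, P2, xa1, xa2, xb1, xb2):
    """Metric distance between two reconstructed scene points; the scale is
    fixed because the extrinsic parameters are assumed known."""
    return np.linalg.norm(triangulate(P1, P2, xa1, xa2) -
                          triangulate(P1, P2, xb1, xb2))
```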

    A Camera Self-Calibration Method Based on Plane Lattice and Orthogonality

    Calibration using orthogonal lines is one of the basic approaches to camera calibration, but it requires the orthogonal lines to be accurately detected, which increases the error in the results. This paper proposes a novel camera self-calibration technique using plane lattices and virtual orthogonal lines. The rigorous analytical relations among the feature point coordinates of the plane lattice, the corresponding image point coordinates, the intrinsic parameters and the relative pose are derived from the homography matrix of the central projection. Given the slope of a virtual line in the lattice plane that is neither parallel nor orthogonal to the lattice axes, the slope of its orthogonal line can be calculated. From at least three photographs, the vanishing points of two groups of orthogonal directions can be solved for by using the homography matrix, so the camera intrinsic parameters can be computed linearly. The method has a simple principle and a calibration pattern that is easy to manufacture, does not involve image matching, and imposes no requirements on the camera motion. Simulation experiments and real data show that this algorithm is feasible and provides high accuracy and robustness.
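    The linear step relies on the standard fact that the vanishing points of two orthogonal scene directions are conjugate with respect to the image of the absolute conic, v1^T w v2 = 0. The sketch below solves this linear system under zero-skew, unit-aspect-ratio assumptions (made here for brevity and consistent with the three-image minimum mentioned in the abstract); it is an illustration of the principle, not the paper's exact algorithm.

```python
import numpy as np

def intrinsics_from_orthogonal_vps(vp_pairs):
    """Linear estimate of K from pairs of vanishing points of orthogonal
    directions (one pair per image, homogeneous 3-vectors).

    Assumes zero skew and unit aspect ratio, so the IAC is parameterised as
    w = [[a, 0, b], [0, a, c], [b, c, d]] and three images suffice.
    """
    rows = []
    for v1, v2 in vp_pairs:
        # v1^T w v2 = 0, expanded as a linear equation in (a, b, c, d).
        rows.append([
            v1[0] * v2[0] + v1[1] * v2[1],        # coefficient of a
            v1[0] * v2[2] + v1[2] * v2[0],        # coefficient of b
            v1[1] * v2[2] + v1[2] * v2[1],        # coefficient of c
            v1[2] * v2[2],                        # coefficient of d
        ])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    a, b, c, d = Vt[-1]
    w = np.array([[a, 0.0, b], [0.0, a, c], [b, c, d]])
    if w[2, 2] < 0:                               # fix the free sign so w is positive definite
        w = -w
    K_inv = np.linalg.cholesky(w).T               # w = K^{-T} K^{-1}
    K = np.linalg.inv(K_inv)
    return K / K[2, 2]
```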

    A Self-calibration Algorithm Based on a Unified Framework for Constraints on Multiple Views

    In this paper, we propose a new self-calibration algorithm for upgrading projective space to Euclidean space. The proposed method combines the most commonly used metric constraints, including zero skew and unit aspect ratio, by formulating each constraint as a cost function within a unified framework. Additional constraints, e.g., a constant principal point, can also be formulated in the same framework. The cost function is very flexible and can be composed of different constraints on different views. The upgrade process is then stated as a minimization problem, which may be solved by minimizing an upper bound of the cost function. The proposed method is non-iterative. Experimental results on synthetic data and real data are presented to show the performance of the proposed method and the accuracy of the reconstructed scene. © 2012 The Author(s).
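    To make the "each constraint as a cost term" idea concrete, the sketch below evaluates zero-skew and unit-aspect-ratio residuals on the per-view DIAC induced by a candidate absolute dual quadric. This is only an illustration of the general formulation, under assumptions of my own; the paper's actual cost, its upper bound and its non-iterative solver are not reproduced, and all names and weights are illustrative.

```python
import numpy as np

def metric_constraint_cost(Q, cameras, weights=(1.0, 1.0)):
    """Illustrative cost combining per-view metric constraints.

    Q       : 4x4 symmetric candidate for the absolute dual quadric.
    cameras : list of 3x4 projective camera matrices.
    weights : relative weights of the zero-skew and unit-aspect-ratio terms.

    Each view's DIAC is W ~ P Q P^T; for W normalised so that W[2,2] = 1,
    zero skew means W[0,1] = W[0,2]*W[1,2], and (given zero skew) unit
    aspect ratio means W[0,0] - W[0,2]^2 = W[1,1] - W[1,2]^2.
    """
    w_skew, w_aspect = weights
    cost = 0.0
    for P in cameras:
        W = P @ Q @ P.T
        W = W / W[2, 2]                                   # fix the projective scale
        skew_res = W[0, 1] - W[0, 2] * W[1, 2]            # zero-skew residual
        aspect_res = (W[0, 0] - W[0, 2] ** 2) - (W[1, 1] - W[1, 2] ** 2)
        cost += w_skew * skew_res ** 2 + w_aspect * aspect_res ** 2
    return cost
```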

    Self-Calibrating Cameras Using Semidefinite Programming

    Novel methods are proposed for self-calibrating a purely rotating camera using semidefinite programming (SDP). Key to the approach is the use of the positive-definiteness requirement on the dual image of the absolute conic (DIAC). The problem is couched within a convex optimization framework, and convergence to the global optimum is guaranteed. Experiments on various data sets indicate that the proposed algorithms more reliably deliver accurate and meaningful results. This work points the way to an alternative and more general approach to self-calibration that exploits the advantageous properties of SDP. Algorithms are also discussed for cameras undergoing general motion.
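    One way to cast the DIAC positive-definiteness requirement as a convex program is shown below: for a purely rotating camera with constant intrinsics, the determinant-normalized inter-image homographies satisfy H W H^T = W with W = K K^T, which can be enforced alongside a PSD constraint on W. This is a sketch in the spirit of the abstract, not the paper's precise SDP formulation; cvxpy and its bundled solver are assumed tooling, not anything prescribed by the paper.

```python
import cvxpy as cp
import numpy as np

def calibrate_rotating_camera(homographies):
    """Convex estimate of the DIAC for a purely rotating camera.

    homographies : list of 3x3 inter-image homographies to a reference view,
                   each pre-scaled so that det(H) = 1.
    Returns an upper-triangular K with K @ K.T approximating the DIAC.
    """
    W = cp.Variable((3, 3), PSD=True)              # DIAC, required to be PSD
    residuals = [cp.norm(H @ W @ H.T - W, "fro") for H in homographies]
    problem = cp.Problem(cp.Minimize(sum(residuals)), [W[2, 2] == 1])
    problem.solve()                                # handled by cvxpy's default conic solver

    # Recover an upper-triangular K from W = K K^T via a flipped Cholesky.
    Wv = np.asarray(W.value)
    P = np.eye(3)[::-1]                            # exchange (row/column flip) matrix
    L = np.linalg.cholesky(P @ Wv @ P)
    K = P @ L @ P                                  # upper triangular, K K^T = Wv
    return K / K[2, 2]
```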

    Towards A Self-calibrating Video Camera Network For Content Analysis And Forensics

    Due to growing security concerns, video surveillance and monitoring has received immense attention from both federal agencies and private firms. The main concern is that a single camera, even if allowed to rotate or translate, is not sufficient to cover a large area for video surveillance. A more general solution with a wide range of applications is to allow the deployed cameras to have non-overlapping fields of view (FoV) and, if possible, to allow these cameras to move freely in 3D space. This thesis addresses the issue of how cameras in such a network can be calibrated and how the network as a whole can be calibrated, such that each camera as a unit in the network is aware of its orientation with respect to all the other cameras in the network. Different types of cameras might be present in a multiple-camera network, and novel techniques are presented for efficient calibration of these cameras. Specifically: (i) For a stationary camera, we derive new constraints on the Image of the Absolute Conic (IAC). These new constraints are shown to be intrinsic to the IAC; (ii) For a scene where object shadows are cast on a ground plane, we track the shadows cast on the ground plane by at least two unknown stationary points, and utilize the tracked shadow positions to compute the horizon line and hence the camera intrinsic and extrinsic parameters; (iii) A novel solution to a scenario where a camera is observing pedestrians is presented. The uniqueness of the formulation lies in recognizing two harmonic homologies present in the geometry obtained by observing pedestrians; (iv) For a freely moving camera, a novel practical method is proposed for its self-calibration which even allows it to change its internal parameters by zooming; and (v) due to the increased application of pan-tilt-zoom (PTZ) cameras, a technique is presented that uses only two images to estimate five camera parameters. For an automatically configurable multi-camera network, having non-overlapping fields of view and possibly containing moving cameras, a practical framework is proposed that determines the geometry of such a dynamic camera network. It is shown that only one automatically computed vanishing point and a line lying on any plane orthogonal to the vertical direction are sufficient to infer the geometry of a dynamic network. Our method generalizes previous work which considers restricted camera motions. Using minimal assumptions, we are able to successfully demonstrate promising results on synthetic as well as on real data. Applications to path modeling, GPS coordinate estimation, and configuring mixed-reality environments are explored.
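    The horizon-line route of item (ii) rests on the standard pole-polar relation between the vertical vanishing point and the vanishing line of horizontal planes, l ~ w v, where w is the image of the absolute conic. The sketch below recovers the focal length from that relation under simplifying assumptions of my own (zero skew, unit aspect ratio, known principal point); it illustrates the geometric ingredient rather than the thesis's full network-calibration pipeline.

```python
import numpy as np

def focal_from_vertical_vp_and_horizon(v, l, principal_point):
    """Focal length from the vertical vanishing point v and the horizon line l
    (both homogeneous 3-vectors), assuming zero skew, unit aspect ratio and a
    known principal point, via the pole-polar relation l ~ w v.
    """
    u0, v0 = principal_point
    vx, vy = v[0] / v[2], v[1] / v[2]
    a, b = vx - u0, vy - v0

    # With these assumptions, w v ~ (a, b, f^2 - u0*a - v0*b); pick the larger
    # of the first two line components for a numerically stable ratio.
    if abs(l[0]) >= abs(l[1]):
        c = a * l[2] / l[0]
    else:
        c = b * l[2] / l[1]
    f_squared = c + u0 * a + v0 * b
    return np.sqrt(f_squared)
```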

    Estimating intrinsic camera parameters from the fundamental matrix using an evolutionary approach

    Calibration is the process of computing the intrinsic (internal) camera parameters from a series of images. Normally, calibration is done by placing predefined targets in the scene or by using special camera motions, such as rotations. If these two restrictions do not hold, then the calibration process is called autocalibration because it is done automatically, without user intervention. Using autocalibration, it is possible to create 3D reconstructions from a sequence of uncalibrated images without having to rely on a formal camera calibration process. The fundamental matrix describes the epipolar geometry between a pair of images, and it can be calculated directly from 2D image correspondences. We show that autocalibration from a set of fundamental matrices can be transformed into a global minimization problem defined by a cost function. We use a stochastic optimization approach taken from the field of evolutionary computing to solve this problem. A number of experiments performed on published and standardized data sets show the effectiveness of the approach. The basic assumption of this method is that the internal (intrinsic) camera parameters remain constant throughout the image sequence, that is, the images are taken with the same camera without varying quantities such as the focal length. We show that for the autocalibration of the focal length and aspect ratio, the evolutionary method achieves results comparable to published methods but is simpler to implement and is efficient enough to handle larger image sequences.
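    A minimal sketch of how such a global minimization can be set up with an evolutionary optimizer, here SciPy's differential evolution, chosen purely for illustration. The cost shown is the classical equal-singular-value criterion on the essential matrix, used as a stand-in for the cost function defined in the paper; the search bounds and principal point are placeholders.

```python
import numpy as np
from scipy.optimize import differential_evolution

def calibration_cost(params, fundamental_matrices, principal_point):
    """Cost over candidate constant intrinsics, built from fundamental matrices.

    Uses the well-known criterion that the essential matrix E = K^T F K of a
    correctly calibrated pair has two equal non-zero singular values.
    """
    f, aspect = params
    u0, v0 = principal_point
    K = np.array([[f, 0.0, u0],
                  [0.0, aspect * f, v0],
                  [0.0, 0.0, 1.0]])
    cost = 0.0
    for F in fundamental_matrices:
        E = K.T @ F @ K                          # same K on both sides: constant intrinsics
        s = np.linalg.svd(E, compute_uv=False)
        cost += (s[0] - s[1]) / s[0]             # zero iff the two singular values match
    return cost

# Stochastic global search over focal length and aspect ratio; the bounds,
# the list `Fs` and the principal point below are illustrative placeholders.
# result = differential_evolution(calibration_cost,
#                                 bounds=[(300.0, 3000.0), (0.8, 1.2)],
#                                 args=(Fs, (640.0, 360.0)))
```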