
    Towards A Self-calibrating Video Camera Network For Content Analysis And Forensics

    Due to growing security concerns, video surveillance and monitoring have received immense attention from both federal agencies and private firms. The main concern is that a single camera, even if allowed to rotate or translate, is not sufficient to cover a large area for video surveillance. A more general solution with a wide range of applications is to allow the deployed cameras to have non-overlapping fields of view (FoV) and, if possible, to move freely in 3D space. This thesis addresses how the cameras in such a network can be calibrated, and how the network as a whole can be calibrated so that each camera is aware of its orientation with respect to all the other cameras in the network. Different types of cameras might be present in a multiple-camera network, and novel techniques are presented for their efficient calibration. Specifically: (i) for a stationary camera, we derive new constraints on the Image of the Absolute Conic (IAC), which are shown to be intrinsic to the IAC; (ii) for a scene where object shadows are cast on a ground plane, we track the shadows cast on the ground plane by at least two unknown stationary points, and use the tracked shadow positions to compute the horizon line and hence the camera intrinsic and extrinsic parameters; (iii) for a camera observing pedestrians, a novel solution is presented, whose uniqueness lies in recognizing two harmonic homologies present in the geometry obtained by observing pedestrians; (iv) for a freely moving camera, a novel practical self-calibration method is proposed that even allows the camera to change its internal parameters by zooming; and (v) given the increased use of pan-tilt-zoom (PTZ) cameras, a technique is presented that estimates five camera parameters from only two images. For an automatically configurable multi-camera network with non-overlapping fields of view and possibly containing moving cameras, a practical framework is proposed that determines the geometry of such a dynamic camera network. It is shown that a single automatically computed vanishing point, together with a line lying on any plane orthogonal to the vertical direction, is sufficient to infer the geometry of a dynamic network. Our method generalizes previous work, which considers only restricted camera motions. Using minimal assumptions, we successfully demonstrate promising results on synthetic as well as real data. Applications to path modeling, GPS coordinate estimation, and configuring mixed-reality environments are explored.
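
    As background for items (i) and (ii), the standard relations involving the IAC can be sketched as follows (these are textbook projective-geometry facts, not the thesis's new constraints):

        \[
        \omega = (K K^{\top})^{-1}, \qquad
        \mathbf{v}_i^{\top}\,\omega\,\mathbf{v}_j = 0 \quad (\mathbf{d}_i \perp \mathbf{d}_j), \qquad
        \mathbf{l}_{\infty} = \omega\,\mathbf{v}_{\perp},
        \]

    where K is the intrinsic matrix, v_i and v_j are the vanishing points of orthogonal scene directions d_i and d_j, and the horizon line l_inf of the ground plane is the polar, with respect to omega, of the vertical vanishing point v_perp. The shadow positions tracked in (ii) yield such a horizon estimate, which in turn constrains omega and hence the intrinsic and extrinsic parameters.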

    Camera Self-Calibration Using the Kruppa Equations and the SVD of the Fundamental Matrix: The Case of Varying Intrinsic Parameters

    Estimation of the camera's intrinsic calibration parameters is a prerequisite to a wide variety of vision tasks related to motion and stereo analysis. A major breakthrough on the intrinsic calibration problem was the introduction, in the early nineties, of the autocalibration paradigm, according to which calibration is achieved not with the aid of a calibration pattern but by observing a number of image features across a set of successive images. Until recently, however, most research efforts focused on applying the autocalibration paradigm to estimating constant intrinsic calibration parameters; such approaches are therefore inapplicable when the intrinsic parameters undergo continuous changes due to focusing and/or zooming. In this paper, our previous work on autocalibration with constant intrinsic parameters is extended, and a novel autocalibration method capable of handling variable intrinsic parameters is proposed. The method relies on the Singular Value Decomposition (SVD) of the fundamental matrix, which leads to a particularly simple form of the Kruppa equations. In contrast to the classical formulation, which yields an over-determined system of constraints, a purely algebraic derivation is proposed here that gives a straightforward answer to the question of which constraints to employ among the available ones. Additionally, the new formulation does not involve the epipoles, which are known to be difficult to estimate accurately. The intrinsic calibration parameters are recovered from the derived constraints through a nonlinear minimization scheme that explicitly takes into account the uncertainty in the estimates of the employed fundamental matrices. Detailed experimental results on both simulated and real image sequences demonstrate the feasibility of the approach.
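
    For reference, in the varying-intrinsics setting addressed here, the classical Kruppa equations take the following form (standard notation, stated as background rather than taken from the paper):

        \[
        F\,\omega^{*}\,F^{\top} \;\simeq\; [\mathbf{e}']_{\times}\,\omega'^{*}\,[\mathbf{e}']_{\times}^{\top},
        \qquad \omega^{*} = K K^{\top}, \quad \omega'^{*} = K' K'^{\top},
        \]

    where F is the fundamental matrix, e' the epipole in the second image, [e']_x its cross-product matrix, "≃" denotes equality up to scale, and omega*, omega'* are the dual images of the absolute conic in the two views. Writing F = U D V^T and observing that e' coincides with the left singular vector associated with the zero singular value (since F^T e' = 0), the equations can be rewritten entirely in terms of the singular vectors and singular values of F; this is the kind of simplification that removes the explicitly estimated epipoles.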

    Using Geometric Constraints for Camera Calibration and Positioning and 3D Scene Modelling

    This work concerns the incorporation of geometric information into camera calibration and 3D modelling. Using geometric constraints yields more stable results and makes it possible to perform tasks with fewer images. Our approach is interactive: the user defines geometric primitives and constraints between them. It is based on the observation that constraints such as coplanarity, parallelism, or orthogonality are easy for a user to delineate and are well adapted to modelling the main structure of, e.g., architectural scenes. We propose methods for camera calibration, camera position estimation, and 3D scene reconstruction, all based on such geometric constraints. Various approaches exist for calibration and positioning from constraints, often based on vanishing points. We generalize these by considering composite primitives based on triplets of vanishing points. Such triplets are frequent in architectural scenes, and considering composites of vanishing points makes the computations more stable. They are defined by marking, in the images, points belonging to parallelepipedic structures (e.g., appropriate points on two connected walls); constraints on angles or length ratios of these structures can then easily be imposed. A method is proposed that collects all these data over all considered images and simultaneously computes the calibration and pose of all cameras via matrix factorization. 3D scene reconstruction is then performed using many more geometric constraints, i.e., not only those encapsulated by parallelepipedic structures. A method is proposed that reconstructs the whole scene iteratively, solving a linear equation system at each iteration, and that includes an analysis of which parts of the scene can or cannot be reconstructed at the current stage. The complete approach is validated by various experimental results, for cases where a single view or several views are available.
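
    To make the vanishing-point machinery concrete, here is a minimal sketch, in Python with NumPy, of the classical single-view case that this work generalizes: calibrating a zero-skew, unit-aspect-ratio camera from one triplet of mutually orthogonal vanishing points (function names are illustrative; this is not the paper's factorization method):

        import numpy as np

        def calibrate_from_orthogonal_vps(v1, v2, v3):
            """Estimate K from three mutually orthogonal vanishing points.

            Assumes zero skew and unit aspect ratio, so the image of the
            absolute conic omega = (K K^T)^-1 has four parameters up to scale:
                omega = [[w1, 0,  w2],
                         [0,  w1, w3],
                         [w2, w3, w4]].
            Each orthogonal pair (v, u) gives one linear constraint
            v^T omega u = 0.
            """
            def row(v, u):
                # Coefficients of (w1, w2, w3, w4) in v^T omega u = 0.
                return [v[0]*u[0] + v[1]*u[1],
                        v[0]*u[2] + v[2]*u[0],
                        v[1]*u[2] + v[2]*u[1],
                        v[2]*u[2]]

            A = np.array([row(v1, v2), row(v1, v3), row(v2, v3)])
            w = np.linalg.svd(A)[2][-1]      # null vector of the 3x4 system
            omega = np.array([[w[0], 0.0,  w[1]],
                              [0.0,  w[0], w[2]],
                              [w[1], w[2], w[3]]])
            if np.linalg.det(omega) < 0:     # fix sign: omega must be positive definite
                omega = -omega
            # omega = K^-T K^-1; with Cholesky omega = L L^T, K = (L^T)^-1 up to scale.
            K = np.linalg.inv(np.linalg.cholesky(omega).T)
            return K / K[2, 2]

        # Synthetic check: the vanishing point of world axis i is K @ R[:, i].
        K_true = np.array([[800.0,   0.0, 320.0],
                           [  0.0, 800.0, 240.0],
                           [  0.0,   0.0,   1.0]])
        R, _ = np.linalg.qr(np.random.randn(3, 3))  # random orthonormal basis
        vps = [K_true @ R[:, i] for i in range(3)]
        print(calibrate_from_orthogonal_vps(*vps))  # recovers ~K_true

    The paper goes beyond this single-view, single-triplet case: triplets of vanishing points are bundled into composite primitives across images, and the calibration and pose of all cameras are computed simultaneously via matrix factorization.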