
    Towards A Self-calibrating Video Camera Network For Content Analysis And Forensics

    Due to growing security concerns, video surveillance and monitoring have received immense attention from both federal agencies and private firms. The main concern is that a single camera, even if allowed to rotate or translate, is not sufficient to cover a large area for video surveillance. A more general solution with a wide range of applications is to allow the deployed cameras to have non-overlapping fields of view (FoV) and, where possible, to allow these cameras to move freely in 3D space. This thesis addresses how the cameras in such a network can be calibrated, both individually and as a whole, so that each camera in the network is aware of its orientation with respect to all the other cameras. Different types of cameras might be present in a multiple-camera network, and novel techniques are presented for their efficient calibration. Specifically: (i) for a stationary camera, we derive new constraints on the Image of the Absolute Conic (IAC), which are shown to be intrinsic to the IAC; (ii) for a scene where object shadows are cast on a ground plane, we track the shadows cast by at least two unknown stationary points, and use the tracked shadow positions to compute the horizon line and hence the camera's intrinsic and extrinsic parameters; (iii) a novel solution is presented for the scenario where a camera observes pedestrians, the uniqueness of the formulation lying in the recognition of two harmonic homologies in the resulting geometry; (iv) for a freely moving camera, a practical method is proposed for its self-calibration that even allows it to change its internal parameters by zooming; and (v) given the increasing use of pan-tilt-zoom (PTZ) cameras, a technique is presented that uses only two images to estimate five camera parameters.
    For an automatically configurable multi-camera network with non-overlapping fields of view, possibly containing moving cameras, a practical framework is proposed that determines the geometry of such a dynamic camera network. It is shown that one automatically computed vanishing point, together with a line lying on any plane orthogonal to the vertical direction, is sufficient to infer the geometry of a dynamic network. Our method generalizes previous work, which considers only restricted camera motions. Using minimal assumptions, we demonstrate promising results on synthetic as well as real data. Applications to path modeling, GPS coordinate estimation, and configuring mixed-reality environments are explored.
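As a concrete illustration of the kind of IAC-based constraint used in (i) and (v), the focal length of a camera can be recovered from the vanishing points of two orthogonal scene directions. The sketch below assumes zero skew, unit aspect ratio, and a known principal point; it is a standard single-view result, not the thesis's full method, and the function name is ours.

```python
import numpy as np

def focal_from_orthogonal_vps(v1, v2, principal_point):
    """Recover the focal length from two vanishing points of orthogonal
    scene directions, assuming zero skew, unit aspect ratio, and a known
    principal point, so that after centring the IAC is proportional to
    diag(1, 1, f^2)."""
    u1 = np.asarray(v1, float) - np.asarray(principal_point, float)
    u2 = np.asarray(v2, float) - np.asarray(principal_point, float)
    f2 = -np.dot(u1, u2)  # v1^T omega v2 = 0  =>  f^2 = -(u1 . u2)
    if f2 <= 0:
        raise ValueError("vanishing points inconsistent with orthogonality")
    return np.sqrt(f2)
```

The constraint requires the centred vanishing points to subtend an obtuse angle at the principal point; otherwise no real focal length exists.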

    Self-Calibrating Cameras Using Semidefinite Programming

    Novel methods are proposed for self-calibrating a purely rotating camera using semidefinite programming (SDP). Key to the approach is the use of the positive-definiteness requirement on the dual image of the absolute conic (DIAC). The problem is couched within a convex optimization framework, and convergence to the global optimum is guaranteed. Experiments on various data sets indicate that the proposed algorithms deliver accurate and meaningful results more reliably than existing methods. This work points the way to an alternative and more general approach to self-calibration that exploits the advantageous properties of SDP. Algorithms are also discussed for cameras undergoing general motion.
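For intuition, the DIAC of a purely rotating camera, W = K K^T, satisfies W = H W H^T for every inter-image homography H = K R K^-1. The sketch below solves these equations linearly and merely checks positive definiteness afterwards, whereas the paper enforces it by construction through the SDP; this is a simplified stand-in, not the paper's algorithm.

```python
import numpy as np

def diac_from_rotational_homographies(Hs):
    """Linear estimate of the DIAC W = K K^T for a purely rotating camera.
    Each inter-image homography H = K R K^{-1}, normalised to det(H) = 1,
    satisfies H W H^T = W; we stack these equations over the 6 parameters
    of the symmetric W and take the SVD null vector."""
    idx = [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
    rows = []
    for H in Hs:
        H = H / np.cbrt(np.linalg.det(H))
        for a, b in idx:
            row = np.zeros(6)
            for k, (i, j) in enumerate(idx):
                # coefficient of w_ij in (H W H^T)_{ab} - W_{ab}
                c = H[a, i] * H[b, j]
                if i != j:
                    c += H[a, j] * H[b, i]
                row[k] = c - (1.0 if (i, j) == (a, b) else 0.0)
            rows.append(row)
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    w = Vt[-1]
    W = np.array([[w[0], w[1], w[2]],
                  [w[1], w[3], w[4]],
                  [w[2], w[4], w[5]]])
    if np.trace(W) < 0:  # fix the arbitrary sign of the null vector
        W = -W
    assert np.all(np.linalg.eigvalsh(W) > 0), "DIAC not positive definite"
    return W
```

At least two rotations about distinct axes are needed for a unique solution; with noisy data the positive-definiteness check can fail, which is precisely the failure mode the SDP formulation is designed to avoid.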

    Multiple View Geometry For Video Analysis And Post-production

    Multiple view geometry is the foundation of an important class of computer vision techniques for the simultaneous recovery of camera motion and scene structure from a set of images. There are numerous important applications in this area, including video post-production, scene reconstruction, registration, surveillance, tracking, and segmentation. In video post-production, the topic addressed in this dissertation, computer analysis of camera motion can replace the manual methods currently used to correctly align an artificially inserted object in a scene. However, existing single-view methods typically require multiple vanishing points, and therefore fail when only one vanishing point is available. In addition, current multiple-view techniques, based on either epipolar geometry or the trifocal tensor, do not fully exploit the properties of constant or known camera motion. Finally, there is no general solution to the problem of synchronizing N video sequences of distinct general scenes captured by cameras undergoing similar ego-motions, a necessary step for video post-production across different input videos. This dissertation proposes several advancements that overcome these limitations and uses them to develop an efficient framework for video analysis and post-production with multiple cameras. In the first part of the dissertation, novel inter-image constraints are introduced that are particularly useful for scenes where minimal information is available. This result extends the current state of the art in single-view geometry to situations where only one vanishing point is available. The property of constant or known camera motion is also exploited for applications such as the calibration of a camera network in video surveillance systems, and Euclidean reconstruction from turntable image sequences in the presence of zoom and focus.
We then propose a new framework for the estimation and alignment of camera motions, including both simple (panning, tracking, and zooming) and complex (e.g. hand-held) camera motions. The accuracy of these results is demonstrated by applying our approach to video post-production applications such as video cut-and-paste and shadow synthesis. As realistic image-based rendering problems, these applications require extreme accuracy in the estimation of the camera geometry, the position and orientation of the light source, and the photometric properties of the resulting cast shadows. In each case, the theoretical results are fully supported and illustrated by both numerical simulations and thorough experimentation on real data.
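The alignment of camera motions is, at its core, a registration problem. As a generic stand-in for the dissertation's estimator, two sequences of corresponding camera positions can be aligned by a closed-form least-squares similarity fit (Umeyama's method); the function below is a self-contained sketch under that assumption.

```python
import numpy as np

def similarity_align(X, Y):
    """Closed-form least-squares similarity transform (s, R, t) minimising
    ||Y - (s * X @ R.T + t)|| over rotations R, scale s > 0, and
    translation t (Umeyama's method).  X and Y are (N, 3) arrays of
    corresponding camera positions from the two sequences."""
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mx, Y - my
    cov = Yc.T @ Xc / len(X)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    # force a proper rotation (det R = +1), excluding reflections
    S[2, 2] = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / (Xc ** 2).sum(axis=1).mean()
    t = my - s * R @ mx
    return s, R, t
```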

    Camera calibration in sport event scenarios

    The main goal of this paper is the design of a novel and robust methodology for calibrating cameras from a single image in sport scenarios, such as a soccer field or a basketball or tennis court. In these scenarios, the only references used to calibrate the camera are the lines and circles delimiting the different regions. The first problem we address is the extraction of image primitives, including the challenging problems of shaded regions and lens distortion. From these primitives, we automatically recognise the location of the sport court in the scene by estimating the homography that matches the actual court with its projection onto the image; this is achieved even when only a few primitives are available. Finally, from this homography, we recover the camera calibration parameters: in particular, we estimate the focal length as well as the camera position and orientation in 3D space. We present experiments on models and real courts which illustrate the accuracy of the proposed methodology.
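The final step, recovering the focal length from the court-to-image homography, admits a compact closed form under the usual assumptions of square pixels, zero skew, and a known principal point. The sketch below uses the standard Zhang-style orthogonality constraints on a single plane homography; it is one conventional way to do this step, not necessarily the paper's exact estimator.

```python
import numpy as np

def focal_from_plane_homography(H, principal_point):
    """Estimate the focal length from a single world-plane-to-image
    homography H (e.g. the court-to-image mapping), assuming square
    pixels, zero skew, and a known principal point.  With
    omega = diag(1/f^2, 1/f^2, 1), the columns h1, h2 of the centred
    homography give two equations linear in x = 1/f^2:
    h1^T omega h2 = 0 and h1^T omega h1 = h2^T omega h2."""
    cx, cy = principal_point
    T = np.array([[1.0, 0.0, -cx], [0.0, 1.0, -cy], [0.0, 0.0, 1.0]])
    h1, h2 = (T @ H)[:, 0], (T @ H)[:, 1]
    A = np.array([h1[0] * h2[0] + h1[1] * h2[1],
                  h1[0] ** 2 + h1[1] ** 2 - h2[0] ** 2 - h2[1] ** 2])
    b = np.array([h1[2] * h2[2], h1[2] ** 2 - h2[2] ** 2])
    x = -(A @ b) / (A @ A)  # least-squares solution for x = 1/f^2
    if x <= 0:
        raise ValueError("homography inconsistent with the assumed model")
    return 1.0 / np.sqrt(x)
```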

    3D Reconstruction with Uncalibrated Cameras Using the Six-Line Conic Variety

    We present new algorithms for the recovery of Euclidean structure from a projective calibration of a set of cameras with square pixels but otherwise arbitrarily varying intrinsic and extrinsic parameters. Our results, based on a novel geometric approach, include a closed-form solution for the case of three cameras and two known vanishing points, and an efficient one-dimensional search algorithm for the case of four cameras and one known vanishing point. In addition, an algorithm for the reliable automatic detection of vanishing points in the images is presented. These techniques fit into a 3D reconstruction scheme oriented towards urban scene reconstruction. The satisfactory performance of the techniques is demonstrated with tests on synthetic and real data.
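The vanishing-point detection step can be sketched as a least-squares intersection of line segments: each segment defines a homogeneous line, and the point minimising the sum of squared incidence residuals is the smallest right singular vector of the stacked lines. This is a generic formulation assumed for illustration, not the paper's detector.

```python
import numpy as np

def vanishing_point(segments):
    """Least-squares vanishing point of image segments ((x1, y1), (x2, y2)).
    Each segment gives a homogeneous line l = p1 x p2; the point v
    minimising sum (l^T v)^2 with |v| = 1 is the smallest right singular
    vector of the stacked (normalised) lines."""
    L = []
    for (x1, y1), (x2, y2) in segments:
        l = np.cross([x1, y1, 1.0], [x2, y2, 1.0])
        L.append(l / np.linalg.norm(l))  # equal weight per segment
    _, _, Vt = np.linalg.svd(np.asarray(L))
    v = Vt[-1]
    return v[:2] / v[2]  # inhomogeneous coordinates (assumes a finite VP)
```

A robust implementation would wrap this in RANSAC to reject segments belonging to other directions; the least-squares core stays the same.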

    Zoom techniques for achieving scale invariant object tracking in real-time active vision systems

    In a surveillance system, a camera operator follows an object of interest by moving the camera, then gains additional information about the object by zooming. As the active vision field advances, the ability to automate such a system is nearing fruition. One hurdle limiting the use of object recognition algorithms in real-time systems is the quality of the captured imagery; recognition algorithms often have strict scale and position requirements, and if those requirements are not met, performance rapidly degrades to failure. The ability of an automatic fixation system to capture quality video of an accelerating target is directly related to the response time of the mechanical pan, tilt, and zoom platform; however, the price of such a platform rises with its performance. The goal of this work is to create a system that provides scale-invariant tracking using inexpensive off-the-shelf components. Since optical zoom acts as a measurement gain, amplifying both resolution and tracking error, a second camera with fixed focal length assists the zooming camera if it loses fixation, effectively clipping the error. Furthermore, digital zoom adjusts the captured image to ensure position and scale invariance for the higher-level application. The implemented system uses two Sony EVI-D100 cameras on a 2.8 GHz dual Pentium Xeon PC. This work presents experiments to exhibit the effectiveness of the system.
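The digital-zoom adjustment described above amounts to choosing a crop window that centres the target and sizes it so the target occupies a fixed fraction of the output. The helper below is a hypothetical sketch: the function name, parameters, and the 0.4 default are ours for illustration, not taken from the paper.

```python
def digital_zoom_window(frame_wh, target_bbox, desired_frac=0.4):
    """Crop window (left, top, width, height) for digital zoom: centre
    the target and size the window so the target occupies roughly
    `desired_frac` of the output height, clamped to stay inside the
    frame.  target_bbox is (x, y, w, h) in frame pixels."""
    fw, fh = frame_wh
    x, y, w, h = target_bbox
    win_h = min(fh, max(h / desired_frac, 1.0))
    win_w = min(fw, win_h * fw / fh)  # preserve the frame aspect ratio
    cx, cy = x + w / 2.0, y + h / 2.0
    left = min(max(cx - win_w / 2.0, 0.0), fw - win_w)
    top = min(max(cy - win_h / 2.0, 0.0), fh - win_h)
    return left, top, win_w, win_h
```

Resampling the returned window to the output resolution then yields a view in which the target's position and scale are approximately constant, which is what the higher-level recognition algorithms require.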

    Conjugate epipole-based self-calibration of camera under circular motion

    In this paper, we propose a new method to self-calibrate a camera with constant internal parameters under circular motion. The basis of our approach is to make use of conjugate epipoles, which relate camera positions whose rotation angles satisfy the conjugate constraint. A novel circular projective reconstruction is developed for computing the conjugate epipoles robustly. It is shown that for a camera with zero skew, two turntable sequences with different camera orientations are needed, while for a general camera three sequences with different camera orientations are required. The performance of the algorithm is tested with real images.
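Conjugate epipoles are, first of all, epipoles. As background machinery for the method, the sketch below recovers the two epipoles of a fundamental matrix from its SVD: F has rank 2, so each epipole is the singular vector associated with the zero singular value. This is textbook two-view geometry, not the paper's circular projective reconstruction.

```python
import numpy as np

def epipoles(F):
    """Right and left epipoles of a fundamental matrix F: F e = 0 and
    e'^T F = 0.  Since F has rank 2, e and e' are the right and left
    singular vectors for the smallest singular value of F; both are
    returned in inhomogeneous form (assumes finite epipoles)."""
    U, _, Vt = np.linalg.svd(F)
    e = Vt[-1]    # right epipole: F e = 0
    ep = U[:, -1]  # left epipole: e'^T F = 0
    return e / e[2], ep / ep[2]
```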