
    Camera calibration from a translation + planar motion

    This paper addresses the problem of camera calibration by exploiting image invariants under camera/object rotation. A novel translation + planar motion is studied here. The 3 × 3 homography mapping corresponding points before and after the motion is exploited to obtain image invariants under perspective projection. The homography is found to form a "rotation conic" under different rotation angles. Apart from the imaged circular points, this conic can also be exploited to find the vanishing point of the rotation axis, which provides extra constraints for camera calibration. A square calibration pattern, which is invariant under a rotation about its center by multiples of π/2 radians, is introduced as a special instantiation of the translation + planar motion. Experiments on synthetic and real data show good precision in the calibration results.
    Postprint. The 8th IASTED International Conference on Signal and Image Processing (SIP 2006), Honolulu, HI, 14-16 August 2006. In: Proceedings of the 8th IASTED International Conference on Signal and Image Processing, 2006, p. 195-20
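
    A minimal sketch (not the paper's algorithm) of the standard conjugate-rotation facts such approaches build on: a plane homography induced by rotating the object about an axis normal to the plane is conjugate to a planar rotation, so its eigenvalues are proportional to {1, e^{iθ}, e^{-iθ}}; the complex-conjugate eigenvectors are the imaged circular points of the rotation plane and the real eigenvector is the image of the rotation centre. The inputs below are hypothetical.

```python
import numpy as np

def analyze_conjugate_rotation(H, tol=1e-8):
    """Decompose a 3x3 homography assumed conjugate to a planar rotation,
    H ~ T @ R(theta) @ inv(T): eigenvalues are proportional to
    {1, exp(i*theta), exp(-i*theta)}, the complex eigenvectors are the
    imaged circular points of the rotation plane, and the real eigenvector
    is the image of the rotation centre (assumes 0 < theta < pi)."""
    w, V = np.linalg.eig(H)
    complex_idx = [i for i in range(3) if abs(w[i].imag) > tol]
    real_idx = [i for i in range(3) if i not in complex_idx][0]
    theta = abs(np.angle(w[complex_idx[0]] / w[real_idx]))
    circular_points = V[:, complex_idx[0]], V[:, complex_idx[1]]
    centre = np.real(V[:, real_idx])
    return theta, circular_points, centre

# Hypothetical check: a 40-degree rotation about a point, conjugated by a
# random projectivity standing in for the world-plane-to-image mapping.
rng = np.random.default_rng(1)
th = np.deg2rad(40.0)
R = np.array([[np.cos(th), -np.sin(th),  2.0],
              [np.sin(th),  np.cos(th), -1.0],
              [0.0,         0.0,         1.0]])
T = rng.normal(size=(3, 3))
theta, (I, J), centre = analyze_conjugate_rotation(T @ R @ np.linalg.inv(T))
print(np.rad2deg(theta))   # ~40
```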

    Towards A Self-calibrating Video Camera Network For Content Analysis And Forensics

    Due to growing security concerns, video surveillance and monitoring has received immense attention from both federal agencies and private firms. The main concern is that a single camera, even if allowed to rotate or translate, is not sufficient to cover a large area for video surveillance. A more general solution with a wide range of applications is to allow the deployed cameras to have a non-overlapping field of view (FoV) and, if possible, to allow these cameras to move freely in 3D space. This thesis addresses the issue of how cameras in such a network can be calibrated and how the network as a whole can be calibrated, such that each camera as a unit in the network is aware of its orientation with respect to all the other cameras in the network. Different types of cameras might be present in a multiple camera network and novel techniques are presented for efficient calibration of these cameras. Specifically: (i) For a stationary camera, we derive new constraints on the Image of the Absolute Conic (IAC). These new constraints are shown to be intrinsic to the IAC; (ii) For a scene where object shadows are cast on a ground plane, we track the shadows cast on the ground plane by at least two unknown stationary points, and use the tracked shadow positions to compute the horizon line and hence the camera intrinsic and extrinsic parameters; (iii) A novel solution to a scenario where a camera is observing pedestrians is presented. The uniqueness of the formulation lies in recognizing two harmonic homologies present in the geometry of the observed pedestrians; (iv) For a freely moving camera, a novel practical method is proposed for its self-calibration, which even allows the camera to change its internal parameters by zooming; and (v) Due to the increased use of pan-tilt-zoom (PTZ) cameras, a technique is presented that uses only two images to estimate five camera parameters. For an automatically configurable multi-camera network, having non-overlapping field of view and possibly containing moving cameras, a practical framework is proposed that determines the geometry of such a dynamic camera network. It is shown that only one automatically computed vanishing point and a line lying on any plane orthogonal to the vertical direction are sufficient to infer the geometry of a dynamic network. Our method generalizes previous work which considers restricted camera motions. Using minimal assumptions, we are able to successfully demonstrate promising results on synthetic as well as real data. Applications to path modeling, GPS coordinate estimation, and configuring mixed-reality environments are explored.
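
    As a hedged illustration of one geometric relation such methods rely on (not the thesis's exact algorithm): the vanishing line l of the ground plane and the vanishing point v of the vertical direction are pole and polar with respect to the image of the absolute conic, l ≃ ωv. With square pixels, zero skew and a known principal point, this single relation fixes the focal length. Function and variable names below are illustrative.

```python
import numpy as np

def focal_from_horizon_and_vertical(v, l, principal_point):
    """Focal length from the vertical vanishing point v and the ground-plane
    vanishing line l (both homogeneous 3-vectors), assuming square pixels,
    zero skew and a known principal point.

    Uses the pole-polar relation l ~ omega @ v, where omega = inv(K @ K.T)
    is the image of the absolute conic."""
    u0, v0 = principal_point
    # Shift to coordinates centred at the principal point, where
    # omega = diag(1/f^2, 1/f^2, 1).
    T = np.array([[1.0, 0.0, -u0],
                  [0.0, 1.0, -v0],
                  [0.0, 0.0, 1.0]])
    v_c = T @ np.asarray(v, dtype=float)            # points map by T
    l_c = np.linalg.inv(T).T @ np.asarray(l, float) # lines map by inv(T).T
    # l_c ~ (v_c[0]/f^2, v_c[1]/f^2, v_c[2]); eliminate the unknown scale.
    # (Assumes the x and y components are not degenerate, i.e. nonzero.)
    f2_x = (v_c[0] * l_c[2]) / (l_c[0] * v_c[2])
    f2_y = (v_c[1] * l_c[2]) / (l_c[1] * v_c[2])
    return np.sqrt(0.5 * (f2_x + f2_y))

# Hypothetical synthetic check: f = 800, principal point (320, 240),
# vertical direction d chosen arbitrarily.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
d = np.array([0.2, 0.9, 0.38])
v = K @ d                      # vertical vanishing point
l = np.linalg.inv(K).T @ d     # ground-plane vanishing line
print(focal_from_horizon_and_vertical(v, l, (320.0, 240.0)))  # ~800
```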

    Autocalibration with the Minimum Number of Cameras with Known Pixel Shape

    In 3D reconstruction, the recovery of the calibration parameters of the cameras is paramount since it provides metric information about the observed scene, e.g., measures of angles and ratios of distances. Autocalibration enables the estimation of the camera parameters without a calibration device, by instead enforcing simple constraints on the camera parameters. In the absence of information about the internal camera parameters such as the focal length and the principal point, the knowledge of the camera pixel shape is usually the only available constraint. Given a projective reconstruction of a rigid scene, we address the problem of the autocalibration of a minimal set of cameras with known pixel shape and otherwise arbitrarily varying intrinsic and extrinsic parameters. We propose an algorithm that requires only 5 cameras (the theoretical minimum), thus halving the number of cameras required by previous algorithms based on the same constraint. To this purpose, we introduce as our basic geometric tool the six-line conic variety (SLCV), consisting of the set of planes intersecting six given lines of 3D space in points of a conic. We show that the set of solutions of the Euclidean upgrading problem for three cameras with known pixel shape can be parameterized in a computationally efficient way. This parameterization is then used to solve autocalibration from five or more cameras, reducing the three-dimensional search space to a two-dimensional one. We provide experiments with real images showing the good performance of the technique.
    Comment: 19 pages, 14 figures, 7 tables, J. Math. Imaging Vi
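
    For context, a hedged sketch of the standard algebra behind Euclidean upgrading with the known-pixel-shape constraint (the paper's SLCV parameterization itself is not reproduced here): the dual image of the absolute conic in each view is the projection of the dual absolute quadric, and square pixels impose two linear conditions on each image of the absolute conic ω_i.

```latex
% Projection of the dual absolute quadric into view i (standard result):
\omega_i^{*} \simeq P_i \, \Omega_\infty^{*} \, P_i^{\top},
\qquad
\omega_i \simeq \left(\omega_i^{*}\right)^{-1}.
% Known pixel shape (zero skew, unit aspect ratio) gives, per view,
(\omega_i)_{12} = 0,
\qquad
(\omega_i)_{11} = (\omega_i)_{22}.
% A Euclidean upgrade is a rank-3 \Omega_\infty^{*} consistent with these
% constraints; the rectifying homography H then satisfies
\Omega_\infty^{*} = H \, \mathrm{diag}(1,1,1,0) \, H^{\top}.
```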

    Methods and Geometry for Plane-Based Self-Calibration

    We consider the problem of camera self-calibration from images of a planar object with unknown Euclidean structure. The general case of possibly varying focal length is addressed. This problem is nonlinear in general. One of our contributions is a nonlinear approach that abstracts away the (possibly varying) focal length, resulting in a computationally efficient algorithm. In addition, it does not require a good initial estimate of the focal length, unlike previous approaches. As for the initialization of other parameters, we propose a practical approach that simply requires taking one image in a roughly fronto-parallel position. Closed-form solutions for various configurations of unknown intrinsic parameters are provided. Our methods are evaluated and compared to previous approaches using simulated and real images. Besides our practical contributions, we also provide a detailed geometrical interpretation of the principles underlying our approach.
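
    For contrast with the unknown-structure setting addressed here, a minimal sketch of the classical plane-based constraints when the plane's Euclidean structure is known (Zhang-style calibration): with H = [h1 h2 h3] mapping the metric plane to the image, each view contributes two linear constraints on the image of the absolute conic ω. The routine below only builds those constraint rows; names are illustrative.

```python
import numpy as np

def plane_constraint_rows(H):
    """Two linear constraints on the 6 parameters of the symmetric IAC
    omega = [[w1, w2, w4], [w2, w3, w5], [w4, w5, w6]] contributed by one
    world-plane-to-image homography H (known metric plane, Zhang-style):
        h1.T @ omega @ h2 = 0
        h1.T @ omega @ h1 - h2.T @ omega @ h2 = 0
    """
    def row(a, b):
        # Coefficients of a.T @ omega @ b in terms of (w1..w6).
        return np.array([a[0] * b[0],
                         a[0] * b[1] + a[1] * b[0],
                         a[1] * b[1],
                         a[0] * b[2] + a[2] * b[0],
                         a[1] * b[2] + a[2] * b[1],
                         a[2] * b[2]])
    h1, h2 = H[:, 0], H[:, 1]
    return np.vstack([row(h1, h2), row(h1, h1) - row(h2, h2)])

# Stacking rows from >= 3 views and taking the null space (e.g. via SVD)
# gives omega, from which K follows by Cholesky factorization.
```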

    Multiple View Geometry For Video Analysis And Post-production

    Multiple view geometry is the foundation of an important class of computer vision techniques for simultaneous recovery of camera motion and scene structure from a set of images. There are numerous important applications in this area. Examples include video post-production, scene reconstruction, registration, surveillance, tracking, and segmentation. In video post-production, which is the topic being addressed in this dissertation, computer analysis of the motion of the camera can replace the currently used manual methods for correctly aligning an artificially inserted object in a scene. However, existing single view methods typically require multiple vanishing points, and therefore would fail when only one vanishing point is available. In addition, current multiple view techniques, making use of either epipolar geometry or the trifocal tensor, do not fully exploit the properties of constant or known camera motion. Finally, there is no general solution to the problem of synchronizing N video sequences of distinct general scenes captured by cameras undergoing similar ego-motions, which is a necessary step for video post-production across different input videos. This dissertation proposes several advancements that overcome these limitations. These advancements are used to develop an efficient framework for video analysis and post-production in multiple cameras. In the first part of the dissertation, novel inter-image constraints are introduced that are particularly useful for scenes where minimal information is available. This result extends the current state-of-the-art in single view geometry techniques to situations where only one vanishing point is available. The property of constant or known camera motion is also exploited in this dissertation for applications such as calibration of a network of cameras in video surveillance systems, and Euclidean reconstruction from turn-table image sequences in the presence of zoom and focus. We then propose a new framework for the estimation and alignment of camera motions, including both simple (panning, tracking and zooming) and complex (e.g. hand-held) camera motions. Accuracy of these results is demonstrated by applying our approach to video post-production applications such as video cut-and-paste and shadow synthesis. As realistic image-based rendering problems, these applications require extreme accuracy in the estimation of camera geometry, the position and the orientation of the light source, and the photometric properties of the resulting cast shadows. In each case, the theoretical results are fully supported and illustrated by both numerical simulations and thorough experimentation on real data.

    Self-calibration of turntable sequences from silhouettes

    This paper addresses the problem of recovering both the intrinsic and extrinsic parameters of a camera from the silhouettes of an object in a turntable sequence. Previous silhouette-based approaches have exploited correspondences induced by epipolar tangents to estimate the image invariants under turntable motion and achieved a weak calibration of the cameras. It is known that the fundamental matrix relating any two views in a turntable sequence can be expressed explicitly in terms of the image invariants, the rotation angle, and a fixed scalar. It will be shown that the imaged circular points for the turntable plane can also be formulated in terms of the same image invariants and fixed scalar. This allows the imaged circular points to be recovered directly from the estimated image invariants, providing constraints for the estimation of the imaged absolute conic. The camera calibration matrix can thus be recovered. A robust method for estimating the fixed scalar from image triplets is introduced, and a method for recovering the rotation angles using the estimated imaged circular points and epipoles is presented. Using the estimated camera intrinsics and extrinsics, a Euclidean reconstruction can be obtained. Experimental results on real data sequences are presented, which demonstrate the high precision achieved by the proposed method. © 2009 IEEE.
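
    A hedged sketch of the final calibration step described above, assuming the imaged circular points have already been recovered: each imaged circular point I lies on the image of the absolute conic, I^T ω I = 0, whose real and imaginary parts give two linear constraints on ω; K is then recovered from ω by Cholesky factorization. This is the generic circular-points-to-K pipeline, not the paper's full constraint set, and the inputs are hypothetical.

```python
import numpy as np

def iac_row(a, b):
    """Coefficients of a.T @ omega @ b for symmetric omega packed as
    (w11, w12, w22, w13, w23, w33)."""
    return np.array([a[0]*b[0], a[0]*b[1] + a[1]*b[0], a[1]*b[1],
                     a[0]*b[2] + a[2]*b[0], a[1]*b[2] + a[2]*b[1], a[2]*b[2]])

def calibration_from_circular_points(circular_points):
    """Recover K from imaged circular points (complex 3-vectors).
    Each point I gives I.T @ omega @ I = 0; splitting into real and
    imaginary parts yields two real linear constraints on omega.
    Note: one conjugate pair only gives two of the five constraints on
    omega, so circular points of several planes (or extra constraints such
    as zero skew / unit aspect ratio) are needed in practice."""
    rows = []
    for I in circular_points:
        r = iac_row(np.asarray(I, dtype=complex), np.asarray(I, dtype=complex))
        rows.append(np.real(r))
        rows.append(np.imag(r))
    _, _, Vt = np.linalg.svd(np.vstack(rows))   # omega ~ null vector
    w11, w12, w22, w13, w23, w33 = Vt[-1]
    omega = np.array([[w11, w12, w13],
                      [w12, w22, w23],
                      [w13, w23, w33]])
    if omega[2, 2] < 0:                         # fix overall sign for Cholesky
        omega = -omega
    # omega = inv(K).T @ inv(K): the lower-triangular Cholesky factor of
    # omega is inv(K).T, so K = inv(L.T), normalized so that K[2, 2] = 1.
    L = np.linalg.cholesky(omega)
    K = np.linalg.inv(L.T)
    return K / K[2, 2]
```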

    1D camera geometry and its application to the self-calibration of circular motion sequences

    This paper proposes a novel method for robustly recovering the camera geometry of an uncalibrated image sequence taken under circular motion. Under circular motion, all the camera centers lie on a circle and the mapping from the plane containing this circle to the horizon line observed in the image can be modelled as a 1D projection. A 2×2 homography is introduced in this paper to relate the projections of the camera centers in two 1D views. It is shown that the two imaged circular points of the motion plane and the rotation angle between the two views can be derived directly from such a homography. This way of recovering the imaged circular points and rotation angles is intrinsically a multiple view approach, as all the sequence geometry embedded in the epipoles is exploited in the estimation of the homography for each view pair. This results in a more robust method compared to those computing the rotation angles using adjacent views only. The proposed method has been applied to self-calibrate turntable sequences using either point features or silhouettes, and highly accurate results have been achieved. © 2008 IEEE.
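
    A hedged sketch of the kind of computation described above, under the assumption that the estimated 2×2 homography is conjugate to a planar rotation, H ~ A R(θ) A^{-1}: its eigenvalues then form a complex-conjugate pair whose phase difference is twice the rotation angle, and its eigenvectors are the fixed points of the 1D mapping (playing the role of the imaged circular points on the horizon line). This is an illustration, not the paper's estimation procedure.

```python
import numpy as np

def angle_from_1d_homography(H2):
    """Rotation angle from a 2x2 homography assumed conjugate to a planar
    rotation, H2 ~ A @ R(theta) @ inv(A): eigenvalues are proportional to
    exp(+i*theta) and exp(-i*theta), so theta is half the phase difference.
    (Valid for angles below 90 degrees, since the overall scale/sign of a
    homography is unobservable.) The complex eigenvectors are the fixed
    points of the 1D mapping, in homogeneous 1D coordinates."""
    w, V = np.linalg.eig(H2)
    theta = 0.5 * abs(np.angle(w[0] / w[1]))
    fixed_points = (V[:, 0], V[:, 1])
    return theta, fixed_points

# Hypothetical check: conjugate a 30-degree rotation by a random 2x2 matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))
th = np.deg2rad(30.0)
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
H2 = A @ R @ np.linalg.inv(A)
print(np.rad2deg(angle_from_1d_homography(H2)[0]))   # ~30
```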

    Pose Invariant Gait Analysis And Reconstruction

    One of the unique advantages of human gait is that it can be perceived from a distance. A varied range of research has been undertaken within the field of gait recognition. However, in almost all circumstances subjects have been constrained to walk fronto-parallel to the camera with a single walking speed. In this thesis we show that gait has sufficient properties that allow us to exploit the structure of articulated leg motion within single view sequences, in order to remove the unknown subject pose and reconstruct the underlying gait signature, with no prior knowledge of the camera calibration. Articulated leg motion is approximately planar, since almost all of the perceived motion is contained within a single limb swing plane. The variation of motion out of this plane is subtle and negligible in comparison with the motion within this major plane. Subsequently, we can model human motion by employing a cardboard person assumption. A subject's body and leg segments may be represented by repeating spatio-temporal motion patterns within a set of bilaterally symmetric limb planes. The static features of gait are defined as quantities that remain invariant over the full range of walking motions. In total, we have identified nine static features of articulated leg motion, corresponding to the fronto-parallel view of gait, that remain invariant to the differences in the mode of subject motion. These features are hypothetically unique to each individual, and thus can be used as suitable parameters for biometric identification. We develop a stratified approach to linear trajectory gait reconstruction that uses the rigid bone lengths of planar articulated leg motion in order to reconstruct the fronto-parallel view of gait. Furthermore, subject motion commonly occurs within a fixed ground plane and is imaged by a static camera. In general, people tend to walk in straight lines with constant velocity. Imaged gait can then be split piecewise into natural segments of linear motion. If two or more sufficiently different imaged trajectories are available, then the calibration of the camera can be determined. Subsequently, the total pattern of gait motion can be globally parameterised for all subjects within an image sequence. We present the details of a sparse method that computes the maximum likelihood estimate of this set of parameters, and then conclude with a reconstruction-error analysis for an example image sequence of subject motion.
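
    A minimal illustration of one geometric ingredient mentioned above (not the thesis's full calibration): a straight, constant-velocity walking segment yields the vanishing point of its direction (via the 1D homography fitted to equally spaced imaged positions), and two segments with sufficiently different directions determine the ground-plane vanishing line as the join of their vanishing points. Function names and inputs are hypothetical.

```python
import numpy as np

def vanishing_point_of_trajectory(p0, p1, p2):
    """Vanishing point of a straight, constant-velocity trajectory from three
    image positions p0, p1, p2 observed at equal time intervals.
    The world-to-image map along the line is a 1D homography, so the image
    of the point at infinity follows from the three correspondences."""
    p0, p1, p2 = (np.asarray(p, float) for p in (p0, p1, p2))
    u = (p2 - p0) / np.linalg.norm(p2 - p0)    # direction along the imaged line
    t0, t1, t2 = 0.0, np.dot(p1 - p0, u), np.dot(p2 - p0, u)
    denom = t0 - 2.0 * t1 + t2                 # zero => affine viewing, VP at infinity
    t_inf = t1 + (t1 - t0) * (2.0 * t1 - 2.0 * t2) / denom
    vp = p0 + t_inf * u
    return np.array([vp[0], vp[1], 1.0])       # homogeneous image point

def ground_plane_vanishing_line(vp1, vp2):
    """Join of two vanishing points of distinct walking directions: the
    vanishing (horizon) line of the ground plane."""
    l = np.cross(vp1, vp2)
    return l / np.linalg.norm(l)
```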

    Global axis shape of magnetic clouds deduced from the distribution of their local axis orientation

    Coronal mass ejections (CMEs) are routinely tracked with imagers in interplanetary space, while magnetic cloud (MC) properties are measured locally, in situ, by spacecraft. However, neither imager nor in-situ data provide a direct estimate of the global flux-rope properties. The main aim of this study is to constrain the global shape of the flux-rope axis from local measurements, and to compare the results from in-situ data with imager observations. We perform a statistical analysis of the set of MCs observed by the WIND spacecraft over 15 years in the vicinity of Earth. Under the hypothesis that the sample of MCs has a uniform distribution of spacecraft crossings along their axes, we show that a mean axis shape can be derived from the distribution of the axis orientation. As a complement, while heliospheric imagers do not typically observe MCs but only their sheath region, we analyze one event where the flux-rope axis can be estimated from the STEREO imagers. From the analysis of a set of theoretical models, we show that the distribution of the local axis orientation is strongly affected by the global axis shape. Next, we derive the mean axis shape from the integration of the observed orientation distribution. This shape is robust as it is mostly determined by the global shape of the distribution. Moreover, we find no dependence on the flux-rope inclination with respect to the ecliptic. Finally, the derived shape is fully consistent with the one derived from heliospheric imager observations of the June 2008 event. We have derived a mean shape of the MC axis that depends on only one free parameter, the angular separation of the legs (as viewed from the Sun). This mean shape can be used in various contexts such as the study of high-energy particles or space-weather forecasting.
    Comment: 13 pages, 12 figure
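
    A hedged sketch of the generic reconstruction idea stated above, not the paper's exact procedure: if spacecraft crossings sample the axis uniformly in arc length, then the empirical distribution of the local axis orientation maps, through its cumulative distribution, onto normalized positions along the axis, and a mean shape follows by integrating the tangent direction. Variable names and the illustrative input are assumptions.

```python
import numpy as np

def mean_axis_shape(orientation_deg):
    """Reconstruct a mean axis shape from sampled local axis orientations.

    Assumes the crossings are uniformly distributed in arc length along the
    axis, so that after sorting, the i-th orientation corresponds to the
    i-th of n equal arc-length steps. The curve is then obtained by
    integrating the unit tangent (cos gamma, sin gamma) along arc length."""
    gamma = np.sort(np.deg2rad(np.asarray(orientation_deg, float)))
    ds = 1.0 / gamma.size                          # uniform arc-length steps
    x = np.concatenate(([0.0], np.cumsum(np.cos(gamma) * ds)))
    y = np.concatenate(([0.0], np.cumsum(np.sin(gamma) * ds)))
    return x, y                                    # arc length normalized to 1

# Illustrative input only: orientations spread from one leg to the other.
x, y = mean_axis_shape(np.linspace(-70.0, 70.0, 41))
```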