2 research outputs found

    Interpreting Sphere Images Using the Double-Contact Theorem

    An occluding contour of a sphere projects to a conic in the perspective image; such a conic is called a sphere image. It has recently been discovered that each sphere image is tangent to the image of the absolute conic at two double-contact image points. The double-contact theorem describes the properties of three conics that all have double contact with a fourth conic, and it provides a linear algorithm to recover that fourth conic when the three conics are given. In this paper, the double-contact theorem is employed to interpret the relationships among three sphere images and the image of the absolute conic. The image of the absolute conic can be determined from three sphere images using the double-contact theorem, which yields a linear calibration method from three sphere images. Only three sphere images are required, and all five intrinsic parameters are recovered linearly without assumptions such as zero skew or unit aspect ratio. Extensive experiments on simulated and real data show that our calibration method is an order of magnitude faster than previous optimization-based methods and slightly faster than previous linear methods, while maintaining comparable accuracy.
    Full record: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000235772300073&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=8e1609b174ce4e31116a60747a720701
    Categories: Computer Science, Artificial Intelligence; Computer Science, Theory & Methods; SCI(E); CPCI-S (ISTP)
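    The abstract states that, once the image of the absolute conic (IAC) omega has been estimated linearly from the three sphere images, all five intrinsic parameters follow from it. A minimal sketch of that final step only, assuming omega is already available and positive definite (the double-contact fitting itself is not reproduced here, and the synthetic K used below is illustrative, not from the paper):

    ```python
    import numpy as np

    def intrinsics_from_iac(omega):
        """Recover the upper-triangular calibration matrix K from the IAC,
        using omega = inv(K @ K.T), i.e. omega = inv(K).T @ inv(K)."""
        omega = 0.5 * (omega + omega.T)      # enforce symmetry
        omega = omega / omega[2, 2]          # fix the arbitrary projective scale
        L = np.linalg.cholesky(omega)        # omega = L @ L.T, L lower-triangular
        K = np.linalg.inv(L).T               # L = inv(K).T, so K = inv(L).T
        return K / K[2, 2]                   # normalize so K[2, 2] = 1

    # Synthetic check with all five intrinsic parameters non-trivial:
    K_true = np.array([[800.0,   2.0, 320.0],
                       [  0.0, 780.0, 240.0],
                       [  0.0,   0.0,   1.0]])
    omega = np.linalg.inv(K_true @ K_true.T)
    print(np.allclose(intrinsics_from_iac(omega), K_true))  # True
    ```

    The Cholesky factor is unique for a positive-definite omega, which is why the five parameters (two focal lengths, skew, and the principal point) are recovered without any zero-skew or unit-aspect-ratio assumption.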

    Extrinsic calibration of camera networks using a sphere

    In this paper, we propose a novel extrinsic calibration method for camera networks that uses a sphere as the calibration object. First, we propose an easy and accurate method to estimate the 3D position of the sphere center with respect to each local camera coordinate system. Then, we use orthogonal Procrustes analysis to estimate initial pairwise relative extrinsic parameters from these 3D position estimates. Finally, an optimization routine jointly refines the extrinsic parameters of all cameras. Compared to existing sphere-based 3D position estimators, which need to trace and analyse the outline of the sphere projection in the image, the proposed method requires only very simple image processing: estimating the area and the center of mass of the sphere projection. Our results demonstrate that we obtain a more accurate estimate of the extrinsic parameters than other sphere-based methods. While existing state-of-the-art calibration methods use point-like features and epipolar geometry, the proposed method uses the sphere-based 3D position estimates, which results in simpler computations and a more flexible and accurate calibration method. Experimental results show that the proposed approach is accurate, robust, flexible and easy to use.
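    A minimal sketch of the pairwise alignment step named in the abstract, orthogonal Procrustes (Kabsch) analysis on corresponding sphere-center positions. The depth formula is only a rough weak-perspective approximation for illustration, not necessarily the estimator proposed in the paper, and the names `sphere_radius`, `focal_length`, `P`, and `Q` are assumptions of this sketch:

    ```python
    import numpy as np

    def sphere_depth_from_area(area_px, sphere_radius, focal_length):
        """Rough weak-perspective approximation (illustration only): the sphere
        projects to roughly a circle of radius f*R/Z pixels, so Z ~ f*R*sqrt(pi/A)."""
        return focal_length * sphere_radius * np.sqrt(np.pi / area_px)

    def pairwise_extrinsics(P, Q):
        """Rigid transform (R, t) with q_i ~ R @ p_i + t via orthogonal Procrustes.
        P, Q: (N, 3) arrays of corresponding sphere-center positions expressed in
        the two cameras' local coordinate systems."""
        Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)           # remove centroids
        U, _, Vt = np.linalg.svd(Qc.T @ Pc)                        # cross-covariance
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])    # avoid reflections
        R = U @ D @ Vt
        t = Q.mean(axis=0) - R @ P.mean(axis=0)
        return R, t

    # Synthetic check: sphere centers seen by camera A, mapped into camera B
    # by a known rigid motion, then recovered from the point pairs alone.
    rng = np.random.default_rng(0)
    P = rng.uniform(-1.0, 1.0, size=(10, 3)) + np.array([0.0, 0.0, 3.0])
    R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    R_true *= np.sign(np.linalg.det(R_true))    # ensure a proper rotation
    t_true = np.array([0.2, -0.1, 0.5])
    Q = P @ R_true.T + t_true
    R_est, t_est = pairwise_extrinsics(P, Q)
    print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
    ```

    The joint refinement over all cameras mentioned in the abstract would then take these pairwise (R, t) estimates as the starting point of a non-linear optimization; that step is not sketched here.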