7 research outputs found

    Clustering Assisted Fundamental Matrix Estimation

    In computer vision, estimation of the fundamental matrix is a basic problem that has been extensively studied. The accuracy of the estimate strongly influences subsequent tasks such as camera trajectory determination and 3D reconstruction. In this paper we propose a new method for fundamental matrix estimation based on clustering a group of 4D vectors. The key insight is the observation that, among the 4D vectors constructed from matching pairs of points obtained with the SIFT algorithm, vectors forming well-defined clusters tend to correspond to reliable inliers suitable for fundamental matrix estimation. Based on this, we utilize a recently proposed efficient clustering method based on density peak seeking and propose a new clustering-assisted estimation method. Experimental results show that the proposed algorithm is faster and more accurate than currently common methods.

    Comment: 12 pages, 8 figures, 3 tables, Second International Conference on Computer Science and Information Technology (COSIT 2015), March 21-22, 2015, Geneva, Switzerland
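    A rough sketch of the pipeline the abstract describes, assuming OpenCV and NumPy; this is not the authors' implementation. The density scoring below is a simplified stand-in for the density-peaks clustering the paper builds on, and the cutoff distance dc and keep fraction are illustrative choices.

```python
# Sketch of clustering-assisted fundamental matrix estimation
# (simplified illustration; not the paper's exact algorithm).
import cv2
import numpy as np

def match_sift(img1, img2):
    """SIFT keypoints + ratio-test matching; returns Nx2 point arrays."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    p1 = np.float32([k1[m.queryIdx].pt for m in good])
    p2 = np.float32([k2[m.trainIdx].pt for m in good])
    return p1, p2

def dense_cluster_inliers(p1, p2, dc=20.0, keep=0.5):
    """Stack each match as a 4D vector (x1, y1, x2, y2) and keep the
    locally densest fraction -- a crude proxy for 'points that fall in
    well-defined clusters'. dc and keep are illustrative parameters."""
    v = np.hstack([p1, p2])                        # N x 4
    dist = np.linalg.norm(v[:, None] - v[None, :], axis=2)
    rho = (dist < dc).sum(axis=1) - 1              # local density
    order = np.argsort(-rho)
    return order[: max(8, int(keep * len(v)))]     # >= 8 for the 8-point method

def estimate_F(img1, img2):
    p1, p2 = match_sift(img1, img2)
    idx = dense_cluster_inliers(p1, p2)
    F, _ = cv2.findFundamentalMat(p1[idx], p2[idx], cv2.FM_8POINT)
    return F
```

    The full density-peaks method also considers each point's distance to denser neighbors when identifying cluster centers; the sketch keeps only the local-density criterion for brevity.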

    Markerless Tracking Using Polar Correlation Of Camera Optical Flow

    We present a novel, real-time, markerless vision-based tracking system employing a rigid orthogonal configuration of two pairs of opposing cameras. Our system uses optical flow over sparse features to overcome the limitation of vision-based systems that require markers or a pre-loaded model of the physical environment. We show how opposing cameras enable cancellation of common components of optical flow, leading to an efficient tracking algorithm that captures five degrees of freedom, including direction of translation and angular velocity. Experiments comparing our device with an electromagnetic tracker show that its average tracking accuracy is 80% over 185 frames, and that it is able to track large-range motions even in outdoor settings. We also show how opposing cameras in vision-based inside-looking-out systems can be used for gesture recognition. To demonstrate our approach, we discuss three different algorithms that recover the motion parameters to different degrees of completeness, and we show how optical flow in opposing cameras can be used to recover the motion parameters of the multi-camera rig. Experimental results show gesture recognition accuracies of 88.0%, 90.7%, and 86.7% for the three techniques, respectively, across a set of 15 gestures.
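    As a loose illustration of the flow-cancellation idea for one opposing camera pair (not the paper's algorithm), the sketch below averages Lucas-Kanade sparse flow in each camera and then separates the shared and opposing components of the two mean flows. Which component reflects rotation and which reflects translation depends on how the cameras are mounted, so that interpretation is left as an assumption.

```python
# Flow cancellation for one opposing camera pair (illustrative sketch;
# sign conventions depend on the actual rig geometry).
import cv2
import numpy as np

def mean_sparse_flow(prev, curr):
    """Mean optical-flow vector over Shi-Tomasi features tracked with
    pyramidal Lucas-Kanade; prev and curr are grayscale frames."""
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)
    good = status.ravel() == 1
    return (nxt[good] - pts[good]).reshape(-1, 2).mean(axis=0)

def split_components(flow_cam_a, flow_cam_b):
    """Separate the mean flows of two opposing cameras into the
    component they share and the component that cancels between them;
    mapping these to rotation vs. translation is rig-dependent."""
    common = 0.5 * (flow_cam_a + flow_cam_b)
    opposing = 0.5 * (flow_cam_a - flow_cam_b)
    return common, opposing
```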

    Real-Time Virtual Viewpoint Generation on the GPU for Scene Navigation


    Estimation of the epipole using optical flow at antipodal points

    We present algorithms for estimating the epipole, or direction of translation, of a moving camera. We use constraints arising from two points that are antipodal on the image sphere in order to decouple rotation from translation. One pair of antipodal points constrains the epipole to lie on a plane, and two such pairs correspondingly give two planes; the intersection of these two planes is an estimate of the epipole. This means we require image motion measurements at two pairs of antipodal points to obtain an estimate. Two classes of algorithms are possible, and we present two simple yet extremely robust algorithms, one representative of each class. These are shown to have accuracy comparable to the state of the art when tested in simulation under noise and on real image sequences.
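    The two-plane construction admits a compact sketch. Under the standard spherical motion-field model, summing the flows measured at r and at -r cancels the rotational part, leaving a vector coplanar with r and the translation direction, so each antipodal pair yields a plane through the origin and the epipole is along the intersection of the two planes. This derivation is our reading of the construction, not the paper's exact formulation.

```python
# Epipole from optical flow at two antipodal point pairs (sketch,
# based on the spherical motion-field model; not the paper's code).
import numpy as np

def epipole_from_antipodal(r1, f1_pos, f1_neg, r2, f2_pos, f2_neg):
    """r1, r2: unit view directions on the image sphere.
    f*_pos / f*_neg: flow vectors measured at r and at -r.
    Returns a unit estimate of the translation direction (up to sign)."""
    s1 = f1_pos + f1_neg           # rotational flow cancels in the sum
    s2 = f2_pos + f2_neg
    n1 = np.cross(r1, s1)          # normal of the plane containing t
    n2 = np.cross(r2, s2)
    t = np.cross(n1, n2)           # intersection line of the two planes
    norm = np.linalg.norm(t)
    return t / norm if norm > 0 else t
```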