
    Hierarchical structure-and-motion recovery from uncalibrated images

    This paper addresses the structure-and-motion problem, which requires recovering camera motion and 3D structure from point matches. A new pipeline, dubbed Samantha, is presented that departs from the prevailing sequential paradigm and instead embraces a hierarchical approach. This method has several advantages, such as a provably lower computational complexity, which is necessary to achieve true scalability, and better error containment, leading to more stability and less drift. Moreover, a practical autocalibration procedure allows images to be processed without ancillary information. Experiments with real data assess the accuracy and the computational efficiency of the method. Comment: Accepted for publication in CVI
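
    The hierarchical idea can be illustrated with a small Python sketch: instead of growing a single reconstruction image by image, partial reconstructions are merged pairwise, bottom-up, by greatest feature-track overlap. The cluster representation, similarity measure and merge step below are placeholder assumptions for illustration, not the Samantha pipeline itself.

        # Agglomerative sketch: merge partial reconstructions bottom-up by overlap.
        from itertools import combinations

        def overlap(a, b):
            # Similarity between two clusters = number of shared feature tracks.
            return len(a["tracks"] & b["tracks"])

        def merge(a, b):
            # Placeholder merge; a real pipeline would align the two partial
            # reconstructions with a similarity transform and re-run bundle
            # adjustment on the union of cameras and points.
            return {"images": a["images"] | b["images"],
                    "tracks": a["tracks"] | b["tracks"]}

        def hierarchical_sfm(image_tracks):
            # Start with one trivial cluster per image.
            clusters = [{"images": {i}, "tracks": set(t)} for i, t in image_tracks.items()]
            while len(clusters) > 1:
                # Greedily merge the pair of clusters sharing the most tracks.
                i, j = max(combinations(range(len(clusters)), 2),
                           key=lambda p: overlap(clusters[p[0]], clusters[p[1]]))
                merged = merge(clusters[i], clusters[j])
                clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
            return clusters[0]

        # Toy example: four images with overlapping feature tracks.
        tracks = {0: [1, 2, 3], 1: [2, 3, 4], 2: [4, 5, 6], 3: [5, 6, 7]}
        print(hierarchical_sfm(tracks)["images"])  # {0, 1, 2, 3}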

    Object Detection and Tracking Using Uncalibrated Cameras

    This thesis considers the problem of tracking an object in world coordinates using measurements obtained from multiple uncalibrated cameras. A general approach to tracking the location of a target involves several phases, including calibrating the cameras, detecting the object's feature points over frames, tracking the object over frames, and analyzing the object's motion and behavior. The approach contains two stages. First, the problem of camera calibration using a calibration object is studied. This approach retrieves the camera parameters from the known 3D locations of ground data and their corresponding image coordinates. The second part of this work develops an automated system to estimate the trajectory of the object in 3D from image sequences. This is achieved by combining, adapting and integrating several state-of-the-art algorithms. Synthetic data based on a nearly constant velocity object motion model is used to evaluate the performance of the camera calibration and state estimation algorithms.
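
    The object-based calibration stage can be sketched with a textbook Direct Linear Transform in Python/NumPy: the 3x4 projection matrix is estimated from known 3D points and their image projections, then verified on synthetic data. This is a generic formulation under assumed data, not necessarily the thesis' exact procedure.

        # Direct Linear Transform: recover the 3x4 projection matrix P from
        # known 3D points of a calibration object and their image coordinates.
        import numpy as np

        def dlt_projection_matrix(X, x):
            # X: (N, 3) known 3D points (N >= 6, non-coplanar); x: (N, 2) pixels.
            A = []
            for (Xw, Yw, Zw), (u, v) in zip(X, x):
                A.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u * Xw, -u * Yw, -u * Zw, -u])
                A.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v * Xw, -v * Yw, -v * Zw, -v])
            # Solution = right singular vector of the smallest singular value.
            _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
            return Vt[-1].reshape(3, 4)

        # Synthetic check: project points with a known P and recover it.
        rng = np.random.default_rng(0)
        P_true = np.array([[800, 0, 320, 10], [0, 800, 240, 20], [0, 0, 1, 4]], float)
        X = rng.uniform(0, 1, (10, 3))
        xh = (P_true @ np.hstack([X, np.ones((10, 1))]).T).T
        x = xh[:, :2] / xh[:, 2:]
        P_est = dlt_projection_matrix(X, x)
        print(np.allclose(P_est / P_est[2, 3], P_true / P_true[2, 3], atol=1e-6))  # True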

    Reconstruction of the pose of uncalibrated cameras via user-generated videos

    Extraction of 3D geometry from hand-held, unsteady, uncalibrated cameras faces multiple difficulties: finding usable frames, feature matching, and unknown, variable focal length, to name three. We have built a prototype system that allows a user to spatially navigate playback viewpoints of an event of interest, using geometry automatically recovered from casually captured videos. The system, whose workings we present in this paper, necessarily estimates not only scene geometry but also relative viewpoint position, overcoming the aforementioned difficulties in the process. The only inputs required are video sequences from various viewpoints of a common scene, as are readily available online from sporting and music events. Our methods make no assumption about the synchronization of the input and do not require file metadata, instead exploiting the video itself to self-calibrate. The footage need only contain some camera rotation with little translation, a likely occurrence for hand-held event footage. This is the author accepted manuscript. The final version is available from IEEE via http://dx.doi.org/10.1145/2659021.265902
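
    A minimal NumPy sketch of the rotation-based self-calibration idea the abstract alludes to: between two frames of a (nearly) rotation-only camera, the inter-frame homography satisfies H ~ K R K^-1, so a focal length can be chosen that brings K^-1 H K closest to a rotation. The grid search and the synthetic homography below are illustrative assumptions, not the paper's actual estimator.

        # Rotation-based self-calibration sketch: for a rotation-only frame pair
        # the inter-frame homography satisfies H ~ K R K^-1, so the focal length
        # is the value that makes K^-1 H K closest to a rotation matrix.
        import numpy as np

        def rotation_error(H, f, cx, cy):
            # Deviation of the scale-normalised K^-1 H K from orthonormality.
            K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])
            R = np.linalg.inv(K) @ H @ K
            R /= np.cbrt(np.linalg.det(R))  # fix the scale so that det(R) = 1
            return np.linalg.norm(R @ R.T - np.eye(3))

        def estimate_focal(H, cx, cy, candidates):
            return min(candidates, key=lambda f: rotation_error(H, f, cx, cy))

        # Synthetic rotation-only pair: build H from a known K and rotation R.
        f_true, cx, cy = 900.0, 640.0, 360.0
        K = np.array([[f_true, 0, cx], [0, f_true, cy], [0, 0, 1.0]])
        a = np.deg2rad(8)
        R = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
        H = K @ R @ np.linalg.inv(K)
        print(estimate_focal(H, cx, cy, np.arange(300, 2000, 10)))  # 900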

    Three dimensional information estimation and tracking for moving objects detection using two cameras framework

    Calibration, matching and tracking are the major concerns in obtaining 3D information consisting of depth, direction and velocity. In finding depth, camera parameters and matched points are two necessary inputs. Depth, direction and matched points can be obtained accurately if the cameras are well calibrated using traditional manual calibration. However, most traditional manual calibration methods are inconvenient to use because markers or the real-world size of an object must be provided or known. Self-calibration can overcome this limitation of traditional calibration, but it does not by itself provide depth or matched points. Other approaches attempt to match corresponding objects using 2D visual information without calibration, but they suffer from low matching accuracy under large perspective distortion. This research focuses on obtaining 3D information using a self-calibrated tracking system, in which matching and tracking are performed under self-calibrated conditions. Three contributions are introduced to achieve this objective. Firstly, orientation correction is introduced to obtain better relationship matrices for matching during tracking. Secondly, once the relationship matrices are available, a post-processing method, status-based matching, is introduced to improve the object matching result; the proposed matching algorithm achieves a matching rate of almost 90%. Depth is estimated after status-based matching. Thirdly, tracking is performed based on x-y coordinates and the estimated depth under self-calibrated conditions. Results show that the proposed self-calibrated tracking system successfully differentiates the locations of objects even under occlusion in the field of view, and is able to determine the direction and velocity of multiple moving objects.
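
    The depth-estimation step can be illustrated with linear two-view triangulation in NumPy: given the two cameras' projection matrices and a matched point pair, the 3D point, and hence its depth, follows from a small homogeneous system. The camera matrices and the point below are synthetic placeholders, not the thesis' self-calibrated values.

        # Linear two-view triangulation: recover a 3D point (and its depth) from
        # two projection matrices and one matched image point pair.
        import numpy as np

        def triangulate(P1, P2, x1, x2):
            # Homogeneous system built from x_i cross (P_i X) = 0 (two rows per view).
            A = np.array([x1[0] * P1[2] - P1[0],
                          x1[1] * P1[2] - P1[1],
                          x2[0] * P2[2] - P2[0],
                          x2[1] * P2[2] - P2[1]])
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]
            return X[:3] / X[3]  # inhomogeneous 3D point; depth is the Z component

        def project(P, X):
            xh = P @ np.append(X, 1.0)
            return xh[:2] / xh[2]

        # Two synthetic cameras with the same intrinsics and a 0.5-unit baseline.
        K = np.array([[700, 0, 320], [0, 700, 240], [0, 0, 1.0]])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
        X_true = np.array([0.2, -0.1, 4.0])  # point 4 units in front of camera 1
        X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
        print(np.round(X_est, 3))  # [ 0.2 -0.1  4. ]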