3 research outputs found

    A feature-based approach for monocular camera tracking in unknown environments

    © 2017 IEEE. Camera tracking is an important problem in many computer vision and robotics applications, such as augmented reality and Simultaneous Localization And Mapping (SLAM). In this paper, a feature-based technique for monocular camera tracking is proposed. The approach tracks a set of sparse features successively through a stream of video frames. In the developed system, the camera initially views a chessboard of known cell size for a few frames so that an initial map of the environment can be constructed. Thereafter, camera pose estimation for each new incoming frame is carried out in a framework that works only with a set of visible natural landmarks. The 6-DOF camera pose parameters are estimated using a particle filter. Moreover, to recover the depth of newly detected landmarks, a linear triangulation method is used. The proposed method is applied to real-world videos, and the positioning error of the camera pose is less than 3 cm on average, which indicates the effectiveness and accuracy of the proposed method.
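    The linear triangulation step mentioned above can be sketched with the standard DLT formulation; this is an illustrative assumption (the paper's exact variant is not given in the abstract), with hypothetical projection matrices `P1`/`P2` and a synthetic landmark:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one landmark from two views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : 2D image observations of the same landmark.
    Returns the 3D point in inhomogeneous coordinates.
    """
    # Each observation contributes two rows of the homogeneous system A X = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic check: project a known point through two cameras, then recover it.
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # baseline of 1 unit
X_true = np.array([0.5, 0.2, 4.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

    With noise-free observations the recovered point matches the ground truth exactly; with real image noise the DLT solution is typically refined by a nonlinear step.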

    Camera pose estimation in unknown environments using a sequence of wide-baseline monocular images

    In this paper, a feature-based technique for camera pose estimation in a sequence of wide-baseline images is proposed. Camera pose estimation is an important problem in many computer vision and robotics applications, such as augmented reality and visual SLAM. The proposed method can track images taken by a hand-held camera in room-sized workspaces with a maximum scene depth of 3-4 meters. The system can be used in unknown environments with no additional information from the outside world, except for the first two images, which are used for initialization. Pose estimation is performed using only natural feature points extracted and matched in successive images. In wide-baseline images, unlike consecutive frames of a video stream, the displacement of feature points between images is substantial and therefore cannot easily be traced using patch-based methods. To handle this problem, a hybrid strategy is employed to obtain accurate feature correspondences: initial correspondences are first found using the similarity of their descriptors, and outlier matches are then removed by applying the RANSAC algorithm. Furthermore, to provide the required set of feature matches, a mechanism based on a side product of the robust estimator is employed. The proposed method is applied to real indoor data with VGA-quality images (640×480 pixels); on average, the translation error of the camera pose is less than 2 cm, which indicates the effectiveness and accuracy of the proposed approach.
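    The match-then-reject strategy described above can be sketched with a toy RANSAC loop. This is an assumption for illustration only: the model here is a pure 2D translation as a stand-in for the geometric models (homography or essential matrix) actually used for wide-baseline matching, and the correspondences are synthetic rather than descriptor-based:

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_translation(pts1, pts2, iters=200, tol=2.0):
    """Toy RANSAC: hypothesize a 2D translation from one match,
    score it by its consensus set, and refit on the inliers."""
    best_inliers = np.zeros(len(pts1), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(pts1))
        t = pts2[i] - pts1[i]                         # 1-point hypothesis
        resid = np.linalg.norm(pts1 + t - pts2, axis=1)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit the model on the winning consensus set.
    t = (pts2[best_inliers] - pts1[best_inliers]).mean(axis=0)
    return t, best_inliers

# Synthetic correspondences: 40 true matches shifted by (10, -5),
# plus 10 gross outliers mimicking wrong descriptor matches.
pts1 = rng.uniform(0, 100, (50, 2))
pts2 = pts1 + np.array([10.0, -5.0])
pts2[40:] += rng.uniform(50, 100, (10, 2))
t, inliers = ransac_translation(pts1, pts2)
```

    The same two-stage pattern (cheap hypotheses from minimal samples, then a refit on the consensus set) carries over directly to the homography and essential-matrix models used in practice.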

    A recursive camera resectioning technique for off-line video-based augmented reality

    In this paper, we propose a new recursive framework for camera resectioning and apply it to off-line video-based augmented reality. Our method is based on an unscented particle filter and an independent Metropolis-Hastings chain, which handle nonlinear dynamic systems without local linearization and lead to more accurate results than other nonlinear filters. The proposed method has several desirable properties for camera resectioning: since it does not rely on erroneous linear solutions, initialization problems do not occur, in contrast to previous resectioning methods. Jittering error is reduced by enforcing consistency and coherency between adjacent frames in our recursive framework. Our method is fairly accurate, comparable to nonlinear optimization methods, which in general have higher computation and complexity. As a result, the proposed algorithm outperforms the standard camera resectioning algorithm. We verify the effectiveness of our method through several experiments on synthetic and real image sequences, comparing its estimation performance with other linear and nonlinear methods. (c) 2006 Elsevier B.V. All rights reserved.
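    The recursive filtering idea shared by these abstracts can be illustrated with a minimal bootstrap particle filter. This is a deliberately simplified stand-in, not the unscented particle filter of the paper: the state is a 1D random walk rather than a 6-DOF camera pose, and the noise parameters are made up for the demo:

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(observations, n_particles=2000, q=0.05, r=0.1):
    """Minimal bootstrap particle filter for the toy model
        x_t = x_{t-1} + N(0, q^2)   (motion model)
        z_t = x_t     + N(0, r^2)   (observation model)
    Returns the posterior-mean estimate of x_t at each step."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in observations:
        particles = particles + rng.normal(0.0, q, n_particles)  # propagate
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)            # likelihood weights
        w /= w.sum()
        estimates.append(np.sum(w * particles))                  # posterior mean
        # Multinomial resampling to avoid weight degeneracy.
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)

# Track a slowly drifting state from noisy observations.
true_x = np.cumsum(rng.normal(0.0, 0.05, 50))
obs = true_x + rng.normal(0.0, 0.1, 50)
est = particle_filter(obs)
```

    The unscented and Metropolis-Hastings refinements in the paper address exactly the weaknesses of this naive sampler: a better proposal distribution and extra moves to keep the particle set diverse.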