12 research outputs found

    Outlier detection in video sequences under affine projection

    A novel robust method for outlier detection in structure and motion recovery for affine cameras is presented. It is an extension of the well-known Tomasi-Kanade factorization technique (C. Tomasi and T. Kanade, 1992) designed to handle outliers. It can also be seen as importing the LMedS technique or RANSAC into the factorization framework. Based on the computation of distances between subspaces, it relates closely to the subspace-based factorization methods for the perspective case presented by G. Sparr (1996) and others and to the subspace-based factorization for affine cameras with missing data by D. Jacobs (1997). Key features of the presented method are its ability to compare different subspaces and the complete automation of the detection and elimination of outliers. Its performance and effectiveness are demonstrated by experiments involving simulated and real video sequences.
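    As a rough illustration of the two ingredients the abstract names (a distance between subspaces and a RANSAC-style consensus search over the factorization), the sketch below compares column spaces via principal angles and selects inlier point tracks from a measurement matrix with NumPy. The rank-4 affine subspace, the function names, and the relative threshold are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

def subspace_distance(A, B):
    """Largest principal angle between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    # Singular values of Qa^T Qb are the cosines of the principal angles.
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s.min(), -1.0, 1.0))

def ransac_inlier_tracks(W, rank=4, n_trials=200, rel_thresh=0.05, seed=0):
    """Sample random minimal sets of point tracks (columns of the 2F x P measurement
    matrix W), fit a rank-`rank` subspace, and keep the largest consensus set of
    tracks lying close to it."""
    rng = np.random.default_rng(seed)
    best = np.array([], dtype=int)
    col_norms = np.linalg.norm(W, axis=0)
    for _ in range(n_trials):
        cols = rng.choice(W.shape[1], size=rank, replace=False)
        U, _, _ = np.linalg.svd(W[:, cols], full_matrices=False)
        basis = U[:, :rank]
        # Residual of every track against the candidate subspace.
        resid = np.linalg.norm(W - basis @ (basis.T @ W), axis=0)
        inliers = np.flatnonzero(resid < rel_thresh * col_norms)
        if inliers.size > best.size:
            best = inliers
    return best
```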

    Auto-calibration via the absolute quadric and scene constraints

    A scheme is described for incorporating scene constraints into the structure-from-motion problem. Specifically, the absolute quadric is recovered with constraints imposed by orthogonal scene planes. The scheme involves a number of steps. A projective reconstruction is first obtained, followed by a linear technique to form an initial estimate of the absolute quadric. A nonlinear iteration then refines this quadric and the camera intrinsic parameters to upgrade the projective reconstruction to Euclidean. Finally, a bundle adjustment algorithm optimizes the Euclidean reconstruction to give a statistically optimal result. This chain of algorithms is essentially the same as that used in auto-calibration, and the novelty of this paper is the inclusion of orthogonal scene plane constraints in each step. The algorithms involved are demonstrated on both simulated and real data, showing the performance and usability of the proposed scheme.
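    The linear step mentioned above can be sketched as follows: each projective camera P satisfies omega* ~ P Omega* P^T up to scale, and assumptions on the intrinsics turn entries of omega* into linear equations in the ten parameters of the symmetric 4x4 absolute quadric Omega*. The intrinsic assumptions used below (zero skew, unit aspect ratio, principal point at the image origin), the need for three or more cameras, and the omission of the paper's orthogonal-plane constraints and of rank-3 enforcement are simplifications for illustration only.

```python
import numpy as np

def quadric_row(P, a, b):
    """Coefficient row r such that (P Omega* P^T)[a, b] = r . q, where q stacks the
    10 upper-triangular entries of the symmetric 4x4 Omega* in row-major order."""
    row = np.zeros(10)
    k = 0
    for i in range(4):
        for j in range(i, 4):
            c = P[a, i] * P[b, j]
            if i != j:
                c += P[a, j] * P[b, i]
            row[k] = c
            k += 1
    return row

def estimate_absolute_quadric(cameras):
    """Linear estimate of Omega* from a list of 3x4 projective cameras, assuming zero
    skew, unit aspect ratio, and the principal point at the image origin."""
    A = []
    for P in cameras:
        A.append(quadric_row(P, 0, 1))                          # omega*_01 = 0 (zero skew)
        A.append(quadric_row(P, 0, 2))                          # omega*_02 = 0 (principal point)
        A.append(quadric_row(P, 1, 2))                          # omega*_12 = 0 (principal point)
        A.append(quadric_row(P, 0, 0) - quadric_row(P, 1, 1))   # omega*_00 = omega*_11 (unit aspect)
    _, _, Vt = np.linalg.svd(np.asarray(A))
    q = Vt[-1]                                # null vector of the stacked constraints
    Omega = np.zeros((4, 4))
    Omega[np.triu_indices(4)] = q
    Omega = Omega + Omega.T - np.diag(np.diag(Omega))
    return Omega                              # rank-3 correction left to the nonlinear step
```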

    Euclidean reconstruction from an image triplet: a sensitivity analysis

    This paper studies the sensitivity of Euclidean reconstruction from an image triplet taken by an uncalibrated camera mounted on a robot arm. The idea of such a reconstruction is closely related to that proposed by Zisserman et al. (1995). In this paper, we focus on an intermediate step of the reconstruction procedure, which requires estimating the screw axis that corresponds to the defective eigenvector of a 4×4 matrix. Hundreds of synthetic tests show that the algorithm is very sensitive to image noise and to perturbations of the camera motions, and that, if the matrix is perturbed by Gaussian noise, the reliability of the computed screw axis can be estimated.
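    A hedged sketch of the kind of Monte Carlo sensitivity test the abstract alludes to: perturb a nominal 4x4 screw-motion matrix with Gaussian noise and measure how far the eigenvector associated with the eigenvalue closest to 1 (whose direction part is the screw axis) moves. The screw parameters, the noise level, and the choice of the z-axis are arbitrary placeholders, not values from the paper.

```python
import numpy as np

def screw_matrix(theta, d):
    """4x4 rigid motion: rotation by theta about the z-axis plus translation d along it."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    T[2, 3] = d
    return T

def axis_direction(T):
    """Screw-axis direction: eigenvector of the 4x4 matrix for the eigenvalue closest
    to 1 (a point at infinity along the axis when the translation d is nonzero)."""
    w, V = np.linalg.eig(T)
    v = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    return v[:3] / np.linalg.norm(v[:3])

rng = np.random.default_rng(0)
T0 = screw_matrix(theta=0.3, d=0.1)
a0 = axis_direction(T0)
errors = []
for _ in range(1000):
    T = T0 + rng.normal(scale=1e-3, size=(4, 4))   # Gaussian perturbation of the matrix
    a = axis_direction(T)
    errors.append(np.degrees(np.arccos(np.clip(abs(a @ a0), 0.0, 1.0))))
print(f"mean axis deviation {np.mean(errors):.3f} deg, worst case {np.max(errors):.3f} deg")
```

    Because the unit eigenvalue is defective, small perturbations can split it and move the associated eigenvector noticeably, which is the sensitivity the paper quantifies.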

    Semi-automatic metric reconstruction of buildings from self-calibration: preliminary results on the evaluation of a linear camera self-calibration method

    In this paper, we investigate the linear self-calibration method proposed by Newsam et al. [7] for our project on 3D reconstruction of architectural buildings. This self-calibration method assumes that the principal point is known and that the camera has square pixels and no skew. It allows 3D shape to be reconstructed from two images while giving the camera the freedom to vary its focal length. Since the paper by Newsam et al. reports only the theoretical work on camera self-calibration, we evaluate the focal lengths obtained from their method on both synthetic and real data. For the real data, where known 3D data are available, Tsai's calibration method is used for comparison. Our experimental results show that the focal lengths from the two methods differed by less than 5% and that the reconstructed 3D shape was very good in that angles were well preserved. Future research will focus on improving 3D reconstruction in the presence of small image noise and on developing this method into a package for 3D reconstruction of buildings that can be used by a layperson.
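    A minimal sketch of the kind of evaluation reported above, assuming hypothetical variables f_selfcal and f_tsai for the two focal-length estimates and hypothetical reconstructed 3D corner points of a building; it illustrates only the relative-error and angle-preservation checks, not the self-calibration algorithm itself.

```python
import numpy as np

def relative_focal_error(f_selfcal, f_tsai):
    """Relative disagreement between self-calibrated and Tsai-calibrated focal lengths."""
    return abs(f_selfcal - f_tsai) / f_tsai

def angle_deg(p, q, r):
    """Angle at q (degrees) formed by reconstructed 3D points p, q, r."""
    u, v = p - q, r - q
    c = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# Hypothetical usage: a right-angled building corner should come out near 90 degrees,
# and the two focal-length estimates should agree to within a few percent.
# assert relative_focal_error(f_selfcal, f_tsai) < 0.05
# assert abs(angle_deg(P_left, P_corner, P_right) - 90.0) < 2.0
```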

    Calibrating a structured light stripe system: A novel approach

    No full text
    The problem associated with calibrating a structured light stripe system is that known world points on the calibration target do not normally fall onto every light stripe plane illuminated from the projector. We present in this paper a novel calibration method that employs the invariance of the cross ratio to overcome this problem. Using 4 known non-coplanar sets of 3 collinear world points, and with no prior knowledge of the perspective projection matrix of the camera, we show that world points lying on each light stripe plane can be computed. Furthermore, by incorporating the homography between the light stripe and image planes, the 4 × 3 image-to-world transformation matrix for each stripe plane can also be recovered. The experiments conducted suggest that this novel calibration method is robust, economical, and applicable to many dense shape reconstruction tasks.
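    A sketch of the cross-ratio step described above, under the assumption that three known collinear world points and the stripe's intersection with their supporting line are all identified in the image (in matching order); the parameterisation by signed distances along each line and the helper names are illustrative, not the paper's notation.

```python
import numpy as np

def line_params(points, origin, direction):
    """Signed parameter of each point along the line origin + t * direction."""
    d = direction / np.linalg.norm(direction)
    return np.array([(p - origin) @ d for p in points])

def cross_ratio(t1, t2, t3, t4):
    """Cross ratio (t1, t2; t3, t4) of four collinear points from their signed parameters."""
    return ((t3 - t1) * (t4 - t2)) / ((t3 - t2) * (t4 - t1))

def stripe_point_on_line(world_pts, image_pts, stripe_image_pt):
    """Recover the world position where the light stripe crosses the line through three
    known collinear world points, using invariance of the cross ratio under projection."""
    # Cross ratio of the four image points (three known points plus stripe intersection).
    u = line_params(list(image_pts) + [stripe_image_pt],
                    image_pts[0], image_pts[-1] - image_pts[0])
    k = cross_ratio(*u)
    # Solve the same cross-ratio equation for the unknown world parameter t4.
    t1, t2, t3 = line_params(world_pts, world_pts[0], world_pts[-1] - world_pts[0])
    A, B = k * (t3 - t2), (t3 - t1)
    t4 = (A * t1 - B * t2) / (A - B)
    d = world_pts[-1] - world_pts[0]
    return world_pts[0] + t4 * d / np.linalg.norm(d)
```

    Repeating this for the non-coplanar point sets mentioned in the abstract yields world points on each stripe plane, which is the input the subsequent homography and image-to-world steps would build on.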

    Efficiency Analysis of Object Position and Orientation Detection Algorithms

    No full text

    Robust Instance Recognition in Presence of Occlusion and Clutter

    No full text

    Analyzing and Evaluating Markerless Motion Tracking Using Inertial Sensors

    No full text
    In this paper, we introduce a novel framework for automatically evaluating the quality of 3D tracking results obtained from markerless motion capturing. In our approach, we use additional inertial sensors to generate suitable reference information. In contrast to previously used marker-based evaluation schemes, inertial sensors are inexpensive, easy to operate, and impose comparatively weak additional constraints on the overall recording setup with regard to location, recording volume, and illumination. On the downside, acceleration and rate-of-turn data as obtained from such inertial systems turn out to be unsuitable representations for tracking evaluation. As our main contribution, we show how tracking results can be analyzed and evaluated on the basis of suitable limb orientations, which can be derived from 3D tracking results as well as from enhanced inertial sensors fixed on these limbs. Our experiments on various motion sequences of different complexity demonstrate that such limb orientations constitute a suitable mid-level representation for robustly detecting most of the tracking errors. In particular, our evaluation approach also reveals misconfigurations and twists of the limbs that can hardly be detected with traditional evaluation metrics.
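    To make the limb-orientation comparison concrete, here is a minimal sketch (not the authors' implementation) that compares per-frame limb rotations from the tracker and from an inertial sensor, assuming both are already expressed in a common reference frame, and flags frames whose geodesic angular deviation exceeds a threshold; the 20-degree threshold is an arbitrary placeholder.

```python
import numpy as np

def rotation_angle_deg(Ra, Rb):
    """Geodesic angle (degrees) between two 3x3 rotation matrices."""
    R = Ra.T @ Rb
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(c))

def flag_tracking_errors(track_R, imu_R, thresh_deg=20.0):
    """track_R, imu_R: sequences of per-frame 3x3 limb orientations in a common
    reference frame. Returns indices of suspect frames and the per-frame errors."""
    errs = np.array([rotation_angle_deg(a, b) for a, b in zip(track_R, imu_R)])
    return np.flatnonzero(errs > thresh_deg), errs
```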