101 research outputs found

    Structure from Motion with Higher-level Environment Representations

    Computer vision is an important area focused on understanding, extracting, and using information from vision-based sensors. It has many applications, such as vision-based 3D reconstruction, simultaneous localization and mapping (SLAM), and data-driven understanding of the real world. Vision is a fundamental sensing modality in many fields of application. While traditional structure from motion mostly uses sparse point-based features, this thesis explores the possibility of higher-order feature representations. It starts with a joint work that uses straight lines as features and performs bundle adjustment with a straight-line parameterization. We then move to an even higher-order representation based on Bézier splines. We start with a simple case in which all contours lie on a plane, parameterize the background curves with Bézier splines, and optimize over both the camera positions and the splines. As an application, we present a complete end-to-end pipeline that produces meaningful dense 3D models from natural data of a 3D object: the target object is placed on a structured but unknown planar background that is modeled with splines, and the data is captured using only a hand-held monocular camera. Since this application is limited to a planar scenario, we then push the parameterization into full 3D. Following the potential of this idea, we introduce a more flexible higher-order extension of points that provides a general model for structural edges in the environment, whether straight or curved. Our model relies on linked Bézier curves, whose geometric intuition proves greatly beneficial during parameter initialization and regularization. We present the first fully automatic pipeline able to generate spline-based representations without any human supervision.
    Besides a full graphical formulation of the problem, we introduce both geometric and photometric cues, as well as higher-level concepts such as overall curve visibility and viewing-angle restrictions, to automatically manage the correspondences in the graph. Results show that curve-based structure from motion with splines is able to outperform state-of-the-art sparse feature-based methods, as well as to model curved edges in the environment.
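    As a concrete illustration of the basic building block of such a curve representation, a single cubic Bézier segment can be evaluated from its four control points via the Bernstein basis. The following is a minimal sketch with arbitrary control points, not the thesis pipeline:

```python
import numpy as np

def bezier_point(ctrl, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1].

    ctrl: (4, d) array of control points (d = 2 or 3).
    Uses the Bernstein basis B_i(t) = C(3, i) * t^i * (1 - t)^(3 - i).
    """
    b = np.array([(1 - t) ** 3,
                  3 * t * (1 - t) ** 2,
                  3 * t ** 2 * (1 - t),
                  t ** 3])
    return b @ ctrl

# Arbitrary 2D control points; linked segments would share endpoints
# (and, for smoothness, align the adjacent tangent control points).
ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
print(bezier_point(ctrl, 0.0))  # curve starts at the first control point
print(bezier_point(ctrl, 1.0))  # and ends at the last control point
```

    The curve interpolates its end control points, which is what makes endpoint linking between consecutive segments straightforward during initialization.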

    Robust Multiple-View Geometry Estimation Based on GMM

    Given three partially overlapping views of a scene from which a set of point or line correspondences has been extracted, the 3D structure and camera motion parameters can be represented by the trifocal tensor, which is key to many three-view problems in computer vision. Unlike conventional methods, in which the residual value is the only criterion for eliminating outliers with large residuals, we build a Gaussian mixture model under the assumption that the residuals of the inliers come from Gaussian distributions different from that of the residuals of the outliers. The Bayesian rule of minimal risk is then employed to classify all correspondences using the parameters computed from the GMM. Experiments with both synthetic data and real images show that our method is more robust and precise than other typical methods, because it efficiently detects and removes bad correspondences, including both bad localizations and false matches.
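    The classification idea can be sketched in one dimension: fit a two-component zero-mean Gaussian mixture to the residuals by EM, then assign each correspondence to the most probable component, which is the Bayes rule under 0-1 loss. This is a minimal illustration with synthetic residuals, not the paper's actual model or data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic residuals: inliers with small spread, outliers with large spread.
residuals = np.concatenate([rng.normal(0.0, 0.5, 300),
                            rng.normal(0.0, 8.0, 60)])

# EM for a two-component zero-mean Gaussian mixture over the residuals.
pi = np.array([0.5, 0.5])      # mixing weights
sigma = np.array([1.0, 5.0])   # component standard deviations
for _ in range(50):
    # E-step: posterior responsibility of each component for each residual.
    lik = pi * np.exp(-0.5 * (residuals[:, None] / sigma) ** 2) \
        / (sigma * np.sqrt(2 * np.pi))
    resp = lik / lik.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights and spreads from the responsibilities.
    pi = resp.mean(axis=0)
    sigma = np.sqrt((resp * residuals[:, None] ** 2).sum(axis=0)
                    / resp.sum(axis=0))

# Bayes rule with 0-1 loss: keep a correspondence if the narrow
# (inlier) component is the more probable explanation of its residual.
inlier = resp[:, np.argmin(sigma)] > 0.5
print(f"kept {inlier.sum()} of {len(residuals)} correspondences")
```

    Compared with a fixed residual threshold, the decision boundary here adapts to the estimated spreads of both populations.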

    EVALUATION OF THE METRIC TRIFOCAL TENSOR FOR RELATIVE THREE-VIEW ORIENTATION

    In photogrammetry and computer vision the trifocal tensor is used to describe the geometric relation between projections of points in three views. In this paper we analyze the stability and accuracy of the metric trifocal tensor for calibrated cameras. Since a minimal parameterization of the metric trifocal tensor is challenging, the additional constraints of the interior orientation are applied to the well-known projective 6-point and 7-point algorithms for three images. The experimental results show that the linear 7-point algorithm fails in some noise-free degenerate cases, whereas the minimal 6-point algorithm remains competitive even under realistic noise.
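    The three-view relation the tensor encodes can be illustrated with the standard point-transfer construction for canonical cameras, where the tensor slices are T_i = a_i b4^T - a4 b_i^T and a point in view 1, together with any line through its match in view 2, determines the point in view 3. This is a generic synthetic example, not the paper's 6- or 7-point estimators:

```python
import numpy as np

rng = np.random.default_rng(1)

# Canonical camera triple: P1 = [I | 0], P2 = [A | a4], P3 = [B | b4].
A, a4 = rng.normal(size=(3, 3)), rng.normal(size=3)
B, b4 = rng.normal(size=(3, 3)), rng.normal(size=3)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([A, a4[:, None]])
P3 = np.hstack([B, b4[:, None]])

# Trifocal tensor slices T_i = a_i b4^T - a4 b_i^T (a_i, b_i = i-th columns).
T = np.stack([np.outer(A[:, i], b4) - np.outer(a4, B[:, i])
              for i in range(3)])

# Project a 3D point into the three views (homogeneous coordinates).
X = np.array([0.3, -1.2, 2.0, 1.0])
x1, x2, x3 = P1 @ X, P2 @ X, P3 @ X

# Point transfer: take any line l2 through x2 (avoiding the epipolar
# line of x1), then x3 is proportional to sum_i x1_i * (l2^T T_i).
l2 = np.cross(x2, rng.normal(size=3))  # a generic line through x2
x3_t = sum(x1[i] * (l2 @ T[i]) for i in range(3))

# x3_t equals x3 up to scale, so their cross product is (numerically) zero.
print(np.cross(x3_t, x3))
```

    The degeneracy mentioned in the abstract shows up here as the forbidden choice of l2: transfer breaks down exactly when l2 is the epipolar line of x1.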

    Manufacturing Multiple View Constraints

    In this paper we present an algorithm for the generation of multiple view constraints for arbitrary configurations of cameras and image feature correspondences. Multiple view constraints are an important commodity in computer vision since they facilitate determining camera locations using only the correspondences between common features observed in sets of uncalibrated images. We show that, by a series of counting arguments and a systematic application of the principles of antisymmetric algebra, it is possible to generate arbitrary multiple view constraints in a completely automated fashion. The algorithm has already been used to discover new sets of multiple view constraints for surfaces.
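    The principle behind such constraints is that a true correspondence forces a stacked camera-and-feature matrix to lose rank, so its determinants vanish. This can be sketched in the two-view case, where the vanishing determinant is the familiar bilinear (epipolar) constraint. A generic synthetic example follows; the paper's antisymmetric-algebra machinery is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two random cameras and a 3D point observed in both views.
P1, P2 = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
X = rng.normal(size=4)
x1, x2 = P1 @ X, P2 @ X

def two_view_constraint(P1, P2, x1, x2):
    """Determinant of the stacked 6x6 system [P1 x1 0; P2 0 x2].

    It vanishes exactly when some 3D point X and scales l1, l2 satisfy
    P1 X = l1 x1 and P2 X = l2 x2, i.e. when (x1, x2) is a true
    correspondence; it equals the bilinear constraint x2^T F x1 up to scale.
    """
    M = np.zeros((6, 6))
    M[:3, :4], M[:3, 4] = P1, x1
    M[3:, :4], M[3:, 5] = P2, x2
    return np.linalg.det(M)

print(two_view_constraint(P1, P2, x1, x2))      # ~0 for a true correspondence
print(two_view_constraint(P1, P2, x1, x2 + 1.0))  # nonzero for a perturbed match
```

    Adding more views enlarges the stacked matrix, and the various maximal minors yield the trilinear and quadrilinear constraints.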

    International Colloquium on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering: 20 to 22 July 2015, Bauhaus-Universität Weimar

    The 20th International Conference on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering will be held at the Bauhaus-Universität Weimar from 20 to 22 July 2015. Architects, computer scientists, mathematicians, and engineers from all over the world will meet in Weimar for an interdisciplinary exchange of experiences, to report on their results in research, development, and practice, and to discuss them. The conference covers a broad range of research areas: numerical analysis, function-theoretic methods, partial differential equations, continuum mechanics, engineering applications, coupled problems, computer science, and related topics. Several plenary lectures in the aforementioned areas will take place during the conference. We invite architects, engineers, designers, computer scientists, mathematicians, planners, project managers, and software developers from business, science, and research to participate in the conference.

    Calibration of a trinocular system formed with wide angle lens cameras

    This paper was published in Optics Express and is made available as an electronic reprint with the permission of OSA. The paper can be found at the following URL on the OSA website: http://dx.doi.org/10.1364/OE.20.027691
    To obtain 3D information of large areas, wide-angle lens cameras are used to reduce the number of cameras as much as possible. However, since the images are highly distorted, errors in point correspondences increase and the 3D information can be erroneous. To increase the number of data extracted from the images and to improve the 3D information, trinocular sensors are used. In this paper a calibration method for a trinocular sensor formed with wide-angle lens cameras is proposed. First, pixel locations in the images are corrected using a set of constraints which define the image formation in a trinocular system. Once the pixel locations are corrected, the lens distortion and the trifocal tensor are computed.
    This work was partially funded by the Universidad Politecnica de Valencia research funds (PAID 2010-2431 and PAID 10017), Generalitat Valenciana (GV/2011/057), and by the Spanish government and the European Community under projects DPI2010-20814-C02-02 (FEDER-CICYT) and DPI2010-20286 (CICYT).
    Ricolfe-Viala, C., Sánchez-Salmerón, A.-J., & Valera Fernández, Á. (2012). Calibration of a trinocular system formed with wide angle lens cameras. Optics Express, 20(25), 27691-27696. doi:10.1364/OE.20.027691
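    The kind of pixel-location correction described above can be sketched with the common polynomial radial distortion model, inverted by fixed-point iteration. The coefficients below are illustrative; this is a standard textbook model, not the paper's trinocular constraint set:

```python
import numpy as np

def undistort(points, k1, k2, center, iters=10):
    """Invert the radial model x_d = x_u * (1 + k1*r^2 + k2*r^4)
    by fixed-point iteration (a standard technique, not the paper's method).

    points: (n, 2) distorted pixel coordinates; center: distortion center.
    """
    d = points - center
    u = d.copy()
    for _ in range(iters):
        r2 = (u ** 2).sum(axis=1, keepdims=True)
        u = d / (1 + k1 * r2 + k2 * r2 ** 2)
    return u + center

# Round-trip check with synthetic points and mild distortion.
center = np.array([320.0, 240.0])
k1, k2 = 1e-7, 1e-13
true = np.array([[100.0, 80.0], [400.0, 300.0]])
r2 = ((true - center) ** 2).sum(axis=1, keepdims=True)
distorted = (true - center) * (1 + k1 * r2 + k2 * r2 ** 2) + center
print(undistort(distorted, k1, k2, center))  # recovers the original points
```

    For wide-angle lenses the distortion is much stronger, which is why the paper corrects pixel locations with trinocular constraints before estimating the distortion model and the trifocal tensor.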

    Integral Geometric Dual Distributions of Multilinear Models

    We propose an integral geometric approach for computing dual distributions for the parameter distributions of multilinear models. The dual distributions can be computed from, for example, the parameter distributions of conics, multiple view tensors, and homographies, or of such simple entities as points, lines, and planes. The dual distributions have analytical forms that follow from the asymptotic normality property of the maximum likelihood estimator and an application of integral transforms, fundamentally the generalised Radon transforms, to the probability density of the parameters. The approach allows us, for instance, to look at the uncertainty distributions in feature distributions, which are essentially tied to the distribution of training data, and helps us to derive conditional distributions for interesting variables and to characterise confidence intervals of the estimates.

    Metric 3D-reconstruction from Unordered and Uncalibrated Image Collections

    In this thesis the problem of Structure from Motion (SfM) for uncalibrated and unordered image collections is considered. The proposed framework is an adaptation of the framework for calibrated SfM proposed by Olsson-Enqvist (2011) to the uncalibrated case. Olsson-Enqvist's framework consists of three main steps: pairwise relative rotation estimation, rotation averaging, and geometry estimation with known rotations. For this to work with uncalibrated images we also perform auto-calibration during the first step. There is a well-known degeneracy for pairwise auto-calibration which occurs when the two principal axes meet in a point, which is unfortunately common for real images. To mitigate this, the rotation estimation is instead performed by estimating image triplets, for which the degenerate configurations are less likely to occur in practice. This is followed by estimation of the pairs which did not obtain a successful relative rotation from the previous step. The framework is successfully applied to an uncalibrated and unordered collection of images of the cathedral in Lund. It is also applied to the well-known Oxford dinosaur sequence, which consists of turntable motion. Image pairs from the turntable motion are in a degenerate configuration for auto-calibration since they both view the same point on the rotation axis.
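    The rotation-averaging step of such a framework can be illustrated with the simplest chordal L2 construction: average noisy estimates of a single rotation arithmetically, then project the mean back onto SO(3) with an SVD. This is a minimal sketch, not Olsson-Enqvist's full multi-view averaging:

```python
import numpy as np

def project_to_so3(M):
    """Closest rotation matrix to M in Frobenius norm (chordal projection)."""
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # enforce det = +1
    return U @ D @ Vt

def rot(axis, a):
    """Rotation by angle a (radians) about coordinate axis 0, 1, or 2."""
    c, s = np.cos(a), np.sin(a)
    R = np.eye(3)
    i, j = [(1, 2), (0, 2), (0, 1)][axis]
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = -s, s
    return R

# Average 20 noisy estimates of one ground-truth rotation.
rng = np.random.default_rng(3)
true_R = rot(0, 0.4) @ rot(1, -0.2)
noisy = [true_R @ rot(0, rng.normal(0, 0.05)) @ rot(1, rng.normal(0, 0.05))
         for _ in range(20)]
avg_R = project_to_so3(np.mean(noisy, axis=0))

# Angular error of the average (should be well below the per-sample noise).
cos_err = np.clip((np.trace(avg_R.T @ true_R) - 1) / 2, -1.0, 1.0)
print(np.degrees(np.arccos(cos_err)))
```

    In a full pipeline the same projection idea underlies solving for all absolute rotations from the estimated pairwise (or triplet-wise) relative rotations.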