5 research outputs found

    Self-localization in urban environment via mobile imaging facility.

    Get PDF
    Chim, Ho Ming. Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. Includes bibliographical references (leaves 58-62). Abstracts in English and Chinese.
    Acknowledgements --- p.i
    Abstract --- p.ii
    Chapter 1 --- Introduction --- p.1
    Chapter 1.1 --- Objectives --- p.1
    Chapter 1.2 --- Motivations --- p.1
    Chapter 1.3 --- Problem Statement --- p.2
    Chapter 1.4 --- Camera Self-Localization Approaches --- p.3
    Chapter 1.4.1 --- Based on Calibration Patterns --- p.3
    Chapter 1.4.2 --- Based on Self-calibration --- p.3
    Chapter 1.4.3 --- Based on Shape and Motion --- p.4
    Chapter 1.4.4 --- The Proposed Approach - Based on Junctions --- p.5
    Chapter 1.5 --- Thesis Organization --- p.6
    Chapter 2 --- Previous Work --- p.7
    Chapter 2.1 --- Camera Self-Localization --- p.7
    Chapter 2.1.1 --- Parallel Plane Features --- p.7
    Chapter 2.1.2 --- Parallelepiped Features --- p.8
    Chapter 2.1.3 --- Single View Geometric Features --- p.8
    Chapter 2.1.4 --- Shape and Motion --- p.8
    Chapter 2.1.5 --- Other Estimation Methods --- p.9
    Chapter 2.2 --- Feature Correspondences Establishment --- p.9
    Chapter 2.2.1 --- Feature-based Object Recognition --- p.9
    Chapter 2.2.2 --- Model-based Object Recognition --- p.10
    Chapter 3 --- Preliminaries --- p.11
    Chapter 3.1 --- Perspective Camera Model --- p.11
    Chapter 3.2 --- Camera Pose from Point Correspondences --- p.15
    Chapter 3.3 --- Camera Pose from Direction Correspondences --- p.16
    Chapter 4 --- A Junction-based Approach --- p.18
    Chapter 4.1 --- Use of Junction Correspondences for Determining Camera Pose --- p.18
    Chapter 4.1.1 --- Constraints from Point Information --- p.19
    Chapter 4.1.2 --- Constraint from Direction Information --- p.21
    Chapter 4.1.3 --- Junction Triplet Correspondences --- p.22
    Chapter 4.2 --- Extraction of Junctions and Junction Triplets from Image --- p.24
    Chapter 4.2.1 --- Handling Image Data --- p.24
    Chapter 4.2.2 --- Bridging Lines --- p.25
    Chapter 4.2.3 --- "L"-Junctions --- p.26
    Chapter 4.2.4 --- "Y"- and "A"-Junctions --- p.27
    Chapter 4.2.5 --- Junction Triplets --- p.28
    Chapter 4.3 --- Establishment of the First Junction Triplet Correspondence --- p.30
    Chapter 4.3.1 --- Ordered Junction Triplets from Model --- p.30
    Chapter 4.3.2 --- A Junction Hashing Scheme --- p.31
    Chapter 4.4 --- Establishment of Points Correspondence --- p.33
    Chapter 4.4.1 --- Viewing Sphere Tessellation --- p.33
    Chapter 4.4.2 --- Model Views Synthesizing --- p.35
    Chapter 4.4.3 --- Affine Coordinates Computation --- p.35
    Chapter 4.4.4 --- Hash Table Filling --- p.38
    Chapter 4.4.5 --- Hash Table Voting --- p.38
    Chapter 4.4.6 --- Hypothesis and Confirmation --- p.39
    Chapter 4.4.7 --- An Example of Geometric Hashing --- p.40
    Chapter 5 --- Experimental Results --- p.43
    Chapter 5.1 --- Results from Synthetic Image Data --- p.43
    Chapter 5.2 --- Results from Real Image Data --- p.45
    Chapter 5.2.1 --- Results on Laboratory Scenes --- p.46
    Chapter 5.2.2 --- Results on Outdoor Scenes --- p.48
    Chapter 6 --- Conclusion --- p.51
    Chapter 6.1 --- Contributions --- p.51
    Chapter 6.2 --- Advantages --- p.52
    Chapter 6.3 --- Summary and Future Work --- p.52
    Chapter A --- Least-Squares Method --- p.54
    Chapter B --- RQ Decomposition --- p.56
    Bibliography --- p.5
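Chapter 4.4 of this thesis establishes point correspondences by geometric hashing: model views are synthesized, each remaining point's affine coordinates are computed relative to an ordered basis triplet, and a hash table is filled and later voted on. A minimal sketch of the affine-coordinate and table-filling steps, in Python rather than the thesis's own implementation; the bin size and the table layout here are assumptions, not taken from the thesis:

```python
import numpy as np

def affine_coords(p, b0, b1, b2):
    """Affine coordinates (alpha, beta) of 2D point p in the basis
    defined by the ordered triplet (b0, b1, b2), i.e.
        p = b0 + alpha*(b1 - b0) + beta*(b2 - b0).
    These coordinates are invariant under 2D affine transformations,
    which is what makes them usable as hash keys across views."""
    M = np.column_stack([b1 - b0, b2 - b0])  # 2x2 basis matrix
    return np.linalg.solve(M, p - b0)

def hash_key(coords, bin_size=0.25):
    # Quantize affine coordinates into discrete hash-table bins
    # (bin size is an illustrative choice).
    return tuple(np.floor(coords / bin_size).astype(int))

def fill_table(points, bases, label):
    """Filling phase: for each basis triplet and each remaining point,
    store the (model label, basis index) pair under the point's
    quantized affine coordinates."""
    table = {}
    for bi, (i, j, k) in enumerate(bases):
        for pi, p in enumerate(points):
            if pi in (i, j, k):
                continue
            key = hash_key(affine_coords(p, points[i], points[j], points[k]))
            table.setdefault(key, []).append((label, bi))
    return table
```

At recognition time, the same quantization applied to an image triplet indexes the table, and each hit casts a vote for a (model, basis) hypothesis; the thesis then confirms the winning hypothesis geometrically.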

    Sistema de posicionamento robotizado segundo o conceito da indústria 4.0 (Robotic positioning system according to the Industry 4.0 concept)

    Get PDF
    Final project for the Master's degree in Mechanical Engineering. With the evolution of industrial automation, there is a need to automate even the simplest processes in order to keep industrial companies competitive. The objective of this Master's thesis is to automate the positioning of the interior washing system for tanker-truck tanks, with supervision over the Internet. In this project, a robotic positioning system was developed whose position is controlled from the images of video cameras. The vision algorithm was programmed in Matlab with the help of computer-vision toolboxes, communicating with a stepper-motor controller built on the Arduino architecture. The development followed the Industry 4.0 concept: the whole system is fully automatic except for the operator entering the number of tank openings. Controllers communicating over the Ethernet protocol were tested, namely a PC running the Matlab client talking to an Apache server, so that a connection between the system and its supervision on the web can be established. The entire system can be supervised from the browser of a smartphone or tablet. A scale prototype was also built to validate the vision algorithm and the simulations performed in Matlab. The prototype consists of a structure of V-slot profiles, stepper motors, the controller, and a USB camera which, although of lower image quality, was sufficient to validate the implemented vision algorithm.
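The supervision path this abstract describes (a Matlab client talking to an Apache server so that a browser on a phone or tablet can monitor the system) amounts to the positioner periodically reporting its state over HTTP. A hedged sketch in Python rather than Matlab; the endpoint URL and the JSON field names are illustrative assumptions, not taken from the thesis:

```python
import json
from urllib import request

def build_status(position_mm, nozzle, state):
    # JSON status message that a browser dashboard could poll;
    # the field names here are illustrative, not from the thesis.
    return json.dumps({"position_mm": position_mm,
                       "nozzle": nozzle,
                       "state": state})

def post_status(url, body):
    # POST the status to the supervision server (hypothetical endpoint,
    # e.g. a script hosted on the Apache side).
    req = request.Request(url, data=body.encode(),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return resp.status
```

Any HTTP-capable controller could produce the same report, which is what makes the browser-based supervision independent of the Matlab client.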

    Using Geometric Constraints Through Parallelepipeds for Calibration and 3D Modeling

    No full text
    Abstract — This paper concerns the incorporation of geometric information in camera calibration and 3D modeling. Using geometric constraints enables more stable results and allows tasks to be performed with fewer images. Our approach is motivated and developed within a framework of semi-automatic 3D modeling, where the user defines geometric primitives and constraints between them. It is based on the observation that constraints such as coplanarity, parallelism, or orthogonality are often embedded intuitively in parallelepipeds. Moreover, parallelepipeds are easy for a user to delineate and are well adapted to modeling the main structure of, e.g., architectural scenes. In this paper, first a duality between the shape parameters of a parallelepiped and the intrinsic parameters of a camera is described. Then, a factorization-based algorithm exploiting this relation is developed. Using images of parallelepipeds, it simultaneously calibrates the cameras, recovers the shapes of the parallelepipeds, and estimates the relative pose of all entities. Besides geometric constraints expressed via parallelepipeds, our approach simultaneously takes into account the usual self-calibration constraints on cameras. The proposed algorithm is complemented by a study of the singular cases of the calibration method. A complete method for the reconstruction of scene primitives that are not modeled by parallelepipeds is also briefly described. The proposed methods are validated by various experiments with real and simulated data, for single-view as well as multi-view cases. Index Terms — 3D modeling, calibration, geometric constraints.
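The duality this abstract refers to links a parallelepiped's shape parameters to the camera intrinsics. In the simplest special case, a cuboid's right angles make the vanishing points of its edge directions mutually orthogonal, which constrains the image of the absolute conic. A sketch of that special case only, assuming square pixels, zero skew, and a known principal point; this is not the paper's full factorization algorithm:

```python
import numpy as np

def focal_from_orthogonal_vps(v1, v2, principal_point):
    """Focal length (in pixels) from the vanishing points of two
    orthogonal scene directions. With K = [[f,0,u0],[0,f,v0],[0,0,1]],
    the orthogonality constraint v1^T (K K^T)^-1 v2 = 0 reduces to
        (x1-u0)(x2-u0) + (y1-v0)(y2-v0) + f^2 = 0.
    """
    u0, v0 = principal_point
    d = (v1[0] - u0) * (v2[0] - u0) + (v1[1] - v0) * (v2[1] - v0)
    if d >= 0:
        raise ValueError("vanishing points inconsistent with orthogonality")
    return np.sqrt(-d)
```

The paper's factorization handles the general parallelepiped (arbitrary angles and edge-length ratios) and multiple cameras at once; this closed form covers only the fully right-angled, known-principal-point case.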

    Using geometric constraints through parallelepipeds for calibration and 3D modeling

    No full text

    Vanishing Point Detection By Segment Clustering On The Projective Space

    No full text
    The analysis of vanishing points in digital images provides strong cues for inferring the 3D structure of the depicted scene and can be exploited in a variety of computer vision applications. In this paper, we propose a method for estimating vanishing points in images of architectural environments that can be used for camera calibration and pose estimation, important tasks in large-scale 3D reconstruction. Our method performs automatic segment clustering in projective space, a direct transformation from the image space, instead of the traditional bounded accumulator space. Since it works in projective space, it handles finite and infinite vanishing points without any special condition or threshold tuning. Experiments on real images show the effectiveness of the proposed method. We identify three orthogonal vanishing points and compute the estimation error based on their relation with the Image of the Absolute Conic (IAC) and based on the computation of the camera focal length. © 2012 Springer-Verlag. LNCS 6554, Part 2, pp. 324-337.
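Working in projective space, as this abstract describes, lets line segments and their intersections be handled uniformly with homogeneous coordinates: an infinite vanishing point is simply one whose third coordinate is (near) zero, so parallel segments need no special case. A minimal sketch of this representation, illustrative only and not the paper's clustering algorithm:

```python
import numpy as np

def line_through(p, q):
    # Homogeneous line through two image points: the cross product
    # of their homogeneous coordinates (x, y, 1).
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersection(l1, l2):
    # Homogeneous intersection of two lines, again a cross product.
    # A third coordinate near zero means the lines are (nearly)
    # parallel, i.e. the vanishing point is at infinity.
    return np.cross(l1, l2)
```

For example, segments lying on lines through (0,0)-(2,1) and (0,2)-(2,1) intersect at the finite point (2,1), while two parallel segments of slope 1 yield an intersection with zero third coordinate whose first two components give the common direction.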