
    Numerical estimation of epipolar curves for omnidirectional sensors

    The epipolar geometry of pairs of omnidirectional sensors is often difficult to express analytically. We propose an algorithm to numerically estimate epipolar curves from omnidirectional stereovision pairs. The algorithm is not limited to this type of sensor and works, for example, with a combination of a panoramic sensor and a traditional camera. Although its computational load is heavy, it works with every kind of sensor, provided that the stereovision pair is completely calibrated (all parameters determined), and in particular with catadioptric sensors that do not satisfy the single-viewpoint constraint.
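    The numerical idea admits a short sketch: back-project a pixel of the first sensor to a 3D ray, sample that ray at increasing depths, and project each sample through the second calibrated sensor; the resulting image points trace the epipolar curve. A minimal illustration under stated assumptions: `unproject1` and `project2` are hypothetical callables supplied by each sensor's calibration, and (R, t) is the known relative pose.

```python
import numpy as np

def epipolar_curve(u1, v1, unproject1, project2, R, t, depths):
    """Numerically trace the epipolar curve in image 2 for pixel (u1, v1)
    of image 1. `unproject1` maps a pixel to a unit ray in sensor 1's
    frame; `project2` maps a 3D point in sensor 2's frame to a pixel
    (both are assumed to come from calibration). (R, t) is the rigid
    transform taking sensor-1 coordinates to sensor-2 coordinates."""
    ray = unproject1(u1, v1)            # unit direction in sensor 1's frame
    curve = []
    for d in depths:                    # sample the ray at several depths
        X1 = d * ray                    # 3D point in sensor 1's frame
        X2 = R @ X1 + t                 # same point in sensor 2's frame
        curve.append(project2(X2))      # its image in sensor 2
    return np.array(curve)
```

    Because no analytic form of either projection is needed, the same loop works for non-single-viewpoint catadioptric sensors, at the cost of one projection per depth sample.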

    Automatic multi-camera extrinsic parameter calibration based on pedestrian torsors

    Extrinsic camera calibration is essential for any computer vision task in a camera network. Typically, researchers place a calibration object in the scene to calibrate all the cameras in a camera network. However, when installing cameras in the field, this approach can be costly and impractical, especially when recalibration is needed. This paper proposes a novel, accurate and fully automatic extrinsic calibration framework for camera networks with partially overlapping views. The proposed method treats the pedestrians in the observed scene as calibration objects and analyzes the pedestrian tracks to obtain the extrinsic parameters. Compared to the state of the art, the new method is fully automatic and robust in various environments. Our method detects human poses in the camera images and models walking persons as vertical sticks. We apply a brute-force method to determine the correspondence between persons in multiple camera images. This information, along with the estimated 3D locations of the top and the bottom of each pedestrian, is then used to compute the extrinsic calibration matrices. We also propose a novel method to calibrate the camera network using only the top and the centerline of the person when the bottom is not visible in heavily occluded scenes. We verified the robustness of the method in different camera setups and for both single and multiple walking people. The results show that a triangulation error of a few centimeters can be obtained. Typically, less than one minute of observing walking people is required to reach this accuracy in controlled environments, and only a few minutes of data collection suffice in uncontrolled environments. The proposed method performs well in various situations, including multiple persons, occlusions, and even real intersections on the street.
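    The step of turning matched pedestrian top/bottom points into an extrinsic transform can be illustrated with a standard rigid 3D-3D alignment; the Kabsch/SVD solution below is a generic sketch of that step, not necessarily the paper's exact solver.

```python
import numpy as np

def rigid_transform(P, Q):
    """Estimate R, t such that Q ~= R @ P + t (Kabsch/SVD method).
    P, Q: (3, N) arrays of matched 3D points, e.g. pedestrian head and
    foot positions expressed in two different camera frames."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # Force a proper rotation (det = +1), guarding against reflections.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp                      # translation from the centroids
    return R, t
```

    Once R and t are known, the triangulation error quoted in the abstract can be measured as the distance between a point reconstructed in one camera and the transformed reconstruction from the other.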

    Aerial panoramic image reconstruction for inspection and survey purposes

    Due to the growing demand for inspection and survey applications that require UAVs, panoramic image reconstruction has become a field of very active research among computer vision experts. This project therefore aims to develop an algorithm for creating panoramic images that is fast enough to work in real time, loses as little information as possible, integrates easily into the UAV system, and can accommodate other techniques in the future, such as object detection or visual odometry. To meet the real-time and minimal-information-loss objectives, a controlled reconstruction method is proposed, in which the images that will form part of the panorama are continuously evaluated and selected. The algorithm also detects when the current panorama cannot be continued and a new one must be started quickly, before information is lost. Finally, to meet the objective of easy system integration, the use of the ROS framework is proposed, which is based on the exchange of messages between different nodes (subsystems).
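    The controlled-reconstruction decision, score each new frame against the current panorama and start a fresh panorama when alignment degrades, can be sketched with standard OpenCV primitives. This is an illustration of the decision logic only, not the project's implementation; the inlier-ratio threshold is an assumption.

```python
import cv2
import numpy as np

def should_merge(pano_gray, frame_gray, min_inlier_ratio=0.4):
    """Decide whether a new frame aligns well enough with the current
    panorama to be merged. Returns (merge?, homography). A weak or failed
    alignment signals that a new panorama should be started."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(pano_gray, None)
    k2, d2 = orb.detectAndCompute(frame_gray, None)
    if d1 is None or d2 is None:
        return False, None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    if len(matches) < 8:
        return False, None
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    if H is None:
        return False, None
    ratio = float(mask.sum()) / len(matches)  # fraction of RANSAC inliers
    return ratio >= min_inlier_ratio, H
```

    In a ROS design, this check would live in a stitching node that subscribes to the camera image topic and publishes either the updated panorama or a "panorama closed" message for downstream subsystems.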

    3D inspection of wafer bump quality without explicit 3D reconstruction.

    Zhao Yang. Thesis (M.Phil.), Chinese University of Hong Kong, 2004. Includes bibliographical references (leaves 87-95). Abstracts in English and Chinese.

    Contents:
    Chapter 1: Introduction
      1.1 Objectives of the Thesis
      1.2 Wafer bumping inspection using the Biplanar Disparity approach
      1.3 Thesis Outline
    Chapter 2: Background
      2.1 What is a wafer bump?
        2.1.1 Common defects of wafer bumps
        2.1.2 Literature review on existing wafer bump inspection methods
    Chapter 3: Model 1, the one-camera model (Homography approach)
      3.1 Introduction to the theoretical basis of model 1
        3.1.1 The objective of model 1
        3.1.2 Desires
        3.1.3 Some background knowledge on Homography
      3.2 Model 1: "Pseudo Homography" approach
        3.2.1 Description of the configuration of model 1
        3.2.2 The condition of pseudo Homography
        3.2.3 The formation of the pseudo Homography H
      3.3 Methodology of treatment of the answer set
        3.3.1 Singular Value Decomposition (SVD)
        3.3.2 Robust Estimation
        3.3.3 Experimental results using man-made ping-pong balls to test SVD [31] and Robust Estimation [24]
        3.3.4 Measurement of the Homography matrix answer set
      3.4 Preliminary experiment on model 1
      3.5 Problems unsolved
    Chapter 4: Model 2, the two-camera model (Biplanar Disparity approach)
      4.1 Theoretical Background
        4.1.1 Linearization of Homography matrix changes
        4.1.2 Problem nature
        4.1.3 Imaging system setup
        4.1.4 Camera calibration [13]
      4.2 Methodology
        4.2.1 Invariance measure
        4.2.2 Geometric meaning of the Biplanar Disparity matrix
      4.3 RANSAC (Random Sample Consensus)
        4.3.1 Finding the Homography matrix using RANSAC [72][35]
        4.3.2 Finding the Fundamental matrix using RANSAC [73][34]
      4.4 Harris corner detection
    Chapter 5: Simulation and experimental results
      5.1 Simulation experiments
        5.1.1 Preliminary experiments
        5.1.2 Specification of the synthetic data system
        5.1.3 Allowed error in the experiment
      5.2 Real image experiments
        5.2.1 Experiment instrument
        5.2.2 The inspection procedure
        5.2.3 Images grabbed under the above system
        5.2.4 Experimental results
    Chapter 6: Conclusion and future work
      6.1 Summary of the contributions of my work
      6.2 Some weaknesses of the method
      6.3 Future work and further development
        6.3.1 About the synthetic experiment
        6.3.2 About the real image experiment
    Bibliography
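    One way to read the thesis' premise of 3D quality inspection without explicit 3D reconstruction is as a planarity test: image points of bump tops at the correct common height must satisfy a single inter-view homography, so points with a large transfer residual flag defective bumps. The sketch below is a hypothetical caricature of that test using RANSAC homography fitting, not the thesis' Biplanar Disparity formulation.

```python
import cv2
import numpy as np

def flag_defects(pts1, pts2, thresh_px=2.0):
    """pts1, pts2: (N, 2) arrays of matched bump-top points in two views,
    N >= 4. Bumps of correct height lie on one plane and hence fit one
    homography; points with large transfer error are defect candidates."""
    H, _ = cv2.findHomography(pts1.reshape(-1, 1, 2).astype(np.float32),
                              pts2.reshape(-1, 1, 2).astype(np.float32),
                              cv2.RANSAC, thresh_px)
    ones = np.ones((len(pts1), 1))
    proj = np.hstack([pts1, ones]) @ H.T       # transfer pts1 into view 2
    proj = proj[:, :2] / proj[:, 2:3]          # dehomogenize
    err = np.linalg.norm(proj - pts2, axis=1)  # pixel transfer error
    return err > thresh_px                     # True marks a suspect bump
```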

    Structureless Camera Motion Estimation of Unordered Omnidirectional Images

    This work provides a novel camera motion estimation pipeline for large collections of unordered omnidirectional images. To keep the pipeline as general and flexible as possible, cameras are modelled as unit spheres, allowing any central camera type to be incorporated. For each camera, an unprojection lookup called a P2S-map (Pixel-to-Sphere map) is generated from the intrinsics, mapping pixels to their corresponding positions on the unit sphere. The camera geometry thus becomes independent of the underlying projection model. The pipeline also generates P2S-maps from world map projections with fewer distortion effects, as known from cartography. Using P2S-maps from camera calibration and from world map projections, omnidirectional camera images can be converted to an appropriate world map projection, so that standard feature extraction and matching algorithms can be applied for data association. The proposed estimation pipeline combines the flexibility of SfM (Structure from Motion), which handles unordered image collections, with the efficiency of PGO (Pose Graph Optimization), which is used as the back-end in graph-based Visual SLAM (Simultaneous Localization and Mapping) approaches to optimize camera poses over large image sequences. SfM uses BA (Bundle Adjustment) to jointly optimize camera poses (motion) and 3D feature locations (structure), which becomes computationally expensive for large-scale scenarios. By contrast, PGO solves for camera poses (motion) from measured transformations between cameras, keeping the optimization manageable. The proposed estimation algorithm combines both worlds: it obtains up-to-scale transformations between image pairs using two-view constraints, which are jointly scaled using trifocal constraints. A pose graph is generated from the scaled two-view transformations and solved by PGO to obtain the camera motion efficiently, even for large image collections. The results can serve as initial pose estimates for further 3D reconstruction, e.g. to build a sparse structure from feature correspondences in an SfM or SLAM framework with further refinement via BA. The pipeline also incorporates fixed extrinsic constraints from multi-camera setups as well as depth information provided by RGBD sensors.
    The entire camera motion estimation pipeline does not need to generate a sparse 3D structure of the captured environment and is therefore called SCME (Structureless Camera Motion Estimation).

    Contents:
    1 Introduction
      1.1 Motivation
        1.1.1 Increasing Interest in Image-Based 3D Reconstruction
        1.1.2 Underground Environments as a Challenging Scenario
        1.1.3 Improved Mobile Camera Systems for Full Omnidirectional Imaging
      1.2 Issues
        1.2.1 Directional versus Omnidirectional Image Acquisition
        1.2.2 Structure from Motion versus Visual Simultaneous Localization and Mapping
      1.3 Contribution
      1.4 Structure of this Work
    2 Related Work
      2.1 Visual Simultaneous Localization and Mapping
        2.1.1 Visual Odometry
        2.1.2 Pose Graph Optimization
      2.2 Structure from Motion
        2.2.1 Bundle Adjustment
        2.2.2 Structureless Bundle Adjustment
      2.3 Corresponding Issues
      2.4 Proposed Reconstruction Pipeline
    3 Cameras and Pixel-to-Sphere Mappings with P2S-Maps
      3.1 Types
      3.2 Models
        3.2.1 Unified Camera Model
        3.2.2 Polynomial Camera Model
        3.2.3 Spherical Camera Model
      3.3 P2S-Maps: Mapping onto the Unit Sphere via Lookup Table
        3.3.1 Lookup Table as Color Image
        3.3.2 Lookup Interpolation
        3.3.3 Depth Data Conversion
    4 Calibration
      4.1 Overview of the Proposed Calibration Pipeline
      4.2 Target Detection
      4.3 Intrinsic Calibration
        4.3.1 Selected Examples
      4.4 Extrinsic Calibration
        4.4.1 3D-2D Pose Estimation
        4.4.2 2D-2D Pose Estimation
        4.4.3 Pose Optimization
        4.4.4 Uncertainty Estimation
        4.4.5 Pose Graph Representation
        4.4.6 Bundle Adjustment
        4.4.7 Selected Examples
    5 Full Omnidirectional Image Projections
      5.1 Panoramic Image Stitching
      5.2 World Map Projections
      5.3 World Map Projection Generator for P2S-Maps
      5.4 Conversion between Projections based on P2S-Maps
        5.4.1 Proposed Workflow
        5.4.2 Data Storage Format
        5.4.3 Real World Example
    6 Relations between Two Camera Spheres
      6.1 Forward and Backward Projection
      6.2 Triangulation
        6.2.1 Linear Least Squares Method
        6.2.2 Alternative Midpoint Method
      6.3 Epipolar Geometry
      6.4 Transformation Recovery from the Essential Matrix
        6.4.1 Cheirality
        6.4.2 Standard Procedure
        6.4.3 Simplified Procedure
        6.4.4 Improved Procedure
      6.5 Two-View Estimation
        6.5.1 Evaluation Strategy
        6.5.2 Error Metric
        6.5.3 Evaluation of Estimation Algorithms
        6.5.4 Concluding Remarks
      6.6 Two-View Optimization
        6.6.1 Epipolar-Based Error Distances
        6.6.2 Projection-Based Error Distances
        6.6.3 Comparison between Error Distances
      6.7 Two-View Translation Scaling
        6.7.1 Linear Least Squares Estimation
        6.7.2 Non-Linear Least Squares Optimization
        6.7.3 Comparison between Initial and Optimized Scaling Factor
      6.8 Homography to Identify Degeneracies
        6.8.1 Homography for Spherical Cameras
        6.8.2 Homography Estimation
        6.8.3 Homography Optimization
        6.8.4 Homography and Pure Rotation
        6.8.5 Homography in Epipolar Geometry
    7 Relations between Three Camera Spheres
      7.1 Three-View Geometry
      7.2 Crossing Epipolar Planes Geometry
      7.3 Trifocal Geometry
      7.4 Relation between Trifocal, Three-View and Crossing Epipolar Planes
      7.5 Translation Ratio between Up-To-Scale Two-View Transformations
        7.5.1 Structureless Determination Approaches
        7.5.2 Structure-Based Determination Approaches
        7.5.3 Comparison between Proposed Approaches
    8 Pose Graphs
      8.1 Optimization Principle
      8.2 Solvers
        8.2.1 Additional Graph Solvers
        8.2.2 False Loop Closure Detection
      8.3 Pose Graph Generation
        8.3.1 Generation of Synthetic Pose Graph Data
        8.3.2 Optimization of Synthetic Pose Graph Data
    9 Structureless Camera Motion Estimation
      9.1 SCME Pipeline
      9.2 Determination of Two-View Translation Scale Factors
      9.3 Integration of Depth Data
      9.4 Integration of Extrinsic Camera Constraints
    10 Camera Motion Estimation Results
      10.1 Directional Camera Images
      10.2 Omnidirectional Camera Images
    11 Conclusion
      11.1 Summary
      11.2 Outlook and Future Work
    Appendices
      A.1 Additional Extrinsic Calibration Results
      A.2 Linear Least Squares Scaling
      A.3 Proof of Rank Deficiency
      A.4 Alternative Derivation of the Midpoint Method
      A.5 Simplification of Depth Calculation
      A.6 Relation between Epipolar and Circumferential Constraint
      A.7 Covariance Estimation
      A.8 Uncertainty Estimation from Epipolar Geometry
      A.9 Two-View Scaling Factor Estimation: Uncertainty Estimation
      A.10 Two-View Scaling Factor Optimization: Uncertainty Estimation
      A.11 Depth from Adjoining Two-View Geometries
      A.12 Alternative Three-View Derivation
        A.12.1 Second Derivation Approach
        A.12.2 Third Derivation Approach
      A.13 Relation between Trifocal Geometry and the Alternative Midpoint Method
      A.14 Additional Pose Graph Generation Examples
      A.15 Pose Graph Solver Settings
      A.16 Additional Pose Graph Optimization Examples
    Bibliography
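    To make the abstract's P2S-map concept concrete: a P2S-map simply stores, for every pixel, its direction on the unit sphere. The sketch below builds such a lookup in closed form for an equirectangular (world-map) projection; real P2S-maps are generated from calibrated intrinsics, so the equirectangular model is only a convenient illustrative assumption.

```python
import numpy as np

def equirectangular_p2s(width, height):
    """Build a P2S-style lookup for an equirectangular image: an
    (height, width, 3) array giving each pixel's unit-sphere direction."""
    u = (np.arange(width) + 0.5) / width      # normalized column coordinates
    v = (np.arange(height) + 0.5) / height    # normalized row coordinates
    lon = (u - 0.5) * 2.0 * np.pi             # longitude in [-pi, pi)
    lat = (0.5 - v) * np.pi                   # latitude in [-pi/2, pi/2]
    lon, lat = np.meshgrid(lon, lat)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)       # every entry has unit norm

# Usage: dirs = equirectangular_p2s(1024, 512); dirs[v, u] is the ray of
# pixel (u, v), making downstream geometry independent of the projection.
```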

    Cheirality in Epipolar Geometry

    Corresponding image points in two images satisfy the epipolar constraint. However, not every set of points satisfying the epipolar constraint corresponds to a real geometry, because there may exist no cameras and scene points, projecting to the given image points, such that all image points have positive depth. Using the cheirality theory due to Hartley and previous work on oriented projective geometry, we give necessary and sufficient conditions for an image point set to correspond to a real geometry. For images from conventional cameras, this condition is simple and is given in terms of epipolar lines and epipoles. Surprisingly, it is not sufficient for central panoramic cameras. Apart from giving insight into epipolar geometry, applications include reducing the search space and ruling out impossible matches in stereo, and ruling out impossible solutions for a fundamental matrix computed from seven points.
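    The positive-depth condition at the heart of cheirality is the same test used to disambiguate the four (R, t) candidates recovered from an essential matrix: only the true configuration places triangulated points in front of both cameras. A minimal sketch of that check (the standard textbook procedure, not the paper's panoramic-camera condition):

```python
import numpy as np

def cheirality_ok(R, t, x1, x2):
    """Positive-depth (cheirality) test for one pose candidate.
    x1, x2: unit ray directions of a matched point in the frames of
    camera 1 and camera 2, with camera 2's pose given by X2 = R @ X1 + t.
    The scene point is a*x1 in frame 1 and b*x2 in frame 2, so
    a*(R @ x1) + t = b*x2; solve for the ray scales a, b."""
    A = np.stack([x1, -(R.T @ x2)], axis=1)   # 3x2 system: a*x1 - b*R^T x2
    rhs = -R.T @ t
    (a, b), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a > 0 and b > 0                    # in front of both cameras
```

    In practice this test is run over all correspondences for each of the four essential-matrix decompositions, keeping the candidate that passes for the most points.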