
    Camera distortion self-calibration using the plumb-line constraint and minimal Hough entropy

    In this paper we present a simple and robust method for self-correction of camera distortion using single images of scenes which contain straight lines. Since the most common distortion can be modelled as radial distortion, we illustrate the method using the Harris radial distortion model, but the method is applicable to any distortion model. The method is based on transforming the edgels of the distorted image to a 1-D angular Hough space and optimizing the distortion correction parameters which minimize the entropy of the corresponding normalized histogram. Properly corrected imagery will have fewer curved lines, and therefore less spread in Hough space. Since the method does not rely on any image structure beyond the existence of edgels sharing some common orientations, and does not use edge fitting, it is applicable to a wide variety of image types. For instance, it can be applied equally well to images of texture with weak but dominant orientations, or to images with strong vanishing points. Finally, the method is evaluated on both synthetic and real data, revealing that it is particularly robust to noise.

    Comment: 9 pages, 5 figures. Corrected errors in equation 1.
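
    The optimization described in the abstract is compact enough to sketch. The following Python fragment is a minimal illustration, not the authors' code: all names are invented, and a generic one-parameter radial model stands in for the Harris model (see the paper's equation 1 for the exact form). Each edgel is treated as a tiny oriented segment, corrected with a candidate parameter k, binned into a 1-D angular histogram, and scored by entropy; a bounded 1-D search then minimizes that entropy.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def undistort(pts, k):
        """One-parameter radial correction (a stand-in for the Harris model).
        pts are edgel coordinates normalized about the distortion centre."""
        r2 = np.sum(pts ** 2, axis=1, keepdims=True)
        return pts / np.sqrt(1.0 + k * r2)

    def hough_entropy(k, edgels, tangents, n_bins=180, eps=1e-3):
        """Entropy of the 1-D angular Hough histogram after correcting with k.
        Each edgel is a tiny segment (point +/- eps * tangent); undistorting
        both endpoints gives the corrected local orientation."""
        a = undistort(edgels - eps * tangents, k)
        b = undistort(edgels + eps * tangents, k)
        d = b - a
        theta = np.mod(np.arctan2(d[:, 1], d[:, 0]), np.pi)  # orientations in [0, pi)
        hist, _ = np.histogram(theta, bins=n_bins, range=(0.0, np.pi))
        p = hist[hist > 0] / hist.sum()
        return -np.sum(p * np.log(p))

    def self_calibrate(edgels, tangents):
        """Find the distortion parameter that minimizes the Hough entropy."""
        res = minimize_scalar(lambda k: hough_entropy(k, edgels, tangents),
                              bounds=(-1.0, 1.0), method='bounded')
        return res.x
    ```

    Straightened lines concentrate the orientation histogram into fewer bins, which is exactly what lower entropy measures.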

    Determining image distortion and PBS (point of best symmetry) in digital images using straight line matrices

    [EN] It is impossible to take accurate measurements in photogrammetry without first removing the distortion in images. This paper presents a methodology for correcting radial and tangential distortion and for determining the PBS (point of best symmetry) without knowledge of the interior orientation parameters (IOPs). An analytical plumb-line calibration method is used, measuring only the coordinates of points on straight lines, regardless of the position and direction of these lines within the image. Points belonging to multiple lines can also be used, since the effects on their X and Y coordinates are calculated independently. The results obtained on an image of a common scene, taken with a handheld non-metric camera, show a high degree of accuracy even with a minimum number of observables, and application to a calibrated grid for engineering purposes with a semi-metric camera yields optimal results even when using a single image. (C) 2016 Elsevier Ltd. All rights reserved.

    The authors wish to thank CITES España and Dirección General de Bienes Culturales y Enseñanzas Artísticas, de la Consejería de Educación, Cultura y Universidades de la Comunidad Autónoma de la Región de Murcia, Museo Nacional de Arqueología Subacuática. Financial support is gratefully acknowledged from Spanish "I+D+I MINECO" projects CTQ2011-28079-CO3-01 and 02 and CTQ2014-53736-C3-1-P supported by ERDF funds. The authors also wish to thank Mr. Manuel Planes and Dr. José Luis Moya, technical supervisors of the Electron Microscopy Service of the Universitat Politècnica de València.

    Herráez Boquera, J.; Denia Ríos, J. L.; Navarro Esteve, P. J.; Rodríguez Pereña, J.; Martín Sánchez, M. T. (2016). Determining image distortion and PBS (point of best symmetry) in digital images using straight line matrices. Measurement, 91:641-650. https://doi.org/10.1016/j.measurement.2016.05.051
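
    To make the plumb-line idea concrete, here is a minimal, hypothetical Python sketch; the names and the two-coefficient Brown-style model are assumptions, and the paper's own straight-line-matrix formulation is richer. Points measured along straight scene lines are corrected with candidate parameters, including a movable point of best symmetry, and the parameters are refined so that each corrected point set becomes collinear.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def correct(points, params):
        """Brown-style radial (k1, k2) + tangential (p1, p2) correction about
        a point of best symmetry (cx, cy). A simplification of the paper's model."""
        k1, k2, p1, p2, cx, cy = params
        x = points[:, 0] - cx
        y = points[:, 1] - cy
        r2 = x * x + y * y
        rad = 1.0 + k1 * r2 + k2 * r2 ** 2
        xc = x * rad + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
        yc = y * rad + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
        return np.column_stack([xc + cx, yc + cy])

    def line_residuals(params, lines):
        """Distance of each corrected point from the best-fit line through its
        group: zero for a perfectly straightened line (the plumb-line constraint)."""
        res = []
        for pts in lines:
            c = correct(pts, params)
            c = c - c.mean(axis=0)
            # The line normal is the singular vector of the smallest singular
            # value; residuals are the projections onto that normal.
            _, _, vt = np.linalg.svd(c, full_matrices=False)
            res.append(c @ vt[-1])
        return np.concatenate(res)

    def calibrate(lines, image_center):
        """lines: list of (N, 2) arrays of points along straight scene lines."""
        x0 = np.array([0, 0, 0, 0, *image_center], dtype=float)
        return least_squares(line_residuals, x0, args=(lines,)).x
    ```

    A call like calibrate(lines, image_center=(cx0, cy0)) returns the distortion coefficients together with the estimated PBS, starting the symmetry point at the image centre.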

    Algorithms for compensating optical distortions in digital images

    Bachelor's thesis by Vadym Mykolaiovych Pertsov, titled "Algorithms for compensating optical distortions in digital images." The thesis is devoted to algorithms for compensating optical distortion in digital images. It analyses existing distortion-compensation algorithms and identifies the most suitable compensation methods for digital images. The analysis is performed in the MATLAB application package.
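
    The thesis performs its analysis in MATLAB; as a language-neutral illustration of the compensation step itself, a Python/OpenCV equivalent might look as follows. The camera matrix, coefficients, and file names below are placeholder values, not results from the thesis.

    ```python
    import cv2
    import numpy as np

    # Placeholder intrinsics and distortion coefficients (k1, k2, p1, p2, k3);
    # in practice these come from a prior calibration.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    dist = np.array([-0.25, 0.07, 0.0, 0.0, 0.0])

    img = cv2.imread('distorted.jpg')
    # Refine the camera matrix for the undistorted view, then remap the image.
    new_K, roi = cv2.getOptimalNewCameraMatrix(K, dist, img.shape[1::-1], alpha=0)
    undistorted = cv2.undistort(img, K, dist, None, new_K)
    cv2.imwrite('undistorted.jpg', undistorted)
    ```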

    System Calibration between Non-metric Optical Camera and Range Camera

    Master's thesis, Department of Civil and Environmental Engineering, Seoul National University Graduate School, August 2013. Advisor: Kim Yong-il.

    Recently, indoor 3D modeling has attracted attention in various fields, and the need for related research is increasing. In particular, indoor 3D modeling based on the fusion of images from different types of sensors is becoming a necessity, and precise system calibration is essential for such fusion. In this study, system calibration was therefore performed on a camera system consisting of a non-metric optical camera and a range camera. Previous studies on system calibration between optical and range cameras focused mainly on reconstructing specific objects in 3D; they also lacked comparative evaluation of calibration methods, and their test-bed designs were insufficient. In this study, a new calibration test-bed was designed by considering the characteristics of the non-metric optical camera and the range camera. Relative orientation parameters were then derived by performing system calibration with two methods, single photo resection and block adjustment, and the results were compared. The comparison confirmed that reducing the correlation between relative orientation parameters and their standard deviations has a large effect on accuracy, and that block adjustment yields more reliable results. Based on these findings, this study proposes a more efficient system calibration procedure, together with a test-bed design and an image acquisition method.
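
    As a small illustration of the quantity the thesis estimates: given each camera's world-to-camera pose in the test-bed frame (for example from single photo resection), the relative orientation parameters follow by composing the two poses. This sketch uses assumed names and shows only the composition; in the thesis the parameters are estimated jointly in a block adjustment, which is what reduces their correlation.

    ```python
    import numpy as np

    def relative_orientation(R_opt, t_opt, R_rng, t_rng):
        """Pose of the range camera relative to the optical camera.

        Poses are world-to-camera: x_cam = R @ x_world + t. A world point then
        satisfies x_rng = R_rel @ x_opt + t_rel with the values returned here.
        """
        R_rel = R_rng @ R_opt.T
        t_rel = t_rng - R_rel @ t_opt
        return R_rel, t_rel
    ```

    Composing two independently resected poses propagates each resection's errors into R_rel and t_rel, which is consistent with the thesis's finding that the joint block adjustment is the more reliable route.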

    A UAV-enabled calibration method for remote cameras robust to localization uncertainty

    Several video applications rely on camera calibration, a key enabler for measuring metric parameters from images. For instance, monitoring environmental changes through remote cameras, such as glacier size changes, or measuring vehicle speed from security cameras, requires the cameras to be calibrated. Calibrating a camera is necessary to implement accurate computer vision techniques for the automated analysis of video footage. This automated analysis saves cost and time in a variety of fields, such as manufacturing, civil engineering, architecture and safety. The number of cameras installed and operated continues to increase, and a vast portion of them are "hard-to-reach" cameras: installed cameras that cannot be removed from their location without impacting the camera parameters or the camera's operational use, including remote sensing cameras and security cameras. Many of these cameras are not calibrated, and being able to calibrate them is a key need as applications for automated measurements from their video continue to grow.

    Existing calibration methods can be divided into two groups: object-based calibration, which relies on a calibration target of known dimensions, and self-calibration, which relies on camera motion or scene geometry constraints. However, these methods have not been adapted for remote cameras that are hard to reach and have large fields of view. Object-based calibration requires a tedious, manual process that is not suited to a large field of view, while self-calibration requires restrictive conditions to work correctly and thus does not scale to the wide variety of hard-to-reach cameras, with their many different parameters and viewing scenes. Based on this need, the research objective of this thesis is to develop a camera calibration method for hard-to-reach cameras. The method must satisfy a series of requirements caused by the remote status of the cameras being calibrated:

    • Be adapted to large fields of view, since these cameras cannot be accessed easily (which prevents the use of object-based calibration techniques)
    • Be scalable to various environments (which is not feasible using self-calibration techniques that require strict assumptions about the scene)
    • Be automated, to enable the calibration of the large number of already installed cameras
    • Be able to correct for the large non-linear distortion that is frequently present with these cameras

    In response to this need, the thesis proposes a solution that uses a drone or a robot as a moving target to collect the matching 3D and 2D points required for calibration. Localizing the target in 3D space and on the image is subject to errors, so the approach must be tested for its ability to calibrate cameras despite measurement uncertainties. This work demonstrates the success of the calibration approach using realistic simulations and real-world testing: the approach is robust to localization uncertainties, environment independent, and highly automated, in contrast to existing calibration techniques.

    First, this work defines a drone trajectory that covers the entire field of view and enables robust correspondence between 3D and 2D key points. The corresponding experiment evaluates the calibration quality when the 2D localization is subject to uncertainties. It demonstrates, through simulations of several cameras, that a moving target following this trajectory yields a complete training set and an accurate calibration, with an RMS reprojection error of 3.2 pixels on average. This is below the 3.6-pixel threshold derived in this thesis as corresponding to an accurate calibration. The drone design is then modified to add a marker that improves target detection accuracy. A second experiment demonstrates the robustness of this solution in challenging conditions, such as complex backgrounds for target detection; the modified design improves the calibration accuracy to an RMS reprojection error of 2.4 pixels on average and remains detectable despite backgrounds or flight conditions that complicate target detection.

    This research also develops a strategy to evaluate the impact of camera parameters, drone path parameters, and 3D and 2D localization uncertainties on calibration accuracy. Applying this strategy to 5000 simulated camera models leads to recommendations for the path parameters of the drone-based calibration approach and highlights the impact of camera parameters on calibration accuracy: specific sampling step lengths lead to a better calibration, and the drone-camera distance is directly related to the accuracy. The average RMS reprojection error over the 5000 cameras is 4 pixels. Linked to the speed measurement application, a 4-pixel error corresponds to a speed measurement error smaller than 0.5 km/h when measuring the speed of a vehicle 15 meters away with a pinhole camera of focal length 900 pixels.

    The knowledge gained from these experiments is applied in a real-world test, which completes the demonstration of the drone-based camera calibration approach. The real test uses a commercial drone and GPS, in an urban environment with a challenging background, and shows the steps to follow to reproduce the drone-based remote camera calibration technique. The calibration error equals 7.7 pixels and can be reduced if an RTK GPS is used as the 3D localization sensor. Finally, this work demonstrates, using an optimization process on several simulated cameras, that the sampling size can be reduced by more than half for a faster calibration while maintaining good calibration accuracy.

    Ph.D.
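
    The core of the approach, matching 3D drone positions (from GPS or RTK GPS) to 2D detections in the fixed camera's image, maps onto a standard 3D-2D calibration solver. The sketch below is an assumed minimal implementation using OpenCV, not the thesis code; the thesis additionally designs the trajectory and handles large non-linear distortion.

    ```python
    import cv2
    import numpy as np

    def calibrate_from_drone(world_pts, image_pts, image_size, f_guess):
        """Calibrate a fixed camera from one drone flight.

        world_pts: (N, 3) drone positions in a local metric frame (GPS/RTK).
        image_pts: (N, 2) matching detections of the drone/marker in the image.
        f_guess:   rough focal length in pixels, needed to seed the solver.
        """
        obj = [np.asarray(world_pts, np.float32)]  # one "view": the camera is fixed
        img = [np.asarray(image_pts, np.float32)]
        w, h = image_size
        # Non-planar 3D points require an initial intrinsic guess in OpenCV.
        K0 = np.array([[f_guess, 0, w / 2.0],
                       [0, f_guess, h / 2.0],
                       [0, 0, 1]], dtype=np.float64)
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj, img, (w, h), K0, None, flags=cv2.CALIB_USE_INTRINSIC_GUESS)
        return rms, K, dist
    ```

    The returned rms is the RMS reprojection error in pixels, the same metric the experiments above report (3.2, 2.4, 4 and 7.7 pixels).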