
    Pattern Matching Analysis of Electron Backscatter Diffraction Patterns for Pattern Centre, Crystal Orientation and Absolute Elastic Strain Determination: Accuracy and Precision Assessment

    Pattern matching between target electron backscatter patterns (EBSPs) and dynamically simulated EBSPs was used to determine the pattern centre (PC) and crystal orientation, using a global optimisation algorithm. Systematic analysis of error and precision with this approach was carried out using dynamically simulated target EBSPs with known PC positions and orientations. Results showed that the errors in determining the PC and orientation were < 10^{-5} of the pattern width and < 0.01° respectively for the undistorted full-resolution images (956×956 pixels). The introduction of noise, optical distortion and image binning was shown to have some influence on the error, although better angular resolution was achieved with pattern matching than with conventional Hough transform-based analysis. The accuracy of PC determination for the experimental case was explored using the High Resolution (HR-) EBSD method, but with a dynamically simulated EBSP as the reference pattern. This was demonstrated through a sample rotation experiment and strain analysis around an indent in interstitial-free steel.
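    The pattern-matching idea can be illustrated with a toy sketch (not the paper's implementation, which matches dynamically simulated patterns with a global optimiser): recover a known pattern shift by maximising normalized cross-correlation over a grid of candidate shifts.

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation between two equally sized images."""
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float((a * b).mean())

    rng = np.random.default_rng(0)
    reference = rng.random((64, 64))                   # stand-in for a simulated EBSP
    target = np.roll(reference, (3, -2), axis=(0, 1))  # "experimental" pattern, shifted

    # Brute-force search over candidate shifts (the paper uses a global optimiser)
    best = max(
        ((dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)),
        key=lambda s: ncc(np.roll(reference, s, axis=(0, 1)), target),
    )
    print(best)  # recovered shift: (3, -2)
    ```

    In the paper, the search space is the pattern-centre position and crystal orientation rather than a pixel shift, and each candidate requires re-simulating the pattern.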

    Parameterized Synthetic Image Data Set for Fisheye Lens

    Based on different projection geometries, a fisheye image can be represented as a parameterized non-rectilinear image. Deep neural networks (DNNs) are one solution for extracting parameters for fisheye image feature description. However, a large number of images is required to train a reasonable prediction model for a DNN. In this paper, we propose to extend the scale of the training dataset using parameterized synthetic images. This effectively boosts the diversity of images and avoids the data-scale limitation. To simulate different viewing angles and distances, we adopt controllable, parameterized projection processes during transformation. The reliability of the proposed method is demonstrated by testing on images captured by our fisheye camera. The synthetic dataset is the first that can be extended to a large-scale labelled fisheye image dataset. It is accessible via: http://www2.leuphana.de/misl/fisheye-data-set/. Comment: 2018 5th International Conference on Information Science and Control Engineering
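    A common parameterized fisheye projection is the equidistant model, where image radius grows linearly with the angle from the optical axis, r = f·θ. A minimal sketch (the paper's exact projection model and parameter values are assumptions here):

    ```python
    import numpy as np

    def equidistant_project(point, f=300.0, cx=320.0, cy=240.0):
        """Project a 3D camera-frame point with the equidistant fisheye model r = f*theta."""
        x, y, z = point
        theta = np.arctan2(np.hypot(x, y), z)   # angle from the optical axis
        phi = np.arctan2(y, x)                  # azimuth in the image plane
        r = f * theta
        return cx + r * np.cos(phi), cy + r * np.sin(phi)

    u, v = equidistant_project((0.0, 0.0, 1.0))
    print(u, v)  # an on-axis point lands at the principal point: 320.0 240.0
    ```

    Varying f and the principal point (cx, cy), or swapping in other models (stereographic, orthographic), is one way to generate a parameterized family of synthetic fisheye views.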

    LOW-COST STRUCTURED-LIGHT 3D CAPTURE SYSTEM DESIGN

    Three-dimensional measurement is an important and popular topic in computer vision. Most of the 3D capture products currently on the market are high-end and expensive. They are not targeted at consumers, but rather at research, medical, or industrial users. Very few aim to provide a solution for home and small-business applications. Our goal is to fill this gap by using only low-cost components to build a 3D capture system that can satisfy the needs of this market segment. In our research, we present a low-cost 3D capture system based on the structured-light method. The system is built around the HP TopShot LaserJet Pro M275. For our capture device, we use the 8.0 Mpixel camera that is part of the M275. We augment this hardware with two 3M MPro 150 VGA (640×480) pocket projectors. We also describe an analytical approach to predicting the achievable resolution of the reconstructed 3D object based on differentials and small-signal theory, and an experimental procedure for validating that the system under test meets the specifications for reconstructed-object resolution predicted by our analytical model. By comparing our experimental measurements from the camera-projector system with simulation results based on the model for this system, we conclude that our prototype system has been correctly configured and calibrated, and that the analytical models give us an effective means of specifying system parameters to achieve a given target resolution for the reconstructed object.
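    The differential, small-signal idea can be sketched for plain triangulation (a simplified stand-in for the thesis's full camera-projector model, not taken from the text): from z = f·b/d, a small disparity change dd maps to a depth change dz ≈ z²/(f·b)·dd, so depth resolution degrades quadratically with distance.

    ```python
    def depth_resolution(z, focal_px, baseline, disparity_step=1.0):
        """Small-signal depth resolution of a triangulation system.

        From z = focal_px * baseline / d, a disparity change dd maps to
        dz ≈ z**2 / (focal_px * baseline) * dd.
        """
        return z**2 / (focal_px * baseline) * disparity_step

    # Illustrative numbers (not the system's actual parameters):
    # 0.5 m working distance, 2500 px focal length, 10 cm camera-projector baseline
    print(depth_resolution(z=0.5, focal_px=2500.0, baseline=0.1))  # 0.001 (m per disparity step)
    ```

    The same differentiation technique extends to the lateral resolution and to the projector's pixel pitch; the thesis validates such predictions against measurements on the prototype.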

    Affine multi-view modelling for close range object measurement

    In photogrammetry, sensor modelling with 3D point estimation is a fundamental topic of research. Perspective frame cameras offer the mathematical basis for close range modelling approaches. The norm is to employ robust bundle adjustments for simultaneous parameter estimation and 3D object measurement. In 2D to 3D modelling strategies image resolution, scale, sampling and geometric distortion are prior factors. Non-conventional image geometries that implement uncalibrated cameras are established in computer vision approaches; these aim for fast solutions at the expense of precision. The projective camera is defined in homogeneous terms and linear algorithms are employed. An attractive sensor model disembodied from projective distortions is the affine. Affine modelling has been studied in the contexts of geometry recovery, feature detection and texturing in vision, however multi-view approaches for precise object measurement are not yet widely available. This project investigates affine multi-view modelling from a photogrammetric standpoint. A new affine bundle adjustment system has been developed for point-based data observed in close range image networks. The system allows calibration, orientation and 3D point estimation. It is processed as a least squares solution with high redundancy providing statistical analysis. Starting values are recovered from a combination of implicit perspective and explicit affine approaches. System development focuses on retrieval of orientation parameters, 3D point coordinates and internal calibration with definition of system datum, sensor scale and radial lens distortion. Algorithm development is supported with method description by simulation. Initialization and implementation are evaluated with the statistical indicators, algorithm convergence and correlation of parameters. Object space is assessed with evaluation of the 3D point correlation coefficients and error ellipsoids. 
Sensor scale is checked by comparing camera systems using quality and accuracy metrics. For independent method evaluation, testing is carried out against a perspective bundle adjustment tool with similar indicators. Test datasets are initialized from precise reference image networks. Real affine image networks are acquired with an optical system (~1 Mpixel CCD cameras with a 0.16× telecentric lens). Analysis of the tests ascertains that the affine method results in an RMS image misclosure at the sub-pixel level and precisions of a few tenths of a micron in object space.
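    The affine camera at the heart of the adjustment can be sketched as a linear model, x = M·X + t with a 2×3 matrix M, whose eight parameters admit a direct least-squares fit from point correspondences. This is a simplified illustration, not the thesis's bundle adjustment, which additionally estimates calibration and radial distortion across a network of images:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    M_true = np.array([[120.0, 2.0, 5.0], [1.0, 118.0, -3.0]])  # illustrative affine camera
    t_true = np.array([256.0, 256.0])

    X = rng.random((20, 3))                 # 3D object points
    x = X @ M_true.T + t_true               # noise-free affine image observations

    # Linear least squares for the 8 affine parameters (design row per point: [X 1])
    A = np.hstack([X, np.ones((20, 1))])
    params, *_ = np.linalg.lstsq(A, x, rcond=None)   # shape (4, 2): rows of M.T, then t
    M_est, t_est = params[:3].T, params[3]
    print(np.allclose(M_est, M_true), np.allclose(t_est, t_true))  # True True
    ```

    Because the affine model has no perspective division, the problem stays linear; the redundancy of a multi-view network then supports the statistical analysis (parameter correlations, error ellipsoids) described above.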

    Impact of calibration on a LiDAR based on stereoscopic vision

    Every year 1.3 million people die due to road accidents. Given that the main culprit is human error, autonomous driving is the path to avert and prevent these numbers. An autonomous vehicle must be able to perceive its surroundings, and therefore requires vision sensors. Of the many kinds of vision sensors available, the three main automotive vision sensors are cameras, RADAR and LiDAR. LiDARs have the unique capability of capturing a high-resolution point cloud, thus enabling 3D object detection. However, current LiDAR technology is still immature and expensive, which makes it unattractive to the automotive market. We propose an alternative LiDAR concept, the LiDART, that generates a point cloud using only stereoscopic vision and dot projection. LiDART takes advantage of mass-produced components such as a dot-pattern projector and a stereoscopic camera rig, thus inherently overcoming the problems of cost and maturity. Nonetheless, LiDART has four key challenges: noise, correspondence, centroiding and calibration. This thesis focuses on the calibration aspects of LiDART and aims to investigate the systematic error introduced by standard calibration techniques. In this work, the quality of stereoscopic calibration was assessed both experimentally and numerically. The experimental validation consisted of assembling a prototype and calibrating it using standard calibration techniques for stereoscopic vision. Calibration quality was assessed by estimating the distance to a target. As for the numerical assessment, a simulation tool was developed to cross-validate most experimental results. The obtained results show that standard calibration techniques result in a considerable systematic error, reaching 30% of the correct distance. Nonetheless, the estimated error depends monotonically on distance.
Consequently, the systematic error can be significantly reduced if better calibration methods, specifically designed for the application at hand, are used in the future.
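    The kind of systematic error studied here can be illustrated with the standard stereo depth relation z = f·B/d (a hypothetical sketch with made-up numbers, not the thesis's calibration pipeline): a miscalibrated baseline propagates directly into a proportional depth bias.

    ```python
    def stereo_depth(disparity_px, focal_px, baseline_m):
        """Depth from stereo disparity: z = f * B / d."""
        return focal_px * baseline_m / disparity_px

    true_baseline = 0.100        # metres (illustrative)
    assumed_baseline = 0.097     # 3% calibration error in the baseline
    focal = 1400.0               # pixels (illustrative)
    z_true = 2.0                 # metres

    d = focal * true_baseline / z_true        # disparity actually observed
    z_est = stereo_depth(d, focal, assumed_baseline)
    print(round(100 * (z_est - z_true) / z_true, 1))  # systematic depth error: -3.0 percent
    ```

    Because the bias scales with the miscalibrated parameter, a monotonic, distance-dependent error of the kind reported above is exactly what a better, application-specific calibration procedure can remove.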