11 research outputs found

    Efficiency of Calibration of Distortion in UAV Vision Systems

    The UAV vision system is used for various tasks, such as cartography, the construction of 3D object models, the creation of gigapixel images, and navigation. Many of these tasks require image stitching. An obstacle to high-quality image stitching is the presence of aberrations in the camera lens, especially distortion. Eliminating this aberration therefore allows the stitching task to be performed more accurately and at lower computational cost.
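    As a rough illustration of the kind of correction involved, the sketch below applies and then inverts a radial (Brown-Conrady-style) distortion model on normalized image coordinates. This is a generic textbook model, not the paper's specific calibration procedure, and the coefficients k1 and k2 are hypothetical.

```python
import numpy as np

def radial_distort(points, k1, k2):
    """Apply a two-coefficient radial distortion model to normalized
    image coordinates (shape (n, 2)). Coefficients are illustrative."""
    r2 = np.sum(points**2, axis=1, keepdims=True)
    return points * (1.0 + k1 * r2 + k2 * r2**2)

def undistort(points, k1, k2, iterations=10):
    """Invert the distortion by fixed-point iteration: repeatedly
    re-estimate the undistorted point from the current radius."""
    undist = points.copy()
    for _ in range(iterations):
        r2 = np.sum(undist**2, axis=1, keepdims=True)
        undist = points / (1.0 + k1 * r2 + k2 * r2**2)
    return undist
```

    For typical small distortion coefficients the fixed-point iteration converges quickly, so a distorted point passed through `undistort` returns close to its original position.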

    Analysis of automotive camera sensor noise factors and impact on object detection

    Assisted and automated driving functions are increasingly deployed to improve safety and efficiency and to enhance the driver experience. However, key technical challenges remain, such as the degradation of perception sensor data due to noise factors. The quality of the data generated by sensors can directly impact the planning and control of the vehicle, and hence vehicle safety. This work builds on a recently proposed framework for analysing noise factors on automotive LiDAR sensors and deploys it for camera sensors, focusing on the specific disturbed sensor outputs via a detailed analysis and classification of automotive-camera-specific noise sources (30 noise factors are identified and classified in this work). The noise factor analysis identified two omnipresent and independent noise factors: obstruction and windshield distortion. These were modelled to generate noisy camera data, and their impact on the perception step, based on deep neural networks, was evaluated with the noise factors applied both independently and simultaneously. It is demonstrated that the performance degradation from the combination of noise factors is not simply the accumulated degradation from each single factor, which underlines the importance of analysing multiple noise factors simultaneously. The framework can thus support and enhance the use of simulation for the development and testing of automated vehicles through careful consideration of the noise factors affecting camera data.
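    The idea of applying noise factors independently or in combination can be mimicked very crudely in code. The sketch below uses toy stand-ins (an occluding patch for obstruction, a box blur for windshield distortion); these are hypothetical simplifications, not the paper's actual noise models.

```python
import numpy as np

def apply_obstruction(img, x0, y0, size):
    """Zero out a square patch -- a crude stand-in for a lens obstruction."""
    out = img.copy()
    out[y0:y0 + size, x0:x0 + size] = 0.0
    return out

def apply_windshield_blur(img, kernel=3):
    """Box-blur the image -- a crude stand-in for windshield distortion."""
    pad = kernel // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    acc = np.zeros((h, w))
    for dy in range(kernel):          # sum all kernel offsets
        for dx in range(kernel):
            acc += padded[dy:dy + h, dx:dx + w]
    return acc / (kernel * kernel)
```

    Composing the two, e.g. `apply_windshield_blur(apply_obstruction(img, 1, 1, 3))`, gives a "simultaneous" noisy image whose effect on a downstream detector need not equal the sum of the two individual effects.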

    Multichannel Optoelectronic Surveillance System

    Topicality. Multichannel optoelectronic systems have a wide range of applications, from the military and reconnaissance spheres to agriculture and robotics. They can be used for the following tasks: obtaining panoramic images of terrain, for the military and intelligence; creating maps of fields and plantations, for farmers; and expanding the field of view to 180° or more, in robotics. Existing multichannel systems are aimed at military applications and generally have large dimensions and considerable cost, which in turn necessitates special facilities to operate them. These shortcomings also narrow the range of individuals and enterprises that can use such systems. Given these factors, the improvement and development of new types of multichannel optoelectronic systems is clearly relevant. Object of study. The process of forming a digital image in a multichannel optoelectronic surveillance system. Subject of study. An intelligent camera for an unmanned aerial vehicle. Goal. To develop a model of an intelligent, small-size multichannel surveillance system for use with unmanned aerial vehicles.
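    The field-of-view expansion mentioned above can be sized with simple geometry. The helper below is a hypothetical sketch (not from the thesis) estimating how many identical cameras are needed to cover a target panoramic field of view, given each camera's field of view and the overlap required between adjacent channels for stitching.

```python
import math

def cameras_needed(target_fov_deg, camera_fov_deg, overlap_deg):
    """Minimum number of identical cameras covering target_fov_deg.
    After the first camera, each additional one contributes
    (camera_fov - overlap) degrees of new coverage."""
    if target_fov_deg <= camera_fov_deg:
        return 1
    effective = camera_fov_deg - overlap_deg
    return 1 + math.ceil((target_fov_deg - camera_fov_deg) / effective)
```

    For example, covering 180° with 60° lenses and 10° of stitching overlap requires four channels under this model.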

    Enhancement to Camera Calibration: Representation, Robust Statistics, and 3D Calibration Tool

    This thesis demonstrates the enhancement of camera calibration in three aspects: representation of pose, robust statistics, and a 3D calibration tool. Camera calibration is the reconstruction of digital camera information from digital images of an object in 3D space, since the digital images are 2D projections of a 3D object onto the camera sensor. It is the estimation of the interior orientation (IO) and exterior orientation (EO) parameters of a digital camera, and it is an essential part of image metrology: if the quality of camera calibration cannot be guaranteed, neither can the reliability of the subsequent analysis and applications based on digital images. The first enhancement concerns the representation of pose. A formal mathematical definition of singularity of representation is given, and an example shows how singularity can lead to difficulty or failure in optimization. The spherical coordinate system is introduced as a representation of camera pose, in place of other widely used representations, representing poses with respect to calibration-tool images in digital image processing. With the introduction of the v frame in digital images, the singularities of the spherical coordinate system are demonstrated mathematically. The second enhancement is the application of robust statistics to the optimization. In photogrammetry, it is typical to collect thousands of observed data points for bundle adjustment. Unexpected outliers in the observations are unavoidable, so the achieved accuracy may fall short of the goal. The least squares estimator is widely used in camera calibration, but its sensitivity to outliers makes the algorithm unreliable, and it can even fail to fit the observations. By closely analyzing and comparing the characteristics of the least squares estimator, robust estimators with alternative assumptions are shown to detect and down-weight outliers that are not handled well under the classical assumptions, providing a reliable fit to the observations. Among the possible robust estimators, two from the M-estimator family are applied to the optimization in an existing camera calibration algorithm, and the robustified method considerably improves calibration accuracy. A new metric, D̄, is introduced: the distance between two camera calibrations considering all of the estimated IO parameters. D̄ can be used to compare the performance of different estimators. After applying the robust estimator, the system improves camera calibration accuracy and performance by up to 25%. The influence of a modification to the robustified estimator is also considered, and it is established that the modification affects estimation accuracy. The third enhancement is the design and application of a 3D calibration tool for data collection. An all-new 3D calibration tool is designed to improve calibration accuracy over the 2D calibration tool, and the two are compared experimentally and theoretically. The experimental analysis, based on calibration results and the corresponding D̄ matrix, shows that the 3D tool improves accuracy. The theoretical analysis is based on the computed covariance matrix of the calibration without other impact factors. Together, these analyses show that a carefully designed 3D calibration tool yields more accurate calibration results than a 2D calibration tool.
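    The M-estimator idea can be sketched on a much smaller problem than bundle adjustment: a line fit by iteratively reweighted least squares, where a Huber-style weight function down-weights large residuals. The Huber loss is one common M-estimator choice used here for illustration; the two estimators actually applied in the thesis are not specified in this abstract.

```python
import numpy as np

def huber_weights(residuals, delta=1.0):
    """IRLS weights for the Huber M-estimator: weight 1 for small
    residuals, delta/|r| beyond delta, so outliers count less."""
    a = np.abs(residuals)
    return np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))

def robust_line_fit(x, y, delta=1.0, iterations=20):
    """Fit y = m*x + c by iteratively reweighted least squares."""
    A = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(y)
    for _ in range(iterations):
        sw = np.sqrt(w)[:, None]                 # weighted design matrix
        params, *_ = np.linalg.lstsq(A * sw, y * np.sqrt(w), rcond=None)
        w = huber_weights(y - A @ params, delta) # re-weight from residuals
    return params
```

    With one gross outlier among clean points, the ordinary least squares fit is pulled toward the outlier, while the robust fit recovers slope and intercept close to the true values because the outlier's weight shrinks at each iteration.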

    A Local Lens Distortion Correction Method Applied to Stereo Vision

    Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Graduate Program in Electrical Engineering, Florianópolis, 2014. Stereo vision is the process of recovering three-dimensional information about a scene, or a particular object in it, from the analysis of two two-dimensional images captured from different viewpoints, using an appropriate camera model. Cameras allow a rich representation of the scene compared with other types of sensors, such as laser, radar, and sonar, and are increasingly used in mobile robotics and automotive applications such as autonomous navigation and object and obstacle detection. Stereo vision systems are also used in metrology and remote sensing, and comprise three main steps: camera calibration, pixel correspondence, and 3D reconstruction. Lens distortion is one of the main factors limiting the accuracy of stereo reconstruction. This work studies how the reconstruction error behaves as the order of the distortion correction model increases, and proposes a new lens distortion correction method based on estimating a separate set of distortion correction coefficients for each region of the image: the image is split into smaller regions, and each region is compensated with a model of fixed order. Compared with the conventional method, which models the entire image with a single model, the proposed approach provides better compensation and reduces the depth error, as shown in experiments with synthetic images.
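    The per-region idea can be sketched as follows: given matched distorted/ideal point pairs, fit a separate first-order radial coefficient for each image region. This is an illustrative simplification (a 2×2 quadrant grid and a single k1 per region); the thesis's region layout and model order are not specified in this abstract.

```python
import numpy as np

def fit_region_coeffs(pts_distorted, pts_ideal):
    """Fit one radial coefficient k1 per image quadrant by least squares,
    using the model distorted = ideal * (1 + k1 * r^2) on normalized
    coordinates in [-1, 1]^2. Returns {quadrant: k1}."""
    region = lambda p: (int(p[0] > 0), int(p[1] > 0))  # 2x2 grid
    buckets = {}
    for pd, pi in zip(pts_distorted, pts_ideal):
        buckets.setdefault(region(pi), []).append((pd, pi))
    coeffs = {}
    for key, pairs in buckets.items():
        # distorted - ideal = ideal * k1 * r^2  =>  linear in k1
        A, b = [], []
        for pd, pi in pairs:
            r2 = pi[0]**2 + pi[1]**2
            A.extend([pi[0] * r2, pi[1] * r2])
            b.extend([pd[0] - pi[0], pd[1] - pi[1]])
        k1, *_ = np.linalg.lstsq(np.array(A)[:, None], np.array(b), rcond=None)
        coeffs[key] = float(k1[0])
    return coeffs
```

    On synthetic data where each quadrant is generated with its own coefficient, the per-region fit recovers each coefficient, whereas a single global model would have to average them.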

    A study on detection of risk factors of a toddler's fall injuries using visual dynamic motion cues

    The research in this thesis is intended to aid caregivers' supervision of toddlers to prevent accidental injuries, especially injuries due to falls in the home environment. There have been very few attempts to develop an automatic system to tackle young children's accidents, despite the fact that they are particularly vulnerable to home accidents and a caregiver cannot provide continuous supervision. Vision-based analysis methods have been developed to recognise toddlers' fall risk factors related to changes in their behaviour or environment. First, suggestions for preventing falls of young children at home were collected from well-known child-safety organisations. A large number of fall records of toddlers who had sought treatment at a hospital were analysed to identify fall risk factors. The factors include clutter acting as a tripping or slipping hazard on the floor, and a toddler moving around or climbing furniture or room structures. The major technical problem in detecting the risk factors is classifying foreground objects into human and non-human, and novel approaches are proposed for this classification. Unlike most existing studies, which focus on human appearance cues such as skin colour for human detection, the approaches in this thesis use cues related to dynamic motion. The first cue is based on the fact that there is relative motion between human body parts, while typical indoor clutter has no such parts with diverse motions. In addition, other motion cues are employed to differentiate a human from a pet, since a pet also moves its parts diversely: the angle changes of an ellipse fitted to each object, and the history of its actual heights, which capture the varied posture changes of humans and the different body sizes of pets. The methods work well as long as foreground regions are correctly segmented. (EThOS - Electronic Theses Online Service, United Kingdom)
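    The ellipse-orientation cue mentioned above can be computed from image second moments of a segmented foreground mask. The function below is a generic moments-based formula for the fitted ellipse's orientation, offered as a sketch; it is not necessarily the exact implementation used in the thesis.

```python
import numpy as np

def ellipse_angle(mask):
    """Orientation (radians) of the ellipse fitted to a binary mask,
    from central second moments of the foreground pixels."""
    ys, xs = np.nonzero(mask)
    x = xs - xs.mean()
    y = ys - ys.mean()
    mu20 = np.mean(x * x)
    mu02 = np.mean(y * y)
    mu11 = np.mean(x * y)
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
```

    Tracking this angle frame to frame yields the posture-change signal: a toddler standing up or climbing produces large angle changes, whereas static clutter produces none.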