1,962 research outputs found

    Reconstrução tridimensional de ambientes usando LIDAR e câmara

    Three-dimensional reconstruction is still a challenging area with multiple applications in architecture and robotics. Several technologies are in use today, such as stereoscopy and structured light; however, none achieves the precise geometric results that are usually required. LiDAR has evolved into the de facto technology for three-dimensional reconstruction, achieving unmatched geometric accuracy. Yet it is unable to register the color of objects, so the usual solution is to pair it with a camera. In this work, a set of algorithms and techniques for three-dimensional reconstruction with a LiDAR laser scanner and a camera was therefore developed, along with a 3D scanner for registering real-world scenes. In particular, an innovative calibration method was developed for the laser scanner, which outperformed a similar calibration method. Finally, the reconstruction process was tested with real data. The geometric reconstruction was very accurate, but the color reconstruction was not, mainly because of the poor calibration of the camera. Master's degree in Mechanical Engineering
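
    The fusion step described above, attaching camera color to LiDAR geometry, essentially reduces to projecting each 3D point into the image with the calibrated intrinsics and extrinsics and sampling the pixel underneath. A minimal Python/NumPy sketch of that generic idea, assuming an idealized pinhole camera with no lens distortion (the function and its interface are illustrative, not taken from the thesis):

```python
import numpy as np

def colorize_point_cloud(points_lidar, image, K, R, t):
    """Assign RGB colors to LiDAR points by projecting them into a camera image.

    points_lidar : (N, 3) array of points in the LiDAR frame.
    image        : (H, W, 3) RGB image from the calibrated camera.
    K            : (3, 3) camera intrinsic matrix.
    R, t         : rotation (3, 3) and translation (3,) mapping LiDAR -> camera.
    """
    # Transform points into the camera frame.
    pts_cam = points_lidar @ R.T + t

    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0]

    # Pinhole projection: u = fx * X/Z + cx, v = fy * Y/Z + cy.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    # Round to pixel coordinates and keep points that land inside the image.
    px = np.round(uv).astype(int)
    h, w = image.shape[:2]
    valid = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)

    colors = image[px[valid, 1], px[valid, 0]]  # sample one RGB value per point
    return pts_cam[valid], colors
```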

    Calibration of structured light system using unidirectional fringe patterns

    3D shape measurement has a variety of applications in many areas, such as manufacturing, design, medicine and entertainment. Many technologies have been successfully implemented over the past decades to measure the three-dimensional information of an object. Measurement techniques can be broadly classified into contact and non-contact methods. One of the most widely used contact methods is the Coordinate Measuring Machine (CMM), which dates back to the late 1950s. It is by far one of the most accurate methods, achieving sub-micrometer accuracy. However, it is difficult to use for soft objects, since the probe may deform the surface being measured, and the scanning can be time-consuming.

    To address the problems of contact methods, non-contact methods such as time of flight (TOF), triangulation-based laser scanning, depth from defocus, and stereo vision were invented. The main limitation of the TOF laser scanner is that it does not provide high depth resolution. Triangulation-based laser scanning, on the other hand, scans the object line by line, which can be time-consuming. The depth-from-defocus method obtains 3D information about the object by relating depth to defocus blur analysis; however, it is difficult to capture the 3D geometry of objects that do not have rich texture. The stereo vision system imitates human vision: it uses two cameras to capture pictures of the object from different angles, and the 3D coordinate information is obtained through triangulation. Its main limitation is that when the object has a uniform texture, it becomes difficult to find corresponding pairs between the two cameras.

    The structured light system (SLS) was introduced to address the above limitations. An SLS is an extension of the stereo vision system in which one of the cameras is replaced by a projector. Pre-designed structured patterns are projected onto the object using a video projector. The main advantage of this system is that it does not rely on the object's texture to identify corresponding pairs, but the patterns have to be coded so that the camera-projector correspondence can be established. There are many codification techniques, such as pseudo-random, binary, and N-ary codification. Pseudo-random codification uses laser speckles or structure-coded speckle patterns that vary in both directions; its resolution is limited, however, because each coded structure occupies multiple pixels in order to be unique. Binary codification projects a sequence of binary patterns. Its main advantage is robustness to noise, as only two intensity levels are used (0 and 255). Its resolution is also limited, because the width of the narrowest coding stripe must exceed the pixel size, and it takes many images to encode a scene that occupies a large number of pixels. To address this, N-ary codification uses multiple intensity levels between 0 and 255, so the total number of coded patterns can be reduced; its main limitation is that the intensity-ratio analysis may be subject to noise.

    The Digital Fringe Projection (DFP) system was developed to address the limitations of binary and N-ary codification. In DFP, computer-generated sinusoidal patterns are projected onto the object, and the camera captures the distorted patterns from another angle. The main advantage of this method is that it is robust to noise, ambient light and reflectivity, because phase information is used instead of intensity. Despite the merit of using phase, achieving highly accurate 3D geometric reconstruction also crucially depends on calibrating the camera-projector system. Unlike camera calibration, projector calibration is difficult, mainly because the projector cannot capture images like a camera. Early attempts calibrated the camera-projector system using a reference plane: the object geometry was reconstructed by comparing the phase difference between the object and the reference plane. However, the chosen reference plane must simultaneously possess high planarity and good optical properties, which is typically difficult to achieve, and such calibration may be inaccurate if non-telecentric lenses are used. The projector can also be calibrated by treating it as the inverse of a camera. This addresses the limitations of the reference-plane method, since the exact intrinsic and extrinsic parameters of the imaging lenses are obtained and a perfect reference plane is no longer required. This calibration typically requires projecting orthogonal patterns onto the object, so it can only be used for structured light systems with a video projector; grating slits and interferometers cannot be calibrated this way, as such systems cannot produce orthogonal patterns.

    In this research, we introduce a novel calibration method that uses patterns in only a single direction. We theoretically prove that there is one degree of freedom of redundancy in conventional calibration methods, making it possible to use unidirectional patterns instead of orthogonal fringe patterns. Experiments show that, over a measurement range of 200 mm x 150 mm x 120 mm, our results are comparable to those obtained with the conventional calibration method. Evaluated by repeatedly measuring a sphere of 147.726 mm diameter, our measurement accuracy averages 0.20 mm with a standard deviation of 0.12 mm.
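
    The phase codification that makes DFP robust can be written compactly: with N sinusoidal patterns shifted by 2*pi/N, the wrapped phase at each pixel follows from a least-squares fit of the intensity samples. A short Python/NumPy sketch of the standard N-step phase-shifting formula (a textbook illustration, not the calibration code from this work):

```python
import numpy as np

def wrapped_phase(images):
    """Recover the wrapped phase map from N phase-shifted fringe images.

    images : list of (H, W) arrays, the k-th captured with a sinusoidal
             pattern shifted by 2*pi*k/N, i.e. I_k = A + B*cos(phi + 2*pi*k/N).
    Returns the wrapped phase phi in (-pi, pi].
    """
    n = len(images)
    deltas = 2.0 * np.pi * np.arange(n) / n
    stack = np.stack(images, axis=0).astype(float)

    # Least-squares estimate of the phase from the N intensity samples.
    num = np.tensordot(np.sin(deltas), stack, axes=1)   # ~ -(N*B/2) * sin(phi)
    den = np.tensordot(np.cos(deltas), stack, axes=1)   # ~  (N*B/2) * cos(phi)
    return np.arctan2(-num, den)
```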

    Automatic Extrinsic Self-Calibration of Mobile Mapping Systems Based on Geometric 3D Features

    Mobile Mapping is an efficient technology to acquire spatial data of the environment. Such spatial data is fundamental for applications in crisis management, civil engineering or autonomous driving. The extrinsic calibration of the Mobile Mapping System is a decisive factor that affects the quality of the spatial data. Many existing extrinsic calibration approaches require artificial targets in a time-consuming calibration procedure, and they are usually designed for a specific combination of sensors and are thus not universally applicable. We introduce a novel extrinsic self-calibration algorithm that is fully automatic and completely data-driven. The fundamental assumption of the self-calibration is that the calibration parameters are estimated best when the derived point cloud best represents the real physical circumstances. The cost function we use to evaluate this is based on geometric features derived from the 3D structure tensor of the local neighborhood of each point. We compare different cost functions based on geometric features, as well as a cost function based on the Rényi quadratic entropy, to evaluate their suitability for the self-calibration. Furthermore, we test the self-calibration on a synthetic dataset and two different real datasets, which differ in environment, scale and the utilized sensors. We show that the self-calibration is able to extrinsically calibrate Mobile Mapping Systems with different combinations of mapping and pose estimation sensors, such as a 2D laser scanner to a Motion Capture System, and a 3D laser scanner to a stereo camera and ORB-SLAM2. For the first dataset, the parameters estimated by our self-calibration lead to a more accurate point cloud than two comparative approaches. For the second dataset, acquired via vehicle-based mobile mapping, our self-calibration achieves results comparable to a manually refined reference calibration, while being universally applicable and fully automated.
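
    The geometric features referred to here are commonly derived from the eigenvalues of the 3D structure tensor, i.e. the covariance matrix of a point's local neighborhood; dimensionality measures such as linearity and planarity can then drive a self-calibration cost function. A hedged Python/NumPy sketch of these standard features (the paper's exact feature set and weighting may differ):

```python
import numpy as np

def geometric_features(points):
    """Linearity, planarity and sphericity from the 3D structure tensor
    of a local point neighborhood (e.g. k nearest neighbors of a query point).

    points : (k, 3) array, the neighborhood of one point.
    Assumes a non-degenerate neighborhood (largest eigenvalue > 0).
    """
    centered = points - points.mean(axis=0)
    # 3x3 structure (covariance) tensor of the neighborhood.
    tensor = centered.T @ centered / len(points)
    # Eigenvalues sorted in descending order: l1 >= l2 >= l3 >= 0.
    l1, l2, l3 = np.linalg.eigvalsh(tensor)[::-1]

    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    sphericity = l3 / l1
    return linearity, planarity, sphericity
```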

    Robust Fusion of LiDAR and Wide-Angle Camera Data for Autonomous Mobile Robots

    Autonomous robots that assist humans in day-to-day living tasks are becoming increasingly popular. Autonomous mobile robots operate by sensing and perceiving their surrounding environment to make accurate driving decisions. A combination of several different sensors, such as LiDAR, radar, ultrasound sensors and cameras, is utilized to sense the surrounding environment of autonomous vehicles. These heterogeneous sensors simultaneously capture various physical attributes of the environment. Such multimodality and redundancy of sensing needs to be positively utilized for reliable and consistent perception of the environment through sensor data fusion. However, these multimodal sensor data streams differ from each other in many ways, such as temporal and spatial resolution, data format, and geometric alignment. For the subsequent perception algorithms to exploit the diversity offered by multimodal sensing, the data streams need to be spatially, geometrically and temporally aligned with each other. In this paper, we address the problem of fusing the outputs of a Light Detection and Ranging (LiDAR) scanner and a wide-angle monocular image sensor for free-space detection. The outputs of the LiDAR scanner and the image sensor have different spatial resolutions and need to be aligned with each other. A geometrical model is used to spatially align the two sensor outputs, followed by a Gaussian Process (GP) regression-based resolution-matching algorithm to interpolate the missing data with quantifiable uncertainty. The results indicate that the proposed sensor data fusion framework significantly aids the subsequent perception steps, as illustrated by the performance improvement of an uncertainty-aware free-space detection algorithm.
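
    The resolution-matching step, interpolating sparse projected LiDAR depths to camera resolution with quantifiable uncertainty, can be sketched with an off-the-shelf Gaussian Process regressor. A minimal Python example using scikit-learn (the kernel choice and hyperparameters are illustrative; the paper's GP model is likely more elaborate):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def upsample_depth(lidar_uv, lidar_depth, query_uv):
    """Interpolate sparse projected LiDAR depths at dense pixel locations,
    returning a per-pixel uncertainty estimate alongside the mean.

    lidar_uv    : (N, 2) pixel coordinates of projected LiDAR returns.
    lidar_depth : (N,) measured depths at those pixels.
    query_uv    : (M, 2) pixel coordinates where depth is wanted.
    """
    # RBF kernel for smooth depth variation + white noise for sensor noise.
    kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=0.05)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(lidar_uv, lidar_depth)

    mean, std = gp.predict(query_uv, return_std=True)  # std quantifies uncertainty
    return mean, std
```

    Since exact GP inference scales cubically with the number of training points, a practical system would run such a regression on local image patches rather than on a whole frame at once.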

    3D object reconstruction using computer vision: reconstruction and characterization applications for external human anatomical structures

    Doctoral thesis. Informatics Engineering. Faculty of Engineering. Universidade do Porto. 201

    Progress in industrial photogrammetry by means of markerless solutions

    174 p. This thesis focuses on the development and advanced use of markerless photogrammetry methodologies in industrial applications. Photogrammetry is a 3D optical measurement technique that encompasses multiple configurations and approaches. In this study, measurement procedures, models and image processing strategies have been developed that go beyond conventional photogrammetry and seek to apply solutions from other fields of computer vision to industrial applications. While industrial photogrammetry requires artificial targets to define the points or elements of interest, this thesis considers the reduction and even the elimination of both passive and active targets as practical alternatives. Most measurement systems use targets to define control points, relate the different perspectives, obtain accuracy, and automate the measurements. Although in many situations the use of targets is not restrictive, there are industrial applications where it considerably conditions and restricts the measurement procedures employed in inspection. Clear examples are the verification and quality control of serial parts, and the measurement and tracking of prismatic elements relative to a given reference system. It is at this point that markerless photogrammetry can be combined with or complement traditional solutions in order to improve on current capabilities.
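
    Markerless photogrammetry replaces coded targets with correspondences found on the object's natural texture. A minimal OpenCV sketch of that idea, matching ORB features between two views (illustrative only; the thesis develops considerably more advanced measurement models and processing strategies):

```python
import cv2

def match_natural_features(img1, img2, max_matches=200):
    """Match natural image features between two grayscale views, replacing
    the coded targets of conventional industrial photogrammetry.
    """
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Hamming distance for binary ORB descriptors; cross-check for symmetry.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = [kp1[m.queryIdx].pt for m in matches[:max_matches]]
    pts2 = [kp2[m.trainIdx].pt for m in matches[:max_matches]]
    return pts1, pts2
```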

    Sensor fusion in driving assistance systems

    Life in developed and developing countries is highly dependent on road and urban motor transport. This activity involves a high cost for its active and passive users in terms of pollution and accidents, which are largely attributable to the human factor. New developments in safety and driving assistance, called Advanced Driving Assistance Systems (ADAS), are intended to improve safety in transportation and, in the mid-term, lead to autonomous driving. ADAS, like human driving, are based on sensors that provide information about the environment, and sensor reliability is as crucial for ADAS applications as sensing abilities are for human driving. One way to improve sensor reliability is Sensor Fusion: developing novel strategies for environment modeling with the help of several sensors and obtaining enhanced information from the combination of the available data. This thesis offers a novel solution for obstacle detection and classification in automotive applications using sensor fusion with two sensors that are highly available on the market: a visible-spectrum camera and a laser scanner. Cameras and lasers are commonly used sensors in the scientific literature, increasingly affordable and ready to be deployed in real-world applications. The proposed solution provides detection and classification of some obstacles commonly present on the road, such as pedestrians and cyclists. Novel approaches for detection and classification are explored, from the classification of point cloud clusters obtained from the laser scanner, to domain adaptation techniques for creating synthetic image datasets, including intelligent cluster extraction and ground detection and removal in point clouds. International Doctoral Mention. Official Doctoral Program in Electrical, Electronic and Automation Engineering. Chair: Cristina Olaverri Monreal. Secretary: Arturo de la Escalera Hueso. Committee member: José Eugenio Naranjo Hernández
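
    Ground detection and removal, one of the point cloud processing steps explored in the thesis, is often implemented as a RANSAC plane fit. A compact Python/NumPy sketch of that generic technique (the thesis' own method may differ; the thresholds here are arbitrary):

```python
import numpy as np

def remove_ground_ransac(points, n_iters=200, dist_thresh=0.15, seed=0):
    """Split a point cloud into obstacle and ground points by fitting a
    plane with RANSAC, a common pre-step before clustering obstacles.

    points : (N, 3) array of LiDAR points (x, y, z).
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)

    for _ in range(n_iters):
        # Fit a candidate plane through 3 random points.
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        normal /= norm
        d = -normal @ sample[0]

        # Points within dist_thresh of the plane are inlier candidates.
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers

    return points[~best_inliers], points[best_inliers]  # obstacles, ground
```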

    Impact of different trajectories on extrinsic self-calibration for vehicle-based mobile laser scanning systems

    The trend toward further integration of automotive electronic control unit functionality into domain control units, together with the rise of computing-intensive driver assistance systems, has led to a demand for high-performance automotive computation platforms. These platforms have to fulfill stringent safety requirements. One promising approach is the use of performance computation units in combination with safety controllers in a single control unit. Such systems require adequate communication links between the computation units. While Ethernet is widely used, a high-speed serial link communication protocol supported by an Infineon AURIX safety controller appears to be a promising alternative. In this paper, a high-speed serial link IP core is presented, which enables this type of communication interface for field-programmable gate array (FPGA)-based computing units. In our test setup, the IP core was implemented in a high-performance Xilinx Zynq UltraScale+, which communicated with an Infineon AURIX via high-speed serial link and Ethernet. Initial bandwidth measurements demonstrated that the high-speed serial link is an interesting candidate for inter-chip communication, with bandwidths reaching up to 127 Mbit/s for stream transmissions.