8 research outputs found

    Robust Fusion of LiDAR and Wide-Angle Camera Data for Autonomous Mobile Robots

    Autonomous robots that assist humans in day-to-day living tasks are becoming increasingly popular. Autonomous mobile robots operate by sensing and perceiving their surrounding environment to make accurate driving decisions. A combination of several different sensors, such as LiDAR, radar, ultrasonic sensors and cameras, is utilized to sense the surrounding environment of autonomous vehicles. These heterogeneous sensors simultaneously capture various physical attributes of the environment. Such multimodality and redundancy of sensing need to be positively utilized for reliable and consistent perception of the environment through sensor data fusion. However, these multimodal sensor data streams differ from each other in many ways, such as temporal and spatial resolution, data format, and geometric alignment. For the subsequent perception algorithms to utilize the diversity offered by multimodal sensing, the data streams need to be spatially, geometrically and temporally aligned with each other. In this paper, we address the problem of fusing the outputs of a Light Detection and Ranging (LiDAR) scanner and a wide-angle monocular image sensor for free space detection. The outputs of the LiDAR scanner and the image sensor are of different spatial resolutions and need to be aligned with each other. A geometrical model is used to spatially align the two sensor outputs, followed by a Gaussian Process (GP) regression-based resolution matching algorithm to interpolate the missing data with quantifiable uncertainty. The results indicate that the proposed sensor data fusion framework significantly aids the subsequent perception steps, as illustrated by the performance improvement of an uncertainty-aware free space detection algorithm.
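    The GP-based resolution matching lends itself to a compact illustration. The sketch below is a minimal, assumed setup rather than the paper's exact model: sparse LiDAR depths, already projected to image pixel coordinates, are interpolated over a dense pixel grid with scikit-learn's GP regressor, which also returns a per-pixel standard deviation as the uncertainty estimate. The kernel choice, length scale, and sample coordinates are all illustrative.

```python
# Minimal sketch of GP-based LiDAR upsampling: sparse depth returns are
# treated as noisy observations of a depth field over image coordinates,
# and the GP predicts depth (with variance) at every queried pixel.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Sparse LiDAR returns projected into the image plane:
# (u, v) pixel coordinates and their measured depths (illustrative values).
uv_lidar = np.array([[120, 300], [140, 310], [400, 290], [420, 305]], dtype=float)
depth_lidar = np.array([8.2, 8.1, 15.4, 15.7])

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=50.0) + WhiteKernel(noise_level=0.05),
    normalize_y=True,
)
gp.fit(uv_lidar, depth_lidar)

# Query a dense pixel grid; the GP returns a mean depth and a standard
# deviation that quantifies interpolation uncertainty at each pixel.
uu, vv = np.meshgrid(np.arange(0, 640, 8), np.arange(280, 320, 4))
query = np.stack([uu.ravel(), vv.ravel()], axis=1)
depth_mean, depth_std = gp.predict(query, return_std=True)
```

    A downstream free-space detector can then weight or discard interpolated cells whose `depth_std` exceeds a threshold, which is the sense in which the fused output is uncertainty-aware.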

    Sensitivity analysis in a camera-LiDAR calibration model

    Recently, data fusion between a camera and a LiDAR-type depth sensor has become a problem of major interest in industry and engineering. The quality of the delivered 3D models depends greatly on a correct calibration between the two sensors. This paper presents a sensitivity analysis of a camera-LiDAR calibration model. The variability of each parameter was calculated individually by the Sobol method, based on the ANOVA technique, and by the FAST method, which is based on Fourier analysis. Multiple sets of parameters were simulated using the Monte Carlo and Latin Hypercube methods for the purpose of comparing the results of the sensitivity analysis. We identify which parameters are the most sensitive and the most prone to introduce error into our reconstruction platform. Statistics for the total and global sensitivity of each parameter are presented, together with results on the sensitivity ratio in the camera-LiDAR calibration, the computational cost, the simulation time, and the discrepancy and homogeneity of the simulated data.
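    As a rough illustration of the kind of Sobol analysis described above, the sketch below uses the SALib library with a toy reprojection-error surrogate standing in for the full camera-LiDAR calibration pipeline; the parameter names, bounds, and surrogate function are assumptions, not the paper's setup.

```python
# Illustrative Sobol sensitivity analysis with SALib. The "model" is a
# stand-in: a toy reprojection-error function of three extrinsic
# calibration parameters. A real analysis would evaluate the actual
# camera-LiDAR calibration error for each sampled parameter set.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["yaw", "pitch", "tx"],  # assumed calibration parameters
    "bounds": [[-0.05, 0.05], [-0.05, 0.05], [-0.10, 0.10]],
}

def reprojection_error(x):
    yaw, pitch, tx = x
    # Toy surrogate: error grows with angular misalignment and translation.
    return 500.0 * yaw**2 + 300.0 * pitch**2 + 50.0 * abs(tx)

X = saltelli.sample(problem, 1024)             # Saltelli sampling scheme
Y = np.apply_along_axis(reprojection_error, 1, X)
Si = sobol.analyze(problem, Y)

print("First-order indices:", Si["S1"])        # individual parameter effects
print("Total-order indices:", Si["ST"])        # effects including interactions
```

    Comparing the first-order and total-order indices is what reveals interaction effects: a parameter whose `ST` greatly exceeds its `S1` contributes mostly through interactions with other parameters.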

    LiDAR and Camera Detection Fusion in a Real Time Industrial Multi-Sensor Collision Avoidance System

    Collision avoidance is a critical task in many applications, such as ADAS (advanced driver-assistance systems), industrial automation and robotics. In an industrial automation setting, certain areas should be off-limits to an automated vehicle to protect people and high-valued assets. These areas can be quarantined by mapping (e.g., GPS) or via beacons that delineate a no-entry area. We propose a delineation method in which the industrial vehicle utilizes a LiDAR (Light Detection and Ranging) sensor and a single color camera to detect passive beacons, and model-predictive control to stop the vehicle from entering a restricted space. The beacons are standard orange traffic cones with a highly reflective vertical pole attached. The LiDAR can readily detect these beacons, but it suffers from false positives due to other reflective surfaces such as worker safety vests. Herein, we put forth a method for reducing false positive detections from the LiDAR by projecting the beacons in the camera imagery via a deep learning method and validating the detections using a neural network-learned projection from the camera to the LiDAR space. Experimental data collected at Mississippi State University's Center for Advanced Vehicular Systems (CAVS) show the effectiveness of the proposed system in retaining true detections while mitigating false positives.
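    The validation idea can be sketched as a gating step: camera detections are mapped into LiDAR space by a learned projection, and LiDAR candidates with no nearby projected camera detection are discarded as false positives. Everything below (the small MLP projection, the random placeholder training data, the gate distance, the 2D coordinate convention) is an illustrative assumption, not the authors' implementation.

```python
# Hedged sketch of cross-modal validation: keep a LiDAR beacon candidate
# only if it lies near some camera detection projected into LiDAR space.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder learned projection: image bbox center (u, v) -> LiDAR (x, y).
# In practice this would be trained on paired camera detections and
# surveyed beacon positions; random data is used here only so it runs.
camera_to_lidar = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
camera_to_lidar.fit(np.random.rand(200, 2), np.random.rand(200, 2))

def validate(lidar_candidates, camera_boxes, gate=0.5):
    """Keep LiDAR candidates within `gate` meters of a projected camera box."""
    projected = camera_to_lidar.predict(camera_boxes)  # (N, 2) in LiDAR frame
    kept = []
    for cand in lidar_candidates:
        if np.min(np.linalg.norm(projected - cand, axis=1)) < gate:
            kept.append(cand)  # confirmed by camera: likely a real beacon
    return np.array(kept)
```

    The gate distance trades recall against false-positive suppression: a reflective vest seen by the LiDAR but not detected as a cone by the camera falls outside every gate and is rejected.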

    Geometric model and calibration method for a solid-state LiDAR

    This paper presents a novel calibration method for solid-state LiDAR devices based on a geometrical description of their scanning system, which has variable angular resolution. Determining this distortion across the entire field of view of the system yields accurate and precise measurements that enable the device to be combined with other sensors. On the one hand, the geometrical model is formulated using the well-known Snell's law and the intrinsic optical assembly of the system; on the other hand, the proposed method describes the scanned scenario with an intuitive camera-like approach relating pixel locations to scanning directions. Simulations and experimental results show that the model fits real devices and that the calibration procedure accurately maps their varying resolution, so undistorted representations of the observed scenario can be provided. Thus, the calibration method proposed in this work is applicable and valid for existing scanning systems, improving their precision and accuracy by an order of magnitude.
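    A minimal sketch of the camera-like formulation, under assumed intrinsics and refractive indices: pixel indices map to ray directions through a pinhole-style model, and the vector form of Snell's law bends each ray at the scanner's optical interface. The specific parameter values are illustrative, not the paper's calibrated ones.

```python
# Camera-like scanning model plus Snell refraction (illustrative values).
import numpy as np

def pixel_to_ray(u, v, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Map a pixel (u, v) to a unit scanning direction (pinhole-style model)."""
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

def snell_refract(d, n, n1=1.0, n2=1.5):
    """Vector form of Snell's law: refract direction d at surface normal n."""
    d, n = d / np.linalg.norm(d), n / np.linalg.norm(n)
    r = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = r**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None  # total internal reflection: no transmitted ray
    return r * d + (r * cos_i - np.sqrt(1.0 - sin2_t)) * n

ray = pixel_to_ray(100, 200)
bent = snell_refract(ray, n=np.array([0.0, 0.0, -1.0]))
```

    Calibration then amounts to fitting the intrinsic parameters so that the predicted directions match observed ones across the whole field of view, capturing the variable angular resolution.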

    Virtual 3D reconstruction of complex urban environments

    This paper presents a methodology for the generation of three-dimensional models of urban environments. A multi-sensor terrestrial platform composed of a LiDAR, a spherical camera, a GPS and an IMU is used. The sensor data are synchronized with the navigation system and georeferenced. The digitization methodology focuses on three main processes: (1) three-dimensional reconstruction, in which noise in the 3D data is removed and distortion in the images is reduced, after which a panoramic image is built; (2) texturing, for which the algorithm is described in detail to ensure the least uncertainty in the color extraction process; and (3) mesh generation, where the octree-based meshing process is described, from the generation of the seed and the tessellation to the elimination of gaps in the meshes. Finally, a quantitative evaluation of the proposal is made and compared with other existing approaches in the state of the art, and the results obtained are discussed in detail.
    García-Moreno, A.; González-Barbosa, J. (2020). Reconstrucción virtual tridimensional de entornos urbanos complejos. Revista Iberoamericana de Automática e Informática industrial, 17(1):22-33. https://doi.org/10.4995/riai.2019.11203
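    The texturing step, in particular, reduces to projecting each georeferenced point into the panorama and sampling a color. The sketch below assumes an equirectangular panorama and points already expressed in the panoramic camera frame; the function and variable names are illustrative, not the paper's.

```python
# Hedged sketch of the texturing step: assign each 3D point an RGB color
# by spherical projection into an equirectangular panorama.
import numpy as np

def colorize(points_cam, panorama):
    """Assign an RGB color to each 3D point via spherical projection."""
    h, w, _ = panorama.shape
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    r = np.linalg.norm(points_cam, axis=1)
    theta = np.arctan2(x, z)                   # azimuth in [-pi, pi]
    phi = np.arcsin(np.clip(y / r, -1, 1))     # elevation in [-pi/2, pi/2]
    u = ((theta + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    v = ((phi + np.pi / 2) / np.pi * (h - 1)).astype(int)
    return panorama[v, u]                      # (N, 3) colors, one per point
```

    Points whose projection is occluded or falls near image seams are the main source of color uncertainty, which is why the paper treats this extraction step with particular care.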