54 research outputs found

    Multi-environment Georeferencing of RGB-D Panoramic Images from Portable Mobile Mapping – a Perspective for Infrastructure Management

    Get PDF
    High-resolution, accurately georeferenced RGB-D images are the basis for 3D image spaces and 3D street-view web services, which are already used commercially for infrastructure management. Mobile mapping systems (MMS) enable fast and efficient data acquisition of infrastructure. Most MMS used outdoors rely on direct georeferencing, which achieves absolute accuracies at the centimetre level in open areas. Under GNSS shadowing, however, the accuracy of direct georeferencing quickly degrades to the decimetre or even metre level. MMS used indoors, by contrast, are mostly based on SLAM. Most SLAM algorithms, however, are optimised for low latency and real-time performance and therefore accept compromises in accuracy, map quality, and maximum extent. The goal of this work is to capture high-resolution RGB-D images in a variety of environments and to georeference them accurately and reliably. For data acquisition, a powerful, image-focused, backpack-carried MMS was developed, consisting of a multi-head panoramic camera, two multi-beam LiDAR scanners, and a tactical-grade combined GNSS and IMU navigation unit. All sensors are precisely synchronised and provide access to their raw data. The complete system was calibrated in test fields using bundle-block-based and feature-based methods, a prerequisite for integrating kinematic sensor data. For accurate and reliable georeferencing in different environments, a multi-stage georeferencing approach was developed that combines different sensor data and georeferencing methods. Direct and LiDAR SLAM-based georeferencing provide initial poses for the subsequent image-based georeferencing using an extended SfM pipeline. Image-based georeferencing yields a precise but sparse trajectory suitable for georeferencing the images. To obtain a dense trajectory that is also suitable for georeferencing the LiDAR data, direct georeferencing was supported with poses from the image-based georeferencing. Comprehensive performance investigations in three extensive, challenging test areas show the possibilities and limits of our georeferencing approach. The three test areas, in a city centre, a forest, and a building, represent real-world conditions with limited GNSS reception, poor lighting, moving objects, and repetitive geometric patterns. Image-based georeferencing achieved the best accuracies, with mean precision in the range of 5 mm to 7 mm. The absolute accuracy was 85 mm to 131 mm, an improvement by a factor of 2 to 7 over direct and LiDAR SLAM-based georeferencing. Direct georeferencing supported by coordinate updates (CUPT) from image-based poses led to a slightly degraded mean precision in the range of 13 mm to 16 mm, while the mean absolute accuracy did not differ significantly from that of image-based georeferencing. The accuracies achieved in challenging environments confirm earlier investigations under optimal conditions and are of the same order of magnitude as the results of other research groups. They can be used to create street-view services for infrastructure management in challenging environments. Accurately and reliably georeferenced RGB-D images hold great potential for future visual localisation and AR applications.
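    As a rough, hedged sketch of the coordinate-update (CUPT) idea described above, the snippet below supports a dense, directly georeferenced trajectory with sparse but more precise image-based positions, treating each image-based pose as a Kalman-style position update; the function names, noise values, and the simple neighbour smoothing are illustrative assumptions, not the thesis's actual processing chain.

```python
import numpy as np

def cupt_fusion(dense_positions, dense_cov, sparse_updates, update_cov):
    """Blend a dense (direct-georeferencing) trajectory with sparse,
    more precise image-based positions via simple coordinate updates.

    dense_positions : (N, 3) array of positions along the dense trajectory
    dense_cov       : scalar variance assumed for each dense position
    sparse_updates  : dict {index: (3,) position} of image-based poses
    update_cov      : scalar variance assumed for each image-based position
    """
    fused = dense_positions.copy()
    for idx, z in sparse_updates.items():
        # Kalman-style gain: relative weight of the update versus the prediction.
        k = dense_cov / (dense_cov + update_cov)
        correction = k * (np.asarray(z) - fused[idx])
        fused[idx] += correction
        # Spread a decaying fraction of the correction to neighbouring epochs so
        # the dense trajectory stays smooth between sparse updates (illustrative only).
        for offset in range(1, 10):
            w = 1.0 - offset / 10
            if idx + offset < len(fused):
                fused[idx + offset] += w * correction
            if idx - offset >= 0:
                fused[idx - offset] += w * correction
    return fused

# Example: a 100-pose dense trajectory corrected by two image-based poses.
dense = np.cumsum(np.random.normal(0.5, 0.05, size=(100, 3)), axis=0)
updates = {30: dense[30] + np.array([0.08, -0.05, 0.02]), 70: dense[70] + 0.05}
fused = cupt_fusion(dense, dense_cov=0.01, sparse_updates=updates, update_cov=0.0001)
```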

    Modeling and Simulation in Engineering

    Get PDF
    This book provides an open platform to establish and share knowledge developed by scholars, scientists, and engineers from all over the world about various applications of modeling and simulation in the product design process across engineering fields. The book consists of 12 chapters arranged in two sections (3D Modeling and Virtual Prototyping), reflecting the multidimensionality of applications related to modeling and simulation. Some of the most recent modeling and simulation techniques, as well as some of the most accurate and sophisticated software for treating complex systems, are applied. All the original contributions in this book are joined by the basic principle of a successful modeling and simulation process: as complex as necessary, and as simple as possible. The idea is to manipulate the simplifying assumptions in a way that reduces the complexity of the model (in order to enable real-time simulation) without compromising the precision of the results.

    Localization of Autonomous Vehicles in Urban Environments

    Full text link
    The future of applications such as last-mile delivery, infrastructure inspection and surveillance bets big on employing small autonomous drones and ground robots in cluttered urban settings where precise positioning is critical. However, when navigating close to buildings, GPS-based localisation of robotic platforms is noisy due to obscured reception and multi-path reflection. Localisation methods using introspective sensors like monocular and stereo cameras mounted on the platforms offer a better alternative as they are suitable for both indoor and outdoor operations. However, the estimated trajectory exhibits inherent drift in the seven degrees of freedom that capture scale, rotation and translation, and this drift needs to be corrected. The theme of the thesis is to use a pre-existing 3D model to supplement the pose estimation from a visual navigation system, reducing incremental drift and thereby improving localisation accuracy. The novel framework developed for the monocular camera first extracts the geometric relationship between the pixels of the calibrated camera and the 3D points on the model. These geometric constraints, when used in addition to the relative pose constraints typically used in Simultaneous Localisation and Mapping (SLAM) algorithms, provide superior trajectory estimation. Further, scale drift correction is proposed using a novel Sim(3) optimisation procedure and successfully demonstrated using a unique dataset that embodies many urban localisation challenges. The techniques developed for stereo camera localisation align the textured 3D stereo scans with a 3D model and estimate the associated camera pose. The idea is to solve the image registration problem between the projection of the 3D scan and images whose poses are accurately known with respect to the 3D model. The 2D motion parameters are then mapped to the 3D space for camera pose estimation. Novel image registration techniques are developed which combine image edge information with traditional approaches and show successful results.
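    The scale-drift correction above relies on similarity transforms (Sim(3)), which extend rigid-body rotation and translation with a scale factor, giving the seven degrees of freedom mentioned in the abstract. The snippet below is a minimal sketch, not the thesis's optimisation procedure: it estimates and applies a single closed-form (Umeyama-style) Sim(3) alignment between an estimated and a reference trajectory; all names and the synthetic data are assumptions.

```python
import numpy as np

def apply_sim3(points, s, R, t):
    """Apply a similarity transform x' = s * R @ x + t to an (N, 3) array."""
    return s * points @ R.T + t

def estimate_sim3(src, dst):
    """Closed-form (Umeyama-style) alignment of two matched point sets.
    Returns scale s, rotation R, translation t so that dst ≈ s * R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, D, Vt = np.linalg.svd(dst_c.T @ src_c / len(src))
    S = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:          # enforce a proper rotation
        S[2, 2] = -1
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / src_c.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Example: recover the scale drift between an estimated and a reference trajectory.
ref = np.random.rand(50, 3) * 10
est = apply_sim3(ref, 0.8, np.eye(3), np.array([1.0, -2.0, 0.5]))   # drifted copy
s, R, t = estimate_sim3(est, ref)
corrected = apply_sim3(est, s, R, t)       # corrected ≈ ref
```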

    Semantic location extraction from crowdsourced data

    Get PDF
    Crowdsourced Data (CSD) has recently received increased attention in many application areas, including disaster management. Convenience of production and use, data currency and abundance are some of the key reasons for this high interest. Conversely, quality issues such as incompleteness, questionable credibility and low relevancy prevent the direct use of such data in important applications like disaster management. Moreover, the availability of location information in CSD is problematic, as it remains very low on many crowdsourcing platforms such as Twitter. Also, the recorded location mostly relates to the mobile device or user location and often does not represent the event location. In CSD, the event location is discussed descriptively in the comments in addition to the recorded location (which is generated by the mobile device's GPS or the mobile communication network). This study attempts to semantically extract the CSD location information with the help of an ontological gazetteer and other available resources. 2011 Queensland flood tweets and Ushahidi Crowd Map data were semantically analysed to extract the location information with the support of the Queensland Gazetteer, which was converted into an ontological gazetteer, and a global gazetteer. Some preliminary results show that the use of ontologies and semantics can improve the accuracy of place name identification in CSD and the process of location information extraction.
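    A minimal sketch of the gazetteer look-up underlying such semantic location extraction is shown below; it uses a toy in-memory gazetteer rather than the ontological Queensland Gazetteer used in the study, and all names, coordinates, and the example tweet are illustrative assumptions.

```python
import re

# Toy gazetteer: place name -> (latitude, longitude). The study instead uses an
# ontological gazetteer derived from the Queensland Gazetteer plus a global gazetteer.
GAZETTEER = {
    "brisbane": (-27.4698, 153.0251),
    "toowoomba": (-27.5598, 151.9507),
    "lockyer valley": (-27.5500, 152.3500),
}

def extract_locations(text, gazetteer=GAZETTEER):
    """Return gazetteer entries whose names appear in a crowdsourced message."""
    text = text.lower()
    hits = {}
    for name, coords in gazetteer.items():
        # Word-boundary match so 'lockyer valley' is not found inside other words.
        if re.search(r"\b" + re.escape(name) + r"\b", text):
            hits[name] = coords
    return hits

tweet = "Flood waters rising fast near the Lockyer Valley, roads to Toowoomba cut."
print(extract_locations(tweet))
# {'toowoomba': (-27.5598, 151.9507), 'lockyer valley': (-27.55, 152.35)}
```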

    Virtual 3D reconstruction of complex urban environments

    Full text link
    This paper presents a methodology for the generation of three-dimensional models of urban environments. A multi-sensor terrestrial platform composed of a LIDAR, a spherical camera, GPS and IMU systems is used. The sensor data are synchronized with the navigation system and georeferenced. The digitalization methodology is focused on three main processes. (1) Three-dimensional reconstruction, in which the noise in the 3D data is eliminated and the distortion in the images is reduced; a panoramic image is then built. (2) Texturing, for which the algorithm is described in detail to ensure the least uncertainty in the color extraction process. (3) Mesh generation, where the octree-based meshing process is described, from the generation of the seed and the tessellation to the elimination of gaps in the meshes. Finally, a quantitative evaluation of the proposal is made and compared with other existing approaches in the state of the art. The results obtained are discussed in detail.
    García-Moreno, A.; González-Barbosa, J. (2020). Reconstrucción virtual tridimensional de entornos urbanos complejos. Revista Iberoamericana de Automática e Informática Industrial, 17(1):22-33. https://doi.org/10.4995/riai.2019.11203
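    To illustrate the color-extraction (texturing) step in such a pipeline, the sketch below projects 3D points given in the spherical camera frame onto an equirectangular panorama and samples a per-point color; this is a simplified stand-in for the paper's calibrated LiDAR-to-panoramic-camera model, and the function names and synthetic data are assumptions.

```python
import numpy as np

def colorize_points(points_cam, panorama):
    """Assign RGB colors to 3D points given in the spherical camera frame
    by projecting them onto an equirectangular panorama of shape (H, W, 3)."""
    h, w, _ = panorama.shape
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    r = np.linalg.norm(points_cam, axis=1)
    lon = np.arctan2(y, x)                         # azimuth in [-pi, pi]
    lat = np.arcsin(np.clip(z / r, -1.0, 1.0))     # elevation in [-pi/2, pi/2]
    # Equirectangular mapping: longitude -> image column, latitude -> image row.
    u = ((lon + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    v = ((np.pi / 2 - lat) / np.pi * (h - 1)).astype(int)
    return panorama[v, u]

# Example with a synthetic panorama and random points in front of the camera.
pano = (np.random.rand(512, 1024, 3) * 255).astype(np.uint8)
pts = np.random.randn(1000, 3) + np.array([5.0, 0.0, 0.0])
colors = colorize_points(pts, pano)    # (1000, 3) RGB values, one per point
```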

    Geometric, Semantic, and System-Level Scene Understanding for Improved Construction and Operation of the Built Environment

    Full text link
    Recent advances in robotics and enabling fields such as computer vision, deep learning, and low-latency data passing offer significant potential for developing efficient and low-cost solutions for improved construction and operation of the built environment. Examples of such potential solutions include the introduction of automation in environment monitoring, infrastructure inspections, asset management, and building performance analyses. In an effort to advance the fundamental computational building blocks for such applications, this dissertation explored three categories of scene understanding capabilities: 1) Localization and mapping for geometric scene understanding that enables a mobile agent (e.g., robot) to locate itself in an environment, map the geometry of the environment, and navigate through it; 2) Object recognition for semantic scene understanding that allows for automatic asset information extraction for asset tracking and resource management; 3) Distributed coupling analysis for system-level scene understanding that allows for discovery of interdependencies between different built-environment processes for system-level performance analyses and response planning. First, this dissertation advanced Simultaneous Localization and Mapping (SLAM) techniques for more convenient and lower-cost locating capabilities than previous work. To provide a versatile Real-Time Location System (RTLS), an occupancy grid mapping enhanced visual SLAM (vSLAM) was developed to support path planning and continuous navigation, which cannot be implemented directly on vSLAM's original feature map. The system's localization accuracy was experimentally evaluated with a set of visual landmarks. The achieved marker position measurement accuracy ranges from 0.039 m to 0.186 m, proving the method's feasibility and applicability in providing real-time localization for a wide range of applications. In addition, a Self-Adaptive Feature Transform (SAFT) was proposed to improve such an RTLS's robustness in challenging environments. As an example implementation, the SAFT descriptor was implemented with a learning-based descriptor and integrated into a vSLAM for experimentation. The evaluation results on two public datasets proved the feasibility and effectiveness of SAFT in improving the matching performance of learning-based descriptors for locating applications. Second, this dissertation explored vision-based 1D barcode marker extraction for automated object recognition and asset tracking that is more convenient and efficient than the traditional methods of using barcode or asset scanners. As an example application in inventory management, a 1D barcode extraction framework was designed to extract 1D barcodes from a video scan of a built environment. The performance of the framework was evaluated with video scan data collected from an active logistics warehouse near Detroit Metropolitan Airport (DTW), demonstrating its applicability in automating inventory tracking and management applications. Finally, this dissertation explored distributed coupling analysis for understanding interdependencies between processes affecting the built environment and its occupants, allowing for more accurate performance and response analyses than previous research. In this research, a Lightweight Communications and Marshalling (LCM)-based distributed coupling analysis framework and a message wrapper were designed.
    The proposed framework and message wrapper were tested with analysis models from wind engineering and structural engineering, where they demonstrated the ability to link analysis models from different domains and reveal key interdependencies between the involved built-environment processes.
    PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/155042/1/lichaox_1.pd
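    As a rough illustration of how a sparse vSLAM feature map can be turned into an occupancy grid that supports path planning and continuous navigation (the enhancement described above), the snippet below rasterises 3D map points into a 2D grid; the resolution, bounds, and names are assumptions rather than the dissertation's actual implementation.

```python
import numpy as np

def points_to_occupancy(map_points, resolution=0.1, bounds=((-10, 10), (-10, 10))):
    """Rasterise sparse vSLAM map points (N, 3) into a 2D occupancy grid.
    Cells containing at least one point are marked occupied (1); others free (0)."""
    (xmin, xmax), (ymin, ymax) = bounds
    nx = int((xmax - xmin) / resolution)
    ny = int((ymax - ymin) / resolution)
    grid = np.zeros((ny, nx), dtype=np.uint8)
    ix = ((map_points[:, 0] - xmin) / resolution).astype(int)
    iy = ((map_points[:, 1] - ymin) / resolution).astype(int)
    inside = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    grid[iy[inside], ix[inside]] = 1
    return grid

# Example: mark the cells hit by a synthetic wall of feature points at y = 3 m.
wall = np.column_stack([np.linspace(-5, 5, 200), np.full(200, 3.0), np.zeros(200)])
grid = points_to_occupancy(wall)    # occupied cells form a line in the grid
```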