    Fast and robust 3D feature extraction from sparse point clouds

    Matching 3D point clouds, a critical operation in map building and localization, is difficult with Velodyne-type sensors due to the sparse and non-uniform point clouds that they produce. Standard methods from dense 3D point clouds are generally not effective. In this paper, we describe a feature-based approach using Principal Components Analysis (PCA) of neighborhoods of points, which results in mathematically principled line and plane features. The key contribution in this work is to show how this type of feature extraction can be done efficiently and robustly even on non-uniformly sampled point clouds. The resulting detector runs in real-time and can be easily tuned to have a low false positive rate, simplifying data association. We evaluate the performance of our algorithm on an autonomous car at the Mcity Test Facility using a Velodyne HDL-32E, and we compare our results against the state-of-the-art NARF keypoint detector. © 2016 IEEE
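    A minimal sketch of the eigenvalue test behind such PCA features (the function name and thresholds are illustrative, not taken from the paper): one dominant eigenvalue of the neighborhood covariance indicates a line, two dominant eigenvalues indicate a plane.

    ```python
    import numpy as np

    def classify_neighborhood(points, linear_thresh=0.9, planar_thresh=0.9):
        """Classify a point neighborhood as 'line', 'plane', or 'scatter'
        from the eigenvalues of its covariance matrix (illustrative thresholds)."""
        centered = points - points.mean(axis=0)
        cov = centered.T @ centered / len(points)
        evals, evecs = np.linalg.eigh(cov)   # ascending eigenvalues of a symmetric matrix
        l1, l2, l3 = evals[::-1]             # l1 >= l2 >= l3
        linearity = (l1 - l2) / l1           # near 1 for line-like neighborhoods
        planarity = (l2 - l3) / l1           # near 1 for plane-like neighborhoods
        if linearity > linear_thresh:
            return "line", evecs[:, 2]       # dominant eigenvector = line direction
        if planarity > planar_thresh:
            return "plane", evecs[:, 0]      # smallest eigenvector = plane normal
        return "scatter", None
    ```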

    Progress on ISPRS Benchmark on Multisensory Indoor Mapping and Positioning

    This paper presents the design of the benchmark dataset on multisensory indoor mapping and positioning (MIMAP), which is sponsored by ISPRS scientific initiatives. The benchmark dataset includes point clouds captured by an indoor mobile laser scanning (IMLS) system in indoor environments of various complexity. The benchmark aims to stimulate and promote research in the following three fields: (1) SLAM-based indoor point cloud generation; (2) automated BIM feature extraction from point clouds, with an emphasis on the elements, such as floors, walls, ceilings, doors, windows, stairs, lamps, switches, and air outlets, that are involved in building management and navigation tasks; and (3) low-cost multisensory indoor positioning, focusing on the smartphone platform solution. MIMAP provides a common framework for the evaluation and comparison of LiDAR-based SLAM, BIM feature extraction, and smartphone indoor positioning methods.

    Dense Point Cloud Extraction From Oblique Imagery

    With the increasing availability of low-cost digital cameras with small or medium sized sensors, more and more airborne images are available at high resolution, which enhances the possibility of establishing three-dimensional models for urban areas. High accuracy in the representation of buildings in urban areas is required for asset valuation or disaster recovery. Many automatic methods for modeling and reconstruction are applied to aerial images together with Light Detection and Ranging (LiDAR) data. If LiDAR data are not provided, manual steps must be applied, which results in a semi-automated technique. The automated extraction of 3D urban models can be aided by the automatic extraction of dense point clouds: the denser the point clouds, the easier the modeling and the higher the accuracy. Oblique aerial imagery also provides more facade information than nadir images, such as building height and texture, so a method for automatic dense point cloud extraction from oblique images is desired. In this thesis, a modified workflow for the automated extraction of dense point clouds from oblique images is proposed and tested. The results reveal that this modified workflow works well and that a very dense point cloud can be extracted from only two oblique images, with slightly higher accuracy in flat areas than the one extracted by the original workflow. The original workflow was established by previous research at the Rochester Institute of Technology (RIT) for point cloud extraction from nadir images. For oblique images, a first modification is proposed in the feature detection part by replacing the Scale-Invariant Feature Transform (SIFT) algorithm with the Affine Scale-Invariant Feature Transform (ASIFT) algorithm. Then, in order to realize a very dense point cloud, the Semi-Global Matching (SGM) algorithm is implemented in the second modification to compute the disparity map from a stereo image pair, which can then be used to reproject pixels back to a point cloud. A noise removal step is added in the third modification. The point cloud from the modified workflow is much denser than the result from the original workflow. An accuracy assessment is made at the end to evaluate the point cloud extracted from the modified workflow. From two flat areas, subsets of points are selected from both the original and modified workflows, and planes are fitted to them. The Mean Squared Error (MSE) of the points to the fitted plane is compared: the point subsets from the modified workflow have slightly lower MSEs than the ones from the original workflow. This suggests that a much denser and more accurate point cloud can lead to clear roof borders for roof extraction and improve the possibility of 3D feature detection for 3D point cloud registration.
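    The SGM step of such a workflow can be sketched with OpenCV's stereo matcher (this is not the thesis code; the file names and the precomputed reprojection matrix Q are assumptions, and the image pair is assumed to be rectified):

    ```python
    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified stereo pair (assumed)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    Q = np.load("Q.npy")  # 4x4 reprojection matrix from rectification (assumed precomputed)

    sgm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,   # search range; must be divisible by 16
        blockSize=5,
        P1=8 * 5 * 5,         # smoothness penalties (typical settings)
        P2=32 * 5 * 5,
        uniquenessRatio=10,
    )
    disparity = sgm.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

    points = cv2.reprojectImageTo3D(disparity, Q)  # per-pixel 3D coordinates
    cloud = points[disparity > 0]                  # keep valid disparities: N x 3 point cloud
    ```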

    Registration And Feature Extraction From Terrestrial Laser Scanner Point Clouds For Aerospace Manufacturing

    Aircraft wing manufacture is becoming increasingly digitalised. For example, it is becoming possible to produce on-line digital representations of individual structural elements, components and tools as they are deployed during assembly processes. When it comes to monitoring a manufacturing environment, imaging systems can be used to track objects as they move about the workspace, comparing actual positions, alignments, and spatial relationships with the digital representation of the manufacturing process. Active imaging systems such as laser scanners and laser trackers can capture measurements within the manufacturing environment, which can be used to deduce information about both the overall stage of manufacture and the progress of individual tasks. This paper is concerned with the in-line extraction of spatial information such as the location and orientation of drilling templates, which are used with hand drilling tools to ensure drilled holes are accurately located. In this work, a construction grade terrestrial laser scanner, the Leica RTC360, is used to capture an example aircraft wing section in mid-assembly from several scan locations. Point cloud registration uses 1.5" white matte spherical targets that are interchangeable with the SMR targets used by the Leica AT960 MR laser tracker, ensuring that scans are connected to an established metrology control network used to define the coordinate space. Point cloud registration was achieved to sub-millimetre accuracy when compared to the laser tracker network. The locations of drilling templates on the surface of the wing skin are automatically extracted from the captured and registered point clouds. When compared to laser tracker referenced hole centres, laser scanner drilling template holes agree to within 0.2 mm.
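    As an illustration of how spherical target centres can be recovered for such a registration, here is a sketch under the assumption that the target points have already been segmented (a production pipeline would add RANSAC-style outlier rejection before fitting):

    ```python
    import numpy as np

    def fit_sphere(points):
        """Linear least-squares sphere fit: |p|^2 = 2 c.p + (r^2 - |c|^2).
        Returns (center, radius) for an N x 3 array of points."""
        A = np.c_[2.0 * points, np.ones(len(points))]
        b = (points ** 2).sum(axis=1)
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        center = w[:3]
        radius = np.sqrt(w[3] + center @ center)
        return center, radius

    # A 1.5" target has a nominal radius of 19.05 mm; comparing the fitted
    # radius against this value is one way to reject false detections.
    ```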

    Accessible path finding for historic urban environments: feature extraction and vectorization from point clouds

    Sidewalk inventory is a topic whose importance is increasing together with the widespread use of smart city management. In order to manage a city properly and make informed decisions, it is necessary to know its real conditions. Furthermore, when planning and calculating cultural routes within the city, these routes must take into account the specific needs of all users. Therefore, it is important to know the condition of the city's sidewalk network as well as its physical and geometrical characteristics. Typically, sidewalk networks are generated based on existing cartographic data, and sidewalk attributes are gathered through crowdsourcing. In this paper, the sidewalk network of a historic city was produced starting from point cloud data. The point cloud was semantically segmented into "roads" and "sidewalks", and the clusters of points on sidewalk surfaces were then used to compute sidewalk attributes and to generate a vector layer composed of nodes and edges. The vector layer was then used to compute accessible paths between Points of Interest, using QGIS. Tests made on a real case study, the historic city and UNESCO site of Sabbioneta (Italy), show a vectorization accuracy of 98.7%. In the future, the vector layers and the computed paths could be used to generate maps for city planners, and to develop web or mobile phone routing apps.
    Ministerio de Ciencia e Innovación | Ref. RYC2020-029193-
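    A toy sketch of the routing idea (node names, edge attributes, and the width threshold are assumptions, not the paper's schema; the paper computes paths in QGIS, and networkx is used here only for illustration):

    ```python
    import networkx as nx

    # Sidewalk vector layer as a graph: nodes are junctions, edges carry
    # attributes computed from the point cloud.
    G = nx.Graph()
    G.add_edge("A", "B", length=50.0, min_width=1.6)
    G.add_edge("B", "C", length=40.0, min_width=0.7)   # too narrow for a wheelchair

    # Keep only traversable edges, then route by length.
    accessible = nx.subgraph_view(G, filter_edge=lambda u, v: G[u][v]["min_width"] >= 0.9)
    try:
        path = nx.shortest_path(accessible, "A", "C", weight="length")
    except nx.NetworkXNoPath:
        path = None  # no accessible route between the two points of interest
    print(path)      # -> None here, because the B-C edge is filtered out
    ```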

    Feature-assisted interactive geometry reconstruction in 3D point clouds using incremental region growing

    Reconstructing geometric shapes from point clouds is a common task that is often accomplished by experts manually modeling geometries in CAD-capable software. State-of-the-art workflows based on fully automatic geometry extraction are limited by point cloud density and memory constraints, and require pre- and post-processing by the user. In this work, we present a framework for interactive, user-driven, feature-assisted geometry reconstruction from arbitrarily sized point clouds. Based on seeded region-growing point cloud segmentation, the user interactively extracts planar pieces of geometry and utilizes contextual suggestions to point out plane surfaces, normal and tangential directions, and edges and corners. We implement a set of feature-assisted tools for high-precision modeling tasks in architecture and urban surveying scenarios, enabling instant-feedback interactive point cloud manipulation on large-scale data collected from real-world building interiors and facades. We evaluate our results through systematic measurement of the reconstruction accuracy, and through interviews with domain experts who deploy our framework in a commercial setting and give both structured and subjective feedback.
    Comment: 13 pages, submitted to Computers & Graphics Journal
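    A minimal sketch of seeded region growing on a point cloud (the parameter values, unit-normal assumption, and fixed-radius neighborhood are illustrative choices, not the paper's implementation):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def grow_plane(points, normals, seed, angle_thresh_deg=5.0, radius=0.05):
        """Grow a planar region from a user-picked seed index by collecting
        neighbors whose (unit) normals agree with the seed normal."""
        tree = cKDTree(points)
        cos_thresh = np.cos(np.radians(angle_thresh_deg))
        region, frontier = {seed}, [seed]
        n_seed = normals[seed]
        while frontier:
            idx = frontier.pop()
            for j in tree.query_ball_point(points[idx], r=radius):
                if j not in region and abs(normals[j] @ n_seed) > cos_thresh:
                    region.add(j)       # normal agrees: accept and keep growing
                    frontier.append(j)
        return np.fromiter(region, dtype=int)
    ```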

    A Generalized Multi-Modal Fusion Detection Framework

    LiDAR point clouds have become the most common data source in autonomous driving. However, due to the sparsity of point clouds, accurate and reliable detection cannot be achieved in specific scenarios. Because of their complementarity with point clouds, images are receiving increasing attention. Although they have had some success, existing fusion methods either perform hard fusion or do not fuse in a direct manner. In this paper, we propose a generic 3D detection framework called MMFusion, using multi-modal features. The framework aims to achieve accurate fusion between LiDAR and images to improve 3D detection in complex scenes. Our framework consists of two separate streams: the LiDAR stream and the camera stream, which are compatible with any single-modal feature extraction network. The Voxel Local Perception Module in the LiDAR stream enhances local feature representation, and the Multi-modal Feature Fusion Module then selectively combines feature outputs from the different streams to achieve better fusion. Extensive experiments show that our framework not only outperforms existing baselines but also improves their detection, especially of cyclists and pedestrians, on the KITTI benchmark, with strong robustness and generalization capabilities. We hope that our work will stimulate more research into multi-modal fusion for autonomous driving tasks.
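    One common way to selectively combine features from two such streams is a learned per-channel gate; the sketch below illustrates that general idea only and does not reproduce the paper's Multi-modal Feature Fusion Module (the shared feature grid and channel count are assumptions):

    ```python
    import torch
    import torch.nn as nn

    class GatedFusion(nn.Module):
        """Selectively blend LiDAR and camera feature maps with a learned gate."""

        def __init__(self, channels: int):
            super().__init__()
            self.gate = nn.Sequential(
                nn.Conv2d(2 * channels, channels, kernel_size=1),
                nn.Sigmoid(),
            )

        def forward(self, lidar_feat: torch.Tensor, cam_feat: torch.Tensor) -> torch.Tensor:
            # Both inputs: (batch, channels, H, W) on a shared grid (assumed).
            g = self.gate(torch.cat([lidar_feat, cam_feat], dim=1))
            return g * lidar_feat + (1.0 - g) * cam_feat

    # Usage: fuse = GatedFusion(64)
    #        out = fuse(torch.rand(1, 64, 100, 100), torch.rand(1, 64, 100, 100))
    ```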