652 research outputs found

    Review of 3D Point Cloud Data Segmentation Methods


    The Application of LiDAR to Assessment of Rooftop Solar Photovoltaic Deployment Potential in a Municipal District Unit

    A methodology is provided for the application of Light Detection and Ranging (LiDAR) to automated solar photovoltaic (PV) deployment analysis at the regional scale. Challenges in urban information extraction and management for solar PV deployment assessment are identified, and quantitative solutions are offered. This paper provides the following contributions: (i) a methodology consistent with recommendations from existing literature advocating the integration of cross-disciplinary competences in remote sensing (RS), GIS, computer vision and urban environmental studies; (ii) a robust methodology that can work with low-resolution, incomprehensive data and reconstruct vegetation and buildings separately, but concurrently; (iii) recommendations for future generations of software. A case study is presented as an example of the methodology. Experiences from the case study, such as the trade-off between time consumption and data quality, are discussed to highlight the need for connectivity between demographic information, electrical engineering schemes and GIS, as well as a typical factor of solar-useful roofs extracted per method. Finally, conclusions are developed into a final methodology for extracting the most useful information from the lowest-resolution and least comprehensive data to provide solar electric assessments over large areas, which can be adapted anywhere in the world.
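    As a rough illustration of the kind of per-roof computation such an assessment involves, the sketch below estimates usable PV area from a LiDAR-derived digital surface model (DSM) patch. The function name, slope threshold, and the omission of aspect and shading filters are all assumptions for illustration, not the paper's actual procedure.

```python
import numpy as np

def usable_roof_area(dsm, cell_size=1.0, max_slope_deg=45.0):
    """Estimate roof area usable for PV from a LiDAR-derived DSM patch.

    dsm: 2D array of elevations (m) covering a single roof.
    Returns the area (m^2) of cells whose slope is below max_slope_deg.
    Illustrative only: a real assessment would also filter by aspect,
    shading, and minimum contiguous area.
    """
    # Finite-difference slope from the elevation grid.
    dz_dy, dz_dx = np.gradient(dsm, cell_size)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    usable = slope < max_slope_deg
    # True surface area of each usable cell, corrected for tilt.
    cell_area = cell_size ** 2 / np.cos(np.radians(slope[usable]))
    return float(cell_area.sum())
```

    A flat 3 m x 3 m roof patch at 1 m resolution yields 9 m^2; a near-vertical facade contributes nothing.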

    View synthesis for pose computation

    Geometric registration of a query image with respect to a 3D model, or pose estimation, is the cornerstone of many computer vision applications. It is often based on the matching of local photometric descriptors invariant to limited viewpoint changes. However, when the query image has been acquired from a camera position not covered by the model images, pose estimation is often not accurate and sometimes even fails, precisely because of the limited invariance of descriptors. In this paper, we propose to add descriptors to the model, obtained from synthesized views associated with virtual cameras that complete the covering of the scene by the real cameras. We propose an efficient strategy to localize the virtual cameras in the scene and generate valuable descriptors from synthetic views. We also discuss a guided sampling strategy for registration in this context. Experiments show that the accuracy of pose estimation is dramatically improved when large viewpoint changes make the matching of classic descriptors a challenging task.
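    The descriptor-matching stage the paper builds on can be illustrated with a standard ratio-test matcher over descriptor sets. This is a generic sketch with a hypothetical helper name and a plain nearest-neighbour search, not the authors' implementation.

```python
import numpy as np

def ratio_test_matches(desc_query, desc_model, ratio=0.8):
    """Match query descriptors to model descriptors with a ratio test.

    desc_query, desc_model: (N, D) float arrays of local descriptors.
    Returns (query_idx, model_idx) pairs whose best match is clearly
    better than the second best; ambiguous matches are discarded.
    """
    matches = []
    for i, d in enumerate(desc_query):
        dists = np.linalg.norm(desc_model - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Accept only if the best match beats the runner-up decisively.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

    Adding descriptors rendered from virtual viewpoints enlarges `desc_model`, which is what lets such a matcher succeed under the large viewpoint changes the paper targets.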

    Geometric Calibration of Rotating Multi-Beam Laser Scanner Systems

    The introduction of light-weight, low-cost multi-beam laser scanners provides ample opportunities in positioning and mapping as well as in automation and robotics. The field of view (FOV) of these sensors can be further expanded by actuation, for example by rotation. Such rotating multi-beam lidar (RMBL) systems can provide fast and expansive coverage of the geometry of a space, but the nature of the sensors and their actuation leaves room for improvement in accuracy and precision. Geometric calibration methods addressing this problem have been proposed; this thesis reviews a selection of them and evaluates their performance on data collected with a custom RMBL platform and six Velodyne multi-beam sensors (one VLP-16 Lite, four VLP-16s and one VLP-32C). The calibration algorithms under inspection are unsupervised and data-based, and they are quantitatively compared against a target-based calibration performed using a high-accuracy point cloud from a terrestrial laser scanner as reference. The data-based methods are automatic plane detection and fitting, a method based on local planarity, and a method based on the information-theoretic concept of entropy. Of these, the plane-fitting and entropy-based point cloud quality measures obtain the best calibration results.
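    The entropy-based quality measure evaluated in the thesis can be sketched as follows: fit a Gaussian to each point's local neighbourhood and average the differential entropies, so a crisper (better calibrated) cloud scores lower. This is a brute-force illustration under assumed parameter choices; the function name and search radius are not from the thesis, and a real implementation would use a k-d tree.

```python
import numpy as np

def mean_point_entropy(points, radius=0.3):
    """Crispness score for an (N, 3) point cloud: the mean differential
    entropy of a Gaussian fitted to each point's neighbourhood.
    Lower values indicate a crisper cloud, i.e. a better calibration.
    """
    entropies = []
    for p in points:
        # Brute-force radius search for the local neighbourhood.
        nbrs = points[np.linalg.norm(points - p, axis=1) < radius]
        if len(nbrs) < 4:
            continue  # too few neighbours for a stable covariance
        cov = np.cov(nbrs.T) + 1e-9 * np.eye(3)
        # Differential entropy of N(mu, cov), up to additive constants.
        entropies.append(0.5 * np.log(np.linalg.det(cov)))
    return float(np.mean(entropies)) if entropies else float("nan")
```

    Minimising this score over the calibration parameters (sensor mounting pose, rotation axis) is what an entropy-based calibration does: a well-calibrated scan of a planar wall collapses into a thin sheet, a miscalibrated one smears out.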

    Automatic Super-Surface Removal in Complex 3D Indoor Environments Using Iterative Region-Based RANSAC

    Removing bounding surfaces such as walls, windows, curtains, and floors (i.e., super-surfaces) from a point cloud is a common task in a wide variety of computer vision applications (e.g., object recognition and human tracking). Popular plane segmentation methods such as Random Sample Consensus (RANSAC) are widely used to segment and remove surfaces from a point cloud. However, these estimators easily associate foreground points with background bounding surfaces incorrectly, because of the stochasticity of random sampling and the limited scene-specific knowledge such approaches use. Additionally, identical approaches are generally used to detect bounding surfaces and surfaces that belong to foreground objects. Detecting and removing bounding surfaces in challenging (i.e., cluttered and dynamic) real-world scenes can therefore easily result in the erroneous removal of points belonging to desired foreground objects such as human bodies. To address these challenges, we introduce a novel super-surface removal technique for complex 3D indoor environments. Our method works with unorganized data captured from commercial depth sensors and supports varied sensor perspectives. We begin with preprocessing steps and divide the input point cloud into four overlapping local regions. We then apply an iterative surface removal approach to all four regions to segment and remove the bounding surfaces. We evaluate the performance of the proposed method in terms of four conventional metrics (specificity, precision, recall, and F1 score) on three generated datasets representing different indoor environments. Our experimental results demonstrate that the proposed method is a robust super-surface removal and size reduction approach for complex 3D indoor environments, scoring between 90% and 99% on all four metrics.
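    A baseline version of iterative RANSAC plane removal, without the paper's overlapping-region split or scene-specific priors, can be sketched as follows; thresholds and the function name are illustrative assumptions.

```python
import numpy as np

def remove_planes(points, dist_thresh=0.02, min_inliers=50, iters=200, seed=0):
    """Iteratively detect the dominant plane with RANSAC and strip its
    inliers from the (N, 3) cloud, until no sufficiently supported
    plane remains. Returns the surviving (foreground) points.
    """
    rng = np.random.default_rng(seed)
    pts = points.copy()
    while len(pts) >= min_inliers:
        best = None
        for _ in range(iters):
            sample = pts[rng.choice(len(pts), 3, replace=False)]
            # Plane normal from three sampled points.
            n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(n)
            if norm < 1e-12:
                continue  # degenerate (collinear) sample
            n /= norm
            dists = np.abs((pts - sample[0]) @ n)
            inliers = dists < dist_thresh
            if best is None or inliers.sum() > best.sum():
                best = inliers
        if best is None or best.sum() < min_inliers:
            break  # no remaining plane has enough support
        pts = pts[~best]  # remove the detected bounding surface
    return pts
```

    The failure mode the paper addresses is visible here: any foreground point within `dist_thresh` of a detected plane is stripped along with it, which is why region splitting and scene knowledge are needed in cluttered scenes.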

    Incremental Visual-Inertial 3D Mesh Generation with Structural Regularities

    Visual-Inertial Odometry (VIO) algorithms typically rely on a point cloud representation of the scene that does not model the topology of the environment. A 3D mesh instead offers a richer, yet lightweight, model. Nevertheless, building a 3D mesh out of the sparse and noisy 3D landmarks triangulated by a VIO algorithm often results in a mesh that does not fit the real scene. In order to regularize the mesh, previous approaches decouple state estimation from the 3D mesh regularization step, and either limit the 3D mesh to the current frame or let the mesh grow indefinitely. We propose instead to tightly couple mesh regularization and state estimation by detecting and enforcing structural regularities in a novel factor-graph formulation. We also propose to incrementally build the mesh by restricting its extent to the time horizon of the VIO optimization; the resulting 3D mesh covers a larger portion of the scene than a per-frame approach while its memory usage and computational complexity remain bounded. We show that our approach successfully regularizes the mesh while improving localization accuracy when structural regularities are present, and remains operational in scenes without regularities. Comment: 7 pages, 5 figures, accepted at ICRA.

    Efficient 3D Segmentation, Registration and Mapping for Mobile Robots

    Sometimes simple is better! For certain situations and tasks, simple but robust methods can achieve the same or better results in the same or less time than related sophisticated approaches. In the context of robots operating in real-world environments, key challenges are perceiving objects of interest and obstacles as well as building maps of the environment and localizing therein. The goal of this thesis is to carefully analyze such problem formulations, to deduce valid assumptions and simplifications, and to develop simple solutions that are both robust and fast. All approaches make use of sensors capturing 3D information, such as consumer RGBD cameras. Comparative evaluations show the performance of the developed approaches. For identifying objects and regions of interest in manipulation tasks, a real-time object segmentation pipeline is proposed. It exploits several common assumptions of manipulation tasks, such as objects being on horizontal support surfaces (and well separated). It achieves real-time performance by using particularly efficient approximations in the individual processing steps, subsampling the input data where possible, and processing only relevant subsets of the data. The resulting pipeline segments 3D input data at up to 30 Hz. In order to obtain complete segmentations of the 3D input data, a second pipeline is proposed that approximates the sampled surface, smooths the underlying data, and segments the smoothed surface into coherent regions belonging to the same geometric primitive. It uses different primitive models and can reliably segment input data into planes, cylinders and spheres. A thorough comparative evaluation shows state-of-the-art performance while computing such segmentations in near real-time. The second part of the thesis addresses the registration of 3D input data, i.e., consistently aligning input captured from different view poses. Several methods are presented for different types of input data. For the particular application of mapping with micro aerial vehicles, where the 3D input data is particularly sparse, a pipeline is proposed that uses the same approximate surface reconstruction to exploit the measurement topology, together with a surface-to-surface registration algorithm that robustly aligns the data. Optimizing the resulting graph of determined view poses then yields globally consistent 3D maps. For sequences of RGBD data, this pipeline is extended with additional subsampling steps and an initial alignment of the data in local windows of the pose graph. In both cases, comparative evaluations show a robust and fast alignment of the input data.
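    The horizontal-support-surface assumption exploited by the segmentation pipeline can be illustrated with a minimal height-histogram sketch. The helper name and bin sizes are hypothetical; the actual pipeline additionally subsamples the cloud and clusters the remaining points into objects.

```python
import numpy as np

def points_above_support(points, bin_size=0.01, margin=0.02):
    """Tabletop assumption: the dominant horizontal support surface is
    the most populated height bin of an (N, 3) cloud, and object
    candidates are the points above it (plus a small margin).
    """
    z = points[:, 2]
    bins = np.floor(z / bin_size).astype(int)
    # Most populated height bin = the support surface.
    support_bin = np.bincount(bins - bins.min()).argmax() + bins.min()
    support_z = (support_bin + 0.5) * bin_size
    return points[z > support_z + margin]
```

    Restricting all later processing to these candidate points is one of the simplifications that lets the pipeline run in real time.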