
    Real-Time Terrain Storage Generation from Multiple Sensors towards Mobile Robot Operation Interface

    A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images for accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system comprising a nonground point database (PDB), a ground mesh database (MDB), and a texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds into a terrain model in real time. We quantize the 3D point clouds into the 3D grid of the flag map, which serves as a lookup table for removing redundant points. We integrate the large-scale 3D point clouds into the nonground PDB and a node-based terrain mesh on the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles of the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as the terrain reconstruction result. The proposed methods were tested in an outdoor environment. Our results show that the system rapidly generates terrain storage and provides high-resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots.
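
    The paper's voxel-based flag map is, in essence, a spatial hash used to drop redundant points as new scans arrive. Below is a minimal Python sketch of that idea, assuming a fixed voxel size and a hash set in place of whatever grid structure the authors actually use; both choices are illustrative, not details taken from the paper.

```python
# Voxel "flag map" sketch: quantize each point to a voxel index and flag the
# voxel on first visit, so later points landing in a flagged voxel are dropped.
import numpy as np

def deduplicate_by_voxel(points: np.ndarray, voxel_size: float = 0.05) -> np.ndarray:
    """Keep at most one point per voxel; `points` is an (N, 3) float array."""
    flagged = set()           # stands in for the voxel-based flag map
    keep = []
    for i, p in enumerate(points):
        key = tuple(np.floor(p / voxel_size).astype(np.int64))
        if key not in flagged:
            flagged.add(key)  # flag the voxel on first visit
            keep.append(i)
    return points[keep]

cloud = np.random.rand(100_000, 3) * 10.0        # synthetic 10 m cube of points
thinned = deduplicate_by_voxel(cloud, voxel_size=0.1)
print(f"{len(cloud)} points reduced to {len(thinned)}")
```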

    Implementation and assessment of two density-based outlier detection methods over large spatial point clouds

    Several technologies provide datasets consisting of a large number of spatial points, commonly referred to as point clouds. These point datasets provide spatial information regarding the phenomenon under investigation, adding value through knowledge of forms and spatial relationships. Accurate methods for automatic outlier detection are a key step. In this note we use a completely open-source workflow to assess two outlier detection methods: the statistical outlier removal (SOR) filter and the local outlier factor (LOF) filter. The latter was implemented ex novo for this work in the Point Cloud Library (PCL) environment; the source code is available in a GitHub repository for inclusion in PCL builds. Two very different spatial point datasets are used for accuracy assessment. One is obtained from dense image matching of a photogrammetric survey (SfM); the other from floating car data (FCD) coming from a smart-city mobility framework, which provides a position every second along two public transportation bus tracks. Outliers were simulated in the SfM dataset and manually detected and selected in the FCD dataset. Simulation in the SfM dataset was carried out to create a controlled set with two classes of outliers: clustered points (up to 30 points per cluster) and isolated points, in both cases at random distances from the other points. The optimal number of nearest neighbours (KNN) and optimal thresholds of SOR and LOF values were defined using the area under the curve (AUC) of the receiver operating characteristic (ROC) curve. Absolute differences from the median values of LOF and SOR (denoted LOF2 and SOR2) were also tested as metrics for detecting outliers, with optimal thresholds again defined through the AUC of ROC curves. Results show a strong dependency on the point distribution in the dataset and on local density fluctuations. In the SfM dataset the LOF2 and SOR2 methods performed best, with an optimal KNN value of 60; the LOF2 approach gave a slightly better result when considering clustered outliers (true positive rate: LOF2 = 59.7%, SOR2 = 53%). For FCD, SOR with low KNN values performed better for one of the two bus tracks, and LOF with high KNN values for the other; these differences are due to very different local point densities. We conclude that the choice of outlier detection algorithm depends strongly on the characteristics of the dataset's point distribution; no one solution fits all. The conclusions provide some guidance on which characteristics of a dataset can help in choosing the optimal method and KNN values.
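
    Both filters can be prototyped outside PCL. The sketch below re-creates the SOR statistic with a SciPy KD-tree and uses scikit-learn's LocalOutlierFactor for LOF, together with the note's LOF2/SOR2 median-difference variants. The KNN value of 60 echoes the paper's SfM result, but the 99th-percentile threshold is a placeholder for the ROC/AUC tuning the authors perform, and none of this is their C++ PCL implementation.

```python
# SOR and LOF outlier statistics on an (N, 3) point array, plus the SOR2/LOF2
# absolute-difference-from-median variants tested in the note.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.neighbors import LocalOutlierFactor

def sor_scores(points: np.ndarray, knn: int = 60) -> np.ndarray:
    """SOR statistic: mean distance of each point to its knn nearest neighbours."""
    dist, _ = cKDTree(points).query(points, k=knn + 1)  # column 0 is the point itself
    return dist[:, 1:].mean(axis=1)

def lof_scores(points: np.ndarray, knn: int = 60) -> np.ndarray:
    """LOF statistic: local density of a point relative to its neighbours'."""
    lof = LocalOutlierFactor(n_neighbors=knn)
    lof.fit(points)
    return -lof.negative_outlier_factor_   # ~1 for inliers, much larger for outliers

points = np.random.rand(5000, 3)           # synthetic stand-in for SfM/FCD data
sor2 = np.abs(sor_scores(points) - np.median(sor_scores(points)))
lof2 = np.abs(lof_scores(points) - np.median(lof_scores(points)))
outliers = lof2 > np.percentile(lof2, 99)   # illustrative threshold only
print(f"flagged {outliers.sum()} candidate outliers")
```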

    Vegetation Detection and Classification for Power Line Monitoring

    Electrical network maintenance inspections must be executed regularly to provide a continuous distribution of electricity. In forested countries, the electrical network is mostly located within the forest. For this reason, during these inspections, it is also necessary to ensure that vegetation growing close to the power line does not endanger it by provoking forest fires or power outages. Several remote sensing techniques have been studied in recent years to replace the labor-intensive and costly traditional approaches, whether field-based or airborne surveillance. Besides these disadvantages, such approaches are also prone to error, since they depend on a human operator's interpretation. In recent years, the applicability of the Unmanned Aerial Vehicle (UAV) platform for this purpose has been under debate, due to its flexibility and potential for customisation, as well as the fact that it can fly close to the power lines. The present study proposes a vegetation management and power line monitoring method using a UAV platform. The method starts with the collection of point cloud data in a forest environment composed of power line structures and the vegetation growing close to them. Multiple steps then follow: detection of objects in the working environment; classification of those objects as either vegetation or power line structures using a feature-based classifier; and optimisation of the classification results using point cloud filtering or segmentation algorithms. The method is tested on both synthetic and real data of forested areas containing power line structures. The overall accuracy of the classification process is about 87% for synthetic data and 97-99% for real data. After the optimisation process, these values were refined to 92% for synthetic data and nearly 100% for real data. A detailed comparison and discussion of the results is presented, providing the most important evaluation metrics and visual representations of the attained results.
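
    The abstract does not specify which features the classifier uses. A common feature-based route to separating wire-like power lines from scattered vegetation is the eigenvalue "linearity" of each point's local neighbourhood, sketched below; the neighbourhood size and the 0.9 threshold are illustrative guesses, not values from the study.

```python
# Eigenvalue-based linearity feature for point cloud classification: points on
# a wire have one dominant covariance eigenvalue, vegetation points do not.
import numpy as np
from scipy.spatial import cKDTree

def linearity(points: np.ndarray, knn: int = 20) -> np.ndarray:
    """(l1 - l2) / l1 of the local covariance eigenvalues; close to 1 on a line."""
    _, idx = cKDTree(points).query(points, k=knn)
    feats = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)                    # 3x3 local covariance
        lam = np.sort(np.linalg.eigvalsh(cov))[::-1]    # l1 >= l2 >= l3
        feats[i] = (lam[0] - lam[1]) / max(lam[0], 1e-12)
    return feats

points = np.random.rand(2000, 3)                 # synthetic stand-in for UAV data
is_power_line = linearity(points) > 0.9          # threshold is an assumption
print(f"{is_power_line.sum()} points labelled as power-line candidates")
```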

    Panoramic Stereovision and Scene Reconstruction

    With the advancement of research in robotics and computer vision, an increasingly high number of applications require the understanding of a scene in three dimensions, and a variety of systems are deployed to that end. This thesis explores a novel 3D imaging technique involving the use of catadioptric cameras in a stereoscopic arrangement. A secondary system stabilizes the arrangement in the event that the cameras are misaligned during operation. The system's main advantage is that it is a cost-effective alternative to present-day state-of-the-art systems that achieve the same goal of 3D imaging. The compromise lies in the quality of depth estimation, which can be overcome with a different imager and calibration. The result is a panoramic disparity map generated by the system.
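
    The catadioptric geometry itself cannot be reconstructed from the abstract, but the final step, turning a rectified stereo pair into a disparity map, can be illustrated generically with OpenCV's semi-global matcher. The sketch assumes the panoramas have already been unwarped, rectified, and saved under the placeholder filenames shown; the matcher settings are generic defaults, not the thesis's.

```python
# Generic disparity-map computation on a rectified stereo pair with OpenCV SGBM.
import cv2
import numpy as np

# Placeholder inputs: unwarped, rectified panoramas produced by the calibration
# and stabilization stages described in the abstract.
left = cv2.imread("left_panorama.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_panorama.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,    # must be a multiple of 16
    blockSize=5,
)
# SGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0
disp_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("panoramic_disparity.png", disp_vis)
```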

    Enabling Multi-LiDAR Sensing in GNSS-Denied Environments: SLAM Dataset, Benchmark, and UAV Tracking with LiDAR-as-a-camera

    The rise of Light Detection and Ranging (LiDAR) sensors has profoundly impacted industries ranging from automotive to urban planning. As these sensors become increasingly affordable and compact, their applications are diversifying, driving precision and innovation. This thesis examines LiDAR's advances in autonomous robotic systems, focusing on its role in simultaneous localization and mapping (SLAM) methodologies and on LiDAR-as-a-camera tracking of Unmanned Aerial Vehicles (UAVs). Our contributions span two primary domains: a multi-modal LiDAR SLAM benchmark, and LiDAR-as-a-camera UAV tracking. In the former, we expand our previous multi-modal LiDAR dataset with additional data sequences from various scenarios. In contrast to the previous dataset, we employ different ground-truth-generating approaches: we propose a new multi-modal, multi-LiDAR, SLAM-assisted and ICP-based sensor fusion method for generating ground truth maps, and we supplement the data with new open-road sequences with GNSS-RTK. This enriched dataset, supported by high-resolution LiDAR, provides detailed insights through an evaluation of ten configurations pairing diverse LiDAR sensors with state-of-the-art SLAM algorithms. In the latter contribution, we leverage a custom YOLOv5 model trained on panoramic low-resolution images from LiDAR reflectivity (LiDAR-as-a-camera) to detect UAVs, demonstrating the superiority of this approach over point-cloud-only or image-only methods. We also evaluate the real-time performance of our approach on the Nvidia Jetson Nano, a popular mobile computing platform. Overall, our research underscores the transformative potential of integrating advanced LiDAR sensors with autonomous robotics. By bridging the gaps between different technological approaches, we pave the way for more versatile and efficient applications in the future.
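
    As an illustration of the LiDAR-as-a-camera detection step, the sketch below runs a custom-trained YOLOv5 model on a panoramic reflectivity image rendered from the LiDAR. The weights path and image filename are placeholders, and loading via torch.hub is one common way to run a custom YOLOv5 model, not necessarily the pipeline used in the thesis.

```python
# YOLOv5 inference on a LiDAR reflectivity panorama treated as a camera image.
import torch

# Placeholder weights: a YOLOv5 checkpoint fine-tuned on UAV annotations.
model = torch.hub.load("ultralytics/yolov5", "custom", path="uav_lidar_yolov5.pt")
results = model("reflectivity_panorama.png")     # low-resolution LiDAR "image"
for *xyxy, conf, cls in results.xyxy[0].tolist():
    print(f"UAV candidate at {xyxy} with confidence {conf:.2f}")
```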

    A Method for Detection and Quantification of Building Damage Using Post-Disaster LiDAR Data

    There is a growing need for rapid and accurate damage assessment following natural disasters, terrorist attacks, and other crisis situations. This research investigated the use of light detection and ranging (LiDAR) data to detect and quantify building damage following a natural disaster. Using LiDAR data collected by the Rochester Institute of Technology (RIT) just days after the January 12, 2010 Haiti earthquake, a set of processes was developed for extracting buildings in urban environments and assessing structural damage. Building points were separated from the rest of the point cloud using a combination of point classification techniques involving height, intensity, and multiple-return information, as well as thresholding and morphological filtering operations. Damage was detected by measuring the deviation between building roof points and dominant planes found using a normal vector and height variance approach. The devised algorithms were incorporated into a MATLAB graphical user interface (GUI), which guided the workflow and allowed for user interaction. The semi-autonomous tool ingests a discrete-return LiDAR point cloud of a post-disaster scene and outputs a building damage map highlighting damaged and collapsed buildings. The entire approach was demonstrated on a set of six validation sites carefully selected from the Haiti LiDAR data. A combined 85.6% of the truth buildings across all sites were detected, with a standard deviation of 15.3%. Damage classification results were evaluated against the Global Earth Observation - Catastrophe Assessment Network (GEO-CAN) and Earthquake Engineering Field Investigation Team (EEFIT) truth assessments. The combined overall classification accuracy for all six sites was 68.3%, with a standard deviation of 9.6%. Results were impacted by imperfect validation data, inclusion of non-building points, and very diverse environments, e.g., varying building types, sizes, and densities. Nevertheless, the processes exhibited significant potential for detecting buildings and assessing building-level damage.
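
    The damage metric described above, deviation of roof points from a dominant plane, can be sketched in a few lines. Here a least-squares (SVD) plane fit stands in for the thesis's normal vector and height variance approach, and the deviation and damage thresholds are assumptions for illustration only.

```python
# Dominant-plane deviation as a building damage indicator: fit one plane to a
# building's roof points and measure each point's distance from it.
import numpy as np

def plane_deviation(roof_points: np.ndarray) -> np.ndarray:
    """Point-to-plane distances from the least-squares dominant plane."""
    centered = roof_points - roof_points.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    return np.abs(centered @ normal)

roof = np.random.rand(500, 3) * [10, 10, 0.2]    # synthetic, roughly flat roof
dev = plane_deviation(roof)
damaged = (dev > 0.3).mean() > 0.2   # e.g. flag if >20% of points deviate >0.3 m
print(f"mean deviation {dev.mean():.3f} m, damaged: {damaged}")
```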