
    Integrating Local and Global Error Statistics for Multi-Scale RBF Network Training: An Assessment on Remote Sensing Data

    Background: This study discusses the theoretical underpinnings of a novel multi-scale radial basis function (MSRBF) neural network, along with its application to classification and regression tasks in remote sensing. The novelty of the proposed MSRBF network lies in the integration of both local and global error statistics in the node selection process. Methodology and Principal Findings: The method was tested on a binary classification task (detection of impervious surfaces in a Landsat satellite image) and a regression problem (simulation of waveform LiDAR data). In the classification scenario, results indicate that the MSRBF is superior to existing radial basis function and back-propagation neural networks in terms of classification accuracy and training-testing consistency, especially for smaller datasets. The latter point is particularly important, as reference data acquisition is a recurring issue in remote sensing applications. In the regression case, the MSRBF provided improved accuracy and consistency when contrasted with a multi-kernel RBF network. Conclusion and Significance: Results highlight the potential of a novel training methodology that is not restricted to a specific algorithmic type, thereby advancing machine learning algorithms for classification and regression tasks. The MSRBF is expected to find numerous applications within and outside the remote sensing field.
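    The node-selection idea described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' formulation: candidate RBF centres are scored by a weighted combination of a global error statistic (RMSE over all samples) and a local one (RMSE over samples inside the kernel's neighbourhood); the function names, the one-sigma neighbourhood rule, and the `alpha` weighting are all assumptions made for illustration.

    ```python
    import numpy as np

    def rbf(x, centre, sigma):
        """Gaussian radial basis function activation."""
        return np.exp(-np.sum((x - centre) ** 2, axis=-1) / (2 * sigma ** 2))

    def combined_score(X, residual, centre, sigma, alpha=0.5):
        """Blend global and local error statistics for one candidate centre.

        alpha and the neighbourhood definition are illustrative choices,
        not the paper's exact criterion.
        """
        phi = rbf(X, centre, sigma)                   # kernel activations
        global_err = np.sqrt(np.mean(residual ** 2))  # error over all samples
        local_mask = phi > np.exp(-0.5)               # samples within one sigma
        local = residual[local_mask]
        local_err = np.sqrt(np.mean(local ** 2)) if local.size else 0.0
        return alpha * global_err + (1 - alpha) * local_err

    # Toy usage: pick the candidate centre whose neighbourhood carries the
    # largest share of the current model's error.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 2))
    residual = X[:, 0] ** 2 - 0.1                     # pretend model residuals
    candidates = X[rng.choice(200, size=10, replace=False)]
    scores = [combined_score(X, residual, c, sigma=0.3) for c in candidates]
    best = candidates[int(np.argmax(scores))]
    ```

    Scoring candidates this way lets a greedy training loop add the node that most reduces the blended error before refitting and repeating.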

    Efficient large-scale airborne LiDAR data classification via fully convolutional network

    Nowadays, we are witnessing an increasing availability of large-scale airborne LiDAR (Light Detection and Ranging) data, which greatly improves our knowledge of urban areas and the natural environment. In order to extract useful information from these massive point clouds, appropriate data processing is required, including point cloud classification. In this paper we present a deep learning method to efficiently perform the classification of large-scale LiDAR data, ensuring a good trade-off between speed and accuracy. The algorithm employs the projection of the point cloud into a two-dimensional image, where every pixel stores the height, intensity, and echo information of the points falling in that pixel. The image is then segmented by a Fully Convolutional Network (FCN), assigning a label to each pixel and, consequently, to the corresponding point. In particular, the proposed approach is applied to process a dataset of 7700 km² that covers the entire Friuli Venezia Giulia region (Italy), allowing us to distinguish among five classes (ground, vegetation, roof, overground, and power line), with an overall accuracy of 92.9%.
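    The point-cloud-to-image projection step can be sketched roughly as below. This is an assumed minimal rasteriser, not the paper's code: it bins points into grid cells and keeps the highest return per cell, with two channels (height and intensity) standing in for the paper's height/intensity/echo channels; the cell size and tie-breaking rule are illustrative.

    ```python
    import numpy as np

    def rasterize(points, intensity, cell=1.0):
        """Project a point cloud onto a 2D grid.

        points: (N, 3) array of x, y, z coordinates.
        intensity: (N,) array of per-point intensity values.
        Returns an (H, W, 2) image with channels (height, intensity);
        cells with no points stay NaN.
        """
        ij = np.floor(points[:, :2] / cell).astype(int)
        ij -= ij.min(axis=0)                     # shift to non-negative indices
        h, w = ij.max(axis=0) + 1
        img = np.full((h, w, 2), np.nan)         # channels: height, intensity
        order = np.argsort(points[:, 2])         # ascending z: later overwrites
        for (i, j), z, inten in zip(ij[order], points[order, 2], intensity[order]):
            img[i, j] = (z, inten)               # highest point wins per cell
        return img

    # Toy usage: three points, two grid cells.
    pts = np.array([[0.2, 0.3, 1.0], [0.4, 0.1, 2.0], [1.5, 0.2, 3.0]])
    img = rasterize(pts, intensity=np.array([10.0, 20.0, 30.0]), cell=1.0)
    ```

    Once rasterised, the image can be fed to any off-the-shelf 2D segmentation network, and the per-pixel labels are mapped back to the points that fell in each cell.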

    Intelligent Traffic Monitoring Systems for Vehicle Classification: A Survey

    A traffic monitoring system is an integral part of Intelligent Transportation Systems (ITS). It is one of the critical transportation infrastructures in which transportation agencies invest heavily, collecting and analyzing traffic data to better utilize roadway systems, improve transportation safety, and establish future transportation plans. With recent advances in MEMS, machine learning, and wireless communication technologies, numerous innovative traffic monitoring systems have been developed. In this article, we present a review of state-of-the-art traffic monitoring systems, focusing on the major functionality: vehicle classification. We organize various vehicle classification systems, examine research issues and technical challenges, and discuss the hardware/software design, deployment experience, and system performance of vehicle classification systems. Finally, we discuss a number of critical open problems and future research directions, with the aim of providing valuable resources to academia, industry, and government agencies for selecting appropriate technologies for their traffic monitoring applications. Comment: Published in IEEE Access.

    Living in a Material World: Learning Material Properties from Full-Waveform Flash Lidar Data for Semantic Segmentation

    Advances in lidar technology have made the collection of 3D point clouds fast and easy. While most lidar sensors return per-point intensity (or reflectance) values along with range measurements, flash lidar sensors are able to provide information about the shape of the return pulse. The shape of the return waveform is affected by many factors, including the distance that the light pulse travels and the angle of incidence with a surface. Importantly, the shape of the return waveform also depends on the material properties of the reflecting surface. In this paper, we investigate whether the material type or class can be determined from the full-waveform response. First, as a proof of concept, we demonstrate that the extra information about material class, if known accurately, can improve performance on scene understanding tasks such as semantic segmentation. Next, we learn two different full-waveform material classifiers: a random forest classifier and a temporal convolutional neural network (TCN) classifier. We find that, in some cases, material types can be distinguished, and that the TCN generally performs better across a wider range of materials. However, factors such as angle of incidence, material colour, and material similarity may hinder overall performance. Comment: In Proceedings of the Conference on Robots and Vision (CRV'23), Montreal, Canada, Jun. 6-8, 2023.
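    The TCN mentioned in the abstract is built from dilated causal 1D convolutions over the waveform. As an illustrative sketch (not the paper's model), the basic building block can be written as below; the random-placeholder weights, function name, and padding scheme are assumptions, and a real TCN would stack many such layers with residual connections and learned filters.

    ```python
    import numpy as np

    def causal_dilated_conv1d(wave, kernel, dilation=1):
        """One dilated causal convolution over a 1D waveform.

        Causal means output[t] depends only on wave[0..t], so the
        receptive field looks strictly backwards in time.
        """
        k = len(kernel)
        pad = (k - 1) * dilation
        x = np.concatenate([np.zeros(pad), wave])  # left-pad for causality
        out = np.empty_like(wave)
        for t in range(len(wave)):
            taps = x[t : t + pad + 1 : dilation]   # dilated receptive field
            out[t] = np.dot(taps, kernel)
        return out

    # Toy usage: with kernel (0, 0, 1) only the current sample passes
    # through, so the output reproduces the input waveform.
    wave = np.arange(5.0)
    out = causal_dilated_conv1d(wave, np.array([0.0, 0.0, 1.0]), dilation=2)
    ```

    Doubling the dilation at each layer lets the stack cover long return pulses with few parameters, which is why TCNs suit waveform inputs like these.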