3,636 research outputs found

    Exploiting Full-Waveform Lidar Data and Multiresolution Wavelet Analysis for Vertical Object Detection and Recognition

    A current challenge in performing airport obstruction surveys using airborne lidar is the lack of reliable, automated methods for extracting and attributing vertical objects from the lidar data. This paper presents a new approach to solving this problem, taking advantage of the additional data provided by full-waveform systems. The procedure entails first deconvolving and georeferencing the lidar waveform data to create dense, detailed point clouds in which the vertical structure of objects, such as trees, towers, and buildings, is well characterized. The point clouds are then voxelized to produce high-resolution volumes of lidar intensity values, and a 3D wavelet decomposition is computed. Vertical object detection and recognition is performed in the wavelet domain using a multiresolution template matching approach. The method was tested using lidar waveform data and ground truth collected for project areas in Madison, Wisconsin. Preliminary results demonstrate the potential of the approach.
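    A minimal sketch of the voxelization and wavelet-domain matching steps described above, assuming NumPy, PyWavelets and SciPy. The voxel size, choice of wavelet, and the way per-band correlations are combined are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
import pywt                          # 3D discrete wavelet transform (PyWavelets)
from scipy.signal import fftconvolve

def voxelize(points, intensities, voxel_size=1.0):
    """Accumulate mean lidar intensity per voxel from an X,Y,Z point array."""
    mins = points.min(axis=0)
    idx = np.floor((points - mins) / voxel_size).astype(int)
    shape = tuple(idx.max(axis=0) + 1)
    total = np.zeros(shape)
    count = np.zeros(shape)
    np.add.at(total, tuple(idx.T), intensities)
    np.add.at(count, tuple(idx.T), 1)
    return np.divide(total, count, out=np.zeros(shape), where=count > 0)

def wavelet_template_scores(volume, template, wavelet="haar", level=2):
    """Correlate the template's detail coefficients with the volume's,
    producing one score map per decomposition level (coarse levels respond
    to large structures, fine levels to small ones). The template must be
    large enough to support the chosen decomposition level."""
    v = pywt.wavedecn(volume, wavelet, level=level)
    t = pywt.wavedecn(template, wavelet, level=level)
    score_maps = []
    for v_detail, t_detail in zip(v[1:], t[1:]):        # skip approximation
        level_map = None
        for band in v_detail:                           # 'aad', 'ada', ... bands
            kernel = t_detail[band][::-1, ::-1, ::-1]   # flip kernel -> correlation
            corr = fftconvolve(v_detail[band], kernel, mode="same")
            level_map = corr if level_map is None else level_map + corr
        score_maps.append(level_map)
    return score_maps
```

    Peaks in the returned score maps mark voxel locations that match the template at the corresponding scale, which is one plausible reading of "multiresolution template matching in the wavelet domain".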

    Investigating Full-Waveform Lidar Data for Detection and Recognition of Vertical Objects

    A recent innovation in commercially-available topographic lidar systems is the ability to record return waveforms at high sampling frequencies. These “full-waveform” systems provide up to two orders of magnitude more data than “discrete-return” systems. However, due to the relatively limited capabilities of current processing and analysis software, more data does not always translate into more or better information for object extraction applications. In this paper, we describe a new approach for exploiting full-waveform data to improve detection and recognition of vertical objects, such as trees, poles, buildings, towers, and antennas. Each waveform is first deconvolved using an expectation-maximization (EM) algorithm to obtain a train of spikes in time, where each spike corresponds to an individual laser reflection. The output is then georeferenced to create extremely dense, detailed X,Y,Z,I point clouds, where I denotes intensity. A tunable parameter is used to control the number of spikes in the deconvolved waveform and, hence, the point density of the output point cloud. Preliminary results indicate that the average number of points on vertical objects using this method is several times higher than using discrete-return lidar data. The next steps in this ongoing research will involve voxelizing the lidar point cloud to obtain a high-resolution volume of intensity values and computing a 3D wavelet representation. The final step will entail performing vertical object detection/recognition in the wavelet domain using a multiresolution template matching approach.
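    As a rough illustration of the deconvolution step, the sketch below uses the Richardson-Lucy iteration, a standard EM formulation for deconvolution; the abstract does not specify the authors' exact EM variant, and the relative threshold stands in for the tunable parameter controlling spike count. Function names and defaults are assumptions.

```python
import numpy as np

def richardson_lucy_deconvolve(waveform, pulse, n_iter=200):
    """Iterative EM (Richardson-Lucy) deconvolution: estimate a spike train
    whose convolution with the emitted pulse reproduces the recorded waveform."""
    pulse = pulse / pulse.sum()
    pulse_rev = pulse[::-1]
    estimate = np.full_like(waveform, waveform.mean(), dtype=float)
    for _ in range(n_iter):
        predicted = np.convolve(estimate, pulse, mode="same")
        ratio = waveform / np.maximum(predicted, 1e-12)
        estimate *= np.convolve(ratio, pulse_rev, mode="same")
    return estimate

def extract_spikes(estimate, rel_threshold=0.05):
    """Keep local maxima above a relative threshold; the threshold plays the
    role of the tunable parameter trading point density against noise."""
    thresh = rel_threshold * estimate.max()
    peaks = (estimate[1:-1] > estimate[:-2]) & \
            (estimate[1:-1] >= estimate[2:]) & \
            (estimate[1:-1] > thresh)
    bins = np.where(peaks)[0] + 1
    return bins, estimate[bins]      # range bins and intensities of the spikes
```

    Each returned (range bin, intensity) pair would then be combined with the scan geometry during georeferencing to yield one X,Y,Z,I point, as the abstract describes.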

    Representing and retrieving regions using binary partition trees

    This paper discusses the value of Binary Partition Trees for image and region representation in the context of indexing and similarity-based retrieval. Binary Partition Trees concentrate, in a compact and structured way, the set of regions that compose an image. Since the tree is able to represent images in a multiresolution way, only simple descriptors need to be attached to the nodes. Moreover, this representation is used for similarity-based region retrieval.
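    A minimal sketch of the data structure, assuming the usual greedy bottom-up construction in which the two most similar regions are merged at each step; the descriptor contents, the similarity function, and the merging rule are placeholders, not details taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class BPTNode:
    """One node of a Binary Partition Tree: a leaf is an initial region,
    an internal node is the union of its two children."""
    region_id: int
    descriptor: dict                    # e.g. region size and mean intensity
    left: "BPTNode | None" = None
    right: "BPTNode | None" = None

def merge_descriptor(da, db):
    """Size-weighted mean of two region descriptors (illustrative choice)."""
    n = da["size"] + db["size"]
    mean = (da["size"] * da["mean"] + db["size"] * db["mean"]) / n
    return {"size": n, "mean": mean}

def build_bpt(leaves, similarity):
    """Greedily merge the two most similar regions until one root remains.
    O(n^2) pair search for clarity; practical implementations restrict
    merges to adjacent regions and use a priority queue."""
    nodes = list(leaves)
    next_id = max(n.region_id for n in nodes) + 1
    while len(nodes) > 1:
        a, b = max(((p, q) for i, p in enumerate(nodes) for q in nodes[i + 1:]),
                   key=lambda pq: similarity(pq[0], pq[1]))
        merged = BPTNode(next_id, merge_descriptor(a.descriptor, b.descriptor), a, b)
        next_id += 1
        nodes = [n for n in nodes if n not in (a, b)] + [merged]
    return nodes[0]
```

    Because each node carries its own descriptor, similarity-based region retrieval can then be performed by comparing a query descriptor against the nodes of the tree rather than against raw pixels.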

    Convolutional Neural Network on Three Orthogonal Planes for Dynamic Texture Classification

    Dynamic Textures (DTs) are sequences of images of moving scenes, such as smoke, vegetation and fire, that exhibit certain stationarity properties in time. The analysis of DTs is important for recognition, segmentation, synthesis and retrieval in a range of applications including surveillance, medical imaging and remote sensing. Deep learning methods have shown impressive results and are now the state of the art for a wide range of computer vision tasks, including image and video recognition and segmentation. In particular, Convolutional Neural Networks (CNNs) have recently proven to be well suited to texture analysis, with a design similar to a filter bank approach. In this paper, we develop a new approach to DT analysis based on a CNN method applied on three orthogonal planes xy, xt and yt. We train CNNs on spatial frames and temporal slices extracted from the DT sequences and combine their outputs to obtain a competitive DT classifier. Our results on a wide range of commonly used DT classification benchmark datasets demonstrate the robustness of our approach. Significant improvement over the state of the art is shown on the larger datasets.
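    A minimal PyTorch sketch of the three-orthogonal-plane idea: slices are taken on the xy, xt and yt planes of the video volume and the per-plane predictions are fused by averaging. The paper trains CNNs on many frames and slices per sequence; this sketch uses a single shared toy CNN and one slice per plane, and all names and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class ThreePlaneDT(nn.Module):
    """Classify a dynamic-texture clip by applying one 2D CNN to slices taken
    on the xy, xt and yt planes and averaging the per-plane predictions."""
    def __init__(self, n_classes, backbone=None):
        super().__init__()
        # any 2D CNN works here; a tiny one keeps the sketch self-contained
        self.cnn = backbone or nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, clip):
        # clip: (batch, T, H, W) grayscale video volume
        b, t, h, w = clip.shape
        xy = clip[:, t // 2]            # middle spatial frame        -> (b, H, W)
        xt = clip[:, :, h // 2]         # temporal slice at mid-row   -> (b, T, W)
        yt = clip[:, :, :, w // 2]      # temporal slice at mid-column-> (b, T, H)
        logits = [self.cnn(p.unsqueeze(1).float()) for p in (xy, xt, yt)]
        return torch.stack(logits).mean(0)    # late fusion of the three planes
```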