3 research outputs found

    Texture Extraction Techniques for the Classification of Vegetation Species in Hyperspectral Imagery: Bag of Words Approach Based on Superpixels

    Texture information allows characterizing the regions of interest in a scene; it refers to the spatial organization of the fundamental microstructures in natural images. Texture extraction has been a challenging problem in the field of image processing for decades. In this paper, different techniques based on the classic Bag of Words (BoW) approach are proposed for solving the texture extraction problem in the case of hyperspectral images of the Earth's surface. In all cases the texture extraction is performed inside regions of the scene called superpixels, and the algorithms profit from the information available in all the bands of the image. The main contribution is the use of superpixel segmentation to obtain irregular patches from the images prior to texture extraction; texture descriptors are then extracted from each superpixel. Three schemes for texture extraction are proposed: codebook-based, descriptor-based, and spectral-enhanced descriptor-based. The first one is based on a codebook generator algorithm, while the other two include additional stages of keypoint detection and description. The evaluation is performed by analyzing the results of a supervised classification using Support Vector Machines (SVM), Random Forest (RF), and Extreme Learning Machines (ELM) after the texture extraction. The results show that extracting textures inside superpixels increases the accuracy of the obtained classification map. The proposed techniques are analyzed over different multi- and hyperspectral datasets, focusing on vegetation species identification. The best classification results for each image in terms of Overall Accuracy (OA) range from 81.07% to 93.77% for images taken at a river area in Galicia (Spain), and from 79.63% to 95.79% for a vast rural region in China, with reasonable computation times. This work was supported in part by the Civil Program UAVs Initiative, promoted by the Xunta de Galicia and developed in partnership with the Babcock Company to promote the use of unmanned technologies in civil services. We also acknowledge the support of the Ministerio de Ciencia e Innovación, Government of Spain (grant number PID2019-104834GB-I00), and the Consellería de Educación, Universidade e Formación Profesional (ED431C 2018/19, and accreditation 2019-2022 ED431G-2019/04). All are co-funded by the European Regional Development Fund (ERDF).
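
    As an illustration of the codebook-based scheme outlined in this abstract, the following Python sketch builds one Bag of Words histogram per SLIC superpixel from a k-means codebook computed over all spectral bands, and feeds the histograms to an SVM. It is a minimal sketch assuming scikit-image and scikit-learn; the function names, parameter values, and variables such as hyperspectral_cube, y, and train_idx are illustrative assumptions, not taken from the paper.

    import numpy as np
    from skimage.segmentation import slic
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def bow_superpixel_descriptors(image, n_segments=200, n_words=32):
        """image: (H, W, B) hyperspectral cube; returns per-superpixel BoW histograms and the segment map."""
        segments = slic(image, n_segments=n_segments, compactness=10,
                        channel_axis=-1, start_label=0)
        pixels = image.reshape(-1, image.shape[-1]).astype(np.float64)
        # Codebook of "visual words" learnt from the per-pixel spectral vectors.
        codebook = KMeans(n_clusters=n_words, n_init=4, random_state=0).fit(pixels)
        words = codebook.labels_.reshape(segments.shape)
        descriptors = []
        for sp in np.unique(segments):
            hist = np.bincount(words[segments == sp], minlength=n_words)
            descriptors.append(hist / max(hist.sum(), 1))  # normalized BoW histogram
        return np.array(descriptors), segments

    # Supervised classification of the superpixel descriptors (SVM shown here;
    # Random Forest or ELM classifiers could be plugged in the same way).
    # X, segments = bow_superpixel_descriptors(hyperspectral_cube)
    # clf = SVC(kernel="rbf").fit(X[train_idx], y[train_idx])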

    Watershed Monitoring in Galicia from UAV Multispectral Imagery Using Advanced Texture Methods

    Watershed management is the study of the relevant characteristics of a watershed aimed at the use and sustainable management of forests, land, and water. Watersheds can be threatened by deforestation, uncontrolled logging, changes in farming systems, overgrazing, road and track construction, pollution, and invasion of exotic plants. This article describes a procedure to automatically monitor the river basins of Galicia, Spain, using five-band multispectral images taken by an unmanned aerial vehicle and several image processing algorithms. The objective is to determine the state of the vegetation, especially the identification of areas occupied by invasive species, as well as the detection of man-made structures that occupy the river basin, using multispectral images. Since the territory to be studied covers extensive areas and the resulting images are large, techniques and algorithms have been selected for fast execution and efficient use of computational resources. These techniques include superpixel segmentation and the use of advanced texture methods. For each stage of the method (segmentation, texture codebook generation, feature extraction, and classification), different algorithms have been evaluated in terms of speed and accuracy for the identification of vegetation and natural and artificial structures on the Galician riversides. The experimental results show that the proposed approach can achieve this goal with speed and precision. This work was supported in part by the Civil Program UAVs Initiative, promoted by the Xunta de Galicia and developed in partnership with the Babcock Company to promote the use of unmanned technologies in civil services. We also acknowledge the support of the Ministerio de Ciencia e Innovación, Government of Spain (grant number PID2019-104834GB-I00), and the Consellería de Educación, Universidade e Formación Profesional (grant number ED431C 2018/19, and accreditation 2019-2022 ED431G-2019/04). All are co-funded by the European Regional Development Fund (ERDF).
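
    To make the stage-by-stage evaluation concrete, the sketch below times the four stages named in this abstract (segmentation, texture codebook generation, feature extraction, and classification) for one choice of algorithms, using a Random Forest as the final classifier. The stage callables and data variables are placeholders standing in for the alternatives compared in the article, so this is a hedged outline rather than the authors' implementation.

    import time
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score

    def evaluate_pipeline(image, y, train_idx, test_idx, segment, build_codebook, extract_features):
        """Run one configuration of the four stages; report per-stage times and test accuracy.
        segment, build_codebook and extract_features are the interchangeable stage algorithms."""
        timings = {}

        t0 = time.perf_counter()
        segments = segment(image)                        # stage 1: superpixel segmentation
        timings["segmentation"] = time.perf_counter() - t0

        t0 = time.perf_counter()
        codebook = build_codebook(image, segments)       # stage 2: texture codebook generation
        timings["codebook"] = time.perf_counter() - t0

        t0 = time.perf_counter()
        X = extract_features(image, segments, codebook)  # stage 3: one feature vector per superpixel
        timings["features"] = time.perf_counter() - t0

        t0 = time.perf_counter()
        clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
        clf.fit(X[train_idx], y[train_idx])              # stage 4: supervised classification
        acc = accuracy_score(y[test_idx], clf.predict(X[test_idx]))
        timings["classification"] = time.perf_counter() - t0

        return timings, acc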

    Class-semantic textons with superpixel neighborhoods for natural roadside vegetation classification

    Accurate classification of roadside vegetation plays a significant role in many practical applications, such as vegetation growth management and fire hazard identification. However, relatively little attention has been paid to this field in previous studies, particularly for natural data. In this paper, a novel approach is proposed for natural roadside vegetation classification, which generates class-semantic color-texture textons at the pixel level and then makes a collective classification decision over a neighborhood of superpixels. It first learns two individual sets of bag-of-words visual dictionaries (i.e. class-semantic textons) from color and filter-bank texture features, respectively, for each object. The color and texture features of all pixels in each superpixel of a test image are mapped to the nearest learnt texton by Euclidean distance, and these assignments are then aggregated into class probabilities for each superpixel. The class probabilities of each superpixel and its neighboring superpixels are combined by linear weighting, and the classification of the superpixel is finally achieved by assigning it the class with the highest combined probability. Our approach shows higher accuracy than four benchmarking approaches on both a cropped-region dataset and an image dataset collected by the Department of Transport and Main Roads, Queensland, Australia.
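
    The texton mapping and neighborhood voting described above can be sketched as follows, assuming per-class texton matrices and a precomputed superpixel adjacency list are already available. The nearest-texton assignment uses Euclidean distance as in the abstract, while the self_weight constant and the function names are illustrative choices rather than values from the paper.

    import numpy as np

    def superpixel_class_probs(pixel_feats, segments, textons_per_class):
        """pixel_feats: (N, D) color/texture features; segments: (N,) superpixel id per pixel;
        textons_per_class: list with one (K, D) texton array per class."""
        all_textons = np.vstack(textons_per_class)
        owner = np.concatenate([np.full(len(t), c) for c, t in enumerate(textons_per_class)])
        # Map every pixel to its nearest texton (Euclidean distance) and keep that texton's class.
        d2 = ((pixel_feats[:, None, :] - all_textons[None, :, :]) ** 2).sum(axis=-1)
        pixel_class = owner[d2.argmin(axis=1)]
        n_classes, n_sp = len(textons_per_class), segments.max() + 1
        probs = np.zeros((n_sp, n_classes))
        for sp in range(n_sp):
            counts = np.bincount(pixel_class[segments == sp], minlength=n_classes)
            probs[sp] = counts / max(counts.sum(), 1)    # class probabilities per superpixel
        return probs

    def neighborhood_decision(probs, neighbors, self_weight=0.6):
        """Mix each superpixel's probabilities with the mean of its neighbors' and take the argmax."""
        labels = np.empty(len(probs), dtype=int)
        for sp, nbrs in enumerate(neighbors):
            mixed = self_weight * probs[sp] + (1.0 - self_weight) * probs[list(nbrs)].mean(axis=0)
            labels[sp] = int(mixed.argmax())
        return labels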