4,951 research outputs found

    Evaluation of pixel based and object based classification methods for land cover mapping with high spatial resolution satellite imagery, in the Amazonas, Brazil

    In the state of Acre, Brazil, there is ongoing land use change, where inhabitants of this part of the Amazonian rainforest practice shifting agriculture. Practicing this type of agriculture is, according to the SKY Rainforest Rescue organization, damaging to forest ecosystems. The organization aims to educate people in how to maintain sustainable agriculture, and by monitoring this shift in agricultural practices with remotely sensed data it can follow the development. In this thesis, a high spatial resolution image from the SPOT-5 satellite is used to evaluate which classification method is most appropriate for monitoring land use change in this specific area. Three methods are tested: two pixel based and one object based. The pixel based methods are the Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel and the Maximum Likelihood Classifier (MLC); the object based method is segmented with Multi Resolution Segmentation (MRS) and classified with the k-Nearest Neighbour (kNN) classifier. The gamma and penalty parameter C of the SVM with an RBF kernel were estimated by k-fold cross validation combined with a grid search; for the MLC, each class was assumed to have an equal prior probability. For the object based approach the first step was segmentation; the MRS has three parameters: scale, shape and compactness. The scale parameter was set using an algorithm based on comparing local variance, while shape and compactness were defined from previous studies and a visual evaluation of the segments. Each of the three methods produced two classified maps: one where the feature space consists of the three original wavebands (green, red and NIR), and one where the feature space has six dimensions, comprising the three original wavebands and three texture derivations, one from each original band. The texture was derived with the grey-level co-occurrence matrix (GLCM) method, which can be used to calculate 14 different texture measures. The three most suitable texture derivations were the contrast measure from the green and NIR bands and an entropy measure derived from the red band. Combining these three texture derivations with the original bands further separated the classes. The original image was also resampled from a 2.5 m to a 25 m pixel size; however, this did not yield an overall accuracy equal to or higher than any of the high spatial resolution classifications. The moderate spatial resolution classifications were only computed with the MLC and SVM, because object based image analysis is inefficient at moderate spatial resolution. Of the six high spatial resolution classifications, only two exceeded the 85% threshold for an acceptable overall accuracy: the SVM (86.8%) and the OB-kNN (86.2%), both of which included the texture analysis. None of the classifications using only the three original bands exceeded this threshold. In conclusion, the object based method is the most suitable approach for this dataset because: 1) the parameter optimization is less subjective, 2) the computational time is lower, 3) the classes in the image are more cohesive, and 4) there is less need for post-classification filtering.

    People living in Brazil's rainforests make their living from slash-and-burn agriculture, a farming method in which the forest is first cut down and the remaining stumps and vegetation are then burned. According to SKY Rainforest Rescue, this method is unsustainable and can degrade the rainforest's ecosystems, and thereby the ecosystem services that people have come to depend on. The organization works to teach the inhabitants a more sustainable practice, and it uses remote sensing to monitor the development of the project. With satellite images, the Earth's surface can be studied from a distance, which gives a good overview of a larger area and is preferable in this study. The analyses are based on images taken by sensors mounted on satellites that orbit the Earth while recording images. Each image consists of a number of bands, where each band represents a spectral interval of the electromagnetic spectrum, e.g. visible light such as green, red and blue. High-resolution images are the result of new technology on the market, and this development has raised questions about how satellite images should be processed in the future; it is therefore important to evaluate and develop image processing methods. This study uses high-resolution satellite imagery in which one pixel corresponds to 2.5 x 2.5 m on the ground. Three different methods are used to produce land use maps, in order to find the most suitable method for this particular site and type of image. The methods are classification methods based on the digital numbers of the pixels; a pixel can have a value between 0 and 255, where each number represents a colour. Two of the methods are based on the individual spectral values of each pixel; the third segments neighbouring pixels with similar values into objects and computes the mean spectral value of the pixels belonging to each object. A major difference between the two approaches is that in the object based method a pixel's neighbours play a large role, whereas a pixel based method treats each pixel individually, independently of its neighbours. With high-resolution images, neighbouring pixels can matter more, because an object such as a tree may consist of several pixels with varying spectral values. One way to reduce this problem is to analyse the texture of the image, i.e. the variation of grey levels within it. A land use map must be validated before it can be accepted as correct. Validation is based on comparing samples from the map with the actual ground, and thereby estimating how well the map agrees with reality. According to previous studies, the overall percentage of correctly mapped points should exceed 85% for a map to be accepted as correct and representative of the area. In this study, six maps are produced with different methods from a high-resolution satellite image, and two maps from the same image at a lower resolution. Only two of the eight maps had more than 85% correctly mapped land use classes: one based on individual pixels (86.8%) and one based on segmented pixels (86.2%), and what they have in common is that both include a texture analysis. The object based method is nevertheless preferable because its algorithm is less complex, it is less time-consuming, and its result looks better visually.
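    The gamma and C tuning described above can be sketched with scikit-learn as follows; this is a minimal illustration under assumed parameter grids and fold counts, not the thesis's actual implementation.

```python
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC


def tune_rbf_svm(X, y, n_folds=5):
    """Estimate gamma and C for an RBF-kernel SVM by k-fold cross validation
    over a grid of candidate values.

    X : (n_samples, n_features) array of band (and texture) values per pixel.
    y : (n_samples,) array of land cover class labels.
    The exponential grids below are illustrative defaults, not the thesis's.
    """
    param_grid = {
        "gamma": [2.0 ** e for e in range(-8, 2)],
        "C": [2.0 ** e for e in range(-2, 8)],
    }
    search = GridSearchCV(
        SVC(kernel="rbf"),
        param_grid,
        cv=StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=0),
        scoring="accuracy",
    )
    search.fit(X, y)
    return search.best_params_, search.best_score_
```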

    Supervised / unsupervised change detection

    The aim of this deliverable is to provide an overview of the state of the art in change detection techniques and a critique of what could be programmed to derive SENSUM products. It is the product of a collaboration between UCAM and EUCENTRE. The document includes, as a necessary requirement, a discussion of a proposed technique for co-registration. Since change detection techniques assess a series of images, and the basic process involves comparing their similarities and differences to spot changes, co-registration is the first step: it ensures that the user is comparing like for like. The developed programs would then be applied to remotely sensed images for vulnerability assessment and post-disaster recovery assessment and monitoring. One key criterion is to develop semi-automated and automated techniques. A series of available techniques is presented along with the advantages and disadvantages of each method. Descriptions of the implemented methods are included in deliverable D2.7 “Software Package SW2.3”. In reviewing the available change detection techniques, the focus was on ways to exploit medium resolution imagery such as Landsat, because of its free-to-use license and the rich historical coverage of the satellite series. Change detection techniques for high resolution images were also examined, and a recovery-specific change detection index is discussed in the report.
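    As a concrete illustration of why co-registration precedes the comparison step, the sketch below aligns two single-band acquisitions by a translation estimated with phase correlation and then flags changed pixels by simple differencing and thresholding; the function name, the translation-only model, and the threshold are assumptions for illustration, not the SENSUM implementation.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation


def simple_change_map(img_t1, img_t2, threshold=0.2):
    """Co-register two single-band images by translation, then flag changed pixels.

    img_t1, img_t2 : 2-D float arrays scaled to [0, 1] (e.g. the same Landsat
    band acquired on two dates). Returns a boolean change mask.
    """
    # Step 1: co-registration -- estimate the sub-pixel translation between
    # the two acquisitions so that like is compared with like.
    offset, _, _ = phase_cross_correlation(img_t1, img_t2, upsample_factor=10)
    img_t2_aligned = nd_shift(img_t2, offset)

    # Step 2: change detection by image differencing and thresholding.
    diff = np.abs(img_t1 - img_t2_aligned)
    return diff > threshold
```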

    Automated Building Information Extraction and Evaluation from High-resolution Remotely Sensed Data

    The two-dimensional (2D) footprints and three-dimensional (3D) structures of buildings are of great importance to city planning, natural disaster management, and virtual environmental simulation. As traditional manual methodologies for collecting 2D and 3D building information are often both time consuming and costly, automated methods are required for efficient large area mapping. Extracting building information from remotely sensed data is challenging, given the complex nature of urban environments and their intricate building structures. Most 2D evaluation methods focus on classification accuracy, while other dimensions of extraction accuracy are ignored. To assess 2D building extraction methods, a multi-criteria evaluation system has been designed, consisting of matched rate, shape similarity, and positional accuracy. Experimentation with four methods demonstrates that the proposed multi-criteria system is more comprehensive and effective than traditional accuracy assessment metrics. Building height is critical for 3D building structure extraction. As data sources for height estimation, digital surface models (DSMs) derived from stereo images using existing software typically provide low accuracy for rooftop elevations. Therefore, a new image matching method is proposed that adds building footprint maps as constraints. Validation demonstrates that the proposed matching method can estimate building rooftop elevation with one third of the error encountered when using current commercial software. With an ideal input DSM, building height can be estimated from the elevation contrast inside and outside a building footprint. However, occlusions and shadows cause indistinct building edges in the DSMs generated from stereo images. Therefore, a “building-ground elevation difference model” (EDM) has been designed, which describes the trend of the elevation difference between a building and its neighbours, in order to find elevation values at bare ground. Experiments show that this approach estimates building height with a 1.5 m residual, outperforming conventional filtering methods. Finally, 3D buildings are digitally reconstructed and evaluated. Current 3D evaluation methods do not capture the difference between 2D and 3D evaluation well, and wall accuracy is traditionally ignored. To address these problems, this thesis designs an evaluation system with three components: volume, surface, and point. The resultant multi-criteria system provides an improved evaluation method for building reconstruction.
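    The abstract does not give the formulas behind matched rate, shape similarity, and positional accuracy, so the sketch below uses stand-in definitions (intersection-over-union, a compactness ratio, and centroid distance) purely to illustrate how a per-building multi-criteria score could be assembled with shapely; it is not the thesis's evaluation system.

```python
import math

from shapely.geometry import Polygon


def footprint_scores(extracted: Polygon, reference: Polygon):
    """Toy per-building scores in the spirit of a multi-criteria 2D evaluation.

    'Matched rate' is taken as intersection-over-union, 'shape similarity' as a
    ratio of perimeter-normalised compactness, and 'positional accuracy' as the
    centroid distance in map units. All three are illustrative assumptions.
    """
    inter = extracted.intersection(reference).area
    union = extracted.union(reference).area
    matched_rate = inter / union if union else 0.0

    # Compactness = 4*pi*area / perimeter^2 (1.0 for a circle).
    def compactness(p: Polygon) -> float:
        return 4.0 * math.pi * p.area / (p.length ** 2)

    c_ext, c_ref = compactness(extracted), compactness(reference)
    shape_similarity = min(c_ext, c_ref) / max(c_ext, c_ref)
    positional_error = extracted.centroid.distance(reference.centroid)
    return matched_rate, shape_similarity, positional_error


# Example: a unit square extracted against a slightly shifted reference.
print(footprint_scores(Polygon([(0, 0), (1, 0), (1, 1), (0, 1)]),
                       Polygon([(0.1, 0), (1.1, 0), (1.1, 1), (0.1, 1)])))
```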

    Tea Garden Detection from High-Resolution Imagery Using a Scene-Based Framework

    Tea cultivation has a long history in China, and it is one of the pillar industries of the Chinese agricultural economy. It is therefore necessary to map tea gardens for their ongoing management. However, previous studies have relied on fieldwork to achieve this task, which is time-consuming. In this paper, we propose a framework to map tea gardens from high-resolution remotely sensed imagery using three scene-based methods: the bag-of-visual-words (BOVW) model, supervised latent Dirichlet allocation (sLDA), and an unsupervised convolutional neural network (UCNN). These methods develop direct and holistic semantic representations for tea garden scenes composed of multiple sub-objects, and are thus more suitable than traditional pixel-based or object-based methods, which focus on the local characteristics of pixels or objects. In the experiments undertaken in this study, the three methods were tested on four datasets from Longyan (Oolong tea), Hangzhou (Longjing tea), and Puer (Puer tea). All the methods achieved a good performance, both quantitatively and visually, and the UCNN outperformed the other methods. Moreover, it was found that the addition of textural features improved the accuracy of the BOVW and sLDA models, but had no effect on the UCNN.
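    The bag-of-visual-words representation mentioned above can be sketched as follows: local descriptors are clustered into a visual vocabulary with k-means and each scene is encoded as a histogram of visual word counts. The raw-patch descriptors and vocabulary size here are illustrative assumptions rather than the configuration used in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans


def patch_descriptors(scene, patch=8):
    """Flatten non-overlapping patches of a single-band scene into descriptors."""
    h, w = scene.shape
    rows = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            rows.append(scene[i:i + patch, j:j + patch].ravel())
    return np.array(rows, dtype=float)


def bovw_histograms(scenes, n_words=64):
    """Build a visual vocabulary with k-means and encode each scene as a
    normalised histogram of visual-word occurrences."""
    all_desc = np.vstack([patch_descriptors(s) for s in scenes])
    vocab = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_desc)
    hists = []
    for s in scenes:
        words = vocab.predict(patch_descriptors(s))
        hist = np.bincount(words, minlength=n_words).astype(float)
        hists.append(hist / hist.sum())
    return np.array(hists), vocab


# Usage sketch (hypothetical data): the histograms can then be fed to any
# supervised classifier, e.g. a linear SVM, to label whole scenes.
```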

    Image Information Mining Systems


    Classification using semantic feature and machine learning: Land-use case application

    Land cover classification has attracted much recent work, especially for deforestation, urban area monitoring, and agricultural land use. Traditional classification approaches have limited accuracy, especially for heterogeneous land cover, so using machine learning may improve classification accuracy. This paper deals with land-use scene recognition in very high-resolution remote sensing imagery. We propose a new framework based on semantic features, handcrafted features, and the decisions of machine learning classifiers. The method starts with semantic feature extraction using a convolutional neural network. Handcrafted features are also extracted, based on colour and multi-resolution characteristics. The classification stage is then processed by three machine learning algorithms, and the final classification result is obtained by a majority vote algorithm. The idea is to take advantage of both semantic and handcrafted features, and to use decision fusion to enhance the classification result. Experimental results show that the proposed method provides good accuracy and a reliable tool for land use image identification.
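    A minimal sketch of the decision-fusion stage is given below, assuming the semantic (CNN) and handcrafted features have already been extracted; the three base classifiers are placeholders chosen for illustration, since the abstract does not name them, and the majority vote is implemented with scikit-learn's hard-voting ensemble.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC


def fuse_and_classify(semantic_feats, handcrafted_feats, labels):
    """Concatenate semantic (CNN) and handcrafted features, then combine the
    decisions of three classifiers by majority vote.

    semantic_feats, handcrafted_feats : (n_scenes, d1) and (n_scenes, d2) arrays.
    labels : (n_scenes,) array of land use class labels.
    The three base learners below are illustrative choices, not the paper's.
    """
    X = np.hstack([semantic_feats, handcrafted_feats])
    fusion = VotingClassifier(
        estimators=[
            ("svm", SVC(kernel="rbf")),
            ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
            ("knn", KNeighborsClassifier(n_neighbors=5)),
        ],
        voting="hard",  # majority vote over the three class decisions
    )
    fusion.fit(X, labels)
    return fusion
```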

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws on many of the same theories as CV, e.g. statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing the DL. Comment: 64 pages, 411 references. To appear in the Journal of Applied Remote Sensing.