
    An Efficient Algorithm for Earth Surface Interpretation from Satellite Imagery

    Many image segmentation algorithms are available, but most of them are not well suited to the interpretation of satellite images. The Mean-shift algorithm has been used in much recent research as a promising image segmentation technique; it runs in O(kn²) time, where n is the number of data points and k is the average number of iteration steps per data point. The method performs a brute-force search in each pixel's iteration to compare it with the region it belongs to. This paper proposes a novel algorithm named First-order Neighborhood Mean-shift (FNM) segmentation, an enhancement of Mean-shift segmentation. The algorithm exploits the relationship between a pixel and its neighbors, making them fall into the same region, which improves the running time to O(kn). In our experiments, FNM was compared with well-known algorithms, i.e., K-means (KM), Constrained K-means (CKM), Adaptive K-means (AKM), Fuzzy C-means (FCM), and Mean-shift (MS), using a reference map derived from Landsat. FNM provided better results in terms of overall error and correctness criteria.
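    The brute-force cost the abstract refers to can be illustrated with a minimal Mean-shift sketch over pixel feature vectors. The bandwidth, feature scaling, and synthetic pixels below are illustrative assumptions, and the sketch shows the plain O(kn²) iteration rather than the authors' FNM variant.

    # Minimal sketch of plain Mean-shift iteration over pixel feature vectors.
    # Illustrates the per-pixel brute-force neighbourhood search that costs O(n)
    # per iteration, i.e. O(k*n^2) overall, which FNM aims to reduce to O(kn).
    # Bandwidth and feature scaling are illustrative assumptions.
    import numpy as np

    def mean_shift_pixels(features, bandwidth=0.2, max_iter=20, tol=1e-4):
        """Shift every pixel's feature vector toward its local weighted mean."""
        modes = features.astype(float).copy()
        for _ in range(max_iter):
            shifted = np.empty_like(modes)
            for i, x in enumerate(modes):
                # Brute-force query: every point is compared with every other
                # point (the O(n^2) bottleneck mentioned in the abstract).
                d = np.linalg.norm(features - x, axis=1)
                w = np.exp(-(d / bandwidth) ** 2)
                shifted[i] = (w[:, None] * features).sum(0) / w.sum()
            if np.abs(shifted - modes).max() < tol:
                break
            modes = shifted
        return modes

    # Example: 100 synthetic pixels with 3 colour channels scaled to [0, 1].
    rng = np.random.default_rng(0)
    pixels = rng.random((100, 3))
    modes = mean_shift_pixels(pixels)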

    CloudFCN: Accurate and robust cloud detection for satellite imagery with deep learning

    Cloud masking is of central importance to the Earth Observation community. This paper addresses the problem of detecting clouds in visible and multispectral imagery from high-resolution satellite cameras. Recently, machine learning has offered promising solutions to cloud masking, allowing more flexibility than traditional thresholding techniques, which are restricted to instruments with the requisite spectral bands. However, few studies use multi-scale features (i.e., a combination of pixel-level and spatial features) while also offering compelling experimental evidence of real-world performance. We therefore introduce CloudFCN, based on the Fully Convolutional Network architecture known as U-Net, which has become a standard deep learning approach to image segmentation. It fuses the shallowest and deepest layers of the network, routing low-level visible content to its deepest layers. We offer an extensive range of experiments, including data from two high-resolution sensors (Carbonite-2 and Landsat 8) and several complementary tests. Owing to a variety of performance-enhancing design choices and training techniques, CloudFCN exhibits state-of-the-art performance where comparable to other methods, high speed, and robustness across many different terrains and sensor types.
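    As a rough illustration of the skip-connection idea the abstract describes, the following PyTorch sketch fuses a shallow encoder layer with upsampled deep features in a tiny fully convolutional network. The channel counts, depth, and two-class output are assumptions, not the published CloudFCN configuration.

    # Tiny U-Net-style fully convolutional sketch with a single skip connection
    # fusing shallow and deep layers. Hypothetical channel counts and depth.
    import torch
    import torch.nn as nn

    class TinyCloudFCN(nn.Module):
        def __init__(self, in_ch=4, n_classes=2):
            super().__init__()
            self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
            self.down = nn.MaxPool2d(2)
            self.bottleneck = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
            self.up = nn.Upsample(scale_factor=2, mode="nearest")
            # Decoder sees the upsampled deep features concatenated with the
            # shallow encoder features (the skip connection).
            self.dec = nn.Sequential(nn.Conv2d(32 + 16, 16, 3, padding=1), nn.ReLU())
            self.head = nn.Conv2d(16, n_classes, 1)

        def forward(self, x):
            shallow = self.enc(x)                       # low-level visible content
            deep = self.bottleneck(self.down(shallow))  # coarse context
            fused = torch.cat([self.up(deep), shallow], dim=1)
            return self.head(self.dec(fused))           # per-pixel cloud logits

    # Example: one 4-band 64x64 tile -> per-pixel class scores.
    logits = TinyCloudFCN()(torch.randn(1, 4, 64, 64))
    print(logits.shape)  # torch.Size([1, 2, 64, 64])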

    Machine learning classification and accuracy assessment from high-resolution images of coastal wetlands

    High-resolution images obtained by multispectral cameras mounted on Unmanned Aerial Vehicles (UAVs) are helping to capture the heterogeneity of the environment in images that can be discretized into categories during a classification process. There is currently an increasing use of supervised machine learning (ML) classifiers to retrieve accurate results from scarce datasets whose samples exhibit non-linear relationships. We compared the accuracies of two ML classifiers using pixel- and object-based analysis approaches at six coastal wetland sites. The results show that Random Forest (RF) performs better than the K-Nearest Neighbors (KNN) algorithm in the classification of both pixels and objects, and that the pixel-based classification is slightly better than the object-based analysis. The agreement between the object and pixel classifications is higher for Random Forest. This is likely due to the heterogeneity of the study areas, where pixel-based classifications are most appropriate. In addition, from an ecological perspective, because these wetlands are heterogeneous, the pixel-based classification reflects a more realistic interpretation of plant community distribution.
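    A minimal sketch of the kind of pixel-based comparison described above, using scikit-learn. The band count, class labels, and hyperparameters are placeholders, and the random data stands in for real UAV samples.

    # Compare a Random Forest and a KNN classifier on synthetic pixel samples.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(1)
    X = rng.random((2000, 5))            # 2000 pixel samples x 5 spectral bands
    y = rng.integers(0, 4, size=2000)    # 4 hypothetical wetland classes

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for name, clf in [("RF", RandomForestClassifier(n_estimators=200, random_state=0)),
                      ("KNN", KNeighborsClassifier(n_neighbors=5))]:
        pred = clf.fit(X_tr, y_tr).predict(X_te)
        print(name, "overall accuracy:", round(accuracy_score(y_te, pred), 3))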

    Image segmentation with adaptive region growing based on a polynomial surface model

    A new method for segmenting intensity images into smooth surface segments is presented. The main idea is to divide the image into flat, planar, convex, concave, and saddle patches that coincide as closely as possible with meaningful object features in the image. To this end, we propose an adaptive region growing algorithm based on low-degree polynomial fitting. The algorithm uses a new adaptive thresholding technique with the L∞ fitting cost as the segmentation criterion. The polynomial degree and the fitting error are automatically adapted during the region growing process. The main contribution is that the algorithm detects outliers and edges, distinguishes between strong and smooth intensity transitions, and finds surface segments that are bent in a particular way. As a result, the surface segments correspond to meaningful object features, and the contours separating the surface segments coincide with real object edges in the image. Moreover, the curvature-based surface shape information facilitates many image analysis tasks, such as object recognition performed on the polynomial representation. The polynomial representation provides a good image approximation while preserving all the necessary details of the objects in the reconstructed images. The method outperforms existing techniques when segmenting images of objects with diffusely reflecting surfaces.
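    The region-growing criterion can be sketched as fitting a low-degree bivariate polynomial to a candidate region and measuring the worst-case residual. The least-squares fit below is a stand-in for a true Chebyshev (L∞) fit, so it approximates rather than reproduces the paper's criterion; the degree and synthetic patch are assumptions.

    # Fit z ~ poly(x, y) to a patch and report the maximum absolute residual,
    # an approximation of the L-infinity fitting cost used as the criterion.
    import numpy as np

    def linf_fit_error(region_xy, intensities, degree=2):
        """Fit a bivariate polynomial of the given degree; return max |residual|."""
        x, y = region_xy[:, 0], region_xy[:, 1]
        # Design matrix with all monomials x^i * y^j, i + j <= degree.
        cols = [x**i * y**j for i in range(degree + 1)
                            for j in range(degree + 1 - i)]
        A = np.column_stack(cols)
        coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
        residuals = intensities - A @ coeffs
        return np.abs(residuals).max()

    # Example: a smooth quadratic patch should fit a degree-2 surface closely.
    xx, yy = np.meshgrid(np.arange(8), np.arange(8))
    xy = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
    z = 1.0 + 0.1 * xy[:, 0] - 0.02 * xy[:, 0]**2 + 0.05 * xy[:, 1]
    print(linf_fit_error(xy, z, degree=2))  # near zero for this synthetic patch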

    Two and three dimensional segmentation of multimodal imagery

    The role of segmentation in image understanding/analysis, computer vision, pattern recognition, remote sensing, and medical imaging has grown significantly in recent years, driven by accelerated advances in image data acquisition. This low-level analysis step is critical to numerous applications, with the primary goal of expediting and improving the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of the image information. In this research, we propose a novel unsupervised segmentation framework for partitioning 2-D/3-D image data across multiple modalities (color, remote sensing, and biomedical imaging) into non-overlapping regions using several spatial-spectral attributes. Initially, the framework exploits the information obtained from detecting edges inherent in the data. Using a vector gradient detection technique, pixels without edges are grouped and individually labeled to partition an initial portion of the input image content. Pixels with higher gradient densities are incorporated through the dynamic generation of segments as the algorithm progresses, yielding an initial region map. Subsequently, texture modeling is performed, and the gradient, texture, and intensity information, together with the initial partition map, are used in a multivariate refinement procedure that fuses groups with similar characteristics to yield the final segmentation. Experimental results, compared with published state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery, demonstrate the advantages of the proposed method. Furthermore, to achieve improved computational efficiency, we extend the methodology to a multi-resolution framework, demonstrated on color images. Finally, this research also encompasses a 3-D extension of the algorithm, demonstrated on medical (Magnetic Resonance Imaging / Computed Tomography) volumes.
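    A simplified sketch of the initial edge-based step: compute a multichannel (vector) gradient magnitude and separate low-gradient seed pixels from high-gradient ones. The per-channel Sobel operator and the percentile threshold are simplifying assumptions, not the paper's exact vector gradient technique.

    # Root-sum-of-squares per-channel Sobel gradient over a multichannel image,
    # then a percentile threshold to pick edge-free seed pixels.
    import numpy as np
    from scipy import ndimage as ndi

    def vector_gradient_magnitude(image):
        """image: H x W x C array; returns an H x W gradient magnitude."""
        grads = []
        for c in range(image.shape[2]):
            gx = ndi.sobel(image[:, :, c], axis=1)
            gy = ndi.sobel(image[:, :, c], axis=0)
            grads.append(gx**2 + gy**2)
        return np.sqrt(np.sum(grads, axis=0))

    rng = np.random.default_rng(2)
    img = rng.random((64, 64, 3))           # stand-in for a colour/spectral image
    mag = vector_gradient_magnitude(img)
    seeds = mag < np.percentile(mag, 60)    # edge-free pixels seed the initial regions
    print(seeds.sum(), "seed pixels of", seeds.size)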

    Monitoring urban green space (UGS) changes by using high resolution aerial imagery: a case study of Kuala Lumpur, Malaysia

    Urban green space (UGS) is the foundation of natural productivity in an urban structure. It also acts as a natural cooling device that plays a vital role as an urban lung, releasing oxygen to reduce urban heat and serving as a barrier against harmful air pollution. As urbanization proceeds, UGS, including gazetted areas, is converted into artificial surfaces to meet the population's demand for new development. Identifying the significance of this loss is therefore essential and worth exploring. The purpose of this study is to identify UGS change patterns over a 10-year period and to analyze UGS loss, particularly in the affected gazetted zones. The study used available aerial imagery for 2002, 2012, and 2017, together with a green space database. UGS was classified using the Support Vector Machine (SVM) algorithm, with training areas determined by visual interpretation aided by a land use planning map as reference. The validity of the results was then assessed using the kappa coefficient and producer's accuracy. Overall, the study showed that the city had lost about 88% of its UGS, while built-up area grew by 114%. The UGS lost in the city is equivalent to about 2,843 football fields, transformed forever in just 10 years. Uncontrolled development and the lack of an advanced monitoring mechanism have negatively affected the planning structure of green space in Kuala Lumpur. Adopting advanced technology as a mitigation tool to monitor green space loss in the city could provide enhanced information that helps city planners and urban designers defend decisions to protect these valuable UGS.
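    A hypothetical sketch of the accuracy-assessment step: classify samples with an SVM and report the kappa coefficient and per-class producer's accuracy from the confusion matrix. The band count, classes, and SVM settings are illustrative only, and the random data stands in for real training areas.

    # SVM classification followed by kappa and producer's accuracy (recall per class).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import confusion_matrix, cohen_kappa_score

    rng = np.random.default_rng(3)
    X = rng.random((1500, 4))              # 1500 samples x 4 image bands
    y = rng.integers(0, 3, size=1500)      # e.g. green space / built-up / other

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    pred = SVC(kernel="rbf").fit(X_tr, y_tr).predict(X_te)

    cm = confusion_matrix(y_te, pred)
    producer_acc = np.diag(cm) / cm.sum(axis=1)   # correct / reference total per class
    print("kappa:", round(cohen_kappa_score(y_te, pred), 3))
    print("producer's accuracy per class:", np.round(producer_acc, 3))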

    Delineation of Open-Pit Mining Boundaries on Multispectral Imagery

    Over the last few decades, monitoring the spatial growth of open-pit mining areas has become a common procedure in the effort to understand the influence that mining activities have on adjacent land-use/land-cover types. Various case studies have been presented focusing on land-cover mapping of complex mining landscapes, highlighting that a rapid as well as accurate approach is critical. This paper presents a methodological framework for the rapid delineation of open-pit mining area boundaries, implemented through an Object-Based Image Analysis (OBIA) methodology. Sentinel-2 data were obtained and the Mean-Shift segmentation algorithm was employed. Among the many methods presented in the literature for evaluating the performance of an image segmentation, an unsupervised approach is carried out. A quantitative evaluation of segmentation accuracy leads to a more targeted selection of segmentation parameter values and is consequently of utmost importance. The proposed methodology was implemented mainly through Python scripts and may serve as a guide for related studies.
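    A minimal sketch of the Mean-Shift segmentation step on joint spatial-spectral pixel features, assuming a small synthetic multiband tile. The bandwidth and feature scaling are placeholders, and the full OBIA workflow on Sentinel-2 is not reproduced here.

    # Mean-Shift clustering of pixels on (x, y, bands) features to form segments.
    import numpy as np
    from sklearn.cluster import MeanShift

    rng = np.random.default_rng(4)
    tile = rng.random((32, 32, 4))                 # stand-in for a 4-band tile
    h, w, bands = tile.shape

    yy, xx = np.mgrid[0:h, 0:w]
    features = np.column_stack([
        xx.ravel() / w, yy.ravel() / h,            # normalised spatial coordinates
        tile.reshape(-1, bands),                   # spectral values
    ])

    labels = MeanShift(bandwidth=0.35, bin_seeding=True).fit_predict(features)
    segments = labels.reshape(h, w)                # object (segment) map
    print("number of segments:", segments.max() + 1)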