
    Dynamic post-earthquake image segmentation with an adaptive spectral-spatial descriptor

    The region merging algorithm is a widely used segmentation technique for very high resolution (VHR) remote sensing images. However, the segmentation of post-earthquake VHR images is more difficult due to the complexity of these images, especially the high intra-class and low inter-class variability among damage objects. Two key issues must therefore be resolved: the first is to find an appropriate descriptor to measure the similarity of two adjacent regions, since the diverse damage objects, such as landslides, debris flows, and collapsed buildings, exhibit high complexity. The second is how to solve the over-segmentation and under-segmentation problems commonly encountered with conventional merging strategies due to their strong dependence on local information. To tackle these two issues, an adaptive dynamic region merging approach (ADRM) is introduced, which combines an adaptive spectral-spatial descriptor with a dynamic merging strategy to adapt to the changes of merging regions and successfully detect objects scattered globally in a post-earthquake image. In the new descriptor, the spectral similarity and spatial similarity of any two adjacent regions are automatically combined to measure their overall similarity. Accordingly, the descriptor offers adaptive semantic descriptions of geo-objects and is thus capable of characterizing different damage objects. In the dynamic merging strategy, the adaptive spectral-spatial descriptor is embedded in the defined testing order and combined with graph models, so that the strategy can find the globally optimal merging order and ensure that the most similar regions are merged first. By combining the two strategies, ADRM can identify spatially scattered objects and alleviate over-segmentation and under-segmentation.
    The performance of ADRM was evaluated by comparison with four state-of-the-art segmentation methods: the fractal net evolution approach (FNEA, as implemented in the eCognition software, Trimble Inc., Westminster, CO, USA), the J-value segmentation (JSEG) method, the graph-based segmentation (GSEG) method, and the statistical region merging (SRM) approach. The experiments were conducted on six VHR subarea images captured by RGB sensors mounted on aerial platforms, acquired after the 2008 Wenchuan Ms 8.0 earthquake. Quantitative and qualitative assessments demonstrated that the proposed method offers high feasibility and improved accuracy in the segmentation of post-earthquake VHR aerial images.
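The most-similar-first merging order described in this abstract can be illustrated with a minimal sketch (not the authors' ADRM implementation): adjacent region pairs are kept in a global priority queue and the most similar pair is always merged first. The region features and the weighting parameter `alpha` are hypothetical simplifications of the adaptive spectral-spatial descriptor, and similarities are not recomputed after a merge.

```python
# Illustrative sketch of a globally ordered region merge; lower score = more similar.
import heapq

def spectral_spatial_similarity(a, b, alpha=0.5):
    """Combine spectral distance (mean intensity) and spatial distance
    (centroid separation) into one dissimilarity score."""
    spectral = abs(a["mean"] - b["mean"])
    spatial = ((a["cx"] - b["cx"]) ** 2 + (a["cy"] - b["cy"]) ** 2) ** 0.5
    return alpha * spectral + (1 - alpha) * spatial

def merge_most_similar(regions, adjacency, threshold):
    """Greedy global-order merging: repeatedly merge the most similar
    adjacent pair whose dissimilarity is below `threshold`."""
    parent = {r: r for r in regions}
    def find(r):
        while parent[r] != r:
            parent[r] = parent[parent[r]]  # path compression
            r = parent[r]
        return r
    heap = [(spectral_spatial_similarity(regions[i], regions[j]), i, j)
            for i, j in adjacency]
    heapq.heapify(heap)
    while heap:
        d, i, j = heapq.heappop(heap)
        ri, rj = find(i), find(j)
        if ri == rj or d > threshold:
            continue
        parent[rj] = ri  # merge region j into region i
    return {r: find(r) for r in regions}
```

The heap is what makes the order global rather than local: a highly similar pair anywhere in the image is merged before a less similar pair nearby.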

    Automatic mapping of linear woody vegetation features in agricultural landscapes using very high resolution imagery

    Automatic mapping and monitoring of agricultural landscapes using remotely sensed imagery has been an important research problem. This paper describes our work on developing automatic methods for the detection of target landscape features in very high spatial resolution images. The target objects of interest consist of linear strips of woody vegetation, including hedgerows and riparian vegetation, that are important elements of landscape ecology and biodiversity. The proposed framework exploits the spectral, textural, and shape properties of objects using hierarchical feature extraction and decision-making steps. First, a multifeature and multiscale strategy is used to cover the different characteristics of these objects in a wide range of landscapes. Discriminant functions trained on combinations of spectral and textural features are used to select the pixels that may belong to candidate objects. Then, a shape analysis step employs morphological top-hat transforms to locate the woody vegetation areas that fall within the width limits of an acceptable object, and a skeletonization and iterative least-squares fitting procedure quantifies the linearity of the objects using the uniformity of the estimated radii along the skeleton points. Extensive experiments using QuickBird imagery from three European Union member states show that the proposed algorithms provide good localization of the target objects in a wide range of landscapes with very different characteristics.
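The linearity criterion described above can be sketched in a much-simplified form: an object is line-like when the estimated radii along its skeleton are uniform. The uniformity measure (coefficient of variation) and the acceptance threshold below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: score the uniformity of skeleton radii; uniform radii => linear strip.
from statistics import mean, pstdev

def radius_uniformity(radii):
    """Return a score in (0, 1]; higher means more uniform radii, i.e.
    a more line-like (hedgerow-like) object."""
    m = mean(radii)
    if m == 0:
        return 0.0
    cv = pstdev(radii) / m          # coefficient of variation
    return 1.0 / (1.0 + cv)

def is_linear_strip(radii, min_score=0.8):
    """Accept an object as a linear woody strip if its radii are uniform enough."""
    return radius_uniformity(radii) >= min_score
```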

    Automated parameterisation for multi-scale image segmentation on multiple layers

    We introduce a new automated approach to parameterising multi-scale image segmentation of multiple layers, implemented as a generic tool for the eCognition® software. This approach relies on the potential of the local variance (LV) to detect scale transitions in geospatial data. The tool detects the number of layers added to a project and segments them iteratively with a multiresolution segmentation algorithm in a bottom-up approach, where the scale factor in the segmentation, namely the scale parameter (SP), increases with a constant increment. The average LV value of the objects in all of the layers is computed and serves as the condition for stopping the iterations: when a scale level records an LV value that is equal to or lower than the previous value, the iteration ends and the objects segmented at the previous level are retained. Three orders of magnitude of SP lags produce a corresponding number of scale levels. Tests on very high resolution imagery provided satisfactory results for generic applicability. The tool has significant potential for enabling objectivity and automation of GEOBIA analysis.
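The stopping rule described above can be sketched as follows, under the assumption that a caller-supplied `segment(layers, sp)` returns the segments produced at a given scale parameter. The function names are hypothetical; the actual tool runs inside eCognition.

```python
# Sketch: raise SP by a constant increment until mean local variance stops
# increasing, then keep the previous level.
from statistics import mean, pvariance

def mean_local_variance(segments):
    """Average the internal variance of each segment (the LV indicator)."""
    return mean(pvariance(s) if len(s) > 1 else 0.0 for s in segments)

def find_scale_parameter(layers, segment, sp_start, sp_step, sp_max):
    """Return the SP of the last level whose LV was still increasing."""
    prev_sp, prev_lv = None, None
    sp = sp_start
    while sp <= sp_max:
        lv = mean_local_variance(segment(layers, sp))
        if prev_lv is not None and lv <= prev_lv:
            return prev_sp          # LV equal or lower: retain previous level
        prev_sp, prev_lv = sp, lv
        sp += sp_step
    return prev_sp
```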

    Advances in Hyperspectral Image Classification Methods for Vegetation and Agricultural Cropland Studies

    Hyperspectral data are becoming more widely available via sensors on airborne and unmanned aerial vehicle (UAV) platforms, as well as proximal platforms. While space-based hyperspectral data continue to be limited in availability, multiple spaceborne Earth-observing missions on traditional platforms are scheduled for launch, and companies are experimenting with small satellites for constellations to observe the Earth, as well as for planetary missions. Land cover mapping via classification is one of the most important applications of hyperspectral remote sensing and will increase in significance as time series of imagery are more readily available. However, while the narrow bands of hyperspectral data provide new opportunities for chemistry-based modeling and mapping, challenges remain. Hyperspectral data are high dimensional, and many bands are highly correlated or irrelevant for a given classification problem. For supervised classification methods, the quantity of training data is typically limited relative to the dimension of the input space. The resulting Hughes phenomenon, often referred to as the curse of dimensionality, increases the potential for unstable parameter estimates, overfitting, and poor generalization of classifiers. This is particularly problematic for parametric approaches such as Gaussian maximum likelihood-based classifiers that have been the backbone of pixel-based multispectral classification methods. This issue has motivated investigation of alternatives, including regularization of the class covariance matrices, ensembles of weak classifiers, development of feature selection and extraction methods, adoption of nonparametric classifiers, and exploration of methods to exploit unlabeled samples via semi-supervised and active learning. Data sets are also quite large, motivating computationally efficient algorithms and implementations.
This chapter provides an overview of the recent advances in classification methods for mapping vegetation using hyperspectral data. Three data sets that are used in the hyperspectral classification literature (e.g., Botswana Hyperion satellite data and AVIRIS airborne data over both Kennedy Space Center and Indian Pines) are described in Section 3.2 and used to illustrate methods described in the chapter. An additional high-resolution hyperspectral data set acquired by a SpecTIR sensor on an airborne platform over the Indian Pines area is included to exemplify the use of new deep learning approaches, and a multiplatform example of airborne hyperspectral data is provided to demonstrate transfer learning in hyperspectral image classification. Classical approaches for supervised and unsupervised feature selection and extraction are reviewed in Section 3.3. In particular, nonlinearities exhibited in hyperspectral imagery have motivated development of nonlinear feature extraction methods in manifold learning, which are outlined in Section 3.3.1.4. Spatial context is also important in classification of both natural vegetation with complex textural patterns and large agricultural fields with significant local variability within fields. Approaches to exploit spatial features at both the pixel level (e.g., co-occurrence-based texture and extended morphological attribute profiles [EMAPs]) and integration of segmentation approaches (e.g., HSeg) are discussed in this context in Section 3.3.2. Recently, classification methods that leverage nonparametric methods originating in the machine learning community have grown in popularity. An overview of both widely used and newly emerging approaches, including support vector machines (SVMs), Gaussian mixture models, and deep learning based on convolutional neural networks, is provided in Section 3.4.
Strategies to exploit unlabeled samples, including active learning and metric learning, which combine feature extraction and augmentation of the pool of training samples in an active learning framework, are outlined in Section 3.5. Integration of image segmentation with classification to accommodate the spatial coherence typically observed in vegetation is also explored, including as an integrated active learning system. Exploitation of multisensor strategies for augmenting the pool of training samples is investigated via a transfer learning framework in Section 3.5.1.2. Finally, we look to the future, considering opportunities soon to be provided by new paradigms, as hyperspectral sensing is becoming common at multiple scales from ground-based and airborne autonomous vehicles to manned aircraft and space-based platforms.
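Among the remedies the chapter lists for the Hughes phenomenon, regularization of the class covariance matrices can be sketched as simple shrinkage toward a scaled identity. The shrinkage weight `gamma` and the shrinkage target are illustrative assumptions, not a method taken from the chapter.

```python
# Sketch: shrink a sample covariance so it stays well-conditioned when
# training samples are scarce relative to the number of spectral bands.
import numpy as np

def shrink_covariance(samples, gamma=0.1):
    """Blend the sample covariance with a scaled identity that preserves
    the average per-band variance."""
    cov = np.cov(samples, rowvar=False)
    d = cov.shape[0]
    target = np.eye(d) * np.trace(cov) / d   # same total variance, full rank
    return (1 - gamma) * cov + gamma * target
```

With fewer samples than bands the raw sample covariance is singular and a Gaussian maximum-likelihood classifier cannot invert it; the shrunk estimate is full rank.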

    Automatic detection of geospatial objects using multiple hierarchical segmentations

    The object-based analysis of remotely sensed imagery provides valuable spatial and structural information that is complementary to pixel-based spectral information in classification. In this paper, we present novel methods for automatic object detection in high-resolution images by combining spectral information with structural information exploited by using image segmentation. The proposed segmentation algorithm uses morphological operations applied to individual spectral bands using structuring elements in increasing sizes. These operations produce a set of connected components forming a hierarchy of segments for each band. A generic algorithm is designed to select meaningful segments that maximize a measure consisting of spectral homogeneity and neighborhood connectivity. Given the observation that different structures appear more clearly at different scales in different spectral bands, we describe a new algorithm for unsupervised grouping of candidate segments belonging to multiple hierarchical segmentations to find coherent sets of segments that correspond to actual objects. The segments are modeled by using their spectral and textural content, and the grouping problem is solved by using the probabilistic latent semantic analysis algorithm that builds object models by learning the object-conditional probability distributions. The automatic labeling of a segment is done by computing the similarity of its feature distribution to the distribution of the learned object models using the Kullback–Leibler divergence. The performances of the unsupervised segmentation and object detection algorithms are evaluated qualitatively and quantitatively using three different data sets with comparative experiments, and the results show that the proposed methods are able to automatically detect, group, and label segments belonging to the same object classes.
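The labeling step described above can be sketched in a minimal form: assign a segment to the learned object model whose feature distribution is closest in Kullback–Leibler divergence. The distributions here are plain histograms and the smoothing constant is an illustrative assumption.

```python
# Sketch: label a segment by minimum KL divergence to learned object models.
import math

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) over discrete distributions, with small-value smoothing."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def label_segment(segment_dist, object_models):
    """Return the name of the object model minimizing KL divergence
    from the segment's feature distribution."""
    return min(object_models,
               key=lambda name: kl_divergence(segment_dist, object_models[name]))
```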

    Two and three dimensional segmentation of multimodal imagery

    The role of segmentation in the realms of image understanding/analysis, computer vision, pattern recognition, remote sensing, and medical imaging has been significantly augmented in recent years by accelerated scientific advances in the acquisition of image data. This low-level analysis protocol is critical to numerous applications, with the primary goal of expediting and improving the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of image information. In this research, we propose a novel unsupervised segmentation framework for facilitating meaningful segregation of 2-D/3-D image data across multiple modalities (color, remote-sensing, and biomedical imaging) into non-overlapping partitions using several spatial-spectral attributes. Initially, our framework exploits the information obtained from detecting edges inherent in the data. To this effect, by using a vector gradient detection technique, pixels without edges are grouped and individually labeled to partition some initial portion of the input image content. Pixels that contain higher gradient densities are included by the dynamic generation of segments as the algorithm progresses to generate an initial region map. Subsequently, texture modeling is performed, and the obtained gradient, texture, and intensity information along with the aforementioned initial partition map are used in a multivariate refinement procedure to fuse groups with similar characteristics, yielding the final output segmentation. Experimental results obtained in comparison to published/state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery demonstrate the advantages of the proposed method. Furthermore, for the purpose of achieving improved computational efficiency, we propose an extension of the aforementioned methodology in a multi-resolution framework, demonstrated on color images.
Finally, this research also encompasses a 3-D extension of the aforementioned algorithm, demonstrated on medical (Magnetic Resonance Imaging / Computed Tomography) volumes.
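The first stage described above, labeling edge-free pixels before deferring edge pixels to refinement, can be sketched on a 1-D row of intensities for brevity. The finite-difference gradient and the `edge_threshold` value are illustrative assumptions, not the framework's vector gradient technique.

```python
# Sketch: partition edge-free pixels first; mark edge pixels for refinement.
def gradient_magnitude(row):
    """Absolute forward difference, padded so output matches input length."""
    return [abs(row[k + 1] - row[k]) for k in range(len(row) - 1)] + [0]

def initial_partition(row, edge_threshold=5):
    """Label runs of low-gradient pixels; -1 marks edge pixels deferred
    to the later refinement stage."""
    grad = gradient_magnitude(row)
    labels, current = [], 0
    for g in grad:
        if g >= edge_threshold:
            labels.append(-1)      # edge pixel: handled during refinement
            current += 1           # the next flat run gets a new label
        else:
            labels.append(current)
    return labels
```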

    Combining semantic information and image segmentation via Markov random fields

    The formulation of the image segmentation problem has evolved considerably from the early years of computer vision in the 1970s to the 2010s. While the initial studies offered mostly unsupervised approaches, a great deal of recent studies have shifted towards supervised solutions. This is due to advancements in cognitive science and their influence on computer vision research; the accelerated availability of computational power has also enabled researchers to develop complex algorithms. Despite the great effort devoted to image segmentation research, the state-of-the-art techniques still fall short of satisfying the needs of the further processing steps of computer vision. This study is another attempt to generate a “substantially complete” segmentation output for the consumption of object classification, recognition, and detection steps. Our approach is to fuse multiple segmentation outputs in order to achieve the “best” result with respect to a cost function. The proposed approach, called Boosted-MRF, elegantly formulates the segmentation fusion problem as a Markov Random Fields (MRF) model in an unsupervised framework. For this purpose, a set of initial segmentation outputs is obtained, and the consensus among the segmentation partitions is formulated in the energy function of the Markov Random Fields model. Finally, minimization of the energy function yields the “best” consensus among the segmentation ensemble. We proceed one step further to improve the performance of Boosted-MRF by introducing auxiliary domain information into the segmentation fusion process. This enhanced segmentation fusion method, called the Domain Specific MRF, updates the energy function of the MRF model with information received from a domain expert. For this purpose, a top-down segmentation method is employed to obtain a set of Domain Specific Segmentation Maps, which are incomplete segmentations of a given image.
Therefore, in this second segmentation fusion method, in addition to the bottom-up segmentation ensemble, we generate an ensemble of top-down Domain Specific Segmentation Maps. Based on the bottom-up and top-down segmentation ensembles, a new MRF energy function is defined. Minimization of this energy function yields the “best” consensus that is consistent with the domain-specific information. The experiments performed on various datasets show that the proposed segmentation fusion methods improve the performance of the segmentation outputs in the ensemble, measured with various indexes such as the Probabilistic Rand Index and Mutual Information. The Boosted-MRF method is also compared to a popular segmentation fusion method, namely Best of K, and performs slightly better. The suggested Domain Specific-MRF method is applied on a set of outdoor images with vegetation, where vegetation information is utilized as domain-specific information; a slight improvement in performance is recorded in this experiment. The method is also applied on a remotely sensed dataset of building images, where more advanced domain-specific information is available. The segmentation performance is evaluated with a performance measure specifically defined to estimate segmentation performance for building images. In these two experiments with the Domain Specific-MRF method, it is observed that, as long as reliable domain-specific information is available, the segmentation performance improves significantly.
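The consensus idea behind this fusion can be sketched in a much-simplified form, without the MRF optimization: two pixels that most segmentations in the ensemble place in the same region should end up together in the fused result. The threshold `tau` is an illustrative assumption.

```python
# Sketch: pairwise co-association consensus across a segmentation ensemble.
def co_association(segmentations, i, j):
    """Fraction of segmentations that put pixels i and j in the same segment."""
    agree = sum(1 for seg in segmentations if seg[i] == seg[j])
    return agree / len(segmentations)

def fuse_pair(segmentations, i, j, tau=0.5):
    """Decide whether pixels i and j belong together in the consensus."""
    return co_association(segmentations, i, j) >= tau
```

In the actual methods, this kind of agreement term enters the MRF energy function and the consensus labeling is obtained by minimizing that energy rather than by per-pair thresholding.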

    Automatic detection and segmentation of orchards using very high-resolution imagery

    Spectral information alone is often not sufficient to distinguish certain terrain classes, such as permanent crops like orchards, vineyards, and olive groves, from other types of vegetation. However, instances of these classes possess distinctive spatial structures that are observable in detail in very high spatial resolution images. This paper proposes a novel unsupervised algorithm for the detection and segmentation of orchards. The detection step uses a texture model that is based on the idea that textures are made up of primitives (trees) appearing in a near-regular repetitive arrangement (planting patterns). The algorithm starts with the enhancement of potential tree locations by using multi-granularity isotropic filters. Then, the regularity of the planting patterns is quantified using projection profiles of the filter responses at multiple orientations. The result is a regularity score at each pixel for each granularity and orientation. Finally, the segmentation step iteratively merges neighboring pixels and regions belonging to similar planting patterns according to the similarities of their regularity scores and obtains the boundaries of individual orchards along with estimates of their granularities and orientations. Extensive experiments using Ikonos and QuickBird imagery as well as images taken from Google Earth show that the proposed algorithm provides good localization of the target objects even when no sharp boundaries exist in the image data. © 2012 IEEE
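The regularity idea above can be illustrated with a small sketch: project the filter responses onto an axis and score how periodic the resulting profile is via its autocorrelation, since a regular planting pattern produces an evenly spaced profile. The scoring details are illustrative, not the paper's exact measure.

```python
# Sketch: peak autocorrelation of a projection profile as a regularity score.
def autocorrelation(profile, lag):
    """Normalized autocorrelation of a 1-D profile at a given lag."""
    n = len(profile)
    m = sum(profile) / n
    var = sum((x - m) ** 2 for x in profile) / n
    if var == 0:
        return 0.0                  # flat profile: no periodic structure
    cov = sum((profile[k] - m) * (profile[k + lag] - m)
              for k in range(n - lag)) / n
    return cov / var

def regularity_score(profile, min_lag=2):
    """Peak autocorrelation over candidate tree spacings: near 1 for a
    periodic (orchard-like) profile, near 0 for an irregular one."""
    max_lag = len(profile) // 2
    return max(autocorrelation(profile, lag)
               for lag in range(min_lag, max_lag + 1))
```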

    Unsupervised Texture Segmentation
