
    Enrichment Procedures for Soft Clusters: A Statistical Test and its Applications

    Clusters, typically mined by modeling locality of attribute spaces, are often evaluated for their ability to demonstrate ‘enrichment’ of categorical features. A cluster enrichment procedure evaluates the membership of a cluster for significant representation in pre-defined categories of interest. While classical enrichment procedures assume a hard clustering definition, in this paper we introduce a new statistical test that computes enrichments for soft clusters. We demonstrate an application of this test in refining and evaluating soft clusters for classification of remotely sensed images.
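
    As a rough illustration of the contrast the abstract draws, the sketch below computes a classical hard-cluster enrichment p-value with a hypergeometric test, plus a hypothetical soft-cluster variant that weights category counts by membership and uses a normal approximation. The soft variant is an assumption made for illustration, not the test introduced in the paper.

```python
import numpy as np
from scipy.stats import hypergeom, norm

def hard_enrichment_pvalue(cluster_members, category_members, population):
    # Classical enrichment: hypergeometric upper tail for the number of
    # category members that fall inside a crisp cluster (inputs are sets).
    k = len(cluster_members & category_members)
    return hypergeom.sf(k - 1, len(population),
                        len(category_members), len(cluster_members))

def soft_enrichment_pvalue(memberships, in_category):
    # Hypothetical soft variant (an assumption, not the paper's test):
    # compare the membership-weighted category count with its expectation
    # under the overall category rate, using a normal approximation.
    w = np.asarray(memberships, float)   # soft memberships in [0, 1]
    y = np.asarray(in_category, bool)    # True if item is in the category
    p0 = y.mean()                        # background category rate
    observed = w[y].sum()
    expected = p0 * w.sum()
    se = np.sqrt(p0 * (1.0 - p0) * (w ** 2).sum())
    return norm.sf((observed - expected) / se)
```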

    A Survey on: Hyper Spectral Image Segmentation and Classification Using FODPSO

    The spatial analysis of an image sensed and captured from a satellite provides less accurate information about a remote location; hence, analyzing the spectral content becomes essential. Hyperspectral images are one type of remotely sensed image, and they are superior to multispectral images in providing spectral information. Target detection is a significant requirement in many areas such as the military and agriculture. This paper presents an analysis of hyperspectral image segmentation using the fuzzy C-means (FCM) clustering technique with the FODPSO classifier algorithm. A 2D adaptive log filter is proposed to denoise the sensed and captured hyperspectral image in order to remove the speckle noise.
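
    For context on the clustering step, here is a minimal fuzzy C-means sketch in NumPy. It is a generic FCM implementation over pixel spectra, not the paper's pipeline, and it omits the FODPSO classifier and the 2D adaptive log filter.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    # X: (n_pixels, n_bands). Returns cluster centres (c, n_bands) and
    # soft memberships U (n_pixels, c) whose rows sum to one.
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # squared distances from every pixel to every centre
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
        U_new = 1.0 / (d2 ** (1.0 / (m - 1)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U
```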

    Methods for Multisource Data Analysis in Remote Sensing

    Methods for classifying remotely sensed data from multiple data sources are considered. Special interest is in general methods for multisource classification, and three such approaches are considered: Dempster-Shafer theory, fuzzy set theory and statistical multisource analysis. Statistical multisource analysis is investigated further. To apply this method successfully it is necessary to characterize the reliability of each data source. Separability measures and classification accuracy are used to measure the reliability. These reliability measures are then associated with reliability factors included in the statistical multisource analysis. Experimental results are given for the application of statistical multisource analysis to multispectral scanner data where different segments of the electromagnetic spectrum are treated as different sources. Finally, a discussion is included concerning future directions for investigating reliability measures.
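
    A minimal sketch of the reliability-weighting idea, assuming the common log-linear form in which each source's class probabilities are raised to a reliability factor before being combined; the statistical multisource analysis investigated in the paper may differ in detail.

```python
import numpy as np

def multisource_decision(class_probs_per_source, reliabilities):
    # class_probs_per_source: list of (n_pixels, n_classes) arrays, one per source.
    # reliabilities: one non-negative weight per source; low values
    # down-weight unreliable sources in the combined decision.
    log_post = np.zeros(class_probs_per_source[0].shape)
    for probs, alpha in zip(class_probs_per_source, reliabilities):
        log_post += alpha * np.log(probs + 1e-12)
    return log_post.argmax(axis=1)   # most likely class per pixel
```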

    Continuous Iterative Guided Spectral Class Rejection Classification Algorithm: Part 1

    This paper outlines the changes necessary to convert the iterative guided spectral class rejection (IGSCR) classification algorithm to a soft classification algorithm. IGSCR uses a hypothesis test to select clusters to use in classification and iteratively refines clusters not yet selected for classification. Both steps assume that cluster and class memberships are crisp (either zero or one). In order to make soft cluster and class assignments (between zero and one), a new hypothesis test and iterative refinement technique are introduced that are suitable for soft clusters. The new hypothesis test, called the (class) association significance test, is based on the normal distribution, and a proof is supplied to show that the assumption of normality is reasonable. Soft clusters are iteratively refined by creating new clusters using information contained in a targeted soft cluster. Soft cluster evaluation and refinement can then be combined to form a soft classification algorithm, continuous iterative guided spectral class rejection (CIGSCR).
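
    As a hedged illustration of a normal-approximation association test for a soft cluster (the exact CIGSCR statistic may differ), the sketch below tests whether the membership-weighted proportion of labelled pixels belonging to a target class exceeds a threshold p0.

```python
import numpy as np
from scipy.stats import norm

def association_significance(memberships, is_target_class, p0=0.5):
    # memberships: soft cluster memberships of the labelled pixels.
    # is_target_class: 1 if the labelled pixel belongs to the target class.
    # Illustrative normal-approximation test, not the paper's exact statistic.
    w = np.asarray(memberships, float)
    y = np.asarray(is_target_class, float)
    phat = (w * y).sum() / w.sum()                       # weighted class proportion
    se = np.sqrt(p0 * (1 - p0) * (w ** 2).sum()) / w.sum()
    z = (phat - p0) / se
    return z, norm.sf(z)                                 # one-sided p-value
```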

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of spectra of materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models searching for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are first discussed. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally. (Comment: This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.)
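
    To make the abundance-estimation step concrete, here is a sketch under the linear mixing model: given known endmember signatures, per-pixel abundances are recovered with non-negative least squares, with an approximate sum-to-one constraint enforced by a heavily weighted augmentation row. This is a commonly used trick assumed here for illustration, not a method taken from the paper.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_abundances(pixels, endmembers, delta=1e3):
    # pixels: (n_pixels, n_bands); endmembers: (n_endmembers, n_bands).
    # Returns abundances of shape (n_pixels, n_endmembers), non-negative and
    # approximately summing to one per pixel (via the weighted extra row).
    E = np.vstack([endmembers.T, delta * np.ones(endmembers.shape[0])])
    abundances = np.empty((pixels.shape[0], endmembers.shape[0]))
    for i, y in enumerate(pixels):
        b = np.append(y, delta)          # augmented pixel vector
        abundances[i], _ = nnls(E, b)    # non-negative least squares
    return abundances
```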

    Operational large-scale segmentation of imagery based on iterative elimination

    Image classification and interpretation are greatly aided through the use of image segmentation. Within the field of environmental remote sensing, image segmentation aims to identify regions of unique or dominant ground cover from their attributes such as spectral signature, texture and context. However, many approaches are not scalable for national mapping programmes due to limits in the size of images that can be processed. Therefore, we present a scalable segmentation algorithm, which is seeded using k-means and provides support for a minimum mapping unit through an innovative iterative elimination process. The algorithm has also been demonstrated for the segmentation of time series datasets capturing both the intra-image variation and change regions. The quality of the segmentation results was assessed by comparison with reference segments along with statistics on the inter- and intra-segment spectral variation. The technique is computationally scalable and is being actively used within the national land cover mapping programme for New Zealand. Additionally, 30-m continental mosaics of Landsat and ALOS-PALSAR have been segmented for Australia in support of national forest height and cover mapping. The algorithm has also been made freely available within the open source Remote Sensing and GIS software Library (RSGISLib).
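
    A minimal sketch of the iterative-elimination idea as described: segments smaller than the minimum mapping unit are merged into their spectrally most similar adjacent segment until none remain. This is an illustrative reconstruction, not the RSGISLib implementation.

```python
import numpy as np

def eliminate_small_segments(labels, image, min_size):
    # labels: (H, W) integer segment ids, e.g. from k-means seeding.
    # image: (H, W, n_bands) spectral data; min_size: minimum mapping unit in pixels.
    labels = labels.copy()
    while True:
        ids, counts = np.unique(labels, return_counts=True)
        small = ids[counts < min_size]
        if small.size == 0:
            break
        means = {i: image[labels == i].mean(axis=0) for i in ids}
        # adjacency from horizontal and vertical neighbour pairs (recomputed per pass)
        pairs = np.concatenate([
            np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1),
            np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1),
        ])
        pairs = pairs[pairs[:, 0] != pairs[:, 1]]
        merged_any = False
        for s in small:
            nbrs = np.unique(np.concatenate([pairs[pairs[:, 0] == s, 1],
                                             pairs[pairs[:, 1] == s, 0]]))
            nbrs = nbrs[nbrs != s]
            if nbrs.size == 0:
                continue
            # merge into the spectrally closest adjacent segment
            d = [np.linalg.norm(means[s] - means[n]) for n in nbrs]
            labels[labels == s] = nbrs[int(np.argmin(d))]
            merged_any = True
        if not merged_any:
            break
    return labels
```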