
    Classification accuracy increase using multisensor data fusion

    The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, Quickbird, GeoEye-1, etc.), but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to confusion between materials such as different roofs, pavements, and roads, and may therefore result in wrong interpretation and use of classification products. Hyperspectral data are another solution, but their low spatial resolution (compared to multispectral data) restricts their usage in many applications. A further improvement can be achieved by fusing multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach to very high resolution SAR and multispectral data fusion for automatic classification in urban areas. Single-polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework provides a principled way of combining multisource data following consensus theory. The classification is not constrained by the limitations of dimensionality, and the computational complexity depends primarily on the dimensionality reduction step. Fusion of single-polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy. A comparison with classification results of WorldView-2 multispectral data (8 spectral bands) is provided, and the numerical evaluation of the method against other established methods illustrates the advantage in classification accuracy for many classes such as buildings, low vegetation, sport objects, forest, roads, railroads, etc.
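
    As a rough illustration of the three INFOFUSE stages named above (information fission, clustering onto a finite domain, and aggregation), here is a minimal sketch that chains per-source k-means clustering into a naive Bayes aggregator. It is not the authors' implementation; the array layout and the choice of KMeans and GaussianNB are illustrative assumptions.

    ```python
    # Minimal sketch of an INFOFUSE-style fusion pipeline (assumptions:
    # per-source KMeans for the finite-domain representation, GaussianNB
    # as the Bayesian aggregator; not the authors' actual code).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.naive_bayes import GaussianNB

    def fuse_sources(sources, train_idx, train_labels, n_clusters=16):
        """sources: list of (n_pixels, n_features) arrays, one per sensor."""
        # 1) information fission: cluster each source independently,
        #    mapping its features onto a finite domain of cluster labels
        cluster_labels = [
            KMeans(n_clusters=n_clusters, n_init=10).fit_predict(s)
            for s in sources
        ]
        # 2) stack per-source labels into a low-dimensional representation
        fused = np.column_stack(cluster_labels).astype(float)
        # 3) aggregation: a simple Bayesian classifier combines the sources
        clf = GaussianNB().fit(fused[train_idx], train_labels)
        return clf.predict(fused)
    ```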

    Techniques for automatic large scale change analysis of temporal multispectral imagery

    Change detection in remotely sensed imagery is a multi-faceted problem with a wide variety of desired solutions. Automatic change detection and analysis to assist in the coverage of large areas at high resolution is a popular area of research in the remote sensing community. Beyond basic change detection, the analysis of change is essential to provide results that positively impact an image analyst's job when examining potentially changed areas. Present change detection algorithms are geared toward low resolution imagery and require analyst input to provide anything more than a simple pixel-level map of the magnitude of change that has occurred. One major problem with this approach is that change occurs in such large volume at small spatial scales that a simple change map is no longer useful. This research strives to create an algorithm, based on a set of metrics, that performs a large-area search for change in high resolution multispectral image sequences and utilizes a variety of methods to identify different types of change. Rather than simply mapping the magnitude of any change in the scene, the goal of this research is to create a useful display of the different types of change in the image. The techniques presented in this dissertation are used to interpret large-area images and provide useful information to an analyst about small regions that have undergone specific types of change, while retaining image context to make further manual interpretation easier. This analyst cueing to reduce information overload in a large-area search environment will have an impact in areas such as disaster recovery, search and rescue, and land use surveys. By applying existing statistical methods and new and existing topological methods to high resolution temporal multispectral imagery in a feature-based approach, a novel change detection methodology is produced that can automatically provide useful information about the change occurring in large-area, high resolution image sequences. The change detection and analysis algorithm developed could be adapted to many potential image change scenarios to perform automatic large-scale analysis of change.
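
    A change-vector-style sketch in the spirit of the approach described above: beyond a magnitude map, the spectral direction of change is clustered so that different change types can be displayed separately. The threshold and the use of k-means are illustrative assumptions, not the dissertation's actual metrics.

    ```python
    # Toy change-type mapping for two coregistered multispectral images
    # (assumptions: fixed magnitude threshold, KMeans over change directions).
    import numpy as np
    from sklearn.cluster import KMeans

    def change_types(img_t0, img_t1, mag_thresh=0.1, n_types=4):
        """img_t0, img_t1: (rows, cols, bands) coregistered images."""
        diff = img_t1.astype(float) - img_t0.astype(float)
        mag = np.linalg.norm(diff, axis=2)            # change magnitude
        changed = mag > mag_thresh                    # binary change mask
        # unit change vectors carry the 'type' of change (e.g. vegetation
        # loss vs. new construction have different spectral directions)
        dirs = diff[changed] / mag[changed, None]
        labels = KMeans(n_clusters=n_types, n_init=10).fit_predict(dirs)
        type_map = np.zeros(mag.shape, dtype=int)     # 0 = no change
        type_map[changed] = labels + 1
        return mag, type_map
    ```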

    Alignment of Hyperspectral Images Using KAZE Features

    Image registration is a common operation in any type of image processing, especially for remote sensing images. Since the publication of the scale-invariant feature transform (SIFT) method, several algorithms based on feature detection have been proposed. In particular, KAZE builds the scale space using a nonlinear diffusion filter instead of Gaussian filters. Nonlinear diffusion filtering allows applying a controlled blur while preserving the important structures of the image. Hyperspectral images contain a large amount of spatial and spectral information that can be used to perform a more accurate registration. This article presents HSI-KAZE, a method to register hyperspectral remote sensing images based on KAZE but considering the spectral information. The proposed method combines the information of a set of preselected bands, and it adapts the keypoint descriptor and the matching stage to take the spectral information into account. The method is suitable for registering images in extreme situations in which the scale difference between them is very large. The effectiveness of the proposed algorithm has been tested on real images taken on different dates and presenting different types of changes. The experimental results show that the method is robust, achieving registrations with scale factors of up to 24.0×. This research was supported in part by the Consellería de Cultura, Educación e Ordenación Universitaria, Xunta de Galicia [grant numbers GRC2014/008 and ED431G/08] and the Ministerio de Educación, Cultura y Deporte [grant number TIN2016-76373-P], both co-funded by the European Regional Development Fund. The work of Álvaro Ordóñez was supported by the Ministerio de Educación, Cultura y Deporte under an FPU Grant [grant number FPU16/03537]. This work was also partially supported by the Consejería de Educación, Junta de Castilla y León (PROPHET Project) [grant number VA082P17].
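
    For reference, the per-band KAZE-plus-RANSAC core of such a registration can be sketched with OpenCV as below. HSI-KAZE additionally combines several preselected bands and adapts the descriptor and matching to the spectral dimension; this minimal single-band version omits those steps, and the band choice is an assumption.

    ```python
    # Single-band KAZE registration sketch (OpenCV's stock KAZE, Lowe's
    # ratio test, RANSAC homography); not the full HSI-KAZE method.
    import cv2
    import numpy as np

    def register_band(ref_band, mov_band, ratio=0.75):
        """ref_band, mov_band: single-band uint8 images to register."""
        kaze = cv2.KAZE_create()                 # nonlinear scale space
        kp1, des1 = kaze.detectAndCompute(ref_band, None)
        kp2, des2 = kaze.detectAndCompute(mov_band, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        # ratio test keeps only distinctive matches
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                if m.distance < ratio * n.distance]
        src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        # warp the moving band into the reference frame
        return cv2.warpPerspective(mov_band, H, ref_band.shape[::-1])
    ```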

    WEIGHTED ICP POINT CLOUDS REGISTRATION BY SEGMENTATION BASED ON EIGENFEATURES CLUSTERING

    Abstract. Dense point clouds can nowadays be considered the main product of UAV (Unmanned Aerial Vehicle) photogrammetric processing, and cloud registration is still a key issue when blocks are acquired separately. In this paper some overlapping datasets, acquired with a multispectral Parrot Sequoia camera above some rice fields, are analysed in a single-block approach. Since the system is equipped only with a navigation-grade sensor, the georeferencing information is affected by large errors and the resulting dense point clouds are significantly far apart: to register them the Iterative Closest Point (ICP) technique is applied. ICP convergence fundamentally depends on the correct selection of the points to be coupled, and the paper proposes an innovative procedure in which a double-density subset of points is selected according to terrain characteristics. This approach reduces the complexity of the calculation and prevents flat terrain parts, where most of the original points lie, from being de facto overweighted. Starting from the original dense cloud, eigenfeatures are extracted for each point, and clustering is then performed to group points into two classes connected to terrain geometry, flat terrain or not; two metrics, Euclidean and City Block, are adopted and compared for k-means clustering. Segmentation results are evaluated visually and by comparison with a manually performed classification; the ICP registrations are then performed and their quality is assessed as well. The presented results show that the proposed procedure seems capable of registering even widely separated clouds with good overall accuracy.
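
    The point-selection idea can be sketched as follows: eigenfeatures derived from each point's local covariance separate flat terrain from structured areas, and the flat class is subsampled more aggressively so it does not dominate the ICP correspondences. The neighborhood size, the subsampling rates, and the majority-cluster-is-flat heuristic are assumptions, not the paper's exact settings.

    ```python
    # Eigenfeature-based point selection before ICP (a sketch; the
    # reduced clouds would then feed any standard ICP implementation).
    import numpy as np
    from scipy.spatial import cKDTree
    from sklearn.cluster import KMeans

    def select_points(cloud, k=20, keep_flat=0.05, keep_struct=0.5):
        """cloud: (n, 3) array. Returns a density-balanced subset."""
        tree = cKDTree(cloud)
        _, idx = tree.query(cloud, k=k)          # k nearest neighbors
        eigfeats = []
        for nb in idx:
            cov = np.cov(cloud[nb].T)
            w = np.sort(np.linalg.eigvalsh(cov))[::-1]  # l1 >= l2 >= l3
            w = w / w.sum()
            # planarity- and sphericity-style eigenfeatures
            eigfeats.append([(w[1] - w[2]) / w[0], w[2] / w[0]])
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(np.array(eigfeats))
        # assumption: the larger cluster is flat terrain (rice fields)
        flat = labels == np.bincount(labels).argmax()
        rng = np.random.default_rng(0)
        keep = np.where(flat, rng.random(len(cloud)) < keep_flat,
                        rng.random(len(cloud)) < keep_struct)
        return cloud[keep]
    ```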

    Optimized spectral filter design enables more accurate estimation of oxygen saturation in spectral imaging

    Oxygen saturation (SO2) in tissue is a crucially important physiological parameter with ubiquitous clinical utility in diagnosis, treatment, and monitoring, as well as widespread use as an invaluable preclinical research tool. Multispectral imaging can be used to visualize SO2 non-invasively, non-destructively, and without contact in real time using narrow spectral filter sets, but typically these filter sets are poorly suited to a specific clinical task, application, or tissue type. In this work, we demonstrate the merit of optimizing spectral filter sets for more accurate estimation of SO2. Using tissue modelling and simulated multispectral imaging, we demonstrate that filter optimization reduces the root-mean-square error (RMSE) in estimating SO2 by up to 37% compared with evenly spaced filters. Moreover, we demonstrate up to a 79% decrease in RMSE for optimized filter sets compared with filter sets chosen to minimize mutual information. Wider adoption of this approach will result in more effective multispectral imaging systems that can address specific clinical needs and, consequently, more widespread adoption of multispectral imaging technologies in disease diagnosis and treatment.
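
    A toy version of the optimization loop described above: candidate filter center wavelengths are scored by the RMSE of SO2 recovered from simulated band measurements, and random search keeps the best set. The Beer-Lambert forward model, Gaussian filter shapes, and random search are simplifying assumptions; eps_hbo2 and eps_hb stand for tabulated HbO2/Hb extinction spectra the caller must supply.

    ```python
    # Filter-set scoring and random-search optimization (a sketch under
    # the assumptions named above, not the paper's exact pipeline).
    import numpy as np

    def so2_rmse(centers, wl, eps_hbo2, eps_hb, fwhm=20.0, n_sim=500, seed=0):
        rng = np.random.default_rng(seed)
        sigma = fwhm / 2.355
        # Gaussian filters, one row per center wavelength
        filters = np.exp(-0.5 * ((wl[None, :] - np.asarray(centers)[:, None]) / sigma) ** 2)
        so2_true = rng.uniform(0.0, 1.0, n_sim)
        # two-chromophore absorbance spectra plus measurement noise
        mu = so2_true[:, None] * eps_hbo2 + (1 - so2_true[:, None]) * eps_hb
        meas = mu @ filters.T + rng.normal(0, 1e-3, (n_sim, len(centers)))
        # linear least-squares inversion of the two-chromophore model
        A = np.stack([filters @ eps_hbo2, filters @ eps_hb], axis=1)
        coef, *_ = np.linalg.lstsq(A, meas.T, rcond=None)
        so2_est = coef[0] / coef.sum(axis=0)
        return np.sqrt(np.mean((so2_est - so2_true) ** 2))

    def optimize_filters(wl, eps_hbo2, eps_hb, n_filters=4, n_trials=2000, seed=1):
        rng = np.random.default_rng(seed)
        best = None
        for _ in range(n_trials):
            c = np.sort(rng.uniform(wl.min(), wl.max(), n_filters))
            err = so2_rmse(c, wl, eps_hbo2, eps_hb)
            if best is None or err < best[0]:
                best = (err, c)
        return best  # (rmse, centers)
    ```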

    Dimensionality reduction and hierarchical clustering in framework for hyperspectral image segmentation

    Hyperspectral data contain hundreds of narrow bands representing the same scene on Earth, with each pixel having a continuous reflectance spectrum. The first attempts to analyse hyperspectral images were based on techniques developed for multispectral images, randomly selecting a few spectral channels, usually fewer than seven. This random selection of bands degrades the performance of segmentation algorithms on hyperspectral data in terms of accuracy. In this paper, a new framework is designed for the analysis of hyperspectral images that takes information from all the data channels, with dimensionality reduction based on subset selection and hierarchical clustering. A methodology based on subset construction is used to select k informative bands from a d-band dataset. In this selection, similarity metrics such as Average Pixel Intensity [API], Histogram Similarity [HS], Mutual Information [MI] and Correlation Similarity [CS] are used to create k distinct subsets, and from each subset a single band is selected. The selected informative bands are merged into a single image using a hierarchical fusion technique. After obtaining the fused image, a hierarchical clustering algorithm is used for segmentation. The qualitative and quantitative analysis shows that the CS similarity metric in the dimensionality reduction algorithm yields the highest-quality segmented image.
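
    A sketch of the band-selection and segmentation steps using the Correlation Similarity metric named above: bands are grouped into k subsets, one representative band is kept per subset, and the reduced cube is segmented by hierarchical clustering. The contiguous equal-size subsets and the representativeness criterion are simplifying assumptions.

    ```python
    # Band selection via correlation similarity, then hierarchical
    # clustering for segmentation (a sketch; use on small images or a
    # pixel subsample, as agglomerative clustering scales quadratically).
    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    def select_bands_cs(cube, k):
        """cube: (rows, cols, d) hyperspectral image -> (rows, cols, k)."""
        rows, cols, d = cube.shape
        flat = cube.reshape(-1, d).astype(float)
        subsets = np.array_split(np.arange(d), k)  # k contiguous subsets
        chosen = []
        for sub in subsets:
            corr = np.corrcoef(flat[:, sub].T)     # CS metric within subset
            # keep the band most correlated with the rest of its subset
            chosen.append(sub[np.argmax(corr.mean(axis=0))])
        return cube[:, :, chosen]

    def segment(cube_k, n_segments=8):
        rows, cols, k = cube_k.shape
        X = cube_k.reshape(-1, k).astype(float)
        labels = AgglomerativeClustering(n_clusters=n_segments).fit_predict(X)
        return labels.reshape(rows, cols)
    ```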

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of the spectra of the materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models searching for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are first discussed. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally.

    Comment: This work has been accepted for publication in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
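
    The linear mixing model underlying most of the surveyed algorithms can be made concrete with a common baseline: each pixel y = Ea + n, where E collects the endmember spectra and the abundances a are nonnegative and sum to one. The sketch below solves per-pixel nonnegative least squares and renormalizes; it is a generic baseline, not any specific algorithm from the survey.

    ```python
    # Baseline linear unmixing: per-pixel NNLS plus sum-to-one rescaling.
    import numpy as np
    from scipy.optimize import nnls

    def unmix(pixels, endmembers):
        """pixels: (n, bands); endmembers: (bands, p) -> (n, p) abundances."""
        abundances = np.array([nnls(endmembers, y)[0] for y in pixels])
        s = abundances.sum(axis=1, keepdims=True)
        # rescale to the probability simplex, leaving all-zero rows at zero
        return np.divide(abundances, s, out=np.zeros_like(abundances),
                         where=s > 0)
    ```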

    Multilayer Complex Network Descriptors for Color-Texture Characterization

    A new method based on complex networks is proposed for color-texture analysis. The proposal consists of modeling the image as a multilayer complex network where each color channel is a layer and each pixel (in each color channel) is represented as a network vertex. The dynamic evolution of the network is accessed through a set of modeling parameters (radii and thresholds), and new characterization techniques are introduced to capture information regarding within- and between-channel spatial interaction. An automatic and adaptive approach for threshold selection is also proposed. We conduct classification experiments on 5 well-known datasets: Vistex, Usptex, Outex13, CUReT and MBT. Results are compared among various literature methods, including deep convolutional neural networks with pre-trained architectures. The proposed method presented the highest overall performance over the 5 datasets, with 97.7% mean accuracy against the 97.0% achieved by the ResNet convolutional neural network with 50 layers.

    Comment: 20 pages, 7 figures and 4 tables
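
    A toy rendering of the modeling step described above: pixels are vertices, vertices within radius r are linked when their intensity difference is below threshold t, and degree statistics over within- and between-layer edges serve as descriptors. The pairing of each channel with the next one and the degree-only descriptor are simplifications of the paper's method, and the dense distance matrix limits this to small images.

    ```python
    # Toy multilayer-network texture descriptor (assumptions: fixed r and
    # t, cyclic channel pairing, mean/std of degrees as the descriptor).
    import numpy as np

    def multilayer_degrees(img, r=2, t=0.1):
        """img: (rows, cols, 3) float array scaled to [0, 1]."""
        rows, cols, ch = img.shape
        yy, xx = np.mgrid[0:rows, 0:cols]
        coords = np.column_stack([yy.ravel(), xx.ravel()])
        dist2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
        near = (dist2 > 0) & (dist2 <= r * r)       # spatial neighborhood
        deg_within = np.zeros((rows * cols, ch))
        deg_between = np.zeros((rows * cols, ch))
        for c in range(ch):
            vals = img[:, :, c].ravel()
            sim = np.abs(vals[:, None] - vals[None, :]) <= t
            deg_within[:, c] = (near & sim).sum(1)  # same-layer edges
            other = img[:, :, (c + 1) % ch].ravel() # next layer's pixels
            sim_b = np.abs(vals[:, None] - other[None, :]) <= t
            deg_between[:, c] = (near & sim_b).sum(1)
        # descriptor: mean and std of degrees per layer
        return np.concatenate([deg_within.mean(0), deg_within.std(0),
                               deg_between.mean(0), deg_between.std(0)])
    ```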