
    Processing of Hyperspectral Data using Wavelet Transform

    Remote sensor technology has encouraged a series of research efforts in signal and image processing, because remote sensing makes it possible to obtain many types of signals and images from places all over the world. In most cases, data obtained from hyperspectral images are too voluminous and noisy, which to some extent reduces the accuracy of the results obtained when such signals or images are further processed for applications. Previous research has not sufficiently addressed this fundamental problem. This work therefore applies the Wavelet Transform to signals obtained from hyperspectral images, with the aim of denoising them and reducing their dimensionality without losing part of their content. After denoising, the quality of the image or signal is markedly improved in terms of clarity and size, which yields better results when the signal is used in applications. The system was implemented using the MATLAB wavelet toolbox. The results obtained improve on previous ones, producing a hyperspectral spectrum/signal that is thoroughly denoised and dimensionally reduced to an acceptable size within a very short computational time.
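To illustrate the idea, a one-level Haar wavelet denoiser can be sketched in Python with NumPy. This is a generic sketch of wavelet denoising, not the paper's MATLAB implementation; the `haar_denoise` name and the soft-thresholding rule are assumptions.

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar wavelet denoising: transform, soft-threshold
    the detail coefficients, then invert the transform.
    Assumes an even-length signal."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)   # low-pass (approximation)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)   # high-pass (noise-dominated)
    # soft thresholding shrinks small detail coefficients to zero
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    # inverse Haar transform
    out = np.empty_like(s)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out
```

With a zero threshold the transform round-trips exactly; a large threshold collapses each pair to its average, which is the denoising effect on small fluctuations.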

    Automated Image Registration Using Morphological Region of Interest Feature Extraction

    With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors.
    Keywords: Automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching
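The spectral information divergence used for landmark extraction can be sketched as follows. This is a generic NumPy implementation of the standard SID formula (symmetric relative entropy between normalized spectra); the `eps` floor is an assumption added to avoid taking the log of zero.

```python
import numpy as np

def sid(x, y, eps=1e-12):
    """Spectral information divergence between two pixel spectra:
    normalize each spectrum to a probability distribution, then sum
    the two relative entropies (a symmetric KL divergence)."""
    p = np.asarray(x, dtype=float)
    q = np.asarray(y, dtype=float)
    p = np.clip(p / p.sum(), eps, None)
    q = np.clip(q / q.sum(), eps, None)
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```

SID is zero for spectra that differ only by a positive scale factor and grows as their shapes diverge, which is what makes it useful for comparing candidate landmark chips.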

    Entropy Based Determination of Optimal Principal Components of Airborne Prism Experiment (APEX) Imaging Spectrometer Data for Improved Land Cover Classification

    Hyperspectral data find wide application in remote sensing. However, with the increase in the amount of information and its associated advantages come the "curse" of dimensionality and additional computational load. The question most often remains as to which subset of the data best represents the information in the imagery. The present work attempts to establish entropy, a statistical measure for quantifying uncertainty, as a sound criterion for determining the optimal number of principal components (PCs) for improved identification of land cover classes. Feature extraction from the Airborne Prism EXperiment (APEX) data was achieved using Principal Component Analysis (PCA). Determining the optimal number of PCs is vital, as it avoids adding computational load to the classification algorithm with no significant improvement in accuracy. Given the soft classification approach applied in this work, the entropy of the classification outputs is analyzed. Comparison of these entropy measures with traditional accuracy assessment of the corresponding 'hardened' outputs produced results that support this objective. The present work concentrates on entropy as a criterion for optimal feature extraction in pre-processing, rather than on the accuracy obtained from principal component analysis and possibilistic c-means classification. Results show that 7 PCs of the APEX dataset are the optimal choice, as they show lower entropy and higher accuracy, along with better class identification, than the other combinations evaluated on the APEX dataset.
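The entropy criterion can be illustrated with a short sketch: compute the mean Shannon entropy of the per-pixel soft class memberships for each candidate PC count, and keep the count with the lowest uncertainty. The function names and the exact selection rule are assumptions for illustration, not the paper's code.

```python
import numpy as np

def mean_membership_entropy(memberships, eps=1e-12):
    """Average Shannon entropy (bits) of per-pixel soft class
    memberships. `memberships` is (n_pixels, n_classes) with rows
    summing to 1; lower values mean more confident classification."""
    m = np.clip(np.asarray(memberships, dtype=float), eps, 1.0)
    return float(np.mean(-np.sum(m * np.log2(m), axis=1)))

def pick_optimal_pcs(entropy_by_npc):
    """Given {n_pcs: mean entropy}, return the PC count with the
    lowest entropy (a hypothetical selection rule sketching the
    paper's idea)."""
    return min(entropy_by_npc, key=entropy_by_npc.get)
```

A pixel split 50/50 between two classes contributes one full bit of uncertainty, while a pixel assigned entirely to one class contributes essentially none.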

    Blur Kernel's Effect on Performance of Single-Frame Super-Resolution Algorithms for Spatially Enhancing Hyperion and PRISMA Data

    Single-frame super-resolution (SFSR) achieves the goal of generating a high-resolution image from a single low-resolution input in a three-step process, namely, noise removal, up-sampling and deblurring. The scale factor and blur kernel are essential parameters of the up-sampling and deblurring steps. Few studies document the impact of these parameters on the performance of SFSR algorithms for improving the spatial resolution of real-world remotely sensed datasets. Here, the effect of changing the blur kernel has been studied on the behaviour of two classic SFSR algorithms: iterative back projection (IBP) and Gaussian process regression (GPR), which are applied to two spaceborne hyperspectral datasets for scale factors 2, 3 and 4. Eight full-reference image quality metrics and algorithm processing time are deployed for this purpose. A literature-based re-interpretation of Wald's reduced resolution protocol has also been used in this work for choosing the reference image. Intensive intra-algorithm comparisons of various simulation scenarios reveal each algorithm's best-performing Gaussian blur kernel parameters. Inter-algorithm comparison shows which of the two algorithms performs better, thereby paving the way for further research in SFSR of remotely sensed images.
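A minimal sketch of iterative back projection follows, assuming a simple block-mean decimation as the observation model in place of an explicit Gaussian blur kernel (the paper studies that kernel's effect; this simplification is ours). The estimate is repeatedly corrected so that its downsampled version reproduces the observed low-resolution image.

```python
import numpy as np

def downsample(img, s):
    """Block-mean decimation by integer factor s (a stand-in for the
    blur-then-subsample observation model)."""
    h, w = img.shape
    return img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def ibp_superresolve(low, s, n_iter=20, step=1.0):
    """Iterative back projection: start from a nearest-neighbour
    upsample, then repeatedly push the downsampling residual back
    into the high-resolution estimate."""
    high = np.kron(low, np.ones((s, s)))        # initial HR guess
    for _ in range(n_iter):
        residual = low - downsample(high, s)    # data-consistency error
        high += step * np.kron(residual, np.ones((s, s)))
    return high
```

By construction the converged estimate is consistent with the low-resolution input under the assumed decimation model; with a real Gaussian kernel the back-projected residual would be filtered accordingly.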

    Calibration and segmentation of skin areas in hyperspectral imaging for the needs of dermatology

    Introduction: Among the currently known imaging methods is hyperspectral imaging, which fills a gap left by conventional visible-light devices that use classical CCDs. A major problem in the study of the skin is its segmentation and the proper calibration of the results obtained. For this purpose, a dedicated automatic image analysis algorithm is proposed by the paper's authors. Material and method: The developed algorithm was tested on data acquired with the Specim camera. Images covered different body areas of healthy patients. The resulting data were anonymized and stored in the output formats: source dat (ENVI File) and raw. The spectral range of the data obtained was 397 to 1030 nm. An image was recorded every 0.79 nm, which in total gave 800 2D images for each subject. A total of 36,000 2D images in dat format and the same number in raw format were obtained from 45 full hyperspectral measurement sessions. As part of the paper, an image analysis algorithm using both known analysis methods and new ones developed by the authors was proposed. Among others, median filtering, the Canny filter, conditional opening and closing operations, and spectral analysis were used. The algorithm was implemented in Matlab and C and is used in practice. Results: The proposed method enables accurate segmentation of the 36,000 measured 2D images at the level of 7.8%. Segmentation is carried out fully automatically based on the reference ray spectrum. In addition, brightness calibration of the individual 2D images is performed for the subsequent wavelengths. For a few segmented areas, the analysis time on an Intel Core i5 CPU with 4 GB of RAM does not exceed 10 s. Conclusions: The obtained results confirm the usefulness of the applied method for image analysis and processing in dermatological practice, in particular for the quantitative evaluation of skin lesions. Such analysis can be performed fully automatically, without operator intervention.
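The abstract does not spell out the brightness calibration formula. A common scheme for hyperspectral cameras converts raw counts to reflectance per band using dark-current and white-reference frames; the sketch below assumes that scheme and is not taken from the paper.

```python
import numpy as np

def calibrate_reflectance(raw, dark, white, eps=1e-9):
    """Per-band brightness calibration of hyperspectral data:
    reflectance = (raw - dark) / (white - dark), clipped to [0, 1].
    `dark` and `white` are the dark-current and white-reference
    frames; `eps` guards against division by zero."""
    raw = np.asarray(raw, dtype=float)
    return np.clip((raw - dark) / (white - dark + eps), 0.0, 1.0)
```

A pixel reading halfway between the dark and white references calibrates to a reflectance of 0.5, independent of the sensor's per-band gain.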

    Extended Averaged Learning Subspace Method for Hyperspectral Data Classification

    Averaged learning subspace methods (ALSM) have the advantage of being easily implemented and appear to perform well on hyperspectral image classification problems. However, some open and challenging problems remain which, if addressed, could further improve their classification accuracy. We carried out experiments mainly using two kinds of improved subspace methods (namely, dynamic and fixed subspace methods), in conjunction with [0,1] and [-1,+1] normalization. We used several performance indicators to support our experimental studies: classification accuracy, computation time, and the stability of the parameter settings. Results are presented for the AVIRIS Indian Pines data set. Experimental analysis showed that the fixed subspace method combined with [0,1] normalization yielded higher classification accuracy than the other subspace methods. Moreover, ALSMs are easy to apply: only two parameters need to be set, and they can be applied directly to hyperspectral data. In addition, they can completely identify the training samples in a finite number of iterations.
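The core of any subspace classifier can be sketched in a few lines: learn a low-dimensional basis per class from its training spectra, then assign a pixel to the class whose subspace captures the most of its energy. This is a generic subspace method for illustration, not the specific ALSM averaged-update rule.

```python
import numpy as np

def class_subspace(samples, dim):
    """Orthonormal basis (top `dim` right singular vectors) spanning
    a class's training spectra. Returns an array of shape
    (dim, n_bands)."""
    _, _, vt = np.linalg.svd(np.asarray(samples, dtype=float),
                             full_matrices=False)
    return vt[:dim]

def classify(x, subspaces):
    """Subspace classification: project the pixel spectrum onto each
    class subspace and pick the class with the largest projection
    norm (most energy retained)."""
    x = np.asarray(x, dtype=float)
    scores = [np.linalg.norm(basis @ x) for basis in subspaces]
    return int(np.argmax(scores))
```

The two tunable quantities here (subspace dimension and the input normalization applied before the SVD) mirror the two parameters the abstract says an ALSM needs.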

    Multi-Channel Morphological Profiles for Classification of Hyperspectral Images Using Support Vector Machines

    Hyperspectral imaging is a remote sensing technique that generates hundreds of images, corresponding to different wavelength channels, for the same area on the surface of the Earth. Supervised classification of hyperspectral image data sets is a challenging problem due to the limited availability of training samples (which are very difficult and costly to obtain in practice) and the extremely high dimensionality of the data. In this paper, we explore the use of multi-channel morphological profiles for feature extraction prior to classification of remotely sensed hyperspectral data sets using support vector machines (SVMs). In order to introduce multi-channel morphological transformations, which rely on ordering of pixel vectors in multidimensional space, several vector ordering strategies are investigated. A reduced implementation, which builds the multi-channel morphological profile from the first components resulting from a dimensionality reduction transformation applied to the input data, is also proposed. Our experimental results, conducted using three representative hyperspectral data sets collected by NASA's Airborne Visible-Infrared Imaging Spectrometer (AVIRIS) sensor and the German Digital Airborne Imaging Spectrometer (DAIS 7915), reveal that multi-channel morphological profiles can improve on single-channel morphological profiles in the task of extracting relevant features for classification of hyperspectral data using small training sets.
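A single-channel morphological profile (the baseline the multi-channel version extends) can be sketched in plain NumPy with a flat 3x3 structuring element: stack the base image together with openings and closings at increasing scale. The vector-ordering strategies the paper investigates are not reproduced here.

```python
import numpy as np

def _shifted_views(img):
    """The nine 3x3-neighbourhood views of an edge-padded image."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]

def dilate(img):
    """Greyscale dilation: per-pixel max over the 3x3 neighbourhood."""
    return np.max(_shifted_views(img), axis=0)

def erode(img):
    """Greyscale erosion: per-pixel min over the 3x3 neighbourhood."""
    return np.min(_shifted_views(img), axis=0)

def morphological_profile(band, n_scales=2):
    """Morphological profile of one base image (e.g. a first
    principal component): the band itself plus an opening and a
    closing at each successive scale."""
    profile = [band]
    op, cl = band, band
    for _ in range(n_scales):
        op = dilate(erode(op))   # opening: erosion then dilation
        cl = erode(dilate(cl))   # closing: dilation then erosion
        profile += [op, cl]
    return np.stack(profile)
```

Openings remove bright structures smaller than the structuring element while closings fill dark ones, so the profile encodes the scale of spatial structures around each pixel, exactly the kind of feature the SVM then classifies.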

    Utilizing Hierarchical Segmentation to Generate Water and Snow Masks to Facilitate Monitoring Change with Remotely Sensed Image Data

    The hierarchical segmentation (HSEG) algorithm is a hybrid of hierarchical step-wise optimization and constrained spectral clustering that produces a hierarchical set of image segmentations. This segmentation hierarchy organizes image data in a manner that makes the image's information content more accessible for analysis by enabling region-based analysis. This paper discusses data analysis with HSEG and describes several measures of region characteristics that may be useful in analyzing segmentation hierarchies for various applications. Segmentation hierarchy analysis for generating land/water and snow/ice masks from MODIS (Moderate Resolution Imaging Spectroradiometer) data was demonstrated and compared with the corresponding MODIS standard products. The masks based on HSEG segmentation hierarchies compare very favorably to the MODIS standard products. Further, the HSEG-based land/water mask was specifically tailored to the MODIS data, and the HSEG snow/ice mask did not require the setting of a critical threshold as required in the production of the corresponding MODIS standard product.
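The hierarchical step-wise merging idea can be illustrated on a toy 1-D "image". This sketch keeps only the core loop (merge the most similar adjacent regions, record a labeling per level) and omits HSEG's constrained spectral clustering of spatially disjoint regions; the mean-difference merge criterion is an assumption.

```python
import numpy as np

def hseg_sketch(values, n_levels):
    """Toy hierarchical step-wise segmentation of a 1-D signal:
    start with one region per pixel, repeatedly merge the adjacent
    pair of regions with the closest means, and record the labeling
    after each merge. Returns the last `n_levels` labelings."""
    regions = [[i] for i in range(len(values))]
    hierarchy = []
    while len(regions) > 1:
        # merge cost for each adjacent pair = |difference of means|
        costs = [abs(np.mean([values[i] for i in regions[k]]) -
                     np.mean([values[i] for i in regions[k + 1]]))
                 for k in range(len(regions) - 1)]
        k = int(np.argmin(costs))
        regions[k:k + 2] = [regions[k] + regions[k + 1]]
        labels = np.empty(len(values), int)
        for lab, reg in enumerate(regions):
            labels[reg] = lab
        hierarchy.append(labels.copy())
    return hierarchy[-n_levels:]
```

Reading the hierarchy from fine to coarse is what makes mask generation possible: a land/water or snow/ice mask corresponds to picking the hierarchy level at which the regions of interest have just merged into coherent classes.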