
    Unsupervised Classification of Hyperspectral Images based on Spectral Features

    In this era of Big Data, large quantities of data are created every day by the many visual sensors available to mankind. One important source is satellite imagery of land. The applications of these data are numerous: they have been used for classification of land regions, change detection of an area over a period of time, detection of anomalies in an area, and so on. Because the volume of data is growing rapidly, performing these tasks manually is impractical, so automated algorithms must be applied. The images we generally see consist of visible light in the Red, Green and Blue bands, but light of different wavelengths differs in how it passes through obstacles, so there has been considerable research on images with continuous spectra, called hyperspectral images. In this thesis, I examine several classic machine learning algorithms such as K-means, Expectation Maximization and Hierarchical Clustering, some less conventional methods such as the Unsupervised Artificial DNA Classifier and a Spatial-Spectral Information approach that integrates both kinds of features for better classification, and a variant of Maximal Margin Clustering that uses the K-Nearest Neighbor algorithm for cross-validation to find the best separating set. In some cases PCA is used to extract the best features from the dataset. Finally, all the results are compared
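
    As a rough illustration of the baseline pipeline named above (PCA for feature reduction followed by K-means clustering of pixels), the following sketch shows one way it might look in Python with scikit-learn; the cube shape, file-free synthetic data, and parameter values are assumptions for illustration, not taken from the thesis.

        # Minimal sketch: unsupervised classification of a hyperspectral cube
        # with PCA + K-means. The cube shape (rows, cols, bands) and the number
        # of clusters are illustrative assumptions.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        def cluster_hsi(cube, n_components=10, n_clusters=8):
            rows, cols, bands = cube.shape
            pixels = cube.reshape(-1, bands).astype(np.float64)

            # PCA keeps the leading spectral components as features.
            features = PCA(n_components=n_components).fit_transform(pixels)

            # K-means groups pixels with similar spectral signatures.
            labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
            return labels.reshape(rows, cols)

        # Example with a synthetic cube standing in for real satellite data.
        cube = np.random.rand(100, 100, 200)
        label_map = cluster_hsi(cube)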

    A REVIEW ON MULTIPLE-FEATURE-BASED ADAPTIVE SPARSE REPRESENTATION (MFASR) AND OTHER CLASSIFICATION TYPES

    A technique called Multiple-Feature-based Adaptive Sparse Representation (MFASR) has been demonstrated for hyperspectral image (HSI) classification. The method consists of four main steps. First, four different features are extracted to capture the spectral and spatial information of the original hyperspectral image. Second, a shape-adaptive (SA) spatial region is obtained around each pixel. Third, a sparse representation algorithm is applied to each shape-adaptive region to obtain a matrix of sparse coefficients over the multiple features. Finally, the class label of each test pixel is determined from the obtained coefficients. MFASR achieves much better classification results than other classifiers in both quantitative and qualitative terms. It benefits from the strong correlations among the different extracted features and makes effective use of those features through adaptive sparse representation, which is how its very high classification performance is achieved
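
    The last step, assigning the label whose class sub-dictionary best reconstructs the test pixel from its sparse coefficients, can be sketched roughly as below for a single feature; the dictionary layout and the use of orthogonal matching pursuit are illustrative assumptions, not the authors' exact solver.

        # Rough sketch of sparse-representation classification by minimum
        # class-wise reconstruction residual. Dictionary construction and the
        # OMP solver are illustrative assumptions.
        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        def src_label(x, dictionary, dict_labels, n_nonzero=5):
            """x: test feature vector; dictionary: (dim, n_atoms) training atoms;
            dict_labels: class id of each atom."""
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
            omp.fit(dictionary, x)
            coef = omp.coef_

            residuals = {}
            for c in np.unique(dict_labels):
                mask = dict_labels == c
                # Reconstruct x using only the atoms (and coefficients) of class c.
                recon = dictionary[:, mask] @ coef[mask]
                residuals[c] = np.linalg.norm(x - recon)
            return min(residuals, key=residuals.get)  # class with smallest residual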

    Superpixel nonlocal weighting joint sparse representation for hyperspectral image classification.

    Joint sparse representation classification (JSRC) is a representative spectral–spatial classifier for hyperspectral images (HSIs). However, the JSRC is inappropriate for highly heterogeneous areas due to the spatial information being extracted from a fixed-sized neighborhood block, which is often unable to conform to the naturally irregular structure of land cover. To address this problem, a superpixel-based JSRC with nonlocal weighting, i.e., superpixel-based nonlocal weighted JSRC (SNLW-JSRC), is proposed in this paper. In SNLW-JSRC, the superpixel representation of an HSI is first constructed based on an entropy rate segmentation method. This strategy forms homogeneous neighborhoods with naturally irregular structures and alleviates the inclusion of pixels from different classes in the process of spatial information extraction. Afterwards, the superpixel-based nonlocal weighting (SNLW) scheme is built to weigh the superpixel based on its structural and spectral information. In this way, the weight of one specific neighboring pixel is determined by the local structural similarity between the neighboring pixel and the central test pixel. Then, the obtained local weights are used to generate the weighted mean data for each superpixel. Finally, JSRC is used to produce the superpixel-level classification. This speeds up the sparse representation and makes the spatial content more centralized and compact. To verify the proposed SNLW-JSRC method, we conducted experiments on four benchmark hyperspectral datasets, namely Indian Pines, Pavia University, Salinas, and DFC2013. The experimental results suggest that the SNLW-JSRC can achieve better classification results than the other four SRC-based algorithms and the classical support vector machine algorithm. Moreover, the SNLW-JSRC can also outperform the other SRC-based algorithms, even with a small number of training samples
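
    The nonlocal weighting idea, weighting each pixel of a superpixel by its similarity to the central test pixel and forming a weighted mean spectrum, might be sketched as follows; the Gaussian similarity kernel and its bandwidth are illustrative assumptions, not the paper's exact SNLW scheme.

        # Rough sketch: weighted mean spectrum of one superpixel, with weights
        # derived from similarity to the central test pixel.
        import numpy as np

        def weighted_superpixel_mean(spectra, center, sigma=0.5):
            """spectra: (n_pixels, bands) spectra of one superpixel;
            center: (bands,) spectrum of the central test pixel."""
            dists = np.linalg.norm(spectra - center, axis=1)
            weights = np.exp(-(dists ** 2) / (2 * sigma ** 2))
            weights /= weights.sum()
            # Weighted mean emphasizes pixels spectrally close to the center.
            return weights @ spectra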

    COMPARISON OF SUPERVISED CLASSIFICATION TECHNIQUES WITH ALOS PALSAR SENSOR FOR ROORKEE REGION OF UTTARAKHAND, INDIA

    The Advanced Land Observing Satellite (ALOS), developed by the Japan Aerospace Exploration Agency (JAXA), was launched in 2006 for Earth observation and exploration. ALOS carried the PRISM, AVNIR-2 and PALSAR sensors for this purpose. PALSAR is an L-band synthetic aperture radar (SAR) designed to work in all weather conditions with a resolution of 10 meters. In this research work we investigate the accuracy obtained from various supervised classification techniques by classifying ALOS PALSAR data of the Roorkee region of Uttarakhand, India. The training ROIs (Regions of Interest) were created manually with the assistance of ArcGIS Earth, and for testing we used Global Positioning System (GPS) coordinates of the region. The supervised classification techniques included in this comparison are Parallelepiped classification (PC), Minimum distance classification (MDC), Mahalanobis distance classification (MaDC), Maximum likelihood classification (MLC), Spectral angle mapper (SAM), Spectral information divergence (SID) and Support vector machine (SVM). A post-classification accuracy assessment is then performed using the confusion matrix, and the corresponding kappa coefficient is obtained. In the results, MDC performs best in terms of overall accuracy with 82.3634%, while MLC gives the best kappa value of 0.7591. Finally, a peculiar relationship is observed between classification accuracy and the kappa coefficient
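
    The accuracy assessment step uses the standard formulas for overall accuracy and Cohen's kappa computed from a confusion matrix, roughly as in the sketch below; the matrix values are placeholders, not results from this study.

        # Minimal sketch of the post-classification accuracy assessment: overall
        # accuracy and Cohen's kappa from a confusion matrix (rows = reference,
        # columns = predicted). The matrix values are illustrative placeholders.
        import numpy as np

        def accuracy_and_kappa(cm):
            total = cm.sum()
            observed = np.trace(cm) / total                           # overall accuracy
            expected = (cm.sum(axis=0) @ cm.sum(axis=1)) / total**2   # chance agreement
            kappa = (observed - expected) / (1 - expected)
            return observed, kappa

        cm = np.array([[50, 3, 2],
                       [4, 45, 6],
                       [1, 5, 40]])
        oa, kappa = accuracy_and_kappa(cm)
        print(f"Overall accuracy = {oa:.4f}, kappa = {kappa:.4f}")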

    Low-Rank and Sparse Decomposition for Hyperspectral Image Enhancement and Clustering

    In this dissertation, new algorithms are developed to enhance hyperspectral imaging analysis. A tensor data format is applied to sparse and low-rank decomposition of hyperspectral datasets, which enhances classification and detection performance, and a multi-view learning technique is applied to hyperspectral image clustering. Furthermore, a kernel version of the multi-view learning technique is proposed, which improves clustering performance. Most low-rank and sparse decomposition algorithms for HSI analysis are based on a matrix data format. Because an HSI has high spectral dimensionality, a tensor-based extended low-rank and sparse decomposition (TELRSD) is proposed in this dissertation for better HSI classification using the low-rank tensor part and HSI detection using the sparse tensor part. With this tensor-based method the HSI is processed in its 3D data format, and the relationships between spectral bands and pixels remain intact during the decomposition. The proposed algorithm is compared with other state-of-the-art methods, and the experimental results show that TELRSD has the best performance among all the comparison algorithms. HSI clustering is an unsupervised task that aims to group pixels without labeled information. Low-rank sparse subspace clustering (LRSSC) is one of the most popular algorithms for this task. A spatial-spectral multi-view low-rank sparse subspace clustering (SSMLC) algorithm is proposed in this dissertation, extending LRSSC with multi-view learning. In this algorithm, spectral and spatial views are created to form a multi-view dataset of the HSI, where spectral partitioning, morphological component analysis (MCA) and principal component analysis (PCA) are applied to create the other views. Furthermore, a kernel version of SSMLC (k-SSMLC) is also investigated. The performance of SSMLC and k-SSMLC is compared with sparse subspace clustering (SSC), low-rank sparse subspace clustering (LRSSC), and spectral-spatial sparse subspace clustering (S4C); it is shown that SSMLC improves on LRSSC and that k-SSMLC has the best performance. Spectral clustering has been proved to be equivalent to a non-negative matrix factorization (NMF) problem, so NMF can be applied to the clustering problem. To include local and nonlinear features of the data, orthogonal NMF (ONMF), graph-regularized NMF (GNMF) and kernel NMF (k-NMF) are considered for better clustering performance. A non-linear orthogonal graph NMF (k-OGNMF) combines the kernel, orthogonality and graph constraints in NMF, which pushes clustering performance up further. In the HSI domain, a kernel multi-view based orthogonal graph NMF (k-MOGNMF) is applied to subspace clustering, where k-OGNMF is extended with the multi-view algorithm; it gives better performance and computational efficiency
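
    As a rough illustration of the low-rank plus sparse split that the dissertation extends to tensors, the sketch below alternates a truncated SVD and soft-thresholding on an unfolded (pixels x bands) matrix in robust-PCA style; the thresholds, target rank, and iteration count are illustrative assumptions, and the actual TELRSD method works on the 3D tensor directly.

        # Rough sketch of a matrix low-rank + sparse decomposition by simple
        # alternation: truncated SVD for the low-rank part, soft-thresholding
        # for the sparse part. Parameters are illustrative assumptions.
        import numpy as np

        def lowrank_sparse_split(X, rank=5, sparse_thresh=0.1, n_iter=20):
            L = np.zeros_like(X)
            S = np.zeros_like(X)
            for _ in range(n_iter):
                # Low-rank update: truncated SVD of the residual X - S.
                U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
                L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
                # Sparse update: soft-threshold the residual X - L.
                R = X - L
                S = np.sign(R) * np.maximum(np.abs(R) - sparse_thresh, 0.0)
            return L, S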

    Spectral Textile Detection in the VNIR/SWIR Band

    Dismount detection, the detection of persons on the ground and outside of a vehicle, has applications in search and rescue, security, and surveillance. Spatial dismount detection methods lose effectiveness at long ranges, and spectral dismount detection currently relies on detecting skin pixels. In scenarios where skin is not exposed, spectral textile detection is a more effective means of detecting dismounts. This thesis demonstrates the effectiveness of spectral textile detectors on both real and simulated hyperspectral remotely sensed data. Feature selection methods determine sets of wavebands relevant to spectral textile detection. Classifiers are trained on hyperspectral contact data with the selected wavebands, and classifier parameters are optimized to improve performance on a training set. Classifiers with optimized parameters are used to classify contact data with artificially added noise and remotely sensed hyperspectral data. The performance of optimized classifiers on hyperspectral data is measured with the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) curve. The best performances on the contact data are AUC = 0.892 and AUC = 0.872 for Multilayer Perceptrons (MLPs) and Support Vector Machines (SVMs), respectively. The best performances on the remotely sensed data are AUC = 0.947 and AUC = 0.970 for MLPs and SVMs, respectively. The difference in classifier performance between the contact and remotely sensed data is due to the greater variety of textiles represented in the contact data. Spectral textile detection is more reliable in scenarios with a small variety of textiles
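
    The evaluation loop described above, training an MLP and an SVM on spectral samples and scoring each with ROC AUC, could look roughly like this; the data, feature count, and hyperparameters are placeholders, not the thesis's selected wavebands or tuned settings.

        # Minimal sketch: train SVM and MLP classifiers on spectral features and
        # score them with ROC AUC. Data and hyperparameters are placeholders.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        X = np.random.rand(500, 40)            # 500 spectra x 40 selected wavebands
        y = np.random.randint(0, 2, 500)       # 1 = textile, 0 = background
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        for name, clf in [("SVM", SVC(probability=True)),
                          ("MLP", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))]:
            clf.fit(X_tr, y_tr)
            scores = clf.predict_proba(X_te)[:, 1]
            print(name, "AUC =", roc_auc_score(y_te, scores))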

    Techniques of design optimisation for algorithms implemented in software

    The overarching objective of this thesis was to develop tools for parallelising, optimising, and implementing algorithms on parallel architectures, in particular General Purpose Graphics Processors (GPGPUs). Two projects were chosen from different application areas in which GPGPUs are used: a defence application involving image compression, and a modelling application in bioinformatics (computational immunology). Each project had its own specific objectives, as well as supporting the overall research goal. The defence / image compression project was carried out in collaboration with the Jet Propulsion Laboratory. The specific questions were: to what extent an algorithm designed for bit-serial hardware implementation of the lossless compression of hyperspectral images on board unmanned aerial vehicles (UAVs) could be parallelised, whether GPGPUs could be used to implement that algorithm, and whether a software implementation with or without GPGPU acceleration could match the throughput of a dedicated hardware (FPGA) implementation. The dependencies within the algorithm were analysed and the algorithm was parallelised. The algorithm was implemented in software for GPGPU and optimised. During the optimisation process, profiling revealed less than optimal device utilisation, but no further optimisations resulted in an improvement in speed: the design had hit a local maximum of performance. Analysis of the arithmetic intensity and data flow exposed flaws in kernel occupancy, the standard metric used for GPU optimisation. Redesigning the implementation with revised criteria (fused kernels, lower occupancy, and greater data locality) led to a new implementation with 10x higher throughput. GPGPUs were shown to be viable for on-board implementation of the CCSDS lossless hyperspectral image compression algorithm, exceeding the performance of the hardware reference implementation and providing sufficient throughput for the next generation of image sensors as well. The second project was carried out in collaboration with biologists at the University of Arizona and involved modelling a complex biological system: VDJ recombination, which is involved in the formation of T-cell receptors (TCRs). Generation of immune receptors (T-cell receptors and antibodies) by VDJ recombination is an enormously complex process, which can theoretically synthesise more than 10^18 variants. Originally thought to be a random process, the underlying mechanisms clearly have a non-random nature that preferentially creates a small subset of immune receptors in many individuals. Understanding this bias is a longstanding problem in the field of immunology. Modelling the process of VDJ recombination to determine the number of ways each immune receptor can be synthesised, previously thought to be untenable, is a key first step in determining how this special population is made. The computational tools developed in this thesis have allowed immunologists for the first time to comprehensively test and invalidate a longstanding theory (convergent recombination) for how this special population is created, while generating the data needed to develop novel hypotheses
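
    A generic roofline-style estimate, not taken from the thesis, illustrates why arithmetic intensity and data locality rather than occupancy alone bound kernel throughput; the hardware and kernel figures below are illustrative assumptions, shown only to make the fused-versus-unfused trade-off concrete.

        # Roofline-style sketch: attainable throughput is limited by either peak
        # compute or (arithmetic intensity x memory bandwidth). Numbers are
        # illustrative assumptions, not measurements from the thesis.
        PEAK_FLOPS = 10e12        # assumed 10 TFLOP/s peak compute
        PEAK_BANDWIDTH = 500e9    # assumed 500 GB/s peak memory bandwidth

        def attainable_gflops(flops, bytes_moved):
            intensity = flops / bytes_moved                        # FLOPs per byte
            return min(PEAK_FLOPS, intensity * PEAK_BANDWIDTH) / 1e9

        # Two separate kernels re-read the data; one fused kernel halves the traffic.
        print(attainable_gflops(flops=2e9, bytes_moved=4e9))   # unfused: bandwidth-bound
        print(attainable_gflops(flops=2e9, bytes_moved=2e9))   # fused: higher intensity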

    Development of a spectral unmixing procedure using a genetic algorithm and spectral shape

    Spectral unmixing produces spatial abundance maps of endmembers, or ‘pure’ materials, using sub-pixel scale decomposition. It is particularly well suited to extracting a greater portion of the rich information content in hyperspectral data in support of real-world issues such as mineral exploration, resource management, agriculture and food security, pollution detection, and climate change. However, illumination or shading effects, signature variability, and noise are problematic. Least Squares (LS) based spectral unmixing techniques such as Non-Negative Sum Less or Equal to One (NNSLO) depend on “shade” endmembers to deal with amplitude errors. Furthermore, the LS-based method does not consider amplitude errors in the abundance constraint calculations and thus often leads to abundance errors. The Spectral Angle Constraint (SAC) reduces the amplitude errors, but the abundance errors remain because of its fully constrained condition. In this study, a Genetic Algorithm (GA) was adapted to resolve these issues using a series of iterative computations based on the Darwinian strategy of ‘survival of the fittest’ to improve the accuracy of abundance estimates. The developed GA uses a Spectral Angle Mapper (SAM) based fitness function to calculate abundances that satisfy a SAC-based weakly constrained condition. This was validated using two hyperspectral data sets: (i) a simulated hyperspectral dataset with embedded noise and illumination effects and (ii) AVIRIS data acquired over Cuprite, Nevada, USA. Results showed that the new GA-based unmixing method improved the abundance estimation accuracies and was less sensitive to illumination effects and noise compared to existing spectral unmixing methods such as the SAC and NNSLO. In the case of the synthetic data, the GA increased the average index of agreement between true and estimated abundances by 19.83% and 30.10% compared to the SAC and the NNSLO, respectively. Furthermore, in the case of the real data, the GA improved the overall accuracy by 43.1% and 9.4% compared to the SAC and NNSLO, respectively
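
    A genetic algorithm with a SAM-based fitness function, in the spirit of the approach described above, might be sketched roughly as follows; the population size, crossover and mutation scheme, and constraint handling are illustrative assumptions, not the thesis's exact design.

        # Rough sketch of a GA for linear spectral unmixing with a SAM fitness.
        # Parameters and the mutation/crossover scheme are illustrative assumptions.
        import numpy as np

        def sam(a, b):
            """Spectral angle between two spectra (radians); smaller is better."""
            return np.arccos(np.clip(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)), -1, 1))

        def ga_unmix(pixel, endmembers, pop_size=50, n_gen=200, seed=0):
            """pixel: (bands,); endmembers: (n_end, bands). Returns abundance vector."""
            rng = np.random.default_rng(seed)
            n_end = endmembers.shape[0]
            pop = rng.random((pop_size, n_end))
            pop /= pop.sum(axis=1, keepdims=True)           # weak sum-to-one constraint
            for _ in range(n_gen):
                fitness = np.array([sam(pixel, a @ endmembers) for a in pop])
                order = np.argsort(fitness)                  # lower angle = fitter
                parents = pop[order[: pop_size // 2]]
                # Crossover: average pairs of parents; mutation: small jitter.
                children = (parents + parents[::-1]) / 2
                children += rng.normal(0, 0.02, children.shape)
                children = np.clip(children, 0, None)
                children /= children.sum(axis=1, keepdims=True)
                pop = np.vstack([parents, children])
            fitness = np.array([sam(pixel, a @ endmembers) for a in pop])
            return pop[np.argmin(fitness)]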

    Model Checking Temporal Logic Formulas Using Sticker Automata

    The temporal logic model checking problem, an important and complex problem, is still far from being fully resolved in the setting of DNA computing, especially for Computation Tree Logic (CTL), Interval Temporal Logic (ITL), and Projection Temporal Logic (PTL), because approaches for DNA-based model checking are still lacking. To address this challenge, a model checking method is proposed for checking the basic formulas of the above three temporal logics with DNA molecules. First, one type of single-stranded DNA molecule is employed to encode the Finite State Automaton (FSA) model of the given basic formula, yielding a sticker automaton. Then, other single-stranded DNA molecules are employed to encode the given system model, yielding the input strings of the sticker automaton. Next, a series of biochemical reactions are conducted between the above two types of single-stranded DNA molecules, after which it can be decided whether the system satisfies the formula or not. As a result, we have developed a DNA-based approach for checking all the basic formulas of CTL, ITL, and PTL. The simulated results demonstrate the effectiveness of the new method
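
    The decision that the biochemical step performs can be mimicked in silico by running the system's input strings through the formula's finite state automaton and reporting acceptance, as in the sketch below; the small automaton (accepting runs in which 'p' eventually holds) is an illustrative placeholder, not one of the paper's encodings.

        # Minimal sketch: check whether input strings from the system model are
        # accepted by the formula's FSA. The automaton is an illustrative placeholder.
        def fsa_accepts(transitions, start, accepting, word):
            state = start
            for symbol in word:
                state = transitions.get((state, symbol))
                if state is None:
                    return False
            return state in accepting

        # Automaton for "eventually p" over the alphabet {'p', 'q'}.
        transitions = {("s0", "q"): "s0", ("s0", "p"): "s1",
                       ("s1", "p"): "s1", ("s1", "q"): "s1"}

        system_runs = ["qqp", "qqq", "pqq"]
        print([fsa_accepts(transitions, "s0", {"s1"}, run) for run in system_runs])
        # -> [True, False, True]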