
    Spatial-Spectral Manifold Embedding of Hyperspectral Data

    Get PDF
    In recent years, hyperspectral imaging, also known as imaging spectroscopy, has received increasing interest in the geoscience and remote sensing community. Hyperspectral imagery is characterized by very rich spectral information, which enables the materials of interest on the Earth's surface to be recognized more easily. However, the high spectral dimension inevitably brings drawbacks, such as expensive data storage and transmission and information redundancy. Therefore, to reduce the spectral dimensionality effectively and learn a more discriminative low-dimensional spectral embedding, in this paper we propose a novel hyperspectral embedding approach that simultaneously considers spatial and spectral information, called spatial-spectral manifold embedding (SSME). Going beyond pixel-wise spectral embedding approaches, SSME models the spatial and spectral information jointly in a patch-based fashion. SSME not only learns the spectral embedding using an adjacency matrix obtained by measuring the similarity between spectral signatures, but also models the spatial neighbours of a target pixel in the hyperspectral scene by sharing the same weights (or edges) during embedding learning. Classification is explored as a potential application for quantitatively evaluating the performance of the learned embedding representations. Extensive experiments conducted on widely used hyperspectral datasets demonstrate the superiority and effectiveness of the proposed SSME compared to several state-of-the-art embedding methods.
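
    As an illustration of the graph construction described above, the sketch below builds an affinity matrix that combines spectral k-nearest-neighbour similarity with shared edges to a pixel's spatial neighbours, and then computes a Laplacian-eigenmaps-style low-dimensional embedding. This is a minimal sketch, not the authors' SSME implementation: the brute-force neighbour search, the Gaussian kernel width sigma, the 4-connected spatial neighbourhood and the helper names build_affinity/embed are illustrative assumptions.

        # Minimal sketch of a spatial-spectral graph embedding (not the authors' code).
        import numpy as np
        from scipy.sparse import lil_matrix, csr_matrix, diags, identity
        from scipy.sparse.linalg import eigsh

        def build_affinity(cube, k_spectral=10, sigma=1.0):
            """cube: (H, W, B) hyperspectral image. Returns a sparse, symmetric
            affinity matrix mixing spectral k-NN similarity with spatial edges."""
            H, W, B = cube.shape
            X = cube.reshape(-1, B).astype(np.float64)   # pixels as spectral signatures
            n = X.shape[0]
            A = lil_matrix((n, n))

            # Spectral edges: Gaussian similarity to the k nearest signatures
            # (brute force for clarity; only suitable for toy-sized scenes).
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            for i in range(n):
                for j in np.argsort(d2[i])[1:k_spectral + 1]:
                    A[i, j] = np.exp(-d2[i, j] / (2.0 * sigma ** 2))

            # Spatial edges: a target pixel shares unit-weight edges with its
            # 4-connected neighbours, regardless of spectral similarity.
            for r in range(H):
                for c in range(W):
                    i = r * W + c
                    for dr, dc in ((1, 0), (0, 1)):
                        rr, cc = r + dr, c + dc
                        if rr < H and cc < W:
                            j = rr * W + cc
                            A[i, j] = A[j, i] = 1.0

            A = csr_matrix(A)
            return (A + A.T) * 0.5                       # symmetrise the k-NN part

        def embed(A, dim=10):
            """Smallest non-trivial eigenvectors of the normalised graph Laplacian."""
            d = np.asarray(A.sum(axis=1)).ravel()
            D_isqrt = diags(1.0 / np.sqrt(np.maximum(d, 1e-12)))
            L = identity(A.shape[0]) - D_isqrt @ A @ D_isqrt
            vals, vecs = eigsh(L, k=dim + 1, which="SM")
            order = np.argsort(vals)
            return vecs[:, order[1:dim + 1]]             # drop the (near-)constant eigenvector

    On a small scene, the resulting low-dimensional coordinates could be fed to a k-NN or SVM classifier, mirroring the classification-based evaluation of the embedding described in the abstract.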

    A Constrained Convex Optimization Approach to Hyperspectral Image Restoration with Hybrid Spatio-Spectral Regularization

    Full text link
    We propose a new constrained optimization approach to hyperspectral (HS) image restoration. Most existing methods restore a desirable HS image by solving an optimization problem that consists of regularization and data-fidelity terms. Because these methods must handle both kinds of terms in a single objective function, the hyperparameters that balance them have to be controlled carefully. Setting such hyperparameters is often troublesome, because their suitable values depend strongly on the regularization terms adopted and the noise intensities of a given observation. Our proposed method is formulated as a convex optimization problem in which we utilize a novel hybrid regularization technique named Hybrid Spatio-Spectral Total Variation (HSSTV) and incorporate data-fidelity as hard constraints. HSSTV has a strong ability to remove noise and artifacts while avoiding oversmoothing and spectral distortion, without combining other regularizations such as low-rank modeling-based ones. In addition, the constraint-type data-fidelity lets us translate the hyperparameters that balance regularization against data-fidelity into upper bounds on the degree of data-fidelity, which can be set in a much easier manner. We also develop an efficient algorithm based on the alternating direction method of multipliers (ADMM) to solve the optimization problem. Through comprehensive experiments, we illustrate the advantages of the proposed method over various HS image restoration methods, including state-of-the-art ones. Comment: 20 pages, 4 tables, 10 figures, submitted to MDPI Remote Sensing.
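
    To make the two key ingredients above concrete, the sketch below computes a hybrid spatio-spectral TV-style regulariser (spatial differences applied both to the image and to its spectral differences, mixed by a weight omega) and expresses the constraint-type data-fidelity as a projection onto an l2 ball around the observation. This is one plausible reading of the abstract, not the paper's exact HSSTV definition or its ADMM solver; the anisotropic l1 form, the replicated-boundary differences and the weight omega are assumptions.

        # Illustrative sketch only: HSSTV-like regulariser and hard data-fidelity.
        import numpy as np

        def spatial_diff(u):
            """Forward differences along the two spatial axes of a (H, W, B) cube."""
            dh = np.diff(u, axis=0, append=u[-1:, :, :])
            dw = np.diff(u, axis=1, append=u[:, -1:, :])
            return dh, dw

        def hsstv_like(u, omega=0.05):
            """Hybrid spatio-spectral TV, anisotropic l1 flavour: spatial differences
            of the spectral differences plus omega times plain spatial differences."""
            du_spec = np.diff(u, axis=2, append=u[:, :, -1:])   # spectral differences
            dh1, dw1 = spatial_diff(du_spec)                    # spatio-spectral term
            dh2, dw2 = spatial_diff(u)                          # direct spatial term
            return (np.abs(dh1).sum() + np.abs(dw1).sum()
                    + omega * (np.abs(dh2).sum() + np.abs(dw2).sum()))

        def project_fidelity(x, y, eps):
            """Projection onto {x : ||x - y||_2 <= eps}, i.e. the hard constraint
            that replaces a weighted data-fidelity term in the objective."""
            r = x - y
            nrm = np.linalg.norm(r)
            return y + r * min(1.0, eps / max(nrm, 1e-12))

    In the paper these pieces would sit inside an ADMM splitting; the practical point of the hard constraint is that the bound eps can be tied to an estimate of the noise level of the observation, which is easier to choose than a weight balancing regularization against data-fidelity.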

    A Robust PCA Approach With Noise Structure Learning and Spatial–Spectral Low-Rank Modeling for Hyperspectral Image Restoration

    No full text

    Redes Neuronais Pré-Treinadas na Classificação Automática de Sons Cardíacos (Pre-Trained Neural Networks for Automatic Classification of Heart Sounds)

    Get PDF
    Cardiovascular diseases are a leading cause of hospitalization and death in both developed and developing countries. Their diagnosis requires expert intervention as well as specialized equipment, and is therefore usually costly. The development of algorithms capable of segmenting and classifying heartbeat signals benefits this field, since many cardiovascular diseases manifest themselves as irregularities in the heartbeat. Such algorithms can serve as a clinical decision support system for health professionals and could be incorporated into devices suitable for home use, reducing the consumption of hospital and private health-centre resources. However, at the moment there is no clinical or other deployment of these methods. In recent years, many classification algorithms based on different techniques have emerged, and large, freely accessible databases have been made available in order to establish a basis for comparing their effectiveness. This dissertation explores the effectiveness of pre-trained neural networks in classifying the signals provided by the PhysioNet/CinC Challenge 2016, one of the largest collections of phonocardiograms (PCG) ever assembled. The best network generated achieved a classification accuracy of 80.85%, a sensitivity of 79.77% and a specificity of 81.94%, in line with results obtained by other methods while requiring less signal pre-processing.
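
    As an illustration of the transfer-learning setup the dissertation describes, the sketch below turns a phonocardiogram into a log-mel spectrogram "image" and passes it through an ImageNet-pre-trained CNN whose final layer is replaced by a two-class (normal/abnormal) head. This is a minimal sketch under stated assumptions: the ResNet-18 backbone, the 2 kHz sample rate and the spectrogram settings are illustrative choices, not necessarily the dissertation's configuration.

        # Illustrative transfer-learning sketch (assumed backbone and settings).
        import torch
        import torch.nn as nn
        import torchaudio
        from torchvision import models

        SAMPLE_RATE = 2000  # PhysioNet/CinC 2016 recordings are provided at 2 kHz

        to_mel = torchaudio.transforms.MelSpectrogram(
            sample_rate=SAMPLE_RATE, n_fft=256, hop_length=64, n_mels=64)

        def pcg_to_image(waveform):
            """(1, T) PCG tensor -> (3, 64, frames) log-mel 'image' for the CNN."""
            spec = torch.log(to_mel(waveform) + 1e-6)
            spec = (spec - spec.mean()) / (spec.std() + 1e-6)  # per-recording normalisation
            return spec.repeat(3, 1, 1)                        # replicate to 3 channels

        def build_model():
            """ImageNet-pre-trained ResNet-18 with a 2-class head (normal/abnormal)."""
            net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
            net.fc = nn.Linear(net.fc.in_features, 2)
            return net

        # Forward pass on a dummy 5-second recording.
        model = build_model().eval()
        dummy = torch.randn(1, 5 * SAMPLE_RATE)
        with torch.no_grad():
            logits = model(pcg_to_image(dummy).unsqueeze(0))   # shape (1, 2)

    A network of this kind would then be fine-tuned on the PhysioNet/CinC 2016 training recordings and evaluated with the accuracy, sensitivity and specificity metrics quoted above.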