12 research outputs found

    The Effect of Principal Component Analysis of Texture Features on the Quality of Classification of Spongy Tissue Images

    The aim of this article was to determine the effect of principal component analysis on the results of classification of spongy tissue images. Four hundred computed tomography images of the spine (L1 vertebra) were used for the analyses. The images came from fifty healthy patients and fifty patients diagnosed with osteoporosis. The tissue image samples, each 50x50 pixels, were subjected to texture analysis, which produced feature descriptors based on the grey-level histogram, gradient matrix, run-length (RL) matrix, co-occurrence matrix, autoregressive model and wavelet transform. The extracted features were ranked from most to least important, and the first fifty features from this ranking were used in further experiments. These data were then subjected to principal component analysis, which yielded a set of six new features. Subsequently, both sets (50 and 6 features) were classified using five different methods: naive Bayes classifier, multilayer perceptron, Hoeffding Tree, 1-Nearest Neighbour and Random Forest. The best results were obtained for the data reduced by principal component analysis and classified with 1-Nearest Neighbour; this procedure achieved TPR and PPV values of 97.5%. For the remaining classifiers, applying principal component analysis worsened the results by an average of 2%.
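    A minimal sketch of the dimensionality-reduction and classification step described above, assuming the 50 ranked texture features have already been extracted. scikit-learn is used here; the placeholder arrays and the 10-fold cross-validation protocol are assumptions, and the texture-feature extraction stage itself is not reproduced.

```python
# Sketch only: PCA to six components followed by 1-Nearest Neighbour,
# assuming a ready-made feature matrix of the 50 ranked texture features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 50))   # placeholder for the 50 ranked texture features (400 samples)
y = np.repeat([0, 1], 200)       # placeholder binary labels (healthy vs. osteoporosis)

# Standardise, reduce to six principal components, classify with 1-NN.
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=6),
    KNeighborsClassifier(n_neighbors=1),
)
scores = cross_val_score(model, X, y, cv=10)   # assumed evaluation protocol
print("mean accuracy:", scores.mean())
```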

    Optimal Clustering Framework for Hyperspectral Band Selection

    Band selection, by choosing a set of representative bands in a hyperspectral image (HSI), is an effective method to reduce redundant information without compromising the original content. Recently, various unsupervised band selection methods have been proposed, but most of them are based on approximation algorithms that can only obtain suboptimal solutions toward a specific objective function. This paper focuses on clustering-based band selection and proposes a new framework to resolve this dilemma, claiming the following contributions: 1) an optimal clustering framework (OCF), which can obtain the optimal clustering result for a particular form of objective function under a reasonable constraint; 2) a rank on clusters strategy (RCS), which provides an effective criterion to select bands from an existing clustering structure; 3) an automatic method to determine the number of required bands, which can better evaluate the distinctive information produced by a certain number of bands. In the experiments, the proposed algorithm is compared to several state-of-the-art competitors. According to the experimental results, the proposed algorithm is robust and significantly outperforms the other methods on various data sets.
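    For illustration, a generic clustering-based band selection baseline (not the paper's OCF or RCS algorithms): the spectral bands are clustered with k-means and, from each cluster, the band closest to the centroid is kept. The cube shape, band count and number of selected bands are assumed values.

```python
# Generic clustering-based band selection baseline (illustrative, not the paper's method).
import numpy as np
from sklearn.cluster import KMeans

def select_bands(cube: np.ndarray, n_bands: int, seed: int = 0) -> np.ndarray:
    """cube: (H, W, B) hyperspectral image; returns indices of n_bands selected bands."""
    h, w, b = cube.shape
    band_vectors = cube.reshape(h * w, b).T          # one row per band, one column per pixel
    km = KMeans(n_clusters=n_bands, n_init=10, random_state=seed).fit(band_vectors)
    selected = []
    for c in range(n_bands):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(band_vectors[members] - km.cluster_centers_[c], axis=1)
        selected.append(members[np.argmin(dists)])   # representative band of this cluster
    return np.sort(np.array(selected))

# Example on a synthetic cube with 200 bands, keeping 20 representative bands.
cube = np.random.rand(64, 64, 200).astype(np.float32)
print(select_bands(cube, n_bands=20))
```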

    Unsupervised Feature Learning by Autoencoder and Prototypical Contrastive Learning for Hyperspectral Classification

    Unsupervised learning methods for feature extraction are becoming increasingly popular. We combine a popular contrastive learning method (prototypical contrastive learning) and a classic representation learning method (the autoencoder) to design an unsupervised feature learning network for hyperspectral classification. Experiments show that our two proposed autoencoder networks have good feature learning capabilities on their own, and that the contrastive learning network we designed can better combine the features of the two to learn more representative features. As a result, our method surpasses the other compared methods in hyperspectral classification experiments, including some supervised methods. Moreover, our method maintains a faster feature extraction speed than the baseline methods. In addition, our method reduces the requirement for large computing resources, separates feature extraction from contrastive learning, and allows more researchers to conduct research and experiments on unsupervised contrastive learning.
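    A rough sketch, under assumed shapes and hyper-parameters, of how the two named building blocks could be combined: a per-pixel spectral autoencoder plus a prototypical contrastive loss whose prototypes come from k-means on the encoded features. This is an illustrative recombination of the named ideas in PyTorch, not the authors' exact network.

```python
# Illustrative sketch: spectral autoencoder + prototypical contrastive loss (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans

class SpectralAE(nn.Module):
    def __init__(self, n_bands: int = 200, latent: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_bands, 128), nn.ReLU(), nn.Linear(128, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, n_bands))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def prototypical_contrastive_loss(z, prototypes, assignments, temperature=0.1):
    """Cross-entropy between each embedding and its assigned prototype (InfoNCE-style)."""
    logits = F.normalize(z, dim=1) @ F.normalize(prototypes, dim=1).T / temperature
    return F.cross_entropy(logits, assignments)

# Toy training step on random spectra (placeholder for real HSI pixels).
x = torch.rand(512, 200)
model = SpectralAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

z, recon = model(x)
km = KMeans(n_clusters=16, n_init=10).fit(z.detach().numpy())      # prototypes via k-means
prototypes = torch.tensor(km.cluster_centers_, dtype=torch.float32)
assignments = torch.tensor(km.labels_, dtype=torch.long)

loss = F.mse_loss(recon, x) + prototypical_contrastive_loss(z, prototypes, assignments)
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```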

    H-RNet: hybrid relation network for few-shot learning-based hyperspectral image classification.

    Deep network models rely on sufficient training samples to perform reasonably well, which has inevitably constrained their application in classification of hyperspectral images (HSIs) due to the limited availability of labeled data. To tackle this particular challenge, we propose a hybrid relation network, H-RNet, by combining three-dimensional (3-D) convolution neural networks (CNN) and two-dimensional (2-D) CNN to extract the spectral–spatial features whilst reducing the complexity of the network. In an end-to-end relation learning module, the sample pairing approach can effectively alleviate the problem of few labeled samples and learn correlations between samples more accurately for more effective classification. Experimental results on three publicly available datasets have fully demonstrated the superior performance of the proposed model in comparison to a few state-of-the-art methods.
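    A minimal hybrid 3-D/2-D CNN feature extractor in the spirit of the description above, written in PyTorch: 3-D convolutions over small spectral-spatial patches, followed by a 2-D convolution once the spectral dimension is folded into the channel axis. Patch size, band count and channel widths are assumptions, and the relation-learning module that compares sample pairs is omitted.

```python
# Sketch of a hybrid 3-D/2-D CNN for HSI patch classification (assumed shapes and widths).
import torch
import torch.nn as nn

class HybridCNN(nn.Module):
    def __init__(self, n_bands: int = 30, n_classes: int = 16):
        super().__init__()
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(0, 1, 1)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(0, 1, 1)), nn.ReLU(),
        )
        depth = n_bands - 7 + 1 - 5 + 1            # spectral size left after the 3-D convs
        self.conv2d = nn.Sequential(
            nn.Conv2d(16 * depth, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes))

    def forward(self, x):                          # x: (batch, 1, bands, height, width)
        x = self.conv3d(x)
        b, c, d, h, w = x.shape
        x = x.reshape(b, c * d, h, w)              # fold spectral dim into channels for 2-D convs
        return self.head(self.conv2d(x))

# Toy forward pass on 9x9 patches with 30 spectral bands.
patches = torch.rand(4, 1, 30, 9, 9)
print(HybridCNN()(patches).shape)                  # -> torch.Size([4, 16])
```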

    Deep Learning Meets Hyperspectral Image Analysis: A Multidisciplinary Review

    Modern hyperspectral imaging systems produce huge datasets potentially conveying a great abundance of information; such a resource, however, poses many challenges in the analysis and interpretation of these data. Deep learning approaches certainly offer a great variety of opportunities for solving classical imaging tasks and also for approaching new stimulating problems in the spatial–spectral domain. This is fundamental in the driving sector of Remote Sensing where hyperspectral technology was born and has mostly developed, but it is perhaps even more true in the multitude of current and evolving application sectors that involve these imaging technologies. The present review develops on two fronts: on the one hand, it is aimed at domain professionals who want to have an updated overview on how hyperspectral acquisition techniques can combine with deep learning architectures to solve specific tasks in different application fields. On the other hand, we want to target the machine learning and computer vision experts by giving them a picture of how deep learning technologies are applied to hyperspectral data from a multidisciplinary perspective. The presence of these two viewpoints and the inclusion of application fields other than Remote Sensing are the original contributions of this review, which also highlights some potentialities and critical issues related to the observed development trends.