11 research outputs found

    Unsupervised Classification of Polarimetric SAR Images via Riemannian Sparse Coding

    Unsupervised classification plays an important role in understanding polarimetric synthetic aperture radar (PolSAR) images. One of the typical representations of PolSAR data is in the form of Hermitian positive definite (HPD) covariance matrices. Most algorithms for unsupervised classification using this representation either use statistical distribution models or adopt polarimetric target decompositions. In this paper, we propose an unsupervised classification method by introducing a sparsity-based similarity measure on HPD matrices. Specifically, we first use a novel Riemannian sparse coding scheme that represents each HPD covariance matrix as a sparse linear combination of other HPD matrices, where the sparse reconstruction loss is defined by the Riemannian geodesic distance between HPD matrices. The coefficient vectors generated by this step reflect the neighborhood structure of the HPD matrices embedded in the Euclidean space and hence can be used to define a similarity measure. We apply the scheme to PolSAR data by first oversegmenting the images into superpixels and then representing each superpixel by an HPD matrix. These HPD matrices are sparse coded, and the resulting sparse coefficient vectors are clustered by spectral clustering using the neighborhood matrix generated by our similarity measure. The experimental results on different fully PolSAR images demonstrate the superior performance of the proposed classification approach against state-of-the-art approaches. This work was supported in part by the National Natural Science Foundation of China under Grant 61331016 and Grant 61271401 and in part by the National Key Basic Research and Development Program of China under Contract 2013CB733404. The work of A. Cherian was supported by the Australian Research Council Centre of Excellence for Robotic Vision under Project CE140100016.
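    The geodesic distance underlying the sparse reconstruction loss above is the affine-invariant Riemannian metric on HPD matrices. A minimal sketch of that distance (real-valued toy matrices standing in for PolSAR covariance matrices; function names are illustrative, not from the paper):

```python
import numpy as np
from scipy.linalg import logm, fractional_matrix_power

def riemannian_distance(A, B):
    """Affine-invariant Riemannian (geodesic) distance between two
    (Hermitian) positive definite matrices:
    d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F."""
    A_inv_sqrt = fractional_matrix_power(A, -0.5)
    M = A_inv_sqrt @ B @ A_inv_sqrt
    return float(np.linalg.norm(logm(M), ord="fro"))

# Toy 3x3 positive definite "covariance" matrices.
A = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.5, 0.2],
              [0.0, 0.2, 1.0]])
B = np.eye(3)

d_ab = riemannian_distance(A, B)
d_aa = riemannian_distance(A, A)
```

    In the paper's pipeline, pairwise losses built from this distance drive the sparse coding step whose coefficient vectors then feed spectral clustering.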

    Image Restoration for Remote Sensing: Overview and Toolbox

    Remote sensing provides valuable information about objects or areas from a distance in either active (e.g., RADAR and LiDAR) or passive (e.g., multispectral and hyperspectral) modes. The quality of data acquired by remotely sensed imaging sensors (both active and passive) is often degraded by a variety of noise types and artifacts. Image restoration, a vibrant field of research in the remote sensing community, is the task of recovering the true unknown image from the degraded observed image. Each imaging sensor induces unique noise types and artifacts into the observed image, which has led restoration techniques to develop along different paths according to sensor type. This review paper brings together advances in image restoration techniques, with a particular focus on synthetic aperture radar and hyperspectral images as the most active sub-fields of image restoration in the remote sensing community. We therefore provide a comprehensive, discipline-specific starting point, with sufficient detail and references, for researchers at different levels (i.e., students, researchers, and senior researchers) willing to investigate the topic of data restoration. Additionally, this review paper is accompanied by a toolbox that provides a platform to encourage interested students and researchers to further explore restoration techniques and advance the community. The toolbox is provided at https://github.com/ImageRestorationToolbox. Comment: This paper is under review in GRS
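    The core formulation behind all such restoration work is a degradation model (observed image = true image corrupted by noise) plus an estimator that inverts it. A minimal sketch under an additive-Gaussian assumption (SAR speckle is multiplicative, but the additive case keeps the example short), using a simple median filter as a stand-in restorer:

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)

# Stand-in for the unknown true image x: a smooth intensity gradient.
x = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)

# Degradation model y = x + n with additive Gaussian noise n.
y = x + rng.normal(scale=0.1, size=x.shape)

# A simple non-learned restoration: 3x3 median filtering.
x_hat = median_filter(y, size=3)

mse_noisy = float(np.mean((y - x) ** 2))
mse_restored = float(np.mean((x_hat - x) ** 2))
```

    Real toolboxes replace the median filter with sensor-specific methods (despeckling, destriping, variational or learned priors), but the observe-degrade-estimate structure is the same.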

    Semantic location extraction from crowdsourced data

    Crowdsourced Data (CSD) has recently received increased attention in many application areas, including disaster management. Convenience of production and use, data currency, and abundance are some of the key reasons for this high interest. Conversely, quality issues such as incompleteness, credibility, and relevancy prevent the direct use of such data in important applications like disaster management. Moreover, the availability of location information in CSD is problematic, as it remains very low on many crowdsourced platforms such as Twitter. The recorded location is also mostly that of the mobile device or user and often does not represent the event location. In CSD, the event location is described in the comments in addition to the recorded location (which is generated by the mobile device's GPS or the mobile communication network). This study attempts to semantically extract CSD location information with the help of an ontological gazetteer and other available resources. Tweets and Ushahidi Crowd Map data from the 2011 Queensland flood were semantically analysed to extract location information with the support of the Queensland Gazetteer, converted into an ontological gazetteer, together with a global gazetteer. Preliminary results show that the use of ontologies and semantics can improve the accuracy of place name identification in CSD and the process of location information extraction.
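    The first step of such a pipeline is matching place names mentioned in message text against a gazetteer. A minimal sketch with a hypothetical three-entry gazetteer (the study uses an ontological version of the Queensland Gazetteer plus a global gazetteer; the names and coordinates below are illustrative only):

```python
import re

# Hypothetical miniature gazetteer: place name -> (latitude, longitude).
GAZETTEER = {
    "brisbane": (-27.47, 153.03),
    "ipswich": (-27.61, 152.76),
    "toowoomba": (-27.56, 151.95),
}

def extract_locations(text):
    """Return (place, coordinates) pairs for gazetteer entries mentioned
    in a crowdsourced message, matched on whole words, case-insensitively."""
    hits = []
    for place, coords in GAZETTEER.items():
        if re.search(r"\b" + re.escape(place) + r"\b", text, re.IGNORECASE):
            hits.append((place, coords))
    return hits

tweet = "Flooding reported near Brisbane and Ipswich, roads cut."
found = extract_locations(tweet)
```

    An ontological gazetteer goes beyond this string matching by exploiting semantic relations (e.g., a suburb being part of a city) to disambiguate and enrich the extracted locations.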

    Radar Image Processing: A Monograph

    The book is devoted to solving theoretical and practical problems of detecting, estimating the parameters of, and classifying spatially distributed targets (SDTs) from their radar images (RIs), formed in a multi-position observation system implemented by a group of spacecraft. The book covers in detail methods for the synthesis and analysis of SDT classification algorithms, algorithms for estimating RI parameters, classification algorithms using neural networks and partially coherent radars, algorithms for forming RIs of moving objects, speckle-noise filtering methods, noise-immunity analysis methods, and methods for the geometric correction of the formed RIs. The book is of interest to specialists, students, and postgraduates working on the development of modern radio-engineering systems for military and civilian use.
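    Among the topics listed, speckle-noise filtering has a compact classical representative: the Lee filter, which blends each pixel toward its local mean in proportion to how much of the local variance is attributable to signal rather than speckle. A minimal sketch (parameter values and the synthetic speckle model are illustrative assumptions, not taken from the book):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=5, noise_var=0.05):
    """Classic Lee speckle filter: output = local mean + gain * (pixel - local mean),
    where gain = signal variance / (signal variance + noise variance)."""
    local_mean = uniform_filter(img, size)
    local_sq_mean = uniform_filter(img ** 2, size)
    local_var = np.maximum(local_sq_mean - local_mean ** 2, 0.0)
    signal_var = np.maximum(local_var - noise_var, 0.0)
    gain = signal_var / (signal_var + noise_var)
    return local_mean + gain * (img - local_mean)

rng = np.random.default_rng(1)
clean = np.ones((64, 64))
# Multiplicative speckle: y = x * s, with unit-mean gamma-distributed s.
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
filtered = lee_filter(speckled)
```

    In flat regions the gain approaches zero and the filter averages aggressively; near edges the local variance (and hence the gain) is high, so detail is preserved.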

    Nearest-Regularized Subspace Classification for PolSAR Imagery Using Polarimetric Feature Vector and Spatial Information

    Feature extraction from polarimetric synthetic aperture radar (PolSAR) images is of great interest in SAR classification, whether in an unsupervised or a supervised approach. In the supervised classification framework, a major group of methods is based on machine learning. Various machine learning methods have been investigated for PolSAR image classification, including neural networks (NN), support vector machines (SVM), and others. Recently, representation-based classifiers have gained increasing attention in hyperspectral imagery, such as the newly-proposed sparse-representation classification (SRC) and nearest-regularized subspace (NRS). These classifiers provide excellent performance, comparable to or even better than the classic SVM, for remotely-sensed image processing. However, few studies have extended representation-based NRS classification to PolSAR images. Using the NRS approach, a polarimetric feature vector-based PolSAR image classification method is proposed in this paper. The polarimetric SAR feature vector is constructed for each pixel from the components of different target decomposition algorithms, including the scattering components of the Freeman, Huynen, Krogager, and Yamaguchi decompositions, as well as the eigenvalues, eigenvectors, and their derived parameters such as entropy, anisotropy, and mean scattering angle. Furthermore, because all these representation-based methods were originally designed as pixel-wise classifiers, which consider each pixel's signature separately while ignoring spatial-contextual information, the Markov random field (MRF) model is also introduced into our scheme. MRF provides a basis for modeling contextual constraints. Two AIRSAR datasets over the Flevoland area are used to validate the proposed classification scheme. Experimental results demonstrate that the proposed method can reach an accuracy of around 99% for both AIRSAR datasets when 300 randomly selected pixels per class are used as training samples. When the training data ratio exceeds 4%, it outperforms the SVM, SVM-MRF, and NRS methods.
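    The NRS classifier at the core of the scheme above has a simple closed form: each test feature vector is approximated by a distance-weighted Tikhonov-regularized combination of each class's training samples, and the class with the smallest reconstruction residual wins. A minimal sketch on synthetic 5-D feature vectors (class names, dimensions, and the regularization weight are illustrative assumptions):

```python
import numpy as np

def nrs_classify(y, class_dicts, lam=0.01):
    """Nearest-regularized subspace: represent test vector y with each
    class's training samples under a distance-weighted Tikhonov penalty,
    then assign the class with the smallest reconstruction residual."""
    best_class, best_res = None, np.inf
    for label, X in class_dicts.items():        # X: (n_features, n_samples)
        # Biasing matrix: penalize training samples that lie far from y.
        gamma = np.diag(np.linalg.norm(X - y[:, None], axis=0))
        w = np.linalg.solve(X.T @ X + lam * gamma.T @ gamma, X.T @ y)
        res = float(np.linalg.norm(y - X @ w))
        if res < best_res:
            best_class, best_res = label, res
    return best_class

rng = np.random.default_rng(2)
# Two toy classes of 5-D "polarimetric feature" vectors around different means.
dicts = {
    "crop": rng.normal(0.0, 0.1, (5, 20)) + np.array([1, 0, 0, 0, 0])[:, None],
    "water": rng.normal(0.0, 0.1, (5, 20)) + np.array([0, 0, 0, 0, 1])[:, None],
}
test_vec = np.array([0.95, 0.02, 0.0, 0.01, 0.05])
label = nrs_classify(test_vec, dicts)
```

    The distance-weighted penalty is what distinguishes NRS from plain ridge regression: far-away training samples are discouraged from contributing, which is why the residual discriminates between classes even when each class dictionary spans the feature space. The paper's full scheme then smooths these per-pixel decisions with an MRF.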