Hierarchical Metric Learning for Optical Remote Sensing Scene Categorization
We address the problem of scene classification from optical remote sensing
(RS) images based on the paradigm of hierarchical metric learning. Ideally, supervised metric learning strategies learn a projection from a set of training data points to the class-label space so as to minimize intra-class variance while maximizing inter-class separability. However, standard metric learning
techniques do not incorporate the class interaction information in learning the
transformation matrix, which is often considered to be a bottleneck while
dealing with fine-grained visual categories. As a remedy, we propose to
organize the classes in a hierarchical fashion by exploring their visual
similarities and subsequently learn separate distance metric transformations
for the classes present at the non-leaf nodes of the tree. We employ an
iterative max-margin clustering strategy to obtain the hierarchical
organization of the classes. Experimental results obtained on the large-scale
NWPU-RESISC45 and the popular UC-Merced datasets demonstrate the efficacy of
the proposed hierarchical metric learning based RS scene recognition strategy
in comparison to the standard approaches. Comment: Undergoing revision in GRS
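The two ingredients described above — a hierarchical split of the classes and a separate metric per non-leaf node — can be sketched in miniature. The 2-means split of class mean vectors below is a simple stand-in for the paper's iterative max-margin clustering, and the LDA-style within-class whitening is an assumed proxy for the learned distance transform; all names and the toy data are illustrative, not from the paper.

```python
import numpy as np

def class_means(X, y):
    """Mean feature vector per class label."""
    labels = np.unique(y)
    return labels, np.stack([X[y == c].mean(axis=0) for c in labels])

def split_classes(means, rng):
    # Stand-in for the paper's iterative max-margin clustering:
    # a plain 2-means split of the class mean vectors.
    centers = means[rng.choice(len(means), 2, replace=False)]
    for _ in range(10):
        assign = np.argmin(
            ((means[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(2):
            if (assign == k).any():
                centers[k] = means[assign == k].mean(axis=0)
    return assign

def node_metric(X, y):
    # Per-node linear transform: whiten the average within-class
    # scatter (an LDA-style proxy for the learned distance metric).
    labels, _ = class_means(X, y)
    Sw = sum(np.cov(X[y == c].T) for c in labels) / len(labels)
    evals, evecs = np.linalg.eigh(Sw + 1e-6 * np.eye(X.shape[1]))
    return evecs / np.sqrt(evals)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8)) + np.repeat(np.arange(4), 50)[:, None]
y = np.repeat(np.arange(4), 50)
_, means = class_means(X, y)
assign = split_classes(means, rng)   # root-node split into two groups
W = node_metric(X, y)                # root-node metric transform
print(W.shape)                       # (8, 8)
```

In the full scheme the split would recurse on each child group, with a fresh metric learned at every non-leaf node of the resulting tree.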
Compressive Raman imaging with spatial frequency modulated illumination
We report a line-scanning imaging modality of compressive Raman technology with spatial-frequency-modulated illumination using a single-pixel detector. We demonstrate the imaging and classification of three different chemical species at line-scan rates of 40 Hz.
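As a rough illustration of single-pixel compressive measurement, the sketch below uses random ±1 masks as a stand-in for the spatial-frequency-modulated illumination patterns, and a minimum-norm least-squares recovery in place of a proper compressive solver with a sparsity prior. Everything here is a toy assumption, not the reported instrument.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64   # pixels along the illuminated line
m = 16   # compressive measurements (m << n)

# Hypothetical modulation patterns: random +/-1 masks standing in
# for the structured illumination basis.
A = rng.choice([-1.0, 1.0], size=(m, n))

# Toy line profile with three bright positions (stand-ins for the
# three chemical species along the scanned line).
x = np.zeros(n)
x[10], x[30], x[50] = 1.0, 0.5, 0.8

# One single-pixel detector reading per illumination pattern.
y = A @ x

# Minimum-norm least-squares recovery; a real compressive pipeline
# would use a sparsity-promoting solver instead.
x_hat = np.linalg.lstsq(A, y, rcond=None)[0]
print(np.allclose(A @ x_hat, y))   # True: measurements reproduced
```

The point of the compressive scheme is that m measurements per line, rather than n pixel readings, suffice for classification, which is what enables the fast line-scan rates.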
Masked Auto-Encoding Spectral-Spatial Transformer for Hyperspectral Image Classification
Deep learning has become the dominant trend in hyperspectral (HS) remote sensing (RS) image classification owing to its excellent capability to extract highly discriminative spectral–spatial features. In this context, transformer networks have recently shown prominent results in distinguishing even the most subtle spectral differences because of their potential to characterize sequential spectral data. Nonetheless, many complexities affecting HS remote sensing data (e.g., atmospheric effects, thermal noise, quantization noise) may severely undermine this potential, since no mechanism for relieving noisy feature patterns has yet been developed within transformer networks. To address this problem, this article presents a novel masked auto-encoding spectral–spatial transformer (MAEST), which combines two collaborative branches: 1) a reconstruction path, which dynamically uncovers the most robust encoding features based on a masking auto-encoding strategy, and 2) a classification path, which embeds these features onto a transformer network to classify the data, focusing on the features that better reconstruct the input. Unlike other existing models, this design seeks to learn refined transformer features that account for the aforementioned complexities of the HS remote sensing image domain. The experimental comparison, including several state-of-the-art methods and benchmark datasets, shows the superior results obtained by MAEST. The code for this article will be available at https://github.com/ibanezfd/MAEST
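The masking step behind the reconstruction branch can be illustrated on a single toy spectrum: hide a random subset of spectral tokens and recover them from the visible ones. The linear-interpolation "decoder" below is an assumption standing in for MAEST's transformer decoder, used only to make the masking idea concrete.

```python
import numpy as np

rng = np.random.default_rng(2)
bands, ratio = 32, 0.5                  # spectral tokens and mask ratio

spectrum = np.sin(np.linspace(0, 3, bands))   # toy pixel spectrum

# Masked auto-encoding step: hide a random subset of spectral tokens;
# the reconstruction branch must recover them from the rest.
n_mask = int(bands * ratio)
masked_idx = rng.choice(bands, n_mask, replace=False)
vis_pos = np.delete(np.arange(bands), masked_idx)
visible = spectrum[vis_pos]

# Toy "decoder": linear interpolation from the visible tokens, a
# stand-in for the transformer decoder (an assumption, not the model).
recon = np.interp(np.arange(bands), vis_pos, visible)

print(recon.shape)   # (32,)
```

In the actual model the encoder sees only the visible tokens, and training on the reconstruction of the hidden ones is what pushes the encoder toward noise-robust spectral features.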
Investigation on the potential of hyperspectral and Sentinel-2 data for land-cover / land-use classification
The automated analysis of large areas with respect to land-cover and land-use is nowadays typically performed based on the use of hyperspectral or multispectral data acquired from airborne or spaceborne platforms. While hyperspectral data offer a more detailed description of the spectral properties of the Earth’s surface and thus a great potential for a variety of applications, multispectral data are less expensive and available at shorter time intervals, which allows for time-series analyses. Particularly with the recent availability of multispectral Sentinel-2 data, it seems desirable to have a comparative assessment of the potential of both types of data for land-cover and land-use classification. In this paper, we focus on such a comparison and therefore involve both types of data. On the one hand, we focus on the potential of hyperspectral data and the commonly applied techniques for data-driven dimensionality reduction or feature selection based on these hyperspectral data. On the other hand, we aim to reason about the potential of Sentinel-2 data and therefore transform the acquired hyperspectral data to Sentinel-2-like data. For performance evaluation, we provide classification results achieved with the different types of data for two standard benchmark datasets representing an urban area and an agricultural area, respectively.
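The hyperspectral-to-Sentinel-2-like transformation can be approximated by spectral resampling: average the narrow hyperspectral bands that fall inside each broad Sentinel-2 band. The band windows and box-filter averaging below are illustrative assumptions; the actual conversion would convolve with Sentinel-2's measured spectral response functions.

```python
import numpy as np

rng = np.random.default_rng(3)
wavelengths = np.linspace(400, 1000, 120)   # nm, hyperspectral grid
cube = rng.random((5, 5, 120))              # toy HS image (H, W, bands)

# Approximate Sentinel-2 band windows in nm (box-filter stand-ins for
# the sensor's real spectral response curves).
s2_windows = {
    "B02": (458, 523), "B03": (543, 578),
    "B04": (650, 680), "B08": (785, 900),
}

def to_sentinel2_like(cube, wavelengths, windows):
    # Average the hyperspectral bands inside each window to produce
    # one broad Sentinel-2-like band per window.
    out = []
    for lo, hi in windows.values():
        sel = (wavelengths >= lo) & (wavelengths <= hi)
        out.append(cube[..., sel].mean(axis=-1))
    return np.stack(out, axis=-1)

s2_like = to_sentinel2_like(cube, wavelengths, s2_windows)
print(s2_like.shape)   # (5, 5, 4)
```

The same classifier can then be trained on the original cube and on the resampled cube, which is what makes the paper's head-to-head comparison of the two data types possible on identical ground truth.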