Cross-Scope Spatial-Spectral Information Aggregation for Hyperspectral Image Super-Resolution
Hyperspectral image super-resolution has attracted widespread attention as a
way to enhance the spatial resolution of hyperspectral images. However,
convolution-based methods struggle to harness global spatial-spectral
information, and prevailing transformer-based methods fail to adequately
capture long-range dependencies in both the spectral and spatial dimensions.
To address this issue, we propose a novel cross-scope
spatial-spectral Transformer (CST) to efficiently investigate long-range
spatial and spectral similarities for single hyperspectral image
super-resolution. Specifically, we devise cross-attention mechanisms in spatial
and spectral dimensions to comprehensively model the long-range
spatial-spectral characteristics. By integrating global information into the
rectangle-window self-attention, we first design a cross-scope spatial
self-attention to facilitate long-range spatial interactions. Then, by
leveraging characteristic spatial-spectral features, we construct a
cross-scope spectral self-attention to effectively capture the intrinsic
correlations among global spectral bands. Finally, we design a concise
feed-forward neural network to enhance the feature representation capacity in
the Transformer structure. Extensive experiments over three hyperspectral
datasets demonstrate that the proposed CST is superior to other
state-of-the-art methods both quantitatively and visually. The code is
available at \url{https://github.com/Tomchenshi/CST.git}.
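The spectral self-attention described above treats each spectral band as a token so that every band can attend to all other bands. The following NumPy sketch illustrates this idea only; it is not the authors' implementation (which is in the linked repository), and the projection sizes, random weights, and function names here are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spectral_self_attention(feat, d_k=16, seed=0):
    """Toy attention across spectral bands: each band is one token.

    feat: (H, W, C) hyperspectral feature cube.
    Returns a cube of the same shape in which each band is a weighted
    mixture of all bands, i.e. a global spectral interaction.
    """
    H, W, C = feat.shape
    tokens = feat.reshape(H * W, C).T            # (C, H*W): one token per band
    rng = np.random.default_rng(seed)            # random projections (illustrative)
    d = tokens.shape[1]
    Wq = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d)) / np.sqrt(d)
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))       # (C, C) band-to-band weights
    out = attn @ V                               # every band attends to all bands
    return out.T.reshape(H, W, C)

cube = np.random.default_rng(1).standard_normal((8, 8, 31))  # 31 toy bands
out = spectral_self_attention(cube)
print(out.shape)  # (8, 8, 31)
```

In the paper's full module the queries, keys, and values would come from learned spatial-spectral features rather than random projections, and the spatial branch applies an analogous cross-scope attention over rectangle windows.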