Small-Object Detection in Remote Sensing Images with End-to-End Edge-Enhanced GAN and Object Detector Network
The detection performance of small objects in remote sensing images is not
satisfactory compared to large objects, especially in low-resolution and noisy
images. A generative adversarial network (GAN)-based model called enhanced
super-resolution GAN (ESRGAN) shows remarkable image enhancement performance,
but reconstructed images miss high-frequency edge information. Therefore,
object detection performance degrades for small objects on recovered noisy and
low-resolution remote sensing images. Inspired by the success of the
edge-enhanced GAN (EEGAN) and ESRGAN, we apply a new edge-enhanced super-resolution GAN
(EESRGAN) to improve the image quality of remote sensing images and use
different detector networks in an end-to-end manner where detector loss is
backpropagated into the EESRGAN to improve the detection performance. We
propose an architecture with three components: ESRGAN, Edge Enhancement Network
(EEN), and Detection network. We use residual-in-residual dense blocks (RRDB)
for both the ESRGAN and the EEN; for the detector network, we use the faster
region-based convolutional network (FRCNN, a two-stage detector) and the
single-shot multi-box detector (SSD, a one-stage detector). Extensive experiments on a
public (car overhead with context) and a self-assembled (oil and gas storage
tank) satellite dataset show superior performance of our method compared to the
standalone state-of-the-art object detectors.
Comment: This paper contains 27 pages and has been accepted for publication in
the MDPI Remote Sensing journal. GitHub repository:
https://github.com/Jakaria08/EESRGAN (implementation)
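The core edge-enhancement idea, re-extracting high-frequency edges from a super-resolved image and adding them back, can be sketched in a few lines of NumPy. This is a toy illustration, not the paper's EEN (which uses learned RRDB blocks); the Laplacian kernel and the `strength` parameter here are assumptions for demonstration only.

```python
import numpy as np

def laplacian_edges(img):
    # 3x3 Laplacian filter approximating the high-frequency edge content
    k = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    p = np.pad(img, 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out

def edge_enhance(sr_img, strength=0.5):
    # Add re-extracted edges back into the super-resolved image,
    # mimicking the restoration of high-frequency edge information.
    return np.clip(sr_img + strength * laplacian_edges(sr_img), 0.0, 1.0)
```

On a flat region the Laplacian response is zero, so the image passes through unchanged; near a step edge the response is nonzero and the edge is sharpened (up to clipping).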
Low-Light Hyperspectral Image Enhancement
Due to inadequate energy captured by the hyperspectral camera sensor in poor
illumination conditions, low-light hyperspectral images (HSIs) usually suffer
from low visibility, spectral distortion, and various noises. A range of HSI
restoration methods have been developed, yet their effectiveness in enhancing
low-light HSIs is constrained. This work focuses on the low-light HSI
enhancement task, which aims to reveal the spatial-spectral information hidden
in darkened areas. To facilitate the development of low-light HSI processing,
we collect a low-light HSI (LHSI) dataset of both indoor and outdoor scenes.
Based on Laplacian pyramid decomposition and reconstruction, we develop an
end-to-end data-driven low-light HSI enhancement (HSIE) approach trained on the
LHSI dataset. With the observation that illumination is related to the
low-frequency component of HSI, while textural details are closely correlated
to the high-frequency component, the proposed HSIE is designed to have two
branches. The illumination enhancement branch is adopted to enlighten the
low-frequency component with reduced resolution. The high-frequency refinement
branch is utilized for refining the high-frequency component via a predicted
mask. In addition, to improve information flow and boost performance, we
introduce an effective channel attention block (CAB) with residual dense
connections, which serves as the basic block of the illumination enhancement
branch. Experimental results on the LHSI dataset demonstrate the effectiveness
and efficiency of HSIE in both quantitative assessment measures and visual
quality. Classification performance on the remote sensing Indian Pines dataset
confirms that downstream tasks benefit from the enhanced HSI.
Datasets and code are available at:
\href{https://github.com/guanguanboy/HSIE}{https://github.com/guanguanboy/HSIE}
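The two-branch design, brightening the low-frequency band while refining the high-frequency residual with a mask, can be illustrated with a one-level pyramid-style split in NumPy. This is a toy sketch: the fixed `gain` and the constant mask stand in for the learned illumination branch and the predicted refinement mask, and the actual HSIE operates on full hyperspectral cubes with CAB blocks.

```python
import numpy as np

def downsample(x):
    # 2x average pooling: a crude low-pass filter plus decimation
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(x):
    # Nearest-neighbour 2x upsampling back to the original resolution
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def enhance(band, gain=3.0):
    low = downsample(band)           # illumination lives in low frequencies
    high = band - upsample(low)      # textural detail lives in high frequencies
    low_enh = np.clip(gain * low, 0.0, 1.0)   # illumination-enhancement branch (toy gain)
    mask = np.full_like(high, gain)           # stand-in for the predicted refinement mask
    return np.clip(upsample(low_enh) + mask * high, 0.0, 1.0)
```

A uniformly dark band is brightened purely through the low-frequency branch, while any residual detail is amplified by the mask on the high-frequency branch.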
DDRF: Denoising Diffusion Model for Remote Sensing Image Fusion
The denoising diffusion model, a generative model, has recently received
considerable attention in the field of image generation thanks to its powerful
generation capability. However, diffusion models have not yet received
sufficient attention in the field of image fusion. In this article, we introduce the
diffusion model to the image fusion field, treating the image fusion task as
image-to-image translation and designing two different conditional injection
modulation modules (i.e., style transfer modulation and wavelet modulation) to
inject coarse-grained style information and fine-grained high-frequency and
low-frequency information into the diffusion UNet, thereby generating fused
images. In addition, we also discuss residual learning and the selection
of training objectives of the diffusion model in the image fusion task.
Extensive quantitative and qualitative comparisons with benchmark methods
demonstrate state-of-the-art results and good generalization performance in
image fusion tasks. Finally, we hope that our method can inspire further work
and provide insight into applying diffusion models to image fusion tasks. Code
will be released for reproducibility.
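The wavelet-modulation idea, injecting the condition image's low- and high-frequency content into UNet features, can be sketched with a single-level 2-D Haar transform in NumPy. The scale-and-shift rule below is an assumption for illustration; the paper's actual modulation modules are learned networks.

```python
import numpy as np

def haar2d(x):
    # Single-level 2-D Haar transform: splits an image into one
    # low-frequency band (LL) and three high-frequency bands (LH, HL, HH).
    a = x[0::2, 0::2]
    b = x[0::2, 1::2]
    c = x[1::2, 0::2]
    d = x[1::2, 1::2]
    ll = (a + b + c + d) / 4.0
    lh = (a - b + c - d) / 4.0
    hl = (a + b - c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

def modulate(features, cond):
    # Toy wavelet modulation: scale feature maps by the condition's
    # low-frequency band and shift them by its summed high-frequency bands.
    ll, lh, hl, hh = haar2d(cond)
    return features * (1.0 + ll) + (lh + hl + hh)
```

For a uniform condition image the high-frequency bands vanish, so the features are only rescaled; edges and textures in the condition enter as additive shifts.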
Unsupervised spectral sub-feature learning for hyperspectral image classification
Spectral pixel classification is one of the principal techniques used in hyperspectral image (HSI) analysis. In this article, we propose an unsupervised feature learning method for the classification of hyperspectral images. The proposed method learns a dictionary of sub-feature basis representations from the spectral domain, which allows effective use of the correlated spectral data. The learned dictionary is then used to encode convolutional samples from the hyperspectral input pixels into an expanded but sparse feature space. Expanded hyperspectral feature representations enable linear separation between the object classes present in an image. To evaluate the proposed method, we performed experiments on several commonly used HSI data sets acquired at different locations and by different sensors. Our experimental results show that the proposed method outperforms other pixel-wise classification methods that make use of unsupervised feature extraction approaches. Additionally, even though our approach uses no prior knowledge or labelled training data to learn features, it yields advantageous or comparable results in terms of classification accuracy with respect to recent semi-supervised methods.
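The pipeline described, learning a spectral dictionary without labels and then encoding each pixel as a sparse, expanded feature vector, can be sketched in NumPy. Sampling atoms from the data and soft-thresholding normalised correlations are deliberate simplifications of the paper's sub-feature learning; the function names and the `threshold` value are illustrative assumptions.

```python
import numpy as np

def learn_dictionary(spectra, n_atoms, seed=0):
    # Toy unsupervised dictionary: sample spectra as atoms and L2-normalise them.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(spectra), n_atoms, replace=False)
    atoms = spectra[idx].astype(float)
    return atoms / np.linalg.norm(atoms, axis=1, keepdims=True)

def encode(pixel, dictionary, threshold=0.1):
    # Project the pixel's normalised spectrum onto every atom and keep only
    # strong responses: an expanded but sparse feature representation.
    resp = dictionary @ (pixel / (np.linalg.norm(pixel) + 1e-12))
    return np.maximum(resp - threshold, 0.0)
```

A pixel that matches one atom (up to scale) yields its maximal response on that atom, while weak correlations are zeroed out, giving the sparsity that makes the classes linearly separable downstream.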