A Multi-scale Generalized Shrinkage Threshold Network for Image Blind Deblurring in Remote Sensing
Remote sensing images are essential for many earth science applications, but
their quality can be degraded due to limitations in sensor technology and
complex imaging environments. To address this, various remote sensing image
deblurring methods have been developed to restore sharp, high-quality images
from degraded observational data. However, most traditional model-based
deblurring methods require predefined hand-crafted prior assumptions, which
are difficult to design for complex applications, while most deep
learning-based deblurring methods are designed as black boxes, lacking
transparency and interpretability. In this work, we propose a novel blind
deblurring learning framework based on alternating iterations of shrinkage
thresholds, alternately updating the blur kernel and the image, which provides
a theoretical foundation for the network design. Additionally, we propose a
learnable blur kernel proximal mapping module to improve blur kernel estimation
in the kernel domain. We then propose a deep proximal mapping module in the
image domain, which combines a generalized shrinkage threshold operator and a
multi-scale prior feature extraction block. This module also introduces an
attention mechanism to adaptively adjust the prior importance, thus avoiding
the drawbacks of hand-crafted image prior terms. The resulting multi-scale
generalized shrinkage threshold network (MGSTNet) is designed to specifically
focus on learning deep geometric prior features to enhance image restoration.
Experiments on remote sensing image datasets demonstrate the superiority of our
MGSTNet framework over existing deblurring methods.
Comment: 12 pages
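At the heart of shrinkage-threshold unrolling is the classical soft-threshold (proximal) operator; a minimal sketch follows, noting that MGSTNet's generalized operator and its learned, attention-weighted thresholds are not reproduced here:

```python
import numpy as np

def soft_threshold(x, lam):
    """Classical shrinkage (soft-threshold) operator:
    sign(x) * max(|x| - lam, 0). MGSTNet learns a generalized
    version of this; here lam is a fixed scalar."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([-2.0, -0.5, 0.0, 0.3, 1.5])
shrunk = soft_threshold(x, 1.0)  # entries within [-1, 1] collapse to 0
```

In unrolled networks this operator is applied once per iteration stage, with the threshold treated as a trainable parameter rather than a hand-tuned constant.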
Image Restoration for Remote Sensing: Overview and Toolbox
Remote sensing provides valuable information about objects or areas from a
distance in either active (e.g., RADAR and LiDAR) or passive (e.g.,
multispectral and hyperspectral) modes. The quality of data acquired by
remotely sensed imaging sensors (both active and passive) is often degraded by
a variety of noise types and artifacts. Image restoration, which is a vibrant
field of research in the remote sensing community, is the task of recovering
the true unknown image from the degraded observed image. Each imaging sensor
induces unique noise types and artifacts into the observed image. This fact has
led to the expansion of restoration techniques along different paths according to
each sensor type. This review paper brings together the advances of image
restoration techniques with particular focuses on synthetic aperture radar and
hyperspectral images as the most active sub-fields of image restoration in the
remote sensing community. We, therefore, provide a comprehensive,
discipline-specific starting point for researchers at different levels (i.e.,
students, researchers, and senior researchers) willing to investigate the
vibrant topic of data restoration by supplying sufficient detail and
references. Additionally, this review paper accompanies a toolbox to provide a
platform to encourage interested students and researchers in the field to
further explore the restoration techniques and fast-forward the community. The
toolboxes are provided at https://github.com/ImageRestorationToolbox.
Comment: This paper is under review in GRS
Unsupervised domain adaptation and super resolution on drone images for autonomous dry herbage biomass estimation
Herbage mass yield and composition estimation is an important tool for dairy
farmers to ensure an adequate supply of high quality herbage for grazing and
subsequently milk production. By accurately estimating herbage mass and
composition, targeted nitrogen fertiliser application strategies can be
deployed to improve localised regions in a herbage field, effectively reducing
the negative impacts of over-fertilization on biodiversity and the environment.
In this context, deep learning algorithms offer a tempting alternative to the
usual means of sward composition estimation, which involves the destructive
process of cutting a sample from the herbage field and sorting by hand all
plant species in the herbage. The process is labour intensive and time
consuming and so not utilised by farmers. Deep learning has been successfully
applied in this context on images collected by high-resolution cameras on the
ground. Moving the deep learning solution to drone imaging, however, has the
potential to further improve the herbage mass yield and composition estimation
task by extending the ground-level estimation to the large surfaces occupied by
fields/paddocks. Drone images come at the cost of lower resolution views of the
fields taken from a high altitude and require further herbage ground-truth
collection from the large surfaces covered by drone images. This paper proposes
to transfer knowledge learned on ground-level images to raw drone images in an
unsupervised manner. To do so, we use unpaired image style translation to
enhance the resolution of drone images by a factor of eight and modify them to
appear closer to their ground-level counterparts. We then ...
www.github.com/PaulAlbert31/Clover_SSL
Comment: 11 pages, 5 figures. Accepted at the Agriculture-Vision CVPR 2022 Workshop
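For scale, an 8x resolution enhancement means every drone-image pixel must be expanded into an 8x8 block; a naive nearest-neighbour baseline makes that concrete (illustration only; the paper instead learns this mapping with unpaired image style translation):

```python
import numpy as np

def upsample_nn(img, factor=8):
    """Naive nearest-neighbour upsampling: repeat each pixel into a
    factor x factor block. A trivial reference point, not the learned
    style-translation approach from the paper."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

patch = np.arange(9.0).reshape(3, 3)  # tiny synthetic drone patch
up = upsample_nn(patch)               # shape (24, 24)
```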
Using generative adversarial networks for extraction of insar signals from large-scale Sentinel-1 interferograms by improving tropospheric noise correction
Spatiotemporal variations of pressure, temperature, and water vapour content in the atmosphere lead to significant delays in interferometric synthetic aperture radar (InSAR) measurements of ground deformation. One of the key challenges in increasing the accuracy of ground deformation measurements using InSAR is to produce robust estimates of the tropospheric delay. Tropospheric models like ERA-Interim can be used to estimate the total tropospheric delay in interferograms over remote areas. The problem with using the ERA-Interim model for interferogram correction is that, after the tropospheric correction, some residuals remain in the interferograms, which can mainly be attributed to turbulent troposphere. In this study, we propose a Generative Adversarial Network (GAN) based approach to mitigate the phase delay caused by the troposphere. In this method, we implement a noise-to-noise model, in which the network is trained only with interferograms corrupted by tropospheric noise. We applied the technique to 116 large-scale, 800-km-long interferograms formed from Sentinel-1 acquisitions covering the period from 25 October 2014 to 2 November 2017, from descending track 108 over Iran. Our approach reduces the root mean square of the interferogram phase values by 64% compared to the original interferogram and by 55% compared to the corresponding ERA-Interim corrected version.
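The 64% and 55% figures can be read as relative drops in the RMS of the interferogram phase; a minimal sketch of that metric, with synthetic arrays standing in for real phase maps (an assumption for illustration):

```python
import numpy as np

def rms(phase):
    """Root mean square of interferogram phase values."""
    return np.sqrt(np.mean(np.square(phase)))

def rms_reduction_pct(original, corrected):
    """Percentage drop in phase RMS after a correction step."""
    return 100.0 * (1.0 - rms(corrected) / rms(original))

# Synthetic stand-ins for real phase maps (illustration only).
rng = np.random.default_rng(0)
original = rng.normal(0.0, 1.0, 10_000)
corrected = 0.36 * original  # a correction that scales phase RMS by 0.36
reduction = rms_reduction_pct(original, corrected)  # 64.0
```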
Pan-sharpening of WorldView-2 images with deep learning
In recent years, the exponential growth of interest in deep learning has had a huge impact on improving the resolution of images. In particular, enhancing the quality of remote sensing imagery is a field in which many models have been proposed by different researchers. One of these approaches is pan-sharpening, which takes advantage of paired satellite imagery in order to raise the resolution of multispectral or hyperspectral images. In this project, a model from the literature is adapted for WorldView-2 satellite imagery and modified to improve the model's currently reported results. Experimental results are compared between the adapted model and the modified one so that the effectiveness of the adjustments can be proven.
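The project adapts a deep-learning pan-sharpening model; for orientation only, a classical non-learned baseline such as the Brovey transform can be sketched as follows (this is not the adapted model from the project):

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-8):
    """Classical Brovey-transform pan-sharpening: scale each
    multispectral band by the ratio of the panchromatic band to the
    mean intensity of the multispectral bands.

    ms  : (H, W, B) multispectral image, assumed already upsampled
          to the panchromatic grid (a simplification for this sketch).
    pan : (H, W) panchromatic image.
    """
    intensity = ms.mean(axis=2, keepdims=True)
    return ms * (pan[..., None] / (intensity + eps))

# Tiny synthetic example (illustration only).
ms = np.ones((4, 4, 3)) * np.array([0.2, 0.4, 0.6])
pan = np.full((4, 4), 0.8)
sharp = brovey_pansharpen(ms, pan)  # band ratios preserved, intensity -> pan
```

Deep models like the one adapted here aim to avoid the spectral distortion that such ratio-based baselines introduce.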
Towards Streamlined Single-Image Super-Resolution: Demonstration with 10 m Sentinel-2 Colour and 10-60 m Multi-Spectral VNIR and SWIR Bands
Higher spatial resolution imaging data are considered desirable in many Earth observation applications. In this work, we propose and demonstrate the TARSGAN (learning Terrestrial image deblurring using Adaptive weighted dense Residual Super-resolution Generative Adversarial Network) system for Super-resolution Restoration (SRR) of 10 m/pixel Sentinel-2 “true” colour images as well as all the other multispectral bands. In parallel, the ELF (automated image Edge detection and measurements of edge spread function, Line spread function, and Full width at half maximum) system is proposed to achieve automated and precise assessments of the effective resolutions of the input and SRR images. Subsequent ELF measurements of the TARSGAN SRR results suggest an average effective resolution enhancement factor of about 2.91 times (equivalent to ~3.44 m/pixel for the 10 m/pixel bands) given a nominal SRR upscaling factor of 4 times. Several examples are provided for different types of scenes, from urban landscapes to agricultural scenes and sea-ice floes.
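ELF's effective-resolution metric rests on a standard chain: differentiate an edge spread function (ESF) to obtain the line spread function (LSF), then measure its full width at half maximum (FWHM). A minimal sketch on a synthetic Gaussian-blurred edge (illustrative only, not the ELF implementation):

```python
import numpy as np
from math import erf, sqrt

def fwhm_from_esf(esf, dx=1.0):
    """Differentiate an edge spread function (ESF) to get the line
    spread function (LSF), then measure its full width at half
    maximum (FWHM) with linear interpolation at the crossings."""
    lsf = np.abs(np.gradient(esf, dx))
    half = lsf.max() / 2.0
    idx = np.where(lsf >= half)[0]
    i0, i1 = idx[0], idx[-1]
    # interpolate the two half-maximum crossing positions
    left = i0 - (lsf[i0] - half) / (lsf[i0] - lsf[i0 - 1])
    right = i1 + (lsf[i1] - half) / (lsf[i1] - lsf[i1 + 1])
    return (right - left) * dx

# Synthetic edge blurred by a Gaussian of sigma = 2 samples;
# its LSF is Gaussian with FWHM = 2*sqrt(2*ln 2)*sigma ~ 4.71.
x = np.arange(-20, 21)
sigma = 2.0
esf = np.array([0.5 * (1 + erf(v / (sigma * sqrt(2)))) for v in x])
fwhm = fwhm_from_esf(esf)  # close to the theoretical ~4.71 samples
```

Comparing the FWHM measured on the input and on the SRR image is what yields an effective enhancement factor like the ~2.91x reported above.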