Remote sensing image fusion on 3D scenarios: A review of applications for agriculture and forestry
Three-dimensional (3D) image mapping of real-world scenarios has great potential to provide the user with a
more accurate understanding of the scene. Among other benefits, this enables unsupervised automatic sampling of
meaningful material classes from the target area for adaptive semi-supervised deep learning techniques. Recent
and fast-developing research in computational fields is already taking this path; however, some issues remain
in the computationally expensive integration of multi-source sensing data.
Recent studies focused on Earth observation and characterization are enhanced by the proliferation of Unmanned
Aerial Vehicles (UAV) and sensors able to capture massive datasets with a high spatial resolution. In this scope,
many approaches have been presented for 3D modeling, remote sensing, image processing and mapping, and
multi-source data fusion. This survey aims to present a summary of previous work according to the most relevant
contributions for the reconstruction and analysis of 3D models of real scenarios using multispectral, thermal and
hyperspectral imagery. The surveyed applications focus on agriculture and forestry, since these fields
account for most applications and are widely studied. Many challenges are currently being overcome by recent
methods based on the reconstruction of multi-sensorial 3D scenarios. In parallel, the processing of large image
datasets has recently been accelerated by General-Purpose Graphics Processing Unit (GPGPU) approaches that
are also summarized in this work. Finally, as a conclusion, some open issues and future research directions are
presented.
Funding: European Commission; Junta de Andalucía; Instituto de Estudios Giennenses; Spanish Government; Portuguese Foundation for Science and Technology (grants 1381202-GEU, PYC20-RE-005-UJA, IEG-2021, UIDB/04033/2020, DATI-Digital Agriculture Technologies, FPU19/0010).
Recent Advances in Image Restoration with Applications to Real World Problems
In the past few decades, imaging hardware has improved tremendously in terms of resolution, enabling the widespread use of images in many diverse applications on Earth and in planetary missions. However, practical issues associated with image acquisition still affect image quality. Some of these issues, such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution, can seriously affect the accuracy of the aforementioned applications. This book intends to provide the reader with a glimpse of the latest developments and recent advances in image restoration, including image super-resolution, image fusion to enhance spatial, spectral, and temporal resolution, and the generation of synthetic images using deep learning techniques. Some practical applications are also included.
Weakly Supervised Learning for Multi-Image Synthesis
Machine learning-based approaches have been achieving state-of-the-art results on many computer vision tasks. While deep learning and convolutional networks have become incredibly popular, these approaches come at the expense of the huge amounts of labeled data required for training. Manually annotating large amounts of data, often millions of images in a single dataset, is costly and time-consuming. To deal with the problem of data annotation, the research community has been exploring approaches that require smaller amounts of labeled data.
The central problem that we consider in this research is image synthesis without any manual labeling. Image synthesis is a classic computer vision task that requires understanding of image contents and their semantic and geometric properties. We propose that we can train image synthesis models by relying on sequences of videos and using weakly supervised learning. Large amounts of unlabeled data are freely available on the internet. We propose to set up the training in a multi-image setting so that we can use one of the images as the target; this allows us to rely only on images for training and removes the need for manual annotations. We demonstrate three main contributions in this work.
First, we present a method of fusing multiple noisy overhead images to make a single, artifact-free image. We present a weakly supervised method that relies on crowd-sourced labels from online maps and a completely unsupervised variant that only requires a series of satellite images as inputs. Second, we propose a single-image novel view synthesis method for complex, outdoor scenes. We propose a learning-based method that uses pairs of nearby images captured on urban roads and their respective GPS coordinates as supervision. We show that a model trained with this automatically captured data can render a new view of a scene that can be as far as 10 meters from the input image. Third, we consider the problem of synthesizing new images of a scene under different conditions, such as time of day and season, based on a single input image. As opposed to existing methods, we do not need manual annotations for transient attributes, such as fog or snow, for training. We train our model by using streams of images captured from outdoor webcams and time-lapse videos.
Through these applications, we show several settings where we can train state-of-the-art deep learning methods without manual annotations. This work focuses on three image synthesis tasks. We propose weakly supervised learning and remove the need for manual annotations by relying on sequences of images. Our approach is in line with research efforts that aim to minimize the labels required for training machine learning methods.
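The leave-one-out training setup described above can be illustrated with a minimal, self-contained sketch. Here the learned fusion network is replaced by a simple per-pixel median (an illustrative stand-in, not the thesis's actual model), but the supervision signal is the same idea: one image in the stack is held out as the target, so no manual labels are needed.

```python
import random
import statistics

def fuse_leave_one_out(stack, target_index):
    """Hold out one image as the training target and fuse the rest.

    `stack` is a list of images, each a flat list of pixel intensities.
    Fusion here is a per-pixel median, an artifact-robust stand-in for
    the learned fusion model described in the abstract.
    """
    target = stack[target_index]
    rest = [img for i, img in enumerate(stack) if i != target_index]
    fused = [statistics.median(pix) for pix in zip(*rest)]
    # L1 reconstruction error against the held-out image: this is the
    # weak supervision signal -- no manual annotations are involved.
    loss = sum(abs(f - t) for f, t in zip(fused, target)) / len(target)
    return fused, loss

# Build a toy "scene" and several noisy, partially occluded observations.
random.seed(0)
scene = [float(i % 7) for i in range(100)]
stack = []
for _ in range(5):
    noisy = [p + random.gauss(0.0, 0.2) for p in scene]
    for _ in range(10):                      # sparse artifacts (e.g. clouds)
        noisy[random.randrange(100)] = 9.0
    stack.append(noisy)

fused, loss = fuse_leave_one_out(stack, target_index=0)
# Median fusion suppresses the sparse artifacts, so the fused image sits
# closer to the clean scene than any single noisy observation does.
err_fused = sum(abs(f - s) for f, s in zip(fused, scene)) / len(scene)
err_single = sum(abs(n - s) for n, s in zip(stack[1], scene)) / len(scene)
print(err_fused < err_single)
```

In a learned variant, the median would be replaced by a network and `loss` would be backpropagated; the point of the sketch is only that the held-out image supplies the training target for free.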
Deep learning in remote sensing: a review
Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many areas. Shall we embrace deep learning as the key to everything? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization. Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine
Refined Equivalent Pinhole Model for Large-scale 3D Reconstruction from Spaceborne CCD Imagery
In this study, we present a large-scale Earth surface reconstruction pipeline for linear-array charge-coupled device (CCD) satellite imagery. While mainstream satellite image-based reconstruction approaches perform exceptionally well, the rational functional model (RFM) is subject to several limitations. For example, the RFM has no rigorous physical interpretation and differs significantly from the pinhole imaging model; hence, it cannot be directly applied to learning-based 3D reconstruction networks or to more novel reconstruction pipelines in computer vision. To address this, we introduce a method that makes the RFM equivalent to the pinhole camera model (PCM), so that the internal and external parameters of a pinhole camera are used instead of the rational polynomial coefficient parameters. We then derive, for the first time, an error formula for this equivalent pinhole model, demonstrating the influence of image size on reconstruction accuracy. In addition, we propose a polynomial image refinement model that minimizes the equivalent errors via the least squares method. Experiments were conducted on four image datasets: WHU-TLC, DFC2019, ISPRS-ZY3, and GF7. The results demonstrate that reconstruction accuracy is proportional to image size. Our polynomial image refinement model significantly enhanced the accuracy and completeness of the reconstruction, with larger improvements for larger-scale images. Comment: 24 pages
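The least-squares polynomial refinement step mentioned above can be sketched minimally as follows. The first-order basis, the synthetic residual field, and all function names are illustrative assumptions, not the paper's actual implementation: given sampled reprojection residuals of the equivalent pinhole model at image coordinates (u, v), a low-order polynomial correction is fitted by solving the normal equations.

```python
def fit_polynomial_refinement(samples, terms):
    """Least-squares fit of a polynomial correction to residual errors.

    `samples` is a list of (u, v, residual) tuples: image coordinates and
    the equivalent-pinhole reprojection error at that point.  The design
    matrix uses the basis functions in `terms`, e.g. a first-order model
    [1, u, v].  Coefficients are found by solving the normal equations
    (A^T A) x = A^T b with Gaussian elimination.
    """
    n = len(terms)
    ata = [[0.0] * n for _ in range(n)]
    atb = [0.0] * n
    for u, v, r in samples:
        row = [f(u, v) for f in terms]
        for i in range(n):
            atb[i] += row[i] * r
            for j in range(n):
                ata[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting on the n x n system.
    for col in range(n):
        pivot = max(range(col, n), key=lambda k: abs(ata[k][col]))
        ata[col], ata[pivot] = ata[pivot], ata[col]
        atb[col], atb[pivot] = atb[pivot], atb[col]
        for k in range(col + 1, n):
            factor = ata[k][col] / ata[col][col]
            for j in range(col, n):
                ata[k][j] -= factor * ata[col][j]
            atb[k] -= factor * atb[col]
    # Back substitution.
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        s = sum(ata[i][j] * coeffs[j] for j in range(i + 1, n))
        coeffs[i] = (atb[i] - s) / ata[i][i]
    return coeffs

# Synthetic residual field r = 0.5 + 0.001*u - 0.002*v, sampled on a grid;
# an exact first-order fit should recover these coefficients.
terms = [lambda u, v: 1.0, lambda u, v: u, lambda u, v: v]
samples = [(u, v, 0.5 + 0.001 * u - 0.002 * v)
           for u in range(0, 1000, 50) for v in range(0, 1000, 50)]
a0, a1, a2 = fit_polynomial_refinement(samples, terms)
print(round(a0, 6), round(a1, 6), round(a2, 6))
```

A higher-order model is obtained simply by extending `terms` with quadratic basis functions such as `u*u`, `u*v`, and `v*v`; the fitted polynomial is then subtracted from the image coordinates as the refinement.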
GEOMETRIC PROCESSING OF VERY HIGH-RESOLUTION SATELLITE IMAGERY: QUALITY ASSESSMENT FOR 3D MAPPING NEEDS
In recent decades, the geospatial domain has benefitted from technological advances in sensors, methodologies, and processing tools that expand capabilities in mapping applications. Airborne techniques (LiDAR and aerial photogrammetry) generally provide most of the data used for this purpose. However, despite the relevant accuracy of these technologies and the high spatial resolution of airborne data, updates are not sufficiently regular due to significant flight costs and logistics. New possibilities to fill this information gap emerged with the advent of Very High Resolution (VHR) optical satellite images in the early 2000s. In addition to the high temporal resolution of these cost-effective datasets and their sub-metre geometric resolution, their synoptic coverage offers an unprecedented opportunity for mapping remote areas, multi-temporal analyses, updating datasets, and disaster management. For all these reasons, VHR satellite imagery is clearly a relevant subject of study for National Mapping and Cadastral Agencies (NMCAs). This work, supported by EuroSDR, summarises a series of experimental analyses carried out over diverse landscapes to explore the potential of VHR imagery for large-scale mapping.