A review of parallel computing for large-scale remote sensing image mosaicking
Interest in image mosaicking has been spurred by a wide variety of research and management needs. However, for large-scale applications, remote sensing image mosaicking usually requires significant computational capabilities. Several studies have attempted to apply parallel computing to improve image mosaicking algorithms and to speed up the calculation process. The state of the art in this field has not yet been summarized, yet such a summary is essential for a better understanding of, and further research into, large-scale image mosaicking parallelism. This paper provides a perspective on the current state of image mosaicking parallelization for large-scale applications. We first introduce the motivation for parallelizing image mosaicking at large scale and analyze the main difficulties involved, such as scheduling a huge number of dependent tasks, programming a multi-step procedure, and dealing with frequent I/O operations. We then summarize existing studies of parallel computing in large-scale image mosaicking with respect to problem decomposition and parallel strategy, parallel architecture, task scheduling strategy, and implementation. Finally, key open problems and potential future research directions for image mosaicking are addressed.
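One of the difficulties noted above, scheduling a huge number of dependent tasks, reduces in its simplest form to topologically ordering a task dependency graph so that each mosaicking step runs only after its inputs are ready. A minimal sketch using Kahn's algorithm (the task names and the `schedule` helper are illustrative, not from the paper):

```python
from collections import deque

def schedule(tasks, deps):
    """Order tasks so every task runs after its dependencies (Kahn's algorithm).

    tasks : iterable of task ids
    deps  : dict mapping a task to the list of tasks it depends on
    """
    indeg = {t: len(deps.get(t, [])) for t in tasks}
    dependents = {t: [] for t in tasks}
    for t, ds in deps.items():
        for d in ds:
            dependents[d].append(t)
    # Start with tasks that have no unmet dependencies.
    ready = deque(sorted(t for t, n in indeg.items() if n == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for u in dependents[t]:
            indeg[u] -= 1
            if indeg[u] == 0:
                ready.append(u)
    return order
```

In a real scheduler, ready tasks would be dispatched to workers in parallel rather than appended to a sequential list.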
SAR-NeRF: Neural Radiance Fields for Synthetic Aperture Radar Multi-View Representation
SAR images are highly sensitive to observation configurations, and they
exhibit significant variations across different viewing angles, making it
challenging to represent and learn their anisotropic features. As a result,
deep learning methods often generalize poorly across different view angles.
Inspired by the concept of neural radiance fields (NeRF), this study combines
SAR imaging mechanisms with neural networks to propose a novel NeRF model for
SAR image generation. Following the mapping and projection principles, a set of
SAR images is modeled implicitly as a function of attenuation coefficients and
scattering intensities in the 3D imaging space through a differentiable
rendering equation. SAR-NeRF is then constructed to learn the distribution of
attenuation coefficients and scattering intensities of voxels, where the
vectorized form of 3D voxel SAR rendering equation and the sampling
relationship between the 3D space voxels and the 2D view ray grids are
analytically derived. Through quantitative experiments on various datasets, we
thoroughly assess the multi-view representation and generalization capabilities
of SAR-NeRF. Additionally, it is found that a SAR-NeRF-augmented dataset can
significantly improve SAR target classification performance in a few-shot
learning setup, where a 10-class classification accuracy of 91.6% can be
achieved using only 12 images per class.
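The differentiable rendering described above is, in spirit, NeRF-style volume rendering with attenuation coefficients and scattering intensities taking the roles of density and color. A minimal single-ray sketch (the discretization and the `render_sar_ray` helper are illustrative assumptions, not the paper's exact vectorized 3D voxel equation):

```python
import numpy as np

def render_sar_ray(attenuation, scattering, delta=1.0):
    """NeRF-style rendering along one SAR view ray.

    attenuation : (N,) per-voxel attenuation coefficients (assumed >= 0)
    scattering  : (N,) per-voxel scattering intensities
    delta       : voxel step length along the ray
    Returns the accumulated echo intensity for the ray.
    """
    # Transmittance before each voxel: fraction of energy not yet attenuated.
    optical_depth = np.cumsum(attenuation * delta) - attenuation * delta
    transmittance = np.exp(-optical_depth)
    # Per-voxel weight: local absorption probability times remaining energy.
    weights = transmittance * (1.0 - np.exp(-attenuation * delta))
    return float(np.sum(weights * scattering))
```

Because every step is differentiable, gradients can flow back to per-voxel attenuation and scattering, which is what lets the network learn them from multi-view images.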
Fast GO/PO RCS calculation: A GO/PO parallel algorithm implemented on GPU and accelerated using a BVH data structure and the Type 3 Non-Uniform FFT
The purpose of this PhD research was to develop and optimize a fast numerical algorithm able to compute monostatic and bistatic RCS predictions with an accuracy comparable to that of well-known commercial electromagnetic CAD tools, but in unprecedentedly short computational times. This was achieved by employing asymptotic approximate methods to solve the scattering problem, namely the Geometrical Optics (GO) and Physical Optics (PO) theories, and by exploiting advanced algorithmic concepts and cutting-edge computing technology to drastically speed up the computation.
The First Chapter focuses on an historical and operational overview of the concept of Radar Cross Section (RCS), with specific reference to aeronautical and maritime platforms. How geometries and materials influence RCS is also described.
The Second Chapter is dedicated to the first phase of the algorithm: the electromagnetic field transport phase, where the GO theory is applied to implement the “ray tracing”. This Chapter describes the first advanced algorithmic concept that was adopted: the Bounding Volume Hierarchy (BVH) data structure. Two different BVH approaches and their combination are described and compared.
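The primitive at the heart of BVH traversal is the ray/axis-aligned-box "slab" test: a whole subtree of geometry can be skipped whenever the ray misses its bounding box. A minimal sketch (the function name and conventions are illustrative; ray direction components are assumed non-zero):

```python
import numpy as np

def ray_aabb_hit(origin, direction, box_min, box_max):
    """Slab test: does a ray hit an axis-aligned bounding box?

    origin, direction : (3,) ray origin and direction (non-zero components)
    box_min, box_max  : (3,) box corners
    """
    inv = 1.0 / direction
    t1 = (box_min - origin) * inv
    t2 = (box_max - origin) * inv
    # Entry is the latest per-axis entry; exit is the earliest per-axis exit.
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    return bool(t_far >= max(t_near, 0.0))
```

During traversal, this test is applied at each BVH node; only subtrees whose boxes are hit are descended into, which is what turns a linear scan over facets into a logarithmic search.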
The Third Chapter is dedicated to the second phase of the calculation: the radiation integral, based on the PO theory, and its numerical optimization. First, the Type-3 Non-Uniform Fast Fourier Transform (NUFFT) is presented as the second advanced algorithmic tool employed; it forms the foundation of the radiation integral calculation. Then, to improve performance and to make the approach feasible for electrically large objects, the NUFFT was further optimized using a “pruning” technique, a stratagem that saves memory and computational time by avoiding the calculation of points of the transformed domain that are not of interest.
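The idea behind pruning, evaluating the transform only at the output points of interest, can be illustrated with a plain uniform DFT (the actual work uses a Type-3 NUFFT, which this sketch does not implement):

```python
import numpy as np

def pruned_dft(x, freqs):
    """Direct DFT evaluated only at the requested output bins ('pruning').

    x     : (N,) complex or real samples
    freqs : list of output bin indices to compute
    """
    n = np.arange(len(x))
    # Cost scales with len(freqs) instead of the full output grid.
    return np.array([np.sum(x * np.exp(-2j * np.pi * f * n / len(x)))
                     for f in freqs])
```

For a handful of bins the pruned evaluation matches the corresponding entries of the full FFT while never allocating or computing the rest of the output grid.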
To validate the algorithm, a preliminary measurement campaign was held at the headquarters of the Ingegneria Dei Sistemi (IDS) company in Pisa. The measurements, performed on canonical scatterers using a Synthetic Aperture Radar (SAR) imaging setup mounted on a planar scanner inside a semi-anechoic chamber, are discussed.
SAR Target Image Generation Method Using Azimuth-Controllable Generative Adversarial Network
Sufficient synthetic aperture radar (SAR) target images are very important
for the development of research. However, available SAR target images are
often limited in practice, which hinders the progress of SAR applications. In
this paper, we propose an azimuth-controllable generative adversarial network
to generate precise SAR target images with an intermediate azimuth between two
given SAR images' azimuths. This network mainly contains three parts:
generator, discriminator, and predictor. Through the proposed specific network
structure, the generator can extract and fuse the optimal target features from
two input SAR target images to generate a new SAR target image. A similarity
discriminator and an azimuth predictor are then designed. The similarity
discriminator differentiates the generated SAR target images from real
SAR images to ensure the accuracy of the generated images, while the azimuth
predictor measures the azimuth difference between the generated and desired
images to ensure azimuth controllability. Therefore, the proposed
network can generate precise SAR images, and their azimuths can be controlled
well by the network inputs. Generating target images at different azimuths
alleviates the small-sample problem to some degree and benefits SAR image
research. Extensive experimental results show the superiority of the proposed
method in azimuth controllability and accuracy of SAR target image generation.
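Computing "an intermediate azimuth between two given SAR images' azimuths" involves interpolating on a circle, where wraparound at 0°/360° matters. A minimal sketch of shorter-arc interpolation (the wraparound convention and the helper name are assumptions, not stated in the abstract):

```python
def intermediate_azimuth(az1, az2, frac=0.5):
    """Interpolate between two azimuths (degrees) along the shorter arc.

    frac=0.5 gives the midpoint, which would serve as the target azimuth
    supervising the azimuth predictor.
    """
    # Signed difference folded into (-180, 180] so we take the shorter arc.
    diff = (az2 - az1 + 180.0) % 360.0 - 180.0
    return (az1 + frac * diff) % 360.0
```

For example, the midpoint of 350° and 10° is 0°, not the naive arithmetic mean of 180°.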
Mapping horizontal and vertical urban densification in Denmark with Landsat time-series from 1985 to 2018: a semantic segmentation solution
Landsat imagery is an unparalleled freely available data source that allows
reconstructing horizontal and vertical urban form. This paper addresses the
challenge of using Landsat data, particularly its 30m spatial resolution, for
monitoring three-dimensional urban densification. We compare temporal and
spatial transferability of an adapted DeepLab model with a simple fully
convolutional network (FCN) and a texture-based random forest (RF) model to map
urban density in the two morphological dimensions: horizontal (compact, open,
sparse) and vertical (high rise, low rise). We test whether a model trained on
the 2014 data can be applied to 2006 and 1995 for Denmark, and examine whether
we could use the model trained on the Danish data to accurately map other
European cities. Our results show that an implementation of deep networks and
the inclusion of multi-scale contextual information greatly improve the
classification and the model's ability to generalize across space and time.
DeepLab provides more accurate horizontal and vertical classifications than FCN
when sufficient training data is available. By using DeepLab, the F1 score can
be increased by 4 and 10 percentage points for detecting vertical urban growth
compared to FCN and RF for Denmark. For mapping the other European cities with
training data from Denmark, DeepLab also shows an advantage of 6 percentage
points over RF for both the dimensions. The resulting maps across the years
1985 to 2018 reveal different patterns of urban growth between Copenhagen and
Aarhus, the two largest cities in Denmark, illustrating that those cities have
used various planning policies in addressing population growth and housing
supply challenges. In summary, we propose a transferable deep learning approach
for automated, long-term mapping of urban form from Landsat images.
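The gains reported above are in F1 score, the harmonic mean of precision and recall. As a reminder of how the metric is computed from confusion counts (an illustrative helper, not the paper's evaluation code):

```python
def f1_score(tp, fp, fn):
    """F1 from true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Because F1 ignores true negatives, it is well suited to classes such as high-rise pixels that occupy only a small fraction of each scene.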
Remote Sensing of the Oceans
This book covers different topics in the framework of remote sensing of the oceans. Latest research advancements and brand-new studies are presented that address the exploitation of remote sensing instruments and simulation tools to improve the understanding of ocean processes and enable cutting-edge applications, with the aim of preserving the ocean environment and supporting the blue economy. Hence, this book provides a reference framework for state-of-the-art remote sensing methods that deal with the generation of added-value products and the geophysical information retrieval in related fields, including: oil spill detection and discrimination; analysis of tropical cyclones and sea echoes; shoreline and aquaculture area extraction; monitoring coastal marine litter and moving vessels; and processing of SAR, HF radar and UAV measurements.