Content-Aware Compressive Sensing Recovery Using Laplacian Scale Mixture Priors and Side Information
Sub-aperture SAR Imaging with Uncertainty Quantification
In the problem of spotlight mode airborne synthetic aperture radar (SAR)
image formation, it is well-known that data collected over a wide azimuthal
angle violate the isotropic scattering property typically assumed. Many
techniques have been proposed to account for this issue, including both
full-aperture and sub-aperture methods based on filtering, regularized least
squares, and Bayesian methods. A full-aperture method that uses a hierarchical
Bayesian prior to incorporate appropriate speckle modeling and reduction was
recently introduced to produce samples of the posterior density rather than a
single image estimate. This uncertainty quantification information is more
robust as it can generate a variety of statistics for the scene. As proposed,
the method was not well-suited for large problems, however, as the sampling was
inefficient. Moreover, the method was not explicitly designed to mitigate the
effects of the faulty isotropic scattering assumption. In this work we
therefore propose a new sub-aperture SAR imaging method that uses a sparse
Bayesian learning-type algorithm to more efficiently produce approximate
posterior densities for each sub-aperture window. These estimates may be useful
in and of themselves, or when of interest, the statistics from these
distributions can be combined to form a composite image. Furthermore, unlike
the often-employed lp-regularized least squares methods, no user-defined
parameters are required. Application-specific adjustments are made to reduce
the typically burdensome runtime and storage requirements so that appropriately
large images can be generated. Finally, this paper focuses on incorporating
these techniques into the SAR image formation process itself, that is,
starting from SAR phase history data, so that no additional processing errors
are incurred.
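The sparse Bayesian learning approach named in this abstract can be illustrated, in generic form, by the standard EM fixed-point iteration for a linear model y = Ax + noise, which yields an approximate Gaussian posterior over the coefficients rather than a single point estimate. This is a minimal sketch of textbook SBL, not the paper's SAR-specific algorithm; the function name and the fixed noise variance are illustrative assumptions.

```python
import numpy as np

def sbl_posterior(A, y, noise_var=0.01, n_iter=50):
    """Generic sparse Bayesian learning (EM fixed point) for y = A x + noise.

    Returns the posterior mean and covariance of x under per-coefficient
    Gaussian priors x_i ~ N(0, 1/alpha_i), with the precisions alpha_i
    re-estimated at each iteration. Illustrative sketch only.
    """
    n, m = A.shape
    alpha = np.ones(m)  # per-coefficient prior precisions
    for _ in range(n_iter):
        # Gaussian posterior given current hyperparameters
        Sigma = np.linalg.inv(A.T @ A / noise_var + np.diag(alpha))
        mu = Sigma @ A.T @ y / noise_var
        # EM update: precisions grow for coefficients with small
        # posterior mass, pruning them toward zero (sparsity)
        alpha = 1.0 / (mu**2 + np.diag(Sigma))
    return mu, Sigma
```

Because the output is a full posterior (mean and covariance), per-pixel statistics such as variances can be read off directly, which is the kind of uncertainty quantification the abstract refers to.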
A Novel Inpainting Framework for Virtual View Synthesis
Multi-view imaging has stimulated significant research to enhance the user experience of free viewpoint video, allowing interactive navigation between views and the freedom to select a desired view to watch. This usually involves transmitting both textural and depth information captured from different viewpoints to the receiver, to enable the synthesis of an arbitrary view. In rendering these virtual views, perceptual holes can appear due to certain regions, hidden in the original view by a closer object, becoming visible in the virtual view. To provide a high quality experience these holes must be filled in a visually plausible way, in a process known as inpainting. This is challenging because the missing information is generally unknown and the hole-regions can be large. Recently depth-based inpainting techniques have been proposed to address this challenge and while these generally perform better than non-depth assisted methods, they are not very robust and can produce perceptual artefacts.
This thesis presents a new inpainting framework that innovatively exploits depth and textural self-similarity characteristics to construct subjectively enhanced virtual viewpoints. The framework makes three significant contributions to the field: i) the exploitation of view information to jointly inpaint textural and depth hole regions; ii) the introduction of the novel concept of self-similarity characterisation which is combined with relevant depth information; and iii) an advanced self-similarity characterising scheme that automatically determines key spatial transform parameters for effective and flexible inpainting.
The presented inpainting framework has been critically analysed and shown to provide superior performance both perceptually and numerically compared to existing techniques, especially in terms of lower visual artefacts. It provides a flexible, robust framework for developing new inpainting strategies for the next generation of interactive multi-view technologies.
Synthetic Aperture Radar (SAR) Meets Deep Learning
This reprint focuses on applications combining synthetic aperture radar and deep learning technology. It aims to further promote the development of SAR image intelligent interpretation technology. A synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day and all-weather working capacity gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address the above significant challenges and present their innovative and cutting-edge research results on applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews, and technical reports.
LIPIcs, Volume 274, ESA 2023, Complete Volume