108 research outputs found

    An Unsupervised Generative Neural Approach for InSAR Phase Filtering and Coherence Estimation

    Full text link
    Phase filtering and pixel-quality (coherence) estimation are critical in producing Digital Elevation Models (DEMs) from Interferometric Synthetic Aperture Radar (InSAR) images, as they remove spatial inconsistencies (residues) and greatly improve the subsequent unwrapping. Large volumes of InSAR data facilitate Wide Area Monitoring (WAM) over geographical regions. Advances in parallel computing have accelerated Convolutional Neural Networks (CNNs), giving them advantages over human performance in visual pattern recognition, which makes CNNs a good choice for WAM. Nevertheless, this line of research remains largely unexplored. We thus propose "GenInSAR", a CNN-based generative model for joint phase filtering and coherence estimation that directly learns the InSAR data distribution. Trained without supervision on satellite and simulated noisy InSAR images, GenInSAR outperforms five other related methods in total residue reduction (over 16.5% better on average) with less over-smoothing and fewer artefacts around branch cuts. GenInSAR's phase root-mean-squared error, coherence root-mean-squared error, and phase cosine error show average improvements of 0.54, 0.07, and 0.05, respectively, over the related methods. Comment: to be published in a future issue of IEEE Geoscience and Remote Sensing Letters.
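
    A quick way to see what "total residue reduction" measures: residues are 2x2 pixel loops in the wrapped phase whose summed wrapped differences do not cancel. The sketch below is a minimal NumPy illustration of counting them (the function names and the random test image are ours, not from the paper).

        import numpy as np

        def wrap(p):
            # Wrap phase values into (-pi, pi].
            return np.angle(np.exp(1j * p))

        def count_residues(phase):
            # Sum the wrapped phase differences around every 2x2 loop;
            # a non-zero sum (+/-2*pi) marks a residue that hampers unwrapping.
            d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # top edge
            d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])     # right edge
            d3 = wrap(phase[1:, :-1] - phase[1:, 1:])     # bottom edge
            d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])   # left edge
            loop_sum = d1 + d2 + d3 + d4
            return int(np.sum(np.abs(loop_sum) > np.pi))

        # Pure noise contains many residues; a well-filtered interferogram should contain few.
        noisy_phase = np.random.uniform(-np.pi, np.pi, (256, 256))
        print("residues:", count_residues(noisy_phase))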

    An interferometric phase noise reduction method based on modified denoising convolutional neural network

    Get PDF
    Traditional interferometric synthetic aperture radar (InSAR) denoising methods normally try to estimate the phase fringes directly from the noisy interferogram. Since the statistics of phase noise are more stable than those of the phase over complex terrain, it can be easier to estimate the phase noise instead. In this paper, the phase noise rather than the phase fringes is estimated first and then subtracted from the noisy interferometric phase for denoising. The denoising convolutional neural network (DnCNN) is introduced to estimate the phase noise, and a modified network called IPDnCNN is constructed for the problem. Based on the IPDnCNN, a novel interferometric phase noise reduction algorithm is proposed that reduces phase noise while protecting fringe edges and avoiding the use of filter windows. Experimental results on simulated and real data demonstrate the effectiveness of the proposed method.
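
    The residual strategy above (estimate the noise, then subtract it) can be sketched as follows; the stand-in "network" here is just a crude high-pass residual of a 3x3 circular mean, used only so the example runs, and all names are illustrative rather than the paper's IPDnCNN.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def wrap(p):
            return np.angle(np.exp(1j * p))

        def estimate_phase_noise(noisy_phase):
            # Placeholder for the trained noise-estimation network: it should
            # output the phase-noise component, not the fringes themselves.
            re = uniform_filter(np.cos(noisy_phase), size=3)
            im = uniform_filter(np.sin(noisy_phase), size=3)
            smooth = np.arctan2(im, re)
            return wrap(noisy_phase - smooth)

        def denoise_interferogram(noisy_phase):
            # Estimate the noise first, subtract it from the noisy phase, re-wrap.
            noise_hat = estimate_phase_noise(noisy_phase)
            return wrap(noisy_phase - noise_hat)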

    Recent Progress in Image Deblurring

    Full text link
    This paper comprehensively reviews recent developments in image deblurring, including non-blind/blind and spatially invariant/variant techniques. These techniques share the same objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally derive an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must deliver high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how the ill-posedness, a crucial issue in deblurring tasks, is handled, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse-representation-based methods, homography-based modeling, and region-based methods. Despite considerable progress, image deblurring, especially the blind case, is limited by complex application conditions that make the blur kernel hard to obtain and often spatially variant. We provide a holistic understanding and deep insight into image deblurring in this review. An analysis of the empirical evidence for representative methods, practical issues, and a discussion of promising future directions are also presented. Comment: 53 pages, 17 figures.
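
    As a concrete baseline for the non-blind, spatially invariant case surveyed above, the following sketch performs classical Wiener deconvolution with a known blur kernel; it is a textbook illustration under an assumed noise-to-signal ratio, not one of the reviewed methods.

        import numpy as np

        def wiener_deblur(blurry, kernel, nsr=1e-2):
            # Non-blind deblurring: divide by the kernel in the Fourier domain,
            # regularized by an assumed noise-to-signal ratio (nsr).
            H = np.fft.fft2(kernel, s=blurry.shape)
            B = np.fft.fft2(blurry)
            W = np.conj(H) / (np.abs(H) ** 2 + nsr)
            return np.real(np.fft.ifft2(W * B))

        # Toy example: blur a random image with a 9x9 box kernel, then restore it.
        rng = np.random.default_rng(0)
        sharp = rng.random((128, 128))
        kernel = np.ones((9, 9)) / 81.0
        blurry = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel, s=sharp.shape)))
        restored = wiener_deblur(blurry, kernel)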

    Phi-Net: Deep Residual Learning for InSAR Parameters Estimation

    Get PDF
    Nowadays, deep learning (DL) finds application in a large number of scientific fields, among them the estimation and enhancement of signals corrupted by noise of different natures. In this article, we address the problem of estimating interferometric parameters from synthetic aperture radar (SAR) data. In particular, we combine convolutional neural networks with the concept of residual learning to define a novel architecture, named Phi-Net, for the joint estimation of the interferometric phase and coherence. Phi-Net is trained using synthetic data obtained by an innovative strategy based on theoretical modeling of the physics behind the SAR acquisition principle. This strategy allows the network to generalize the estimation problem with respect to: 1) different noise levels; 2) the nature of the imaged target on the ground; and 3) the acquisition geometry. We then analyze the Phi-Net performance on an independent data set of synthesized interferometric data, as well as on real InSAR data from the TanDEM-X and Sentinel-1 missions. The proposed architecture provides better results than state-of-the-art InSAR algorithms on both synthetic and real test data. Finally, we perform an application-oriented study on the retrieval of topographic information, which shows that Phi-Net is a strong candidate for the generation of high-quality digital elevation models at a resolution close to that of the original single-look complex data.
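
    A simple way to generate the kind of synthetic training data described above is to draw single-look complex pixel pairs whose correlation equals the desired coherence; the sketch below shows one standard such simulation (our own illustrative code, not Phi-Net's actual data-generation pipeline).

        import numpy as np

        def simulate_interferogram(true_phase, coherence, rng=None):
            # Draw two correlated circular complex Gaussian pixels per location,
            # with correlation magnitude `coherence`, and form their interferogram.
            rng = rng or np.random.default_rng()
            shape = true_phase.shape
            c = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
            n = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
            s1 = c
            s2 = (coherence * c + np.sqrt(1 - coherence ** 2) * n) * np.exp(1j * true_phase)
            return np.angle(s2 * np.conj(s1))   # noisy, wrapped version of true_phase

        # Example: a smooth fringe ramp observed at moderate coherence.
        y, x = np.mgrid[0:256, 0:256]
        clean = np.angle(np.exp(1j * 2 * np.pi * x / 64.0))
        noisy = simulate_interferogram(clean, coherence=0.6)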

    Lensless hyperspectral imaging by Fourier transform spectroscopy for broadband visible light: phase retrieval technique

    Get PDF
    A novel phase retrieval algorithm for broadband hyperspectral phase imaging from noisy intensity observations is proposed. It exploits the advantages of Fourier transform spectroscopy in a self-referencing optical setup and, in addition to the spectral intensity distribution, provides reconstruction of the investigated object's phase. The noise-amplification (Fellgett's) disadvantage is relaxed by sparse wavefront noise filtering embedded in the proposed algorithm. The algorithm's reliability is demonstrated by simulation tests and by physical experiments on transparent objects, which show precise phase imaging and object depth (profile) reconstruction. Comment: 12 pages, 8 figures.
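
    The Fourier-transform-spectroscopy part of the pipeline is easy to illustrate: the spectral intensity is recovered (up to scaling) by Fourier transforming the interferogram recorded over optical path difference. The sketch below shows only this classical step, with made-up sampling values; the paper's phase retrieval and sparse wavefront filtering are not reproduced.

        import numpy as np

        def spectrum_from_interferogram(interferogram, step):
            # `step` is the optical-path-difference sampling interval.
            signal = interferogram - interferogram.mean()        # remove the DC term
            spectrum = np.abs(np.fft.rfft(signal))
            wavenumbers = np.fft.rfftfreq(signal.size, d=step)   # cycles per unit length
            return wavenumbers, spectrum

        # Example: a two-line source sampled every 50 nm of path difference.
        opd = np.arange(2048) * 50e-9
        interferogram = (1 + np.cos(2 * np.pi * opd / 633e-9)
                           + np.cos(2 * np.pi * opd / 532e-9))
        k, s = spectrum_from_interferogram(interferogram, step=50e-9)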

    A Low-Complexity Bayesian Estimation Scheme for Speckle Suppression in Images

    Get PDF
    Speckle noise reduction is a crucial pre-processing step for the successful interpretation of images corrupted by speckle noise, and it has thus drawn a great deal of attention from researchers in the image processing community. Bayesian estimation is a powerful signal estimation technique and has been widely used for speckle noise removal in images. In Bayesian-estimation-based despeckling techniques, the choice of suitable signal and noise models and the development of a shrinkage function for estimating the signal are the major concerns from the standpoint of the accuracy and computational complexity of the estimation. In this thesis, a low-complexity wavelet-based Bayesian estimation technique for despeckling of images is developed. The main idea of the proposed technique is to establish suitable statistical models for the wavelet coefficients of the additively decomposed components, namely the reflectance image and the signal-dependent noise, of the multiplicative degradation model of the noisy image, and then to use these two statistical models to develop a shrinkage function, with a low-complexity realization, for estimating the wavelet coefficients of the noise-free image. A study is undertaken to explore the effectiveness of using a two-sided exponential distribution as a prior statistical model for the discrete wavelet transform (DWT) coefficients of the signal-dependent noise. This model, along with the Cauchy distribution, which is known to be a good model for the wavelet coefficients of the reflectance image, is used to develop a minimum mean square error (MMSE) Bayesian estimator for the DWT coefficients of the noise-free image. A low-cost realization of the shrinkage function resulting from the MMSE Bayesian estimation is proposed, and its efficacy is verified in terms of accuracy as well as computational cost. The performance of the proposed despeckling scheme is evaluated on both synthetic and real SAR images using commonly used metrics, and the results are compared to those of other state-of-the-art despeckling schemes in the literature. The experimental results demonstrate the validity of the proposed despeckling scheme in providing a significant reduction in speckle noise at very low computational cost while simultaneously preserving image details.
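
    The overall wavelet-domain despeckling pipeline described above can be sketched as follows; the soft threshold here is a deliberately simple placeholder for the thesis's MMSE shrinkage function (derived from the Cauchy and two-sided exponential models), and the wavelet, level, and threshold values are arbitrary.

        import numpy as np
        import pywt

        def shrink(coeffs, thr):
            # Placeholder shrinkage: plain soft-thresholding.
            return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)

        def despeckle(noisy, wavelet="db2", level=3, thr=0.1):
            # Forward DWT, shrink the detail subbands, inverse DWT.
            coeffs = pywt.wavedec2(noisy, wavelet, level=level)
            coeffs = [coeffs[0]] + [tuple(shrink(c, thr) for c in band)
                                    for band in coeffs[1:]]
            return pywt.waverec2(coeffs, wavelet)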

    Models and Methods for Estimation and Filtering of Signal-Dependent Noise in Imaging

    Get PDF
    The work presented in this thesis focuses on Image Processing, that is, the branch of Signal Processing that centers its interest on images, sequences of images, and videos. It has various applications: imaging for traditional cameras, medical imaging, e.g., X-ray and magnetic resonance imaging (MRI), infrared imaging (thermography), e.g., for security purposes, astronomical imaging for space exploration, three-dimensional (video+depth) signal processing, and many more.
    This thesis covers a small but relevant slice that is transversal to this vast pool of applications: noise estimation and denoising. To appreciate the relevance of this thesis, it is essential to understand why noise is such an important part of Image Processing. Every acquisition device and every measurement is subject to interferences that cause random fluctuations in the acquired signals. If not taken into consideration with a suitable mathematical approach, these fluctuations might invalidate any use of the acquired signal. Consider, for example, an MRI used to detect a possible condition; if not suitably processed and filtered, the image could lead to a wrong diagnosis. Therefore, before any acquired image is sent to an end-user (machine or human), it undergoes several processing steps. Noise estimation and denoising are usually part of these fundamental steps.
    Some sources of noise can be removed by suitably modeling the acquisition process of the camera and developing hardware based on that model. Other sources of noise are instead inevitable: high/low light conditions of the acquired scene, hardware imperfections, temperature of the device, etc. To remove noise from an image, the noise characteristics have to be estimated first. The branch of image processing that fulfills this role is called noise estimation. Then, it is possible to remove the noise artifacts from the acquired image. This process is referred to as denoising.
    For practical reasons, it is convenient to model noise as random variables. In this way, we assume that the noise fluctuations take values whose probabilities follow specific distributions characterized by only a few parameters. These are the parameters that we estimate. We focus our attention on noise modeled by Gaussian distributions, Poisson distributions, or a combination of these. These distributions are adopted for modeling noise affecting images from digital cameras, microscopes, telescopes, radiography systems, thermal cameras, depth-sensing cameras, etc. The parameters that define a Gaussian distribution are its mean and its variance, while a Poisson distribution depends only on its mean, since its variance is equal to the mean (signal-dependent variance). Consequently, the parameters of a Poisson-Gaussian distribution describe the relation between the intensity of the noise-free signal and the variance of the noise affecting it. Degradation models of this kind are referred to as signal-dependent noise.
    Estimation of signal-dependent noise is commonly performed by individually processing groups of pixels with equal intensity in order to sample the aforementioned relation between signal mean and noise variance. Such sampling is often subject to outliers; we propose a robust estimation model where the noise parameters are estimated by optimizing a likelihood function that models the local variance estimates from each group of pixels as mixtures of Gaussian and Cauchy distributions. The proposed model is general and applicable to a variety of signal-dependent noise models, including possible clipping of the data. We also show that, under certain hypotheses, the relation between signal mean and noise variance can also be effectively sampled from groups of pixels of possibly different intensities.
    Then, we propose a spatially adaptive transform to improve the denoising performance of a specific class of filters, namely nonlocal transform-domain collaborative filters. In particular, the proposed transform exploits the spatial coordinates of nonlocal similar features of an image to better decorrelate the data and consequently improve the filtering. Unlike non-adaptive transforms, the proposed spatially adaptive transform is capable of representing spatially smooth coarse-scale variations in the similar features of the image. Further, based on the same paradigm, we propose a method that adaptively enhances local image features depending on their orientation with respect to the relative coordinates of other similar features at other locations in the image.
    An established approach for removing Poisson noise utilizes so-called variance-stabilizing transformations (VSTs) to make the noise variance independent of the mean of the signal, hence enabling denoising by a standard denoiser for additive Gaussian noise. Within this framework, we propose an iterative method where, at each iteration, the previous estimate is summed back to the noisy image in order to improve the stabilizing performance of the transformation and consequently the denoising results. The proposed iterative procedure makes it possible to circumvent the typical drawbacks that VSTs experience at very low intensities, and thus allows us to apply the standard denoiser effectively even at extremely low counts.
    The developed methods achieve state-of-the-art results in their respective fields of application.
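
    The iterative variance-stabilization idea mentioned in the last part can be sketched roughly as below, using the classical Anscombe transform and a plain Gaussian filter as a placeholder Gaussian-noise denoiser; the mixing weights, the exact unbiased inverse, and the actual denoiser of the published method are not reproduced here.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def anscombe(z):
            # Classical VST for Poisson data: stabilized noise variance ~ 1.
            return 2.0 * np.sqrt(np.maximum(z + 3.0 / 8.0, 0.0))

        def inverse_anscombe(d):
            # Simple algebraic inverse (an exact unbiased inverse is preferable at low counts).
            return (d / 2.0) ** 2 - 3.0 / 8.0

        def iterative_vst_denoise(noisy, iterations=4):
            # Each pass mixes the previous estimate back with the noisy data
            # before stabilization, then denoises in the stabilized domain.
            estimate = noisy.astype(float)
            for i in range(iterations):
                lam = 1.0 / (i + 1)                        # illustrative mixing weight
                combined = lam * noisy + (1.0 - lam) * estimate
                denoised = gaussian_filter(anscombe(combined), sigma=1.0)
                estimate = np.maximum(inverse_anscombe(denoised), 0.0)
            return estimate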

    A reliable order-statistics-based approximate nearest neighbor search algorithm

    Full text link
    We propose a new algorithm for fast approximate nearest neighbor search based on the properties of ordered vectors. Data vectors are classified based on the index and sign of their largest components, thereby partitioning the space into a number of cones centered at the origin. The query is itself classified, and the search starts from the selected cone and proceeds to neighboring ones. Overall, the proposed algorithm corresponds to locality-sensitive hashing in the space of directions, with hashing based on the order of the components. Thanks to the statistical features that emerge through ordering, it deals very well with the challenging case of unstructured data and is a valuable building block for more complex techniques dealing with structured data. Experiments on both simulated and real-world data show that the proposed algorithm provides state-of-the-art performance.
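
    The cone partitioning described above can be illustrated with a toy index: bucket each vector by the index and sign of its largest-magnitude component, then answer a query by scanning its own bucket. This is our own simplified sketch; the actual algorithm's visiting order over neighboring cones and its stopping rules are not reproduced.

        import numpy as np

        def cone_key(v):
            # One cone per (index, sign) of the largest-magnitude component.
            i = int(np.argmax(np.abs(v)))
            return (i, 1 if v[i] >= 0 else -1)

        def build_index(data):
            buckets = {}
            for idx, v in enumerate(data):
                buckets.setdefault(cone_key(v), []).append(idx)
            return buckets

        def query(q, data, buckets):
            # Search only the query's own cone; fall back to brute force if empty.
            candidates = buckets.get(cone_key(q), []) or range(len(data))
            return min(candidates, key=lambda i: np.linalg.norm(data[i] - q))

        rng = np.random.default_rng(0)
        data = rng.standard_normal((1000, 16))
        buckets = build_index(data)
        print(query(rng.standard_normal(16), data, buckets))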