193 research outputs found

    Denoising of Locally Received NOAA images for Remote Sensing Applications

    Remote sensing is the capture of images of the Earth's surface using satellites. It finds applications in agriculture, climate studies, forest-fire detection, pollution monitoring, and oceanography. In this paper, NOAA images are considered as remote sensing images. The NOAA images are received directly with an L-band antenna located at Sri Venkateswara University, Tirupati, Andhra Pradesh, India. The received NOAA images are denoised using spatial- and frequency-domain denoising techniques with modified soft thresholding. The proposed thresholding technique preserves the green content of the image even after denoising, which increases the accuracy of results in remote sensing applications. A performance comparison shows that the proposed techniques outperform existing methods.
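    The soft-thresholding operation at the core of such denoising pipelines can be sketched as follows. The channel-weighted variant and its `green_weight` parameter are illustrative assumptions, not the paper's exact modification:

```python
import numpy as np

def soft_threshold(c, t):
    """Soft thresholding: shrink coefficient magnitudes toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def green_preserving_threshold(c, t, green_weight=0.5):
    """Hypothetical sketch of a green-preserving modification: give the
    green channel a smaller effective threshold (t * green_weight) so its
    coefficients shrink less and green content survives denoising."""
    return soft_threshold(c, t * green_weight)

coeffs = np.array([4.0, -2.5, 0.8])
print(soft_threshold(coeffs, 1.0))              # magnitudes reduced by 1, small ones zeroed
print(green_preserving_threshold(coeffs, 1.0))  # milder shrinkage for the protected channel
```

    In practice the thresholding is applied to wavelet or frequency-domain coefficients of each channel rather than to raw pixel values.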

    EDiffSR: An Efficient Diffusion Probabilistic Model for Remote Sensing Image Super-Resolution

    Recently, convolutional networks have achieved remarkable progress in remote sensing image Super-Resolution (SR) by minimizing regression objectives, e.g., MSE loss. Despite their impressive performance, these methods often suffer from poor visual quality and over-smoothing. Generative adversarial networks have the potential to infer intricate details, but they are prone to collapse, resulting in undesirable artifacts. To mitigate these issues, in this paper we introduce a Diffusion Probabilistic Model (DPM) for efficient remote sensing image SR, dubbed EDiffSR. EDiffSR is easy to train and retains the merits of DPMs in generating perceptually pleasant images. Specifically, unlike previous works that use a heavy UNet for noise prediction, we develop an Efficient Activation Network (EANet) that achieves favorable noise-prediction performance through simplified channel attention and a simple gate operation, which dramatically reduces the computational budget. Moreover, to introduce more valuable prior knowledge into EDiffSR, a practical Conditional Prior Enhancement Module (CPEM) is developed to help extract an enriched condition. Unlike most DPM-based SR models, which generate conditions directly by amplifying LR images, the proposed CPEM retains more informative cues for accurate SR. Extensive experiments on four remote sensing datasets demonstrate, both quantitatively and qualitatively, that EDiffSR can restore visually pleasant images from simulated and real-world remote sensing data. The code of EDiffSR will be available at https://github.com/XY-boy/EDiffSR. Comment: Submitted to IEEE TGR
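    The diffusion framework underlying models like EDiffSR rests on a forward noising process and a noise-prediction objective. The sketch below uses a generic linear beta schedule and a placeholder predictor, both assumptions for illustration; EDiffSR's actual EANet and CPEM are described in the paper and repository:

```python
import numpy as np

rng = np.random.default_rng(0)

# A generic linear beta schedule -- an assumption for illustration only.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bars = np.cumprod(1.0 - betas)

def q_sample(x0, t, eps):
    """Forward diffusion: x_t = sqrt(abar_t)*x0 + sqrt(1 - abar_t)*eps."""
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

# One training step of the noise-prediction objective: the network sees a
# noisy x_t (plus conditioning) and regresses the noise eps with plain MSE.
x0 = rng.normal(size=(8, 8))          # stand-in for an HR training patch
eps = rng.normal(size=(8, 8))
x_t = q_sample(x0, 500, eps)
eps_pred = np.zeros_like(eps)         # placeholder for the noise-prediction net
loss = np.mean((eps - eps_pred) ** 2)
print(x_t.shape, loss > 0.0)
```

    Sampling then runs the process in reverse, repeatedly denoising from pure noise while conditioning on the LR input.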

    A Hamiltonian Monte Carlo method for non-smooth energy sampling

    Efficient sampling from high-dimensional distributions is a challenging issue that is encountered in many large data recovery problems. In this context, sampling using Hamiltonian dynamics is one of the recent techniques proposed to exploit the geometry of the target distribution. Such schemes have clearly been shown to be efficient for multidimensional sampling but are restricted to distributions from the exponential family with smooth energy functions. In this paper, we address the problem of using Hamiltonian dynamics to sample from probability distributions having non-differentiable energy functions, such as those based on the l1 norm. Such distributions are used intensively in sparse signal and image recovery applications. The technique studied in this paper uses a modified leapfrog transform involving a proximal step. The resulting nonsmooth Hamiltonian Monte Carlo method is tested and validated in a number of experiments. Results show its ability to accurately sample from various multivariate target distributions. The proposed technique is illustrated on synthetic examples and applied to an image denoising problem.
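    For the l1 energy, the proximal step reduces to componentwise soft thresholding. Below is a minimal sketch of a proximal-style leapfrog update; it is an illustration of the idea, not the paper's exact scheme (the scaling of the proximal term is an assumption):

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal operator of lam*||x||_1: componentwise soft thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_leapfrog_step(x, p, eps, lam):
    """One leapfrog-like update where the (undefined) gradient of the
    nonsmooth potential U(x) = lam*||x||_1 is replaced by a proximal
    mapping. An illustrative sketch, not the paper's exact scheme."""
    p_half = p - 0.5 * (x - prox_l1(x, lam))   # momentum half-step
    x_new = x + eps * p_half                   # position full step
    p_new = p_half - 0.5 * (x_new - prox_l1(x_new, lam))
    return x_new, p_new

x, p = np.array([1.5, -0.2]), np.zeros(2)
x1, p1 = prox_leapfrog_step(x, p, eps=0.1, lam=1.0)
print(x1, p1)
```

    A full sampler would wrap such steps in a Metropolis-Hastings accept/reject test to correct for discretization error.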

    Unsupervised Image Restoration Using Partially Linear Denoisers.

    Deep neural network based methods are the state of the art in various image restoration problems. Standard supervised learning frameworks require a set of pairs of noisy measurements and clean images, for which a distance between the output of the restoration model and the ground-truth clean image is minimized. The ground-truth images, however, are often unavailable or very expensive to acquire in real-world applications. We circumvent this problem by proposing a class of structured denoisers that can be decomposed as the sum of a nonlinear image-dependent mapping, a linear noise-dependent term, and a small residual term. We show that these denoisers can be trained with only noisy images, under the condition that the noise has zero mean and known variance; the exact distribution of the noise is not assumed to be known. We show the superiority of our approach for image denoising and demonstrate its extension to other restoration problems, such as image deblurring, where the ground truth is not available. Our method outperforms some recent unsupervised and self-supervised deep denoising models that do not require clean images for training. For deblurring, the method, using only one noisy and blurry observation per image, reaches a quality close to that of its fully supervised counterparts on a benchmark dataset.
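    The linear noise-dependent term is what makes noisy-only training tractable: for a denoiser whose noise dependence is linear with matrix B, the expected noisy-target loss differs from the expected clean-target loss only by a constant determined by the known variance and B. A small Monte Carlo check of this identity, using a toy affine denoiser as a stand-in for the paper's learned decomposition (an assumption, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma, trials = 4, 0.3, 200_000
x = rng.normal(size=d)                        # fixed "clean image"
B = 0.5 * np.eye(d) + 0.05 * rng.normal(size=(d, d))
c = 0.1 * rng.normal(size=d)
# Toy affine denoiser f(y) = c + B y -- partially linear by construction.
n = rng.normal(0.0, sigma, size=(trials, d))  # zero-mean noise, known variance
y = x + n
f_y = y @ B.T + c

noisy_loss = np.mean(np.sum((f_y - y) ** 2, axis=1))   # computable without x
clean_loss = np.mean(np.sum((f_y - x) ** 2, axis=1))   # needs the ground truth
# Identity: E||f(y)-y||^2 = E||f(y)-x||^2 + sigma^2*d - 2*sigma^2*tr(B)
correction = sigma**2 * d - 2 * sigma**2 * np.trace(B)
print(abs(noisy_loss - (clean_loss + correction)) < 0.02)
```

    Since the correction depends only on the known variance and the linear term, minimizing the noisy-target loss is, up to a constant, equivalent to minimizing the supervised loss.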

    Variational Approach for the Reconstruction of Damaged Optical Satellite Images Through Their Co-Registration with Synthetic Aperture Radar

    In this paper the problem of reconstruction of damaged multi-band optical images is studied in the case where we have no information about the brightness of such images in the damage region. Mostly motivated by the crop field monitoring problem, we propose a new variational approach for exact reconstruction of damaged multi-band images using results of their co-registration with Synthetic Aperture Radar (SAR) images of the same regions. We discuss the consistency of the proposed problem, give the scheme for its regularization, derive the corresponding optimality system, and describe in detail the algorithm for the practical implementation of the reconstruction procedure.

    SONAR Images Denoising


    Spatial Images Feature Extraction Based on Bayesian Nonlocal Means Filter and Improved Contourlet Transform

    Spatial images are inevitably mixed with different levels of noise and distortion. The contourlet transform can provide multidimensional sparse representations of images in a discrete domain. Because of its filter structure, the contourlet transform is not translation-invariant. In this paper, we use a nonsubsampled pyramid structure and a nonsubsampled directional filter to achieve multidimensional and translation-invariant image decomposition for spatial images. A nonsubsampled contourlet transform is used as the basis for an improved Bayesian nonlocal means (NLM) filter applied at different frequencies. The Bayesian model incorporates a sigma range into the image prior, which more effectively protects image details. The NLM filter retains image edge content by assigning greater weight to similarities between edge pixels. Experimental results on both standard and spatial images confirm that the proposed algorithm yields significantly better performance than nonsubsampled wavelet transform, contourlet, and curvelet approaches.
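    The NLM weighting described above can be sketched for a single pixel as follows; the window sizes and smoothing parameter `h` are illustrative choices, not the paper's settings:

```python
import numpy as np

def nlm_pixel(img, i, j, patch=1, search=3, h=0.6):
    """Denoise one pixel by nonlocal means: average pixels in a search
    window, weighted by patch similarity exp(-||P_i - P_j||^2 / h^2), so
    pixels whose neighborhoods match (e.g. along the same edge) contribute
    most. Window sizes and h are illustrative choices."""
    H, W = img.shape
    ref = img[i - patch:i + patch + 1, j - patch:j + patch + 1]
    num = den = 0.0
    for r in range(i - search, i + search + 1):
        for cc in range(j - search, j + search + 1):
            if patch <= r < H - patch and patch <= cc < W - patch:
                cand = img[r - patch:r + patch + 1, cc - patch:cc + patch + 1]
                w = np.exp(-np.sum((cand - ref) ** 2) / h ** 2)
                num += w * img[r, cc]
                den += w
    return num / den

rng = np.random.default_rng(0)
clean = np.zeros((16, 16)); clean[:, 8:] = 1.0     # vertical edge at column 8
noisy = clean + rng.normal(0.0, 0.1, clean.shape)
left, right = nlm_pixel(noisy, 8, 4), nlm_pixel(noisy, 8, 12)
print(left, right)   # the edge stays sharp: left stays near 0, right near 1
```

    The paper applies this weighting per frequency band of the nonsubsampled contourlet decomposition rather than to raw pixels.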

    State of the Art on Diffusion Models for Visual Computing

    The field of visual computing is rapidly advancing due to the emergence of generative artificial intelligence (AI), which unlocks unprecedented capabilities for the generation, editing, and reconstruction of images, videos, and 3D scenes. In these domains, diffusion models are the generative AI architecture of choice. Within the last year alone, the literature on diffusion-based tools and applications has seen exponential growth, and relevant papers are published across the computer graphics, computer vision, and AI communities, with new works appearing daily on arXiv. This rapid growth of the field makes it difficult to keep up with all recent developments. The goal of this state-of-the-art report (STAR) is to introduce the basic mathematical concepts of diffusion models and the implementation details and design choices of the popular Stable Diffusion model, as well as to give an overview of important aspects of these generative AI tools, including personalization, conditioning, and inversion, among others. Moreover, we give a comprehensive overview of the rapidly growing literature on diffusion-based generation and editing, categorized by the type of generated medium, including 2D images, videos, 3D objects, locomotion, and 4D scenes. Finally, we discuss available datasets, metrics, open challenges, and social implications. This STAR provides an intuitive starting point from which researchers, artists, and practitioners alike can explore this exciting topic.