
    GRDN:Grouped Residual Dense Network for Real Image Denoising and GAN-based Real-world Noise Modeling

    Full text link
    Recent research on image denoising has progressed with the development of deep learning architectures, especially convolutional neural networks. However, real-world image denoising is still very challenging because it is not possible to obtain ideal pairs of ground-truth images and real-world noisy images. Owing to the recent release of benchmark datasets, the interest of the image denoising community is now moving toward the real-world denoising problem. In this paper, we propose a grouped residual dense network (GRDN), which is an extended and generalized architecture of the state-of-the-art residual dense network (RDN). The core part of RDN is defined as a grouped residual dense block (GRDB) and used as a building module of GRDN. We experimentally show that the image denoising performance can be significantly improved by cascading GRDBs. In addition to the network architecture design, we also develop a new generative adversarial network-based real-world noise modeling method. We demonstrate the superiority of the proposed methods by achieving the highest score in terms of both peak signal-to-noise ratio and structural similarity in the NTIRE2019 Real Image Denoising Challenge - Track 2: sRGB. Comment: To appear in the CVPR 2019 workshops; winner of the NTIRE2019 Real Image Denoising Challenge, Track 2: sRGB.
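
    As a rough illustration of the cascading idea only, the sketch below stacks residual dense blocks (RDBs) into grouped blocks (GRDBs) and cascades the groups with residual connections. The channel widths, block counts, and the down/up-sampling and attention modules of the actual GRDN are simplified or omitted here and should be read as assumptions, not the published architecture.

```python
# Minimal sketch of cascading grouped residual dense blocks (GRDBs).
# Channel counts, growth rate, and the down/up-sampling and attention
# modules of the published GRDN are simplified assumptions here.
import torch
import torch.nn as nn

class RDB(nn.Module):
    """Residual dense block: densely connected convs + local fusion."""
    def __init__(self, channels=64, growth=32, layers=4):
        super().__init__()
        self.convs = nn.ModuleList()
        c = channels
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(c, growth, 3, padding=1), nn.ReLU(inplace=True)))
            c += growth
        self.fuse = nn.Conv2d(c, channels, 1)  # local feature fusion

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))  # local residual

class GRDB(nn.Module):
    """Group of RDBs whose outputs are fused, with a residual connection."""
    def __init__(self, channels=64, num_rdbs=4):
        super().__init__()
        self.rdbs = nn.ModuleList(RDB(channels) for _ in range(num_rdbs))
        self.fuse = nn.Conv2d(channels * num_rdbs, channels, 1)

    def forward(self, x):
        outs, h = [], x
        for rdb in self.rdbs:
            h = rdb(h)
            outs.append(h)
        return x + self.fuse(torch.cat(outs, dim=1))

class GRDNSketch(nn.Module):
    """Cascade of GRDBs between shallow feature extraction and reconstruction."""
    def __init__(self, channels=64, num_grdbs=4):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.body = nn.Sequential(*[GRDB(channels) for _ in range(num_grdbs)])
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, noisy):
        return noisy + self.tail(self.body(self.head(noisy)))  # global residual

if __name__ == "__main__":
    x = torch.randn(1, 3, 64, 64)
    print(GRDNSketch()(x).shape)  # torch.Size([1, 3, 64, 64])
```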

    Invertible generative models for inverse problems: mitigating representation error and dataset bias

    Full text link
    Trained generative models have shown remarkable performance as priors for inverse problems in imaging -- for example, Generative Adversarial Network priors permit recovery of test images from 5-10x fewer measurements than sparsity priors. Unfortunately, these models may be unable to represent any particular image because of architectural choices, mode collapse, and bias in the training dataset. In this paper, we demonstrate that invertible neural networks, which have zero representation error by design, can be effective natural signal priors for inverse problems such as denoising, compressive sensing, and inpainting. Given a trained generative model, we study the empirical risk formulation of the desired inverse problem under a regularization that promotes high-likelihood images, either directly by penalization or algorithmically by initialization. For compressive sensing, invertible priors can yield higher accuracy than sparsity priors across almost all undersampling ratios, and due to their lack of representation error, invertible priors can yield better reconstructions than GAN priors for images that have rare features of variation within the biased training set, including out-of-distribution natural images. We additionally compare performance for compressive sensing to unlearned methods, such as the deep decoder, and we establish theoretical bounds on expected recovery error in the case of a linear invertible model. Comment: Camera-ready version for ICML 2020, paper 265
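
    A minimal sketch of the described empirical risk formulation, assuming a toy invertible map (a fixed orthogonal matrix) in place of a trained flow: the latent code is optimized so that the forward-measured reconstruction matches the observations, with a Gaussian-latent penalty standing in for the high-likelihood regularizer.

```python
# Minimal sketch of compressive sensing with an invertible prior:
# optimize the latent z of an invertible map G so that A G(z) matches the
# measurements y, while a Gaussian-latent penalty ||z||^2 promotes
# high-likelihood images. The orthogonal-matrix "generator" is a stand-in
# assumption; the paper uses a trained invertible (flow) network.
import torch

n, m = 256, 64                              # signal dimension, number of measurements
torch.manual_seed(0)
Q, _ = torch.linalg.qr(torch.randn(n, n))   # toy invertible generator G(z) = Q z
A = torch.randn(m, n) / m ** 0.5            # compressive measurement matrix
x_true = Q @ torch.randn(n)                 # a signal with an exact latent (zero representation error)
y = A @ x_true + 0.01 * torch.randn(m)      # noisy measurements

z = torch.zeros(n, requires_grad=True)      # algorithmic regularization: initialize at the latent origin
opt = torch.optim.Adam([z], lr=0.05)
lam = 1e-3                                  # weight of the likelihood-promoting penalty
for step in range(2000):
    opt.zero_grad()
    x_hat = Q @ z                           # G(z)
    loss = ((A @ x_hat - y) ** 2).sum() + lam * (z ** 2).sum()
    loss.backward()
    opt.step()

print("relative error:", (torch.norm(Q @ z - x_true) / torch.norm(x_true)).item())
```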

    Low Dose CT Image Denoising Using a Generative Adversarial Network with Wasserstein Distance and Perceptual Loss

    Full text link
    In this paper, we introduce a new CT image denoising method based on the generative adversarial network (GAN) with Wasserstein distance and perceptual similarity. The Wasserstein distance is a key concept of optimal transport theory and promises to improve the performance of the GAN. The perceptual loss compares the perceptual features of a denoised output against those of the ground truth in an established feature space, while the GAN helps migrate the data noise distribution from strong to weak. Therefore, our proposed method transfers knowledge of visual perception to the image denoising task and is capable of not only reducing the image noise level but also preserving critical information at the same time. Promising results have been obtained in our experiments with clinical CT images.
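
    The sketch below illustrates one plausible form of such a composite objective: a Wasserstein critic term plus a VGG-feature perceptual distance. The tiny critic, the loss weight, and the use of 3-channel inputs (CT slices are typically single-channel and replicated for VGG) are illustrative assumptions.

```python
# Hedged sketch of the composite loss: a Wasserstein critic score plus a
# VGG-feature perceptual distance. The critic/denoiser architectures and the
# loss weight are placeholder assumptions; in practice the VGG extractor is
# loaded with pretrained weights and kept frozen.
import torch
import torch.nn as nn
from torchvision.models import vgg19

vgg_feats = vgg19(weights=None).features[:16].eval()   # pretrained weights used in practice
for p in vgg_feats.parameters():
    p.requires_grad_(False)

def perceptual_loss(denoised, target):
    """MSE between feature maps in an established (VGG) feature space."""
    return nn.functional.mse_loss(vgg_feats(denoised), vgg_feats(target))

def generator_loss(critic, denoised, target, lam_perc=0.1):
    """WGAN generator term (maximize critic score) + perceptual term."""
    adv = -critic(denoised).mean()
    return adv + lam_perc * perceptual_loss(denoised, target)

def critic_loss(critic, denoised, target):
    """Wasserstein critic objective; a gradient penalty or weight clipping
    is also needed in practice to enforce the Lipschitz constraint."""
    return critic(denoised).mean() - critic(target).mean()

if __name__ == "__main__":
    critic = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.LeakyReLU(0.2),
                           nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
    fake = torch.rand(2, 3, 64, 64)   # denoised low-dose slices (placeholder)
    real = torch.rand(2, 3, 64, 64)   # normal-dose slices (placeholder)
    print(generator_loss(critic, fake, real).item(), critic_loss(critic, fake, real).item())
```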

    InverseNet: Solving Inverse Problems with Splitting Networks

    Full text link
    We propose a new method that uses deep learning techniques to solve inverse problems. The inverse problem is cast in the form of learning an end-to-end mapping from observed data to the ground truth. Inspired by the splitting strategy widely used in regularized iterative algorithms to tackle inverse problems, the mapping is decomposed into two networks, with one handling the inversion of the physical forward model associated with the data term and the other handling the denoising of the output from the former network, i.e., the inverted version, associated with the prior/regularization term. The two networks are trained jointly to learn the end-to-end mapping, avoiding two-step training. The training is annealed as the intermediate variable between the two networks bridges the gap between the input (the degraded version of the output) and the output, and progressively approaches the ground truth. The proposed network, referred to as InverseNet, is flexible in the sense that most existing end-to-end network structures can be leveraged in the first network and most existing denoising network structures can be used in the second one. Extensive experiments on both synthetic data and real datasets for motion deblurring, super-resolution, and colorization demonstrate the efficiency and accuracy of the proposed method compared with other image processing algorithms.
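
    A minimal sketch of the splitting idea under assumed toy components: a fixed blur as the physical forward model, one small network for the inversion (data term) and another for denoising the inverted result (prior term), trained jointly with supervision on both the intermediate and final outputs.

```python
# Minimal sketch of the splitting idea: one network approximately inverts the
# physical forward operator, a second network denoises/refines that inversion,
# and both are trained jointly. The toy forward operator (blur) and both
# network bodies are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def forward_model(x):
    """Toy physical degradation: a fixed 5x5 box blur."""
    k = torch.ones(3, 1, 5, 5) / 25.0
    return F.conv2d(x, k, padding=2, groups=3)

inversion_net = nn.Sequential(                     # handles the data term
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1))
denoise_net = nn.Sequential(                       # handles the prior/regularization term
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1))

opt = torch.optim.Adam(list(inversion_net.parameters()) +
                       list(denoise_net.parameters()), lr=1e-4)

x_gt = torch.rand(4, 3, 32, 32)                    # placeholder ground truth
y = forward_model(x_gt) + 0.05 * torch.randn_like(x_gt)

for step in range(5):                              # a few joint training steps
    opt.zero_grad()
    x_inv = inversion_net(y)                       # intermediate (inverted) variable
    x_hat = denoise_net(x_inv)
    # Supervising both the intermediate inversion and the final output lets the
    # intermediate variable progressively approach the ground truth.
    loss = F.mse_loss(x_inv, x_gt) + F.mse_loss(x_hat, x_gt)
    loss.backward()
    opt.step()
    print(step, loss.item())
```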

    Learning Deep Image Priors for Blind Image Denoising

    Full text link
    Image denoising is the process of removing noise from noisy images, which is an image-domain transfer task, i.e., from one or several noise-level domains to a photo-realistic domain. In this paper, we propose an effective image denoising method by learning two image priors from the perspective of domain alignment. We tackle the domain alignment on two levels: 1) the feature-level prior learns domain-invariant features for images corrupted with different noise levels; 2) the pixel-level prior pushes the denoised images toward the natural-image manifold. The two image priors are based on $\mathcal{H}$-divergence theory and implemented by learning classifiers in an adversarial training manner. We evaluate our approach on multiple datasets. The results demonstrate the effectiveness of our approach for robust image denoising on both synthetic and real-world noisy images. Furthermore, we show that the feature-level prior is capable of alleviating the discrepancy between different noise levels. It can be used to improve blind denoising performance in terms of distortion measures (PSNR and SSIM), while the pixel-level prior can effectively improve perceptual quality to ensure realistic outputs, which is further validated by subjective evaluation.
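
    The sketch below shows one way the two adversarial priors could be wired, assuming a gradient-reversal layer for the feature-level domain classifier and a standard real/fake discriminator for the pixel-level prior; the networks, labels, and loss weights are placeholders rather than the paper's exact formulation.

```python
# Hedged sketch of the two adversarial priors: a feature-level domain classifier
# (encouraging noise-level-invariant features via gradient reversal, one common
# adversarial-training implementation) and a pixel-level real/fake discriminator
# on the denoised output. All network bodies and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad

encoder = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
decoder = nn.Sequential(nn.Conv2d(32, 3, 3, padding=1))
feat_classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                nn.Linear(32, 2))           # which noise-level domain?
pixel_disc = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.LeakyReLU(0.2),
                           nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))

noisy = torch.rand(4, 3, 32, 32)
clean = torch.rand(4, 3, 32, 32)
domain = torch.randint(0, 2, (4,))                           # noise-level domain label

feats = encoder(noisy)
denoised = decoder(feats)

# Feature-level prior: the classifier tries to identify the noise level, while the
# reversed gradient trains the encoder to produce domain-invariant features.
feat_prior = F.cross_entropy(feat_classifier(GradReverse.apply(feats)), domain)
# Pixel-level prior: push denoised outputs toward the natural-image manifold.
pixel_prior = F.binary_cross_entropy_with_logits(pixel_disc(denoised), torch.ones(4, 1))
recon = F.l1_loss(denoised, clean)
total = recon + 0.1 * feat_prior + 0.01 * pixel_prior
print(total.item())
```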

    Blind Deconvolution Microscopy Using Cycle Consistent CNN with Explicit PSF Layer

    Full text link
    Deconvolution microscopy has been extensively used to improve the resolution of widefield fluorescence microscopy. Conventional approaches, which usually require point spread function (PSF) measurement or blind estimation, are, however, computationally expensive. Recently, CNN-based approaches have been explored as a fast and high-performance alternative. In this paper, we present a novel unsupervised deep neural network for blind deconvolution based on cycle consistency and PSF modeling layers. In contrast to recent CNN approaches for similar problems, the explicit PSF modeling layers improve the robustness of the algorithm. Experimental results confirm the efficacy of the algorithm.
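
    A rough sketch of cycle consistency with an explicit PSF modeling layer, assuming the PSF is a single learnable normalized blur kernel and the deconvolution network is a small CNN; the adversarial terms of the full method are omitted.

```python
# Rough sketch of cycle consistency with an explicit PSF modeling layer: the
# forward "blur" path is a single learnable convolution (the PSF), the reverse
# path is a deconvolution CNN, and both cycles are penalized. Kernel size,
# networks, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PSFLayer(nn.Module):
    """Explicit PSF as a learnable, normalized blur kernel."""
    def __init__(self, ksize=9):
        super().__init__()
        self.kernel = nn.Parameter(torch.ones(1, 1, ksize, ksize) / ksize ** 2)
    def forward(self, x):
        k = torch.softmax(self.kernel.flatten(), 0).view_as(self.kernel)  # sums to 1
        return F.conv2d(x, k, padding=self.kernel.shape[-1] // 2)

psf = PSFLayer()
deconv_net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                           nn.Conv2d(32, 1, 3, padding=1))

sharp = torch.rand(2, 1, 64, 64)      # unpaired "sharp-domain" samples
blurry = torch.rand(2, 1, 64, 64)     # unpaired measured widefield images

# Cycle 1: sharp -> PSF blur -> deconvolution should recover the sharp image.
cycle_a = F.l1_loss(deconv_net(psf(sharp)), sharp)
# Cycle 2: blurry -> deconvolution -> PSF blur should reproduce the measurement.
cycle_b = F.l1_loss(psf(deconv_net(blurry)), blurry)
# Adversarial terms on each domain are omitted in this sketch.
loss = cycle_a + cycle_b
print(loss.item())
```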

    Real-World Image Denoising Using Generative Adversarial Networks

    Get PDF
    Master's thesis -- Seoul National University Graduate School, College of Engineering, Department of Electrical and Computer Engineering, February 2021. Advisor: Kyoung Mu Lee. Learning-based image denoising models have been limited to situations where well-aligned noisy and clean image pairs are given, or where training samples can be synthesized from predetermined noise models. While recent generative methods introduce a methodology to accurately simulate the unknown distribution of real-world noise, several limitations still exist. The existing methods are restricted to cases where unrealistic assumptions are made or where data from the actual noise distribution is available. In a real situation, a noise generator should learn to simulate the general and complex noise distribution without using paired noisy and clean images. Since a noise generator trained for this real situation tends to fail to express complex noise maps and instead fits to generating specific texture patterns, we propose an architecture designed to resolve this problem. We therefore introduce C2N, a Clean-to-Noisy image generation framework, to imitate complex real-world noise without using any paired examples. Our C2N combined with a conventional denoising model outperforms existing unsupervised methods on a challenging real-world denoising benchmark by a large margin, validating the effectiveness of the proposed formulation. We also extend our method to a practical situation with several data constraints, an area not explored by previous generative noise modeling methods.
    Contents: 1 Introduction; 2 Related Work (Deep Image Denoising; Deep Denoising of Real-World Noise); 3 C2N: Clean-to-Noisy Image Generation Framework (Complexity of Real-World Noise; Learning to Generate Pseudo-Noisy Images; C2N Architecture: Signal-Independent Pixel-Wise Transforms, Signal-Dependent Sampling and Transforms, Spatially Correlated Transforms, Discriminator; Learning to Denoise with the Generated Pairs); 4 Experiment (Experimental Setup: Dataset, Implementation Details and Optimization; Model Analysis; Results on Real-World Noise; Performance Under Practical Data Constraints; Generating Noise by Interpolation in Latent Space; Verifying C2N in Denoiser Training); 5 Conclusion; Abstract (in Korean); Acknowledgement.
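
    A hedged sketch of the Clean-to-Noisy workflow described above, with placeholder architectures: a generator maps a clean image plus a random noise code to a pseudo-noisy image, a discriminator compares it against real unpaired noisy images, and a denoiser is then trained on the generated pairs.

```python
# Hedged sketch of the Clean-to-Noisy idea: a generator turns a clean image plus
# a random noise code into a pseudo-noisy image, a discriminator compares it
# with real (unpaired) noisy images, and a denoiser is then trained on the
# generated pairs. Architectures and weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

gen = nn.Sequential(nn.Conv2d(3 + 1, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 3, 3, padding=1))         # predicts a residual noise map
disc = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.LeakyReLU(0.2),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))
denoiser = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 3, 3, padding=1))

clean = torch.rand(4, 3, 32, 32)          # clean set (no pairing assumed)
real_noisy = torch.rand(4, 3, 32, 32)     # independent real noisy set

# Stage 1: adversarial noise modeling (clean -> pseudo-noisy).
code = torch.randn(4, 1, 32, 32)                            # per-pixel noise code
pseudo_noisy = clean + gen(torch.cat([clean, code], 1))      # add generated noise map
d_loss = F.binary_cross_entropy_with_logits(disc(real_noisy), torch.ones(4, 1)) + \
         F.binary_cross_entropy_with_logits(disc(pseudo_noisy.detach()), torch.zeros(4, 1))
g_loss = F.binary_cross_entropy_with_logits(disc(pseudo_noisy), torch.ones(4, 1))

# Stage 2: train a conventional denoiser on the generated (clean, pseudo-noisy) pairs.
denoise_loss = F.l1_loss(denoiser(pseudo_noisy.detach()), clean)
print(d_loss.item(), g_loss.item(), denoise_loss.item())
```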

    An approach to image denoising using manifold approximation without clean images

    Full text link
    Image restoration has been an extensively researched topic in numerous fields. With the advent of deep learning, many of the earlier algorithms have been replaced by ones that are more flexible and robust. Deep networks have demonstrated impressive performance in a variety of tasks such as blind denoising, image enhancement, deblurring, super-resolution, and inpainting. Most of these learning-based algorithms use a large amount of clean data during the training process. However, in certain applications in medical image processing, one may not have access to a large amount of clean data. In this paper, we propose a denoising method that attempts to learn the denoising process by pushing the noisy data close to the clean data manifold, using only noisy images during training. Furthermore, we use perceptual loss terms and an iterative refinement step to further refine the recovered images without losing important features.

    Legacy Photo Editing with Learned Noise Prior

    Full text link
    A large number of photographs from the last century were captured under undesirable conditions; thus, they are often noisy, regionally incomplete, and grayscale. Conventional approaches mainly address one of these issues at a time, so their restoration results are not perceptually sharp or clean enough. To solve these problems, we propose a noise prior learner, NEGAN, to simulate the noise distribution of real legacy photos using unpaired images. It mainly focuses on matching the high-frequency parts of noisy images through the discrete wavelet transform (DWT), since they include most of the noise statistics. We also create a large legacy photo dataset for learning the noise prior. Using the learned noise prior, we can easily build valid training pairs by degrading clean images. Then, we propose an IEGAN framework that performs image editing, including joint denoising, inpainting, and colorization, based on the estimated noise prior. We evaluate the proposed system and compare it with state-of-the-art image enhancement methods. The experimental results demonstrate that it achieves the best perceptual quality. See https://github.com/zhaoyuzhi/Legacy-Photo-Editing-with-Learned-Noise-Prior for the code and the proposed LP dataset. Comment: Accepted by IEEE WACV 2021, 2nd round submission.
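
    As a sketch of the wavelet-domain idea, the code below extracts the high-frequency subbands of a one-level Haar DWT, where most of the noise statistics live; the simple statistics-matching loss at the end is a stand-in assumption for the adversarial matching used by NEGAN.

```python
# Sketch of extracting the high-frequency DWT subbands that carry most of the
# noise statistics, so generated and real noise can be matched there. The
# single-level Haar transform and the matching loss are illustrative
# assumptions about how such a constraint could look.
import torch
import torch.nn.functional as F

def haar_dwt_highfreq(x):
    """One-level Haar DWT; returns the LH, HL, HH (high-frequency) subbands."""
    lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])
    hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])
    hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
    filters = torch.stack([lh, hl, hh]).unsqueeze(1)         # (3, 1, 2, 2)
    c = x.shape[1]
    filters = filters.repeat(c, 1, 1, 1)                      # depthwise per channel
    return F.conv2d(x, filters, stride=2, groups=c)

if __name__ == "__main__":
    real_noisy = torch.rand(2, 1, 64, 64)   # real legacy photo patches (placeholder)
    fake_noisy = torch.rand(2, 1, 64, 64)   # generated noisy patches (placeholder)
    hf_real = haar_dwt_highfreq(real_noisy)
    hf_fake = haar_dwt_highfreq(fake_noisy)
    # The full method matches these subbands adversarially; a simple per-band
    # standard-deviation match is shown here as a stand-in.
    print(F.mse_loss(hf_fake.std(dim=(2, 3)), hf_real.std(dim=(2, 3))).item())
```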

    SinGAN: Learning a Generative Model from a Single Natural Image

    Full text link
    We introduce SinGAN, an unconditional generative model that can be learned from a single natural image. Our model is trained to capture the internal distribution of patches within the image, and is then able to generate high-quality, diverse samples that carry the same visual content as the image. SinGAN contains a pyramid of fully convolutional GANs, each responsible for learning the patch distribution at a different scale of the image. This allows generating new samples of arbitrary size and aspect ratio that have significant variability, yet maintain both the global structure and the fine textures of the training image. In contrast to previous single-image GAN schemes, our approach is not limited to texture images and is not conditional (i.e., it generates samples from noise). User studies confirm that the generated samples are commonly confused with real images. We illustrate the utility of SinGAN in a wide range of image manipulation tasks. Comment: ICCV 2019.
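
    A coarse sketch of the multi-scale sampling idea: one small fully convolutional generator per scale, each refining an upsampled version of the coarser output given fresh noise. The scale sizes, channel widths, and residual formulation are assumptions; the per-scale discriminators and training procedure are omitted.

```python
# Coarse sketch of a SinGAN-style pyramid: fully convolutional generators, one
# per scale, each refining an upsampled version of the previous scale's output
# given fresh noise. Scale count, channel widths, and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_gen():
    return nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 3, 3, padding=1))

scales = [(16, 16), (32, 32), (64, 64)]      # coarse-to-fine spatial sizes
gens = nn.ModuleList(make_gen() for _ in scales)

def sample(batch=1):
    out = torch.zeros(batch, 3, *scales[0])
    for size, gen in zip(scales, gens):
        out = F.interpolate(out, size=size, mode="bilinear", align_corners=False)
        noise = torch.randn(batch, 3, *size)
        out = out + gen(out + noise)          # residual refinement at this scale
    return out

if __name__ == "__main__":
    # Because the generators are fully convolutional, other sizes and aspect
    # ratios can be sampled simply by changing the scale list.
    print(sample().shape)  # torch.Size([1, 3, 64, 64])
```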