10 research outputs found

    Direct Unsupervised Denoising

    Full text link
    Traditional supervised denoisers are trained on pairs of noisy input and clean target images. They learn to predict a central tendency of the posterior distribution over possible clean images; when trained with the popular quadratic loss, for example, the network's output corresponds to the minimum mean square error (MMSE) estimate. Unsupervised denoisers based on Variational Autoencoders (VAEs) have achieved state-of-the-art results while requiring only unpaired noisy data for training. In contrast to the traditional supervised approach, unsupervised denoisers do not directly produce a single prediction such as the MMSE estimate; instead, they allow us to draw samples from the posterior distribution of clean solutions corresponding to the noisy input. To approximate the MMSE estimate during inference, unsupervised methods must therefore generate and average a large number of samples, a computationally expensive process that renders the approach inapplicable in many situations. Here, we present an alternative approach that trains a deterministic network alongside the VAE to predict a central tendency directly. Our method surpasses the results of the sampling-based unsupervised method at a fraction of the computational cost.
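    The trade-off described above can be sketched numerically: averaging many posterior samples approximates the MMSE estimate, while a deterministic network would produce the same quantity in a single forward pass. The toy sampler below stands in for a trained VAE and is purely illustrative; the 0.8 shrinkage factor is an arbitrary assumption, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_posterior(noisy, n_samples):
    """Stand-in for a VAE that draws clean-image samples given a noisy input.
    Samples scatter around a hypothetical posterior mean (toy model)."""
    posterior_mean = noisy * 0.8  # arbitrary shrinkage, for illustration only
    return posterior_mean + rng.normal(0.0, 0.1, size=(n_samples,) + noisy.shape)

noisy = np.array([1.0, 2.0, 3.0])

# Sampling-based MMSE estimate: draw and average many samples (expensive).
mmse_sampled = sample_posterior(noisy, 1000).mean(axis=0)

# A deterministic network trained with quadratic loss would output the
# posterior mean directly, in one forward pass (cheap).
mmse_direct = noisy * 0.8
```

The gap between the two estimates shrinks with the number of samples, which is exactly the cost the deterministic side-network avoids.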

    ์ ๋Œ€์  ์ƒ์„ฑ ์‹ ๊ฒฝ๋ง์„ ํ™œ์šฉํ•œ ์‹ค์˜์ƒ ์žก์Œ ์ œ๊ฑฐ ๊ธฐ๋ฒ•

    Get PDF
    ํ•™์œ„๋…ผ๋ฌธ (์„์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› : ๊ณต๊ณผ๋Œ€ํ•™ ์ „๊ธฐยท์ •๋ณด๊ณตํ•™๋ถ€, 2021. 2. ์ด๊ฒฝ๋ฌด.Learning-based image denoising models have been bounded to situations where well-aligned noisy and clean images are given, or training samples can be synthesized from predetermined noise models. While recent generative methods introduce a methodology to accurately simulate the unknown distribution of real-world noise, several limitations still exist. The existing methods are restrained to the case that unrealistic assumptions are made, or the data of actual noise distribution is available. In a real situation, a noise generator should learn to simulate the general and complex noise distribution without using paired noisy and clean images. As a noise generator learned for the real situation tends to fail to express complex noise maps and fits to generate specific texture patterns, we propose an architecture designed to resolve this problem. Therefore, we introduce the C2N, a Clean-to-Noisy image generation framework, to imitate complex real-world noise without using any paired examples. Our C2N combined with a conventional denoising model outperforms existing unsupervised methods on a challenging real-world denoising benchmark by a large margin, validating the effectiveness of the proposed formulation. We also extend our method to a practical situation when there are several data constraints, an area not previously explored by the previous generative noise modeling methods.ํ•™์Šต ๊ธฐ๋ฐ˜ ์˜์ƒ ์žก์Œ ์ œ๊ฑฐ ๋ชจ๋ธ์˜ ์‚ฌ์šฉ์€, ์žก์Œ์ด ์žˆ๋Š” ์ด๋ฏธ์ง€๋“ค๊ณผ ๊นจ๋—ํ•œ ์ด๋ฏธ์ง€๋“ค์ด ์ž˜ ์ •๋ ฌ๋œ ์Œ์„ ์ด๋ฃฌ ์ƒํƒœ๋กœ ์ œ๊ณต๋˜๊ฑฐ๋‚˜, ์ฃผ์–ด์ง„ ์žก์Œ์˜ ๋ถ„ํฌ๋กœ๋ถ€ํ„ฐ ํ•™์Šต์šฉ ์ƒ˜ํ”Œ๋“ค์„ ํ•ฉ์„ฑํ•  ์ˆ˜ ์žˆ๋Š” ์ƒํ™ฉ์— ํ•œ์ •๋˜์–ด ์žˆ๋‹ค. 
์ตœ๊ทผ์˜ ์ƒ์„ฑ๋ชจ๋ธ ๊ธฐ๋ฐ˜์˜ ๋ฐฉ๋ฒ•๋“ค์€ ์‹ค์ œ ์žก์Œ์˜ ๋ถ„ํฌ๊ฐ€ ์•Œ๋ ค์ง€์ง€ ์•Š์€ ๊ฒฝ์šฐ์—๋„ ๊ทธ๊ฒƒ์„ ์ •ํ™•ํ•˜๊ฒŒ ์‹œ๋ฎฌ๋ ˆ์ด์…˜ํ•˜๋Š” ๋ฐฉ๋ฒ•๋ก ์„ ๋„์ž…ํ•˜๊ณ  ์žˆ์ง€๋งŒ, ๋ช‡ ๊ฐ€์ง€ ์ œํ•œ์ ๋“ค์ด ์—ฌ์ „ํžˆ ์กด์žฌํ•œ๋‹ค. ๊ธฐ์กด์˜ ๊ทธ๋Ÿฌํ•œ ๋ฐฉ๋ฒ•๋“ค์€ ์‹ค์ œ ์žก์Œ์˜ ๋ถ„ํฌ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ๋Š” ๋ฐ์ดํ„ฐ๊ฐ€ ์ฃผ์–ด์ง€๊ฑฐ๋‚˜ ์žก์Œ์— ๋Œ€ํ•ด ๋น„ํ˜„์‹ค์ ์ธ ๊ฐ€์ •์ด ๋‚ด๋ ค์ง„ ๊ฒฝ์šฐ๋กœ ์ ์šฉ ๋ฒ”์œ„๊ฐ€ ์ œํ•œ๋˜์—ˆ๋‹ค. ์‹ค์ œ ์ƒํ™ฉ์—์„œ์˜ ์žก์Œ ์ƒ์„ฑ๋ชจ๋ธ์€ ์žก์Œ์ด ์žˆ๋Š” ์ด๋ฏธ์ง€์™€ ๊นจ๋—ํ•œ ์ด๋ฏธ์ง€์˜ ์Œ์„ ์‚ฌ์šฉํ•˜์ง€ ์•Š๊ณ ๋„ ๋ณต์žกํ•˜๋ฉฐ ์ผ๋ฐ˜์ ์ธ ์žก์Œ์˜ ์‹œ๋ฎฌ๋ ˆ์ด์…˜์„ ํ•™์Šตํ•  ์ˆ˜ ์žˆ์–ด์•ผ ํ•œ๋‹ค. ์ด๋Ÿฌํ•œ ์‹ค์ œ์  ์ƒํ™ฉ์—์„œ ํ•™์Šตํ•œ ์žก์Œ ์ƒ์„ฑ๋ชจ๋ธ์€ ๋ณต์žกํ•œ ์žก์Œ์˜ ๋ถ„ํฌ๊ฐ€ ์•„๋‹Œ ํŠน์ • ์งˆ๊ฐ์˜ ํŒจํ„ด์„ ๋งŒ๋“ค์–ด๋‚ด๋Š” ๋™์ž‘์„ ํ•˜๊ฒŒ ๋˜์–ด๋ฒ„๋ฆฌ๊ธฐ ์‰ฝ๊ธฐ์—, ์ด ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•ด ์„ค๊ณ„ํ•œ ๋ชจ๋ธ ๊ตฌ์กฐ๋ฅผ ์ œ์•ˆํ•œ๋‹ค. ์ด๋ ‡๊ฒŒ ์„ค๊ณ„ํ•œ, C2N ์ฆ‰ Clean-to-Noisy ์˜์ƒ ์ƒ์„ฑ ํ”„๋ ˆ์ž„์›Œํฌ๋ฅผ ๊ฐœ๋ฐœํ•˜์—ฌ ๋ณต์žกํ•œ ์‹ค์˜์ƒ์˜ ์žก์Œ์„ ์–ด๋– ํ•œ ์Œ์„ ์ด๋ฃฌ ํ•™์Šต ๋ฐ์ดํ„ฐ ์—†์ด ๋ชจ๋ฐฉํ•  ์ˆ˜ ์žˆ๋‹ค. ์ด C2N์„ ๊ธฐ์กด์˜ ์žก์Œ ์ œ๊ฑฐ ๋ชจ๋ธ๊ณผ ๊ฒฐํ•ฉํ•˜๋Š” ๊ฒƒ์œผ๋กœ ์‹ค์˜์ƒ ์žก์Œ ์ œ๊ฑฐ ๋ฒค์น˜๋งˆํฌ์—์„œ ๊ธฐ์กด์˜ ๋น„๊ฐ๋… ํ•™์Šต ๋ฐฉ๋ฒ•๋“ค์„ ํฐ ํญ์œผ๋กœ ๋Šฅ๊ฐ€ํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ด๋ฅผ ํ†ตํ•ด ์ œ์•ˆ ๋ฐฉ๋ฒ•์˜ ํšจ๊ณผ๋ฅผ ๊ฒ€์ฆํ•œ๋‹ค. 
๋˜ํ•œ ์ด์ „์˜ ์žก์Œ ์ƒ์„ฑ๋ชจ๋ธ ๋ฐฉ๋ฒ•๋“ค์— ์˜ํ•ด์„  ํƒ๊ตฌ๋˜์ง€ ์•Š์•˜๋˜ ์˜์—ญ์ธ, ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•œ ์—ฌ๋Ÿฌ ์ œ์•ฝ์ด ์žˆ๋Š” ์‹ค์šฉ์  ์ƒํ™ฉ์— ๋Œ€ํ•ด ๋ณธ ๋ฐฉ๋ฒ•์„ ํ™•์žฅํ•œ๋‹ค.Abstract - i Contents - ii List of Tables - iv List of Figures - v 1 INTRODUCTION 1 2 RELATED WORK 5 2.1 Deep Image Denoising 5 2.2 Deep Denoising of Real-World Noise 5 3 C2N: Clean-to-Noisy Image Generation Framework - 8 3.1 Complexity of Real-World Noise 8 3.2 Learning to Generate Pseudo-Noisy Images 9 3.3 C2N Architecture 12 3.3.1 Signal-Independent Pixel-Wise Transforms 12 3.3.2 Signal-Dependent Sampling and Transforms 12 3.3.3 Spatially Correlated Transforms 13 3.3.4 Discriminator 14 3.4 Learning to Denoise with the Generated Pairs 14 4 Experiment 16 4.1 Experimental Setup 16 4.1.1 Dataset 16 4.1.2 Implementation Details and Optimization 17 4.2 Model Analysis 17 4.3 Results on Real-World Noise 23 4.4 Performance Under Practical Data Constraints 26 4.5 Generating noise by interpolation in latent space 30 4.6 Verifying C2N in Denoiser Training 31 5 Conclusion 33 Abstract (In Korean) 40 Acknowlegement 41Maste

    Imaging in focus: An introduction to denoising bioimages in the era of deep learning

    Get PDF
    Fluorescence microscopy enables the direct observation of previously hidden dynamic processes of life, allowing profound insights into mechanisms of health and disease. However, imaging of live samples is fundamentally limited by the toxicity of the illuminating light, and images are often acquired under low-light conditions. As a consequence, images can become very noisy, which severely complicates their interpretation. In recent years, deep learning (DL) has emerged as a very successful approach to removing this noise while retaining the useful signal. Unlike classical algorithms, which use well-defined mathematical functions to remove noise, DL methods learn to denoise from example data, providing a powerful content-aware approach. In this review, we first describe the different types of noise that typically corrupt fluorescence microscopy images and introduce the denoising task. We then present the main DL-based denoising methods and their relative advantages and disadvantages. We aim to provide insights into how DL-based denoising methods operate and to help users choose the most appropriate tools for their applications.

    Performance of deep learning restoration methods for the extraction of particle dynamics in noisy microscopy image sequences

    Get PDF
    Particle tracking in living systems requires low light exposure and short exposure times to avoid phototoxicity and photobleaching and to fully capture particle motion with high-speed imaging. Low excitation light, however, comes at the expense of tracking accuracy. Image restoration methods based on deep learning dramatically improve the signal-to-noise ratio in low-exposure data sets, qualitatively improving the images. However, it is not clear whether images generated by these methods yield accurate quantitative measurements, such as diffusion parameters, in (single) particle tracking experiments. Here, we evaluate the performance of two popular deep learning denoising software packages for particle tracking, using synthetic data sets and movies of diffusing chromatin as biological examples. With synthetic data, both supervised and unsupervised deep learning restored particle motions with high accuracy in two-dimensional data sets, whereas the denoisers introduced artifacts in three-dimensional data sets. Experimentally, we found that, while both supervised and unsupervised approaches improved tracking results compared with the original noisy images, supervised learning generally outperformed the unsupervised approach. We find that nicer-looking image sequences are not synonymous with more precise tracking results and highlight that deep learning algorithms can produce deceptive artifacts with extremely noisy images. Finally, we address the challenge of selecting parameters to train convolutional neural networks by implementing a frugal Bayesian optimizer that rapidly explores multidimensional parameter spaces, identifying networks yielding optimal particle tracking accuracy. Our study provides quantitative outcome measures of image restoration using deep learning. We anticipate broad application of this approach to critically evaluate artificial intelligence solutions for quantitative microscopy.
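    The evaluate-and-select loop behind this kind of network selection can be sketched under a fixed budget. The Bayesian surrogate is replaced here by plain random search for brevity, and `score_network`, `SEARCH_SPACE`, and the budget are hypothetical stand-ins for training a denoiser and measuring tracking accuracy on a validation set.

```python
import random

random.seed(0)

# Hypothetical search space over network hyperparameters.
SEARCH_SPACE = {
    "depth": [2, 3, 4, 5],
    "channels": [16, 32, 64],
    "learning_rate": [1e-4, 3e-4, 1e-3],
}

def score_network(params):
    """Stand-in for training a denoiser with `params` and measuring
    particle-tracking accuracy (higher is better); toy scoring only."""
    return -abs(params["depth"] - 4) - abs(params["channels"] - 32) / 32

def search(budget):
    # A Bayesian optimizer would propose candidates from a surrogate model;
    # random proposals are used here only to show the loop structure.
    best_params, best_score = None, float("-inf")
    for _ in range(budget):
        candidate = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        s = score_network(candidate)
        if s > best_score:
            best_params, best_score = candidate, s
    return best_params, best_score

best, score = search(budget=20)
```

A Bayesian optimizer is "frugal" in exactly this sense: it spends a small, fixed evaluation budget while steering proposals toward promising regions instead of sampling blindly.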

    Removing Structured Noise With Self-Supervised Blind-Spot Networks

    No full text
    Removal of noise from fluorescence microscopy images is an important first step in many biological analysis pipelines. Current state-of-the-art supervised methods employ convolutional neural networks that are trained with clean (ground-truth) images. Recently, it was shown that self-supervised image denoising with blind-spot networks achieves excellent performance even when ground-truth images are not available, as is common in fluorescence microscopy. However, these approaches, e.g. Noise2Void (N2V), generally assume pixel-wise independent noise, thus limiting their applicability in situations where spatially correlated (structured) noise is present. To overcome this limitation, we present Structured Noise2Void (STRUCTN2V), a generalization of blind-spot networks that enables removal of structured noise without requiring an explicit noise model or ground-truth data. Specifically, we propose to use an extended blind mask (rather than a single pixel/blind spot), whose shape is adapted to the structure of the noise. We evaluate our approach on two real datasets and show that STRUCTN2V considerably improves the removal of structured noise compared to existing standard and blind-spot-based techniques.
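    The difference between a single blind spot and an extended blind mask can be sketched directly. The function below is an illustrative reimplementation, not the authors' code; a 1x5 horizontal extent is assumed as an example of a mask adapted to horizontally correlated noise.

```python
import numpy as np

def blind_mask(shape, center, extent=(0, 0)):
    """Blind mask for a blind-spot network. extent=(0, 0) gives the single
    blind spot of Noise2Void; a wider extent (e.g. (0, 2), a 5-pixel stripe
    along the row) is the StructN2V-style extension for noise that is
    correlated along that axis. True = pixel hidden from the network."""
    mask = np.zeros(shape, dtype=bool)
    r, c = center
    dr, dc = extent
    mask[max(r - dr, 0):r + dr + 1, max(c - dc, 0):c + dc + 1] = True
    return mask

n2v = blind_mask((7, 7), center=(3, 3), extent=(0, 0))     # 1 masked pixel
struct = blind_mask((7, 7), center=(3, 3), extent=(0, 2))  # 1x5 stripe
```

Because every pixel that shares the noise correlation with the target is masked out of the receptive field, the network can no longer predict the target from its correlated neighbors, so structured noise is not reproduced.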

    Light Microscopy Combined with Computational Image Analysis Uncovers Virus-Specific Infection Phenotypes and Host Cell State Variability

    Get PDF
    The study of virus infection phenotypes and variability plays a critical role in understanding viral pathogenesis and host response. Virus-host interactions can be investigated by light and various label-free microscopy methods, which provide a powerful tool for the spatiotemporal analysis of patterns at the cellular and subcellular levels in live or fixed cells. Analysis of microscopy images is increasingly complemented by sophisticated statistical methods and leverages artificial intelligence (AI) to address the tasks of image denoising, segmentation, classification, and tracking. Work in this thesis demonstrates that combining microscopy with AI techniques enables models that accurately detect and quantify viral infection via the virus-induced cytopathic effect (CPE). Furthermore, it shows that statistical analysis of microscopy image data can disentangle stochastic and deterministic factors that contribute to viral infection variability, such as the cellular state. In summary, the integration of microscopy and computational image analysis offers a powerful and flexible approach for studying virus infection phenotypes and variability, ultimately contributing to a more advanced understanding of infection processes and creating possibilities for the development of more effective antiviral strategies.