1,368 research outputs found

    Image Restoration for Remote Sensing: Overview and Toolbox

    Full text link
    Remote sensing provides valuable information about objects or areas from a distance in either active (e.g., RADAR and LiDAR) or passive (e.g., multispectral and hyperspectral) modes. The quality of data acquired by remotely sensed imaging sensors (both active and passive) is often degraded by a variety of noise types and artifacts. Image restoration, a vibrant field of research in the remote sensing community, is the task of recovering the true unknown image from the degraded observed image. Each imaging sensor induces unique noise types and artifacts into the observed image, which has led restoration techniques to develop along different paths according to sensor type. This review paper brings together advances in image restoration techniques, with a particular focus on synthetic aperture radar and hyperspectral images as the most active sub-fields of image restoration in the remote sensing community. We therefore provide a comprehensive, discipline-specific starting point, with sufficient detail and references, for researchers at all levels (i.e., students, researchers, and senior researchers) who wish to investigate the vibrant topic of data restoration. Additionally, this review is accompanied by a toolbox that provides a platform for interested students and researchers to further explore restoration techniques and accelerate progress in the community. The toolbox is available at https://github.com/ImageRestorationToolbox. Comment: This paper is under review in GRS

    Machine Learning And Image Processing For Noise Removal And Robust Edge Detection In The Presence Of Mixed Noise

    Get PDF
    The central goal of this dissertation is to design and model a smoothing filter based on random single and mixed noise distributions that attenuates the effect of noise while preserving edge details. Only then can robust, integrated, and resilient edge detection methods be deployed to overcome the ubiquitous presence of random noise in images. Random noise effects are modeled as those that could emanate from impulse noise, Gaussian noise, and speckle noise. In the first step, methods are evaluated based on an exhaustive review of the different types of denoising methods, focusing on impulse noise, Gaussian noise, and their related denoising filters. These include spatial filters (linear, non-linear, and combinations of them), transform-domain filters, neural network-based filters, numerical-based filters, fuzzy-based filters, morphological filters, statistical filters, and supervised learning-based filters. In the second step, the switching adaptive median and fixed weighted mean filter (SAMFWMF), a combination of linear and non-linear filters, is introduced to detect and remove impulse noise. Then, a robust edge detection method is applied, relying on an integrated process including non-maximum suppression, maximum sequence, thresholding, and morphological operations. The results are obtained on MRI and natural images. In the third step, a transform-domain filter combining the dual-tree complex wavelet transform (DT-CWT) and total variation is introduced to detect and remove Gaussian noise as well as mixed Gaussian and speckle noise. Then, robust edge detection is applied to track the true edges. The results are obtained on medical ultrasound and natural images.
In the fourth step, a smoothing filter based on a deep feed-forward convolutional neural network (CNN) is introduced, supported by a specific learning algorithm, L2 loss function minimization, a regularization method, and batch normalization, all integrated to detect and remove impulse noise as well as mixed impulse and Gaussian noise. Then, robust edge detection is applied to track the true edges. The results are obtained on natural images for both specific and non-specific noise levels.
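The switching strategy described in the second step, detect likely impulse pixels first and replace only those with a local statistic, can be illustrated with a minimal sketch. This is a plain switching median filter in Python, not the authors' full SAMFWMF (which also uses adaptive windows and a fixed weighted mean):

```python
import numpy as np

def switching_median_filter(img, low=0.0, high=255.0, k=3):
    """Remove impulse (salt-and-pepper) noise: replace a pixel with the
    median of its k-by-k neighborhood only when it is flagged as an
    extreme value; clean pixels are left untouched."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            if img[i, j] == low or img[i, j] == high:  # impulse detector
                out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```

Because only flagged pixels are modified, uncorrupted regions pass through unchanged, which is what preserves edge detail relative to an unconditional median filter.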

    Automatic Image Denoising Using Deep Neural Networks

    Get PDF
    Doctoral dissertation -- Seoul National University Graduate School: Interdisciplinary Program in Computational Science, College of Natural Sciences, 2020. 8. Advisor: Myungjoo Kang. Noise removal in digital image data is a fundamental and important task in the field of image processing. The goal of the task is to remove noise from given degraded images while maintaining essential details such as edges, curves, and textures. There have been various attempts at image denoising, mainly model-based methods such as filtering methods, total variation based methods, and non-local mean based approaches. Deep learning methods have been attracting significant research interest, as they have shown better results than classical methods in almost all fields. Deep learning-based methods use a large amount of data to train a network for a given objective; in the image denoising case, to map the corrupted image to a desired clean image. In this thesis we propose a new network architecture focusing on white Gaussian noise and real noise removal. Our model is a deep and wide network designed by constructing a basic block consisting of a mixture of various types of dilated convolutions and stacking it repeatedly. We did not use batch normalization layers, in order to preserve the original color information of each input; skip connections were also used so as not to lose existing information. Through several experiments and comparisons, it is shown that the proposed network performs better than traditional and recent methods in image denoising.
Noise removal and reduction in digital image data is a fundamental and essential task in image processing, whose goal is to remove noise from degraded images while preserving essential details such as edges, curves, and textures. Deep learning-based denoising methods train a network on large amounts of data, in a supervised manner, to map degraded images to images of the desired quality, and they outperform classical methods. In this thesis, we not only surveyed various image denoising methods but also designed and experimented with a network architecture focused in particular on white Gaussian noise and real noise removal. We proposed a network built by composing a basic block from a mixture of several types of dilated convolutions and stacking it repeatedly, and we did not use batch normalization layers, which normalize batches of multiple input images grouped together, so that each image keeps its original color. Skip connections were used so that existing information is not lost as the blocks progress through multiple layers. Through several experiments and comparisons with previously proposed methods and recent benchmarks, we demonstrated that the proposed network outperforms existing methods in noise reduction and removal. However, the proposed architecture has some limitations. By avoiding downsampling it minimizes information loss, but it requires more inference time than recent benchmarks, so it is not easy to apply to real-time tasks. Real images contain not just simple noise but a mixture of degradations, such as various noises and blur arising from processes like image acquisition and storage. Analysis of real noise from various angles, further modeling experiments, and joint modeling of noise together with blur and compression are needed.
In future work, we expect that addressing these points will improve performance and that adjustments to the network will enable real-time application. Table of contents: 1 Introduction; 2 Review on Image Denoising Methods (2.1 Image Noise Models; 2.2 Traditional Denoising Methods: TV-based regularization, non-local regularization, sparse representation, low-rank minimization; 2.3 CNN-based Denoising Methods: DnCNN, FFDNet, WDnCNN, DHDN); 3 Proposed Models (3.1 Related Works: residual learning, dilated convolution; 3.2 Proposed Network Architecture); 4 Experiments (4.1 Training Details; 4.2 Synthetic Noise Reduction: Set12, Kodak24 and BSD68; 4.3 Real Noise Reduction: DnD test results, NTIRE 2020 real image denoising challenge); 5 Conclusion and Future Works; Abstract (in Korean)

    Independent component analysis (ICA) applied to ultrasound image processing and tissue characterization

    Get PDF
    As a complicated and ubiquitous phenomenon encountered in ultrasound imaging, speckle can be treated either as annoying noise that needs to be reduced or as a source from which diagnostic information can be extracted to reveal the underlying properties of tissue. In this study, the application of Independent Component Analysis (ICA), a relatively recent statistical signal processing tool, to both speckle texture analysis and despeckling of B-mode ultrasound images was investigated. It is believed that higher-order statistics may provide extra information about the speckle texture beyond that provided by first- and second-order statistics alone. However, the higher-order statistics of speckle texture are still not clearly understood and are very difficult to model analytically; any direct treatment of higher-order statistics is computationally prohibitive. On the one hand, many conventional ultrasound speckle texture analysis algorithms use only first- or second-order statistics. On the other hand, many multichannel filtering approaches use pre-defined analytical filters that are not adaptive to the data. In this study, an ICA-based multichannel filtering texture analysis algorithm, which considers both higher-order statistics and data adaptation, was proposed and tested on numerically simulated homogeneous speckle textures. The ICA filters were learned directly from the training images. Histogram regularization was conducted to make the speckle images quasi-stationary in the wide sense, so as to be amenable to an ICA algorithm. Both Principal Component Analysis (PCA) and a greedy algorithm were used to reduce the dimension of the feature space. Finally, Support Vector Machines (SVM) with a Radial Basis Function (RBF) kernel were chosen as the classifier to achieve the best classification accuracy.
Several representative conventional methods, including both low- and high-order statistics based methods, and both filtering and non-filtering methods, were chosen for comparison. The numerical experiments showed that the proposed ICA-based algorithm in many cases outperforms the comparison algorithms. Two-component texture segmentation experiments were conducted, and the proposed algorithm showed a strong capability for segmenting two visually very similar yet different texture regions with rather fuzzy boundaries and almost the same mean and variance. By simulating speckle whose first-order statistics approach the Rayleigh model gradually from different non-Rayleigh models, the experiments to some extent reveal how the behavior of higher-order statistics changes with the underlying properties of tissue. It was demonstrated that when the speckle approaches the Rayleigh model, both second- and higher-order statistics lose their texture differentiation capability. However, when the speckle tends toward non-Rayleigh models, methods based on higher-order statistics show a strong advantage over those based solely on first- or second-order statistics. The proposed algorithm may potentially find clinical application in the early detection of soft tissue disease, and may also help in better understanding the ultrasound speckle phenomenon from the perspective of higher-order statistics. For the despeckling problem, an algorithm was proposed that adapts the ICA Sparse Code Shrinkage (ICA-SCS) method to the ultrasound B-mode image despeckling problem by applying an appropriate preprocessing step proposed by other researchers. The preprocessing step makes the speckle noise much closer to real white Gaussian noise (WGN) and hence more amenable to a denoising algorithm such as ICA-SCS, which was strictly designed for additive WGN. A discussion is given on how to obtain noise-free training image samples in various ways.
The experimental results showed that the proposed method outperforms several classical methods chosen for comparison, including first- or second-order statistics based methods (such as the Wiener filter) and multichannel filtering methods (such as wavelet shrinkage), in both speckle reduction and edge preservation.
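The shrinkage step at the heart of ICA-SCS can be sketched as follows. The orthogonal basis W is assumed to have been learned beforehand (for instance by ICA on noise-free training patches), and soft thresholding is used here as a stand-in for the exact shrinkage nonlinearity of the original method:

```python
import numpy as np

def sparse_code_shrinkage(x, W, sigma):
    """Sketch of the ICA-SCS denoising core: project the noisy vector x
    onto an orthogonal sparsifying basis W, soft-threshold the (sparse)
    coefficients at a level tied to the noise std sigma, then project
    back to the signal domain."""
    s = W @ x                                   # sparse coefficients
    t = np.sqrt(2.0) * sigma                    # threshold ~ noise level
    s_hat = np.sign(s) * np.maximum(np.abs(s) - t, 0.0)
    return W.T @ s_hat                          # reconstruct the signal
```

The preprocessing step mentioned in the abstract matters precisely because this shrinkage assumes additive white Gaussian noise in the transformed coefficients.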

    Escaping local minima with derivative-free methods: a numerical investigation

    Full text link
    We apply a state-of-the-art local derivative-free solver, Py-BOBYQA, to global optimization problems and propose an algorithmic improvement that is beneficial in this context. Our numerical findings are illustrated on a commonly used but small-scale test set of global optimization problems and associated noisy variants, and on hyperparameter tuning for the machine learning test set MNIST. As Py-BOBYQA is a model-based trust-region method, we compare mostly (but not exclusively) with other global optimization methods for which (global) models are important, such as Bayesian optimization and response surface methods; we also consider state-of-the-art representative deterministic and stochastic codes, such as DIRECT and CMA-ES. With a heuristic for escaping local minima, we find numerically that Py-BOBYQA is competitive with global optimization solvers across all accuracy/budget regimes, in both smooth and noisy settings. In particular, Py-BOBYQA variants perform best on smooth and multiplicative-noise problems in high-accuracy regimes. As a by-product, some preliminary conclusions can be drawn on the relative performance of the global solvers we tested with default settings.
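The escape heuristic rests on restarting a local derivative-free solver from new points. A toy multistart loop makes the idea concrete; the compass search below is a deliberately simple stand-in for Py-BOBYQA, whose actual restart mechanism (exposed via its `seek_global_minimum` option) is more sophisticated:

```python
import numpy as np

def local_search(f, x0, step=0.5, tol=1e-6, max_iter=500):
    """Tiny derivative-free local solver (compass search): try moves along
    each coordinate direction, halving the step when nothing improves."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for d in np.vstack([np.eye(len(x)), -np.eye(len(x))]):
            cand = x + step * d
            fc = f(cand)
            if fc < fx:
                x, fx, improved = cand, fc, True
        if not improved:
            step *= 0.5
    return x, fx

def multistart(f, bounds, n_starts=20, seed=0):
    """Escape local minima by rerunning the local solver from random
    points in the box and keeping the best result found."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    best_x, best_f = None, np.inf
    for _ in range(n_starts):
        x, fx = local_search(f, rng.uniform(lo, hi))
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

On an asymmetric double well, restarts let the loop find the deeper basin even when many individual runs stall in the shallow one.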

    Image Restoration

    Get PDF
    This book presents a sample of recent contributions from researchers around the world in the field of image restoration. It consists of 15 chapters organized into three main sections (Theory, Applications, Interdisciplinarity). The topics cover different aspects of the theory of image restoration, but the book is also an occasion to highlight new research topics arising from the emergence of novel imaging devices. These give rise to genuinely challenging image reconstruction/restoration problems that open the way to new fundamental scientific questions closely tied to the world we interact with.

    A Closer Look at Learned Optimization: Stability, Robustness, and Inductive Biases

    Full text link
    Learned optimizers -- neural networks that are trained to act as optimizers -- have the potential to dramatically accelerate training of machine learning models. However, even when meta-trained across thousands of tasks at huge computational expense, blackbox learned optimizers often struggle with stability and generalization when applied to tasks unlike those in their meta-training set. In this paper, we use tools from dynamical systems to investigate the inductive biases and stability properties of optimization algorithms, and apply the resulting insights to designing inductive biases for blackbox optimizers. Our investigation begins with a noisy quadratic model, where we characterize conditions under which optimization is stable in terms of the eigenvalues of the training dynamics. We then introduce simple modifications to a learned optimizer's architecture and meta-training procedure which lead to improved stability and improve the optimizer's inductive bias. We apply the resulting learned optimizer to a variety of neural network training tasks, where it outperforms the current state-of-the-art learned optimizer -- at matched optimizer computational overhead -- with regard to optimization performance and meta-training speed, and is capable of generalizing to tasks far different from those it was meta-trained on. Comment: NeurIPS 202
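The quadratic-model analysis builds on the classical stability condition for gradient descent on a quadratic, which can be checked directly. This sketch covers only the deterministic textbook case (update-matrix eigenvalues), not the paper's full noisy-quadratic treatment:

```python
import numpy as np

def gd_is_stable(H, lr):
    """Gradient descent x_{t+1} = x_t - lr * H x_t on f(x) = 0.5 x^T H x
    is stable iff every eigenvalue of the update matrix (I - lr * H) has
    magnitude below 1, i.e. lr < 2 / lambda_max(H)."""
    eigs = np.linalg.eigvalsh(np.eye(len(H)) - lr * H)
    return np.max(np.abs(eigs)) < 1.0

def iterate(H, lr, steps=50):
    """Run GD from x0 = [1, ..., 1] and report the final iterate norm,
    so stability and divergence can be observed numerically."""
    x = np.ones(len(H))
    for _ in range(steps):
        x = x - lr * (H @ x)
    return np.linalg.norm(x)
```

For H = diag(1, 10) the threshold is lr = 2/10 = 0.2: below it the iterates contract, above it the component along the largest eigenvalue blows up.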