
    Large-scale wave-front reconstruction for adaptive optics systems by use of a recursive filtering algorithm

    We propose a new recursive filtering algorithm for wave-front reconstruction in a large-scale adaptive optics system. An embedding step is used in this recursive filtering algorithm to permit fast methods to be used for wave-front reconstruction on an annular aperture. This embedding step can be used alone with a direct residual-error updating procedure, or with the preconditioned conjugate-gradient method as a preconditioning step. We derive the Hudgin and Fried filters for spectral-domain filtering using the eigenvalue decomposition method. Using Monte Carlo simulations, we compare the performance of discrete Fourier transform domain filtering, discrete cosine transform domain filtering, multigrid, and alternating-direction-implicit methods in the embedding step of the recursive filtering algorithm. We also simulate the performance of this recursive filtering algorithm in a closed-loop adaptive optics system.
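
    A minimal sketch of the spectral-domain filtering idea the abstract refers to, in Python/NumPy: least-squares wave-front reconstruction from slope measurements with the Hudgin filter on a square grid. This is an illustration, not the authors' code; the annular-aperture embedding and the recursive residual update are omitted.

```python
import numpy as np

def hudgin_reconstruct(sx, sy):
    """Recover a phase grid from x/y slope grids via DFT-domain filtering."""
    n = sx.shape[0]
    k = np.fft.fftfreq(n)                       # frequencies in cycles/sample
    dx = np.exp(2j * np.pi * k)[None, :] - 1    # forward-difference response in x
    dy = np.exp(2j * np.pi * k)[:, None] - 1    # forward-difference response in y
    denom = np.abs(dx) ** 2 + np.abs(dy) ** 2
    denom[0, 0] = 1.0                           # avoid dividing by zero at DC
    sx_f, sy_f = np.fft.fft2(sx), np.fft.fft2(sy)
    phase_f = (np.conj(dx) * sx_f + np.conj(dy) * sy_f) / denom
    phase_f[0, 0] = 0.0                         # piston mode is unobservable
    return np.fft.ifft2(phase_f).real
```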

    Transform domain filtering in incremental and diffusion strategies over distributed networks

    We analyse incremental and diffusion co-operative schemes in which nodes share information with neighbouring nodes in order to estimate a desired parameter of interest locally in the presence of noise. Each node works as an adaptive filter with its own learning ability. In the incremental co-operative fashion, a node takes information from the previous node and, after local estimation, sends the information to the next node, whereas in diffusion the input is taken from several nodes, so that the behaviour of the distributed network can be observed after each iteration. We employ the LMS structure for updating the observations. The convergence performance and computational complexity of the LMS filter are important considerations from the point of view of speed and cost reduction. The convergence performance of a filter depends on the eigenvalue spread of the covariance matrix of the input data; in other words, it is inversely proportional to the eigenvalue spread of the input data. If the input data are de-correlated, the eigenvalue spread is small; if the input data are correlated, the eigenvalue spread is large. A transform-domain filter exploits the data de-correlation properties of transforms such as the DCT and DFT; the de-correlation achieved by these unitary transforms depends on the orthogonality properties of the individual transform. Hence we obtain improved convergence performance by applying a transform to the input data, followed by power normalization. If the input data are fully de-correlated, the covariance matrix of the input data is proportional to the identity matrix.
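
    A minimal sketch (an illustration under stated assumptions, not the paper's implementation) of the transform-domain LMS update described above: the tap-input vector is de-correlated with an orthonormal DCT, and each transform coefficient's step size is normalized by a running estimate of that coefficient's power.

```python
import numpy as np
from scipy.fft import dct

class DctLms:
    """DCT-domain LMS adaptive filter with per-bin power normalization."""

    def __init__(self, taps, mu=0.05, beta=0.99, eps=1e-6):
        self.w = np.zeros(taps)        # weights in the transform domain
        self.p = np.full(taps, eps)    # running power estimate per DCT bin
        self.mu, self.beta, self.eps = mu, beta, eps

    def step(self, x_taps, d):
        """x_taps: latest `taps` input samples; d: desired response sample."""
        u = dct(x_taps, norm='ortho')              # de-correlate the input
        self.p = self.beta * self.p + (1 - self.beta) * u ** 2
        e = d - self.w @ u                         # a-priori estimation error
        self.w += self.mu * e * u / (self.p + self.eps)
        return e
```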

    An overview of multi-filters for eliminating impulse noise for digital images

    An image obtained through the digitization process is referred to as a digital image. The quality of a digital image may degrade due to interference during acquisition, transmission, extraction, etc. This has attracted the attention of many researchers to study the causes of damage to the information in the image. In addition to finding the causes of image damage, researchers are also looking for ways to overcome this problem. Many filtering techniques have been introduced to deal with the damage to the information in the image. In addition to eliminating noise from the image, filtering techniques also aim to preserve the original features of the image. Among the many research papers on image filtering, there is a lack of review papers, which are important for helping researchers understand the differences between filtering techniques and for determining the direction of research based on previous results. Therefore, this paper presents a review of several filtering techniques that have been developed so far.
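
    As a concrete example of the techniques such reviews cover, here is a minimal NumPy sketch of the classic remedy for impulse (salt-and-pepper) noise, a 3x3 median filter; it stands in for, and is much simpler than, the multi-filter methods the paper surveys.

```python
import numpy as np

def median_filter_3x3(img):
    """Replace each pixel with the median of its 3x3 neighbourhood."""
    padded = np.pad(img, 1, mode='edge')
    # Stack the nine shifted views of the image and take a per-pixel median.
    shifted = [padded[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)]
    return np.median(np.stack(shifted), axis=0)
```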

    The beneficial techniques in preprocessing step of skin cancer detection system comparing

    © 2014 The Authors. Automatic diagnosis of skin cancer is one of the most challenging problems in medical image processing. It helps physicians decide whether a skin melanoma is benign or malignant. Determining the most efficient detection methods to reduce the error rate is therefore a vital issue among researchers. Preprocessing is the first stage of detection; it improves the quality of the images by removing irrelevant noise and unwanted parts in the background of the skin images. The purpose of this paper is to gather the preprocessing approaches that can be used on skin cancer images. This paper provides a good starting point for researchers in automatic skin cancer detection.
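
    A hedged sketch of one possible preprocessing pipeline of the kind this paper gathers, using OpenCV: black-hat hair detection followed by inpainting and a median blur. The steps, kernel size, and threshold are illustrative assumptions, not the paper's recommendation.

```python
import cv2

def preprocess_skin_image(bgr):
    """Illustrative dermoscopy preprocessing: hair removal + noise smoothing."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Thin dark hairs stand out in a morphological black-hat transform.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    _, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    clean = cv2.inpaint(bgr, hair_mask, 3, cv2.INPAINT_TELEA)  # fill hair pixels
    return cv2.medianBlur(clean, 3)                            # suppress noise
```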

    Learning Raw Image Denoising with Bayer Pattern Unification and Bayer Preserving Augmentation

    In this paper, we present new data pre-processing and augmentation techniques for DNN-based raw image denoising. Compared with traditional RGB image denoising, performing this task on direct camera sensor readings presents new challenges, such as how to effectively handle the various Bayer patterns from different data sources, and subsequently how to perform valid data augmentation with raw images. To address the first problem, we propose a Bayer pattern unification (BayerUnify) method to unify different Bayer patterns. This allows us to fully utilize a heterogeneous dataset to train a single denoising model instead of training one model per pattern. Furthermore, while it is essential to augment the dataset to improve model generalization and performance, we discovered that it is error-prone to modify raw images by adapting augmentation methods designed for RGB images. To this end, we present a Bayer preserving augmentation (BayerAug) method as an effective approach for raw image augmentation. Combining these data processing techniques with a modified U-Net, our method achieves a PSNR of 52.11 and an SSIM of 0.9969 in the NTIRE 2019 Real Image Denoising Challenge, demonstrating state-of-the-art performance. Our code is available at https://github.com/Jiaming-Liu/BayerUnifyAug. (Accepted by CVPRW 2019.)
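
    A minimal sketch of the cropping idea behind Bayer pattern unification (an illustration, not the released BayerUnifyAug code): shifting a raw mosaic by one row and/or column changes which colour sits at the origin, so the four common patterns can all be reduced to a canonical RGGB.

```python
import numpy as np

# Row/column offsets that move each pattern's origin onto an R sample.
OFFSETS = {'RGGB': (0, 0), 'GRBG': (0, 1), 'GBRG': (1, 0), 'BGGR': (1, 1)}

def unify_to_rggb(raw, pattern):
    """Crop a single-channel Bayer mosaic so that it reads as RGGB."""
    dy, dx = OFFSETS[pattern]
    h, w = raw.shape
    # Keep an even number of rows/columns so the 2x2 tiling stays intact.
    return raw[dy:h - (h - dy) % 2, dx:w - (w - dx) % 2]
```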

    An overview of the fundamental approaches that yield several image denoising techniques

    The digital image is considered a powerful tool to carry and transmit information between people. It therefore attracts the attention of a large number of researchers, among them those interested in preserving image features from any factor that may reduce image quality. One of these factors is noise, which affects the visual aspect of the image and makes other image-processing tasks more difficult. Thus far, solving this noise problem remains a challenge for researchers in this field. Many image denoising techniques have been introduced in order to remove the noise while taking care of the image features; in other words, recovering the best similarity to the original image from the noisy one. However, the findings are still inconclusive. Beside the enormous number of studies which adopt several mathematical concepts (statistics, probabilities, modeling, PDEs, wavelets, fuzzy logic, etc.), there is also a scarcity of review papers, which play an important role in the development and progress of research. Thus, this review paper introduces an overview of the different fundamental approaches that yield the several image-denoising techniques, presented with a new classification. Furthermore, the paper presents the different evaluation tools needed to compare these techniques, in order to facilitate the treatment of this noise problem among a great diversity of techniques and concepts.
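
    One of the standard evaluation tools such comparisons rely on is the peak signal-to-noise ratio between a denoised image and its clean reference; a minimal NumPy sketch follows.

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    diff = reference.astype(np.float64) - estimate.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return np.inf                 # the images are identical
    return 10.0 * np.log10(peak ** 2 / mse)
```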

    A Review on Different Image De-noising Methods

    Image de-noising is a classical yet fundamental problem in low-level vision, as well as an ideal test bed to evaluate various statistical image modeling methods. The restoration of a blurry or noisy image is commonly performed with a MAP estimator, which maximizes a posterior probability to reconstruct a clean image from a degraded image. A MAP estimator, when used with a sparse-gradient image prior, reconstructs piecewise-smooth images and typically removes textures that are important for visual realism. One of the most challenging problems in image de-noising is how to preserve the fine-scale texture structures while removing noise. Various natural image priors, such as gradient-based priors, nonlocal self-similarity priors, and sparsity priors, have been extensively exploited for noise removal. The de-noising algorithms based on these priors, however, tend to smooth the detailed image textures, degrading the image visual quality. To address this problem, we propose a texture-enhanced image de-noising (TEID) method that enforces the gradient distribution of the de-noised image to be close to the estimated gradient distribution of the original image. Another method is an alternative de-convolution method called iterative distribution reweighting (IDR), which imposes a global constraint on gradients so that the reconstructed image has a gradient distribution similar to a reference distribution.
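
    A minimal sketch (not the TEID or IDR implementations) of the statistic both methods constrain: the empirical distribution of image gradient magnitudes, which the de-noised or reconstructed image is pushed to match against a reference distribution.

```python
import numpy as np

def gradient_histogram(img, bins=64, max_grad=255.0):
    """Normalized histogram of horizontal and vertical gradient magnitudes."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)         # horizontal finite differences
    gy = np.diff(img, axis=0)         # vertical finite differences
    mags = np.concatenate([np.abs(gx).ravel(), np.abs(gy).ravel()])
    hist, edges = np.histogram(mags, bins=bins, range=(0.0, max_grad), density=True)
    return hist, edges
```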