
    Image Denoising in Mixed Poisson-Gaussian Noise

    We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real low-count fluorescence microscopy images.
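
    The LET principle can be summarized compactly: because the denoiser is parameterized as a linear combination of fixed thresholding functions, an unbiased risk estimate that is quadratic in the weights (SURE or PURE) is minimized by solving a small linear system. A minimal sketch of this structure, leaving the specific thresholding functions F_k and the exact PURE expression unspecified:

        \hat{\mathbf{x}} \;=\; F_{\mathbf{a}}(\mathbf{y}) \;=\; \sum_{k=1}^{K} a_k\, F_k(\mathbf{y}),
        \qquad
        \widehat{\mathrm{MSE}}(\mathbf{a}) \;=\; \mathbf{a}^{\mathsf{T}}\mathbf{M}\,\mathbf{a} \;-\; 2\,\mathbf{a}^{\mathsf{T}}\mathbf{c} \;+\; \mathrm{const},
        \qquad
        \mathbf{a}^{\star} \;=\; \mathbf{M}^{-1}\mathbf{c}

    with M_{kl} = F_k(y)^T F_l(y) and c collecting the data-dependent (noise-model-dependent) linear terms of the unbiased risk estimate; solving this small linear system is what makes the optimization of arbitrary transform-domain thresholding tractable.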

    The SURE-LET approach to image denoising

    Denoising is an essential step prior to any higher-level image-processing task such as segmentation or object tracking, because the undesirable corruption by noise is inherent to any physical acquisition device. When the measurements are performed by photosensors, one usually distinguishes between two main regimes: in the first scenario, the measured intensities are sufficiently high and the noise is assumed to be signal-independent. In the second scenario, only a few photons are detected, which leads to a strong signal-dependent degradation. When the noise is considered signal-independent, it is often modeled as an additive independent (typically Gaussian) random variable, whereas, otherwise, the measurements are commonly assumed to follow independent Poisson laws, whose underlying intensities are the unknown noise-free measurements. We first consider the reduction of additive white Gaussian noise (AWGN). Contrary to most existing denoising algorithms, our approach does not require an explicit prior statistical modeling of the unknown data. Our driving principle is the minimization of a purely data-adaptive unbiased estimate of the mean-squared error (MSE) between the processed and the noise-free data. In the AWGN case, such an MSE estimate was first proposed by Stein and is known as "Stein's unbiased risk estimate" (SURE). We further develop the original SURE theory and propose a general methodology for fast and efficient multidimensional image denoising, which we call the SURE-LET approach. While SURE allows the quantitative monitoring of the denoising quality, the flexibility and the low computational complexity of our approach are ensured by a linear parameterization of the denoising process, expressed as a linear expansion of thresholds (LET). We propose several pointwise, multivariate, and multichannel thresholding functions applied to arbitrary (in particular, redundant) linear transformations of the input data, with a special focus on multiscale signal representations. We then transpose the SURE-LET approach to the estimation of Poisson intensities degraded by AWGN. The signal-dependent specificity of the Poisson statistics leads to the derivation of a new unbiased MSE estimate that we call "Poisson's unbiased risk estimate" (PURE) and requires more adaptive transform-domain thresholding rules. In a general PURE-LET framework, we first devise a fast interscale thresholding method restricted to the use of the (unnormalized) Haar wavelet transform. We then lift this restriction and show how the PURE-LET strategy can be used to design and optimize a wide class of nonlinear processing applied in an arbitrary (in particular, redundant) transform domain. We finally apply some of the proposed denoising algorithms to real multidimensional fluorescence microscopy images. Such an in vivo imaging modality often operates under low-illumination conditions and short exposure times; consequently, the random fluctuations of the measured fluorophore radiation are well described by a Poisson process degraded (or not) by AWGN. We experimentally validate this statistical measurement model and assess the performance of the PURE-LET algorithms in comparison with some state-of-the-art denoising methods. Our solution turns out to be very competitive both qualitatively and computationally, allowing for a fast and efficient denoising of the huge volumes of data that are nowadays routinely produced in biomedical imaging.
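
    To make the SURE-LET mechanics concrete, here is a minimal, hedged sketch in Python/NumPy: two pointwise basis functions (identity and a smooth shrinkage, similar in spirit to those used in the SURE-LET papers) are combined with weights obtained by minimizing Stein's unbiased risk estimate, which reduces to a 2x2 linear system. The function name, the shrinkage shape and the choice T = 3*sigma are illustrative assumptions; the actual algorithms operate on the coefficients of multiscale transforms with richer thresholding functions.

        import numpy as np

        def sure_let_pointwise(y, sigma, T=None):
            # Two pointwise basis functions F_k(y) and their derivatives F_k'(y):
            # F_1 = identity, F_2 = smooth shrinkage (illustrative choice).
            y = np.asarray(y, dtype=float)
            T = 3.0 * sigma if T is None else T
            g = np.exp(-(y / T) ** 8)
            F = [y, y * g]
            dF = [np.ones_like(y), g * (1.0 - 8.0 * (y / T) ** 8)]
            # SURE is quadratic in the weights a:
            #   SURE(a) ~ a^T M a - 2 a^T c + const,
            #   with M_kl = <F_k, F_l> and c_k = <F_k, y> - sigma^2 * sum(F_k').
            M = np.array([[np.vdot(Fk, Fl) for Fl in F] for Fk in F])
            c = np.array([np.vdot(Fk, y) - sigma ** 2 * dFk.sum()
                          for Fk, dFk in zip(F, dF)])
            a = np.linalg.solve(M, c)          # risk-estimate-optimal weights
            return sum(ak * Fk for ak, Fk in zip(a, F))

    Note that the weights are computed from the noisy data alone, which is exactly what allows the quantitative monitoring of the denoising quality without access to the noise-free image.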

    Edge Preservation in Nonlinear Diffusion Filtering

    Ph.D. thesis by Mohammad Reza Hajiaboli, Concordia University, 2012. Image denoising techniques, in which the process of noise diffusion is modeled as a nonlinear partial differential equation, are well known for providing low-complexity solutions to the denoising problem with a minimal amount of image artifacts. In discrete settings of these nonlinear models, the objective of providing good noise removal while preserving the image edges is heavily dependent on the kernels, diffusion functions and the associated contrast parameters employed by these nonlinear diffusion techniques. This thesis makes an in-depth study of the roles of the kernels and contrast parameters with a view to providing an effective solution to the problem of denoising images contaminated with stationary and signal-dependent noise. Within the above unified theme, this thesis has two major parts. In the first part of this study, the impact of the anisotropic behavior of the Laplacian operator on the capability of nonlinear diffusion filters to preserve image edges in different orientations is investigated. Based on this study, an analytical scheme is devised to obtain a spatially-varying kernel that adapts itself to the diffusivity function. The proposed edge-adaptive Laplacian kernel is then incorporated into various nonlinear diffusion filters for denoising of images contaminated by additive white Gaussian noise. The performance optimality of the existing nonlinear diffusion techniques is generally based on the assumption that the noise and signal are uncorrelated. However, in many applications, such as in medical imaging systems and in remote sensing, where the images are degraded by Poisson noise, this assumption is not valid. As such, in the second part of the thesis, a study is undertaken for denoising of images contaminated by Poisson noise within the framework of the Perona-Malik nonlinear diffusion filter. Specifically, starting from a Skellam distribution model of the gradient of Poisson-noise corrupted images and following the diffusion mechanism of the nonlinear filter, a spatially and temporally varying contrast parameter is designed. It is shown that nonlinear diffusion filters employing the new Laplacian kernel support the extremum principle and that the proposed contrast parameter satisfies the sufficient conditions for observance of the scale-space properties. Extensive experiments are performed throughout the thesis to demonstrate the effectiveness and validity of the various schemes and techniques developed in this investigation. The simulation results of applying the new Laplacian kernel to a number of nonlinear diffusion filters show its distinctive advantages over the conventional Rosenfeld and Kak kernel, in terms of the filters' noise reduction and edge preservation capabilities for images corrupted by additive white Gaussian noise. The simulation results of incorporating the proposed spatially- and temporally-varying contrast parameter into the Perona-Malik nonlinear diffusion filter demonstrate a performance much superior to that provided by some of the other state-of-the-art techniques in denoising images corrupted by Poisson noise.
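
    For context, the classical Perona-Malik iteration that the second part builds on can be sketched in a few lines of NumPy. The exponential diffusivity, the fixed contrast parameter kappa and the periodic boundary handling below are illustrative assumptions; the thesis's edge-adaptive Laplacian kernel and its Skellam-based spatially/temporally varying contrast parameter are not reproduced here.

        import numpy as np

        def perona_malik(img, n_iter=20, kappa=20.0, dt=0.2):
            # Classical Perona-Malik filter: 4-neighbour differences, exponential
            # diffusivity g(s) = exp(-(s/kappa)^2) and an explicit update.
            # np.roll gives periodic boundaries, which is acceptable for a sketch.
            u = np.asarray(img, dtype=float).copy()
            for _ in range(n_iter):
                dN = np.roll(u, 1, axis=0) - u
                dS = np.roll(u, -1, axis=0) - u
                dE = np.roll(u, -1, axis=1) - u
                dW = np.roll(u, 1, axis=1) - u
                # Diffusivity is small across strong gradients, so edges are preserved.
                cN, cS = np.exp(-(dN / kappa) ** 2), np.exp(-(dS / kappa) ** 2)
                cE, cW = np.exp(-(dE / kappa) ** 2), np.exp(-(dW / kappa) ** 2)
                # Explicit time step; dt <= 0.25 keeps the scheme stable.
                u += dt * (cN * dN + cS * dS + cE * dE + cW * dW)
            return u

    The contrast parameter kappa is exactly the quantity the thesis replaces with a spatially and temporally varying design for Poisson-corrupted data.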

    Automatic approach for spot detection in microscopy imaging based on image processing and statistical analysis

    In biological research, fluorescence microscopy has become one of the vital tools used for observation, allowing researchers to study, visualise and image the details of intracellular structures, which results in a better understanding of biology. However, analysis of large numbers of samples is often required to draw statistically verifiable conclusions. Automated methods for the analysis of microscopy image data make it possible to handle large datasets and, at the same time, reduce the risk of bias imposed by manual techniques in the image analysis pipeline. This work covers automated methods for extracting quantitative measurements from microscopy images, enabling the detection of spots resulting from different experimental conditions. The work resulted in four main contributions developed around the microscopy image analysis pipeline. Firstly, an investigation into the importance of spot detection within the automated image analysis pipeline is conducted. Experimental findings show that poor spot detection adversely affected the remainder of the processing pipeline... D.Ing. (Electrical and Electronic Engineering)
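
    As an illustration of the kind of detection step such a pipeline depends on, here is a hedged baseline spot detector in Python: Laplacian-of-Gaussian filtering followed by local-maximum selection. This is a standard approach rather than the detector developed in the thesis, and the scale and threshold parameters are arbitrary.

        import numpy as np
        from scipy import ndimage

        def detect_spots(img, sigma=2.0, rel_threshold=0.2):
            img = np.asarray(img, dtype=float)
            # Bright spots on a dark background give strong negative LoG responses.
            response = -ndimage.gaussian_laplace(img, sigma=sigma)
            # Keep pixels that are local maxima and sufficiently strong.
            local_max = ndimage.maximum_filter(response, size=3) == response
            strong = response > rel_threshold * response.max()
            ys, xs = np.nonzero(local_max & strong)
            return np.column_stack([ys, xs])   # (row, column) coordinates of spots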

    Image Restoration

    This book represents a sample of recent contributions by researchers from around the world in the field of image restoration. The book consists of 15 chapters organized in three main sections (Theory, Applications, Interdisciplinarity). The topics cover different aspects of the theory of image restoration, and the book is also an occasion to highlight new research topics that arise from the emergence of original imaging devices. These devices give rise to genuinely challenging problems of image reconstruction/restoration that open the way to new fundamental scientific questions, closely related to the world we interact with.

    Signal processing algorithms for enhanced image fusion performance and assessment

    The dissertation presents several signal processing algorithms for image fusion in noisy multimodal conditions. It introduces a novel image fusion method which performs well for image sets heavily corrupted by noise. As opposed to current image fusion schemes, the method has no requirement for a priori knowledge of the noise component. The image is decomposed with Chebyshev polynomials (CP) used as basis functions to perform fusion at the feature level. The properties of CP, namely fast convergence and smooth approximation, render it ideal for heuristic and indiscriminate denoising-fusion tasks. Quantitative evaluation using objective fusion assessment methods shows favourable performance of the proposed scheme compared to previous efforts on image fusion, notably for heavily corrupted images. The approach is further improved by combining the advantages of CP with a state-of-the-art fusion technique based on independent component analysis (ICA), for joint fusion processing based on region saliency. Whilst CP fusion is robust under severe noise conditions, it is prone to eliminating high-frequency information of the images involved, thereby limiting image sharpness. Fusion using ICA, on the other hand, performs well in transferring edges and other salient features of the input images into the composite output. The combination of both methods, coupled with several mathematical morphological operations in an algorithm fusion framework, is considered a viable solution. Again, according to the quantitative metrics, the results of our proposed approach are very encouraging as far as joint fusion and denoising are concerned. Another focus of this dissertation is a novel metric for image fusion evaluation that is based on texture. The conservation of background textural details is considered important in many fusion applications as they help define the image depth and structure, which may prove crucial in many surveillance and remote sensing applications. Our work aims to evaluate the performance of image fusion algorithms based on their ability to retain textural details through the fusion process. This is done by utilising the gray-level co-occurrence matrix (GLCM) model to extract second-order statistical features for the derivation of an image textural measure, which is then used to replace the edge-based calculations in an objective fusion metric. Performance evaluation on established fusion methods verifies that the proposed metric is viable, especially for multimodal scenarios.
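
    To illustrate the texture measure underlying the proposed metric, here is a hedged NumPy sketch that quantizes an image, builds a gray-level co-occurrence matrix for a single horizontal offset and derives one second-order statistic (contrast). The number of gray levels, the single offset and the choice of the contrast feature are illustrative; the dissertation's metric combines such GLCM-derived features within an objective fusion-assessment framework rather than reporting them directly.

        import numpy as np

        def glcm_contrast(img, levels=16):
            # Quantize the image to a small number of gray levels.
            img = np.asarray(img, dtype=float)
            q = np.floor((img - img.min()) / (img.max() - img.min() + 1e-12) * levels)
            q = np.clip(q.astype(int), 0, levels - 1)
            # Co-occurrence counts for the horizontal neighbour offset (0, 1).
            a, b = q[:, :-1].ravel(), q[:, 1:].ravel()
            glcm = np.zeros((levels, levels))
            np.add.at(glcm, (a, b), 1)
            glcm /= glcm.sum()               # normalize to a joint probability
            # Contrast = sum_{i,j} (i - j)^2 p(i, j): large for pronounced texture.
            i, j = np.indices(glcm.shape)
            return float(np.sum((i - j) ** 2 * glcm))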

    Quantitative analysis of microscopy

    Particle tracking is an essential tool for the study of the dynamics of biological processes. The dynamics of these processes unfold in three-dimensional (3D) space, as the biological structures themselves are 3D. The focus of this thesis is on the development of single particle tracking methods for the analysis of the dynamics of biological processes through the use of image processing techniques. Firstly, a novel particle tracking method that works with two-dimensional (2D) image data is introduced. This method uses the theory of Haar-like features for particle detection, and trajectory linking is achieved using a combination of three Kalman filters within an interacting multiple models framework. The trajectory linking process utilises an extended state-space variable which better describes the morphology and intensity profiles of the particles under investigation at their current position. This tracking method is validated using both 2D synthetically generated images and 2D experimentally collected images. It is shown that this method outperforms 14 other state-of-the-art methods. Next, this method is used to analyse the dynamics of fluorescently labelled particles using a live-cell fluorescence microscopy technique, specifically spt-PALM, a variant of the super-resolution (SR) method PALM. From this application, conclusions are drawn about the organisation of the proteins under investigation at the cell membrane. A second particle tracking method is then introduced, which is highly efficient and capable of working with both 2D and 3D image data. This method uses a novel Haar-inspired feature for particle detection, drawing inspiration from the type of particles to be detected, which are typically circular in 2D image space and spherical in 3D image space. Trajectory linking in this method utilises a global nearest neighbour methodology incorporating both motion models that describe the motion of the particles under investigation and a further extended state-space variable describing many more aspects of the particles to be linked. This method is validated using a variety of both 2D and 3D synthetic image data. The method's performance is compared with that of 14 other state-of-the-art methods, showing it to be one of the best-performing methods overall. Finally, analysis tools are investigated to study an SR image restoration method developed by our research group, referred to as Translation Microscopy (TRAM) [1]. TRAM can be implemented on any standard microscope and deliver an improvement in resolution of up to 7-fold. However, the results from TRAM and other SR imaging methods require specialised tools for validation and analysis. Tools have been developed to validate that TRAM performs correctly, using a specially designed ground truth. Furthermore, analysis of results on a biological sample corroborates other published results on the size of biological structures, showing again that TRAM performs as expected.
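
    As an illustration of one ingredient of the trajectory-linking stage described above, here is a hedged sketch of a single 2D constant-velocity Kalman filter run over an ordered sequence of detections. The noise covariances and the initialization are arbitrary, and the thesis actually combines several such motion models (and a richer, extended state) within an interacting multiple models framework with data association, which is not reproduced here.

        import numpy as np

        def constant_velocity_kalman(detections, dt=1.0, q=1e-2, r=1.0):
            # State [x, y, vx, vy]; only the (x, y) position is observed.
            F = np.array([[1, 0, dt, 0],
                          [0, 1, 0, dt],
                          [0, 0, 1, 0],
                          [0, 0, 0, 1]], dtype=float)
            H = np.array([[1, 0, 0, 0],
                          [0, 1, 0, 0]], dtype=float)
            Q, R = q * np.eye(4), r * np.eye(2)     # process / measurement noise
            x = np.array([*detections[0], 0.0, 0.0], dtype=float)
            P = np.eye(4)
            track = [x[:2].copy()]
            for z in detections[1:]:
                x, P = F @ x, F @ P @ F.T + Q                       # predict
                K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # Kalman gain
                x = x + K @ (np.asarray(z, dtype=float) - H @ x)    # update
                P = (np.eye(4) - K @ H) @ P
                track.append(x[:2].copy())
            return np.array(track)      # filtered positions, one per detection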

    Smart Nanoscopy: A Review of Computational Approaches to Achieve Super-Resolved Optical Microscopy

    The field of optical nanoscopy, a paradigm referring to the recent cutting-edge developments aimed at surpassing the widely acknowledged 200 nm diffraction limit of traditional optical microscopy, has gained prominence and traction in the 21st century. Numerous optical implementations allowing a new frontier of traditional confocal laser scanning fluorescence microscopy to be explored (termed super-resolution fluorescence microscopy) have been realized through the development of techniques such as stimulated emission depletion (STED) microscopy, photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM), amongst others. Nonetheless, it would be apt to mention that optical nanoscopy has been explored since the mid-to-late 20th century through several computational techniques, such as deblurring and deconvolution algorithms. In this review, we take a step back in the field, evaluating the various in silico methods used to achieve optical nanoscopy today, ranging from traditional deconvolution algorithms (such as the Nearest Neighbors algorithm) to the latest developments in the field of computational nanoscopy founded on artificial intelligence (AI). Insight is provided into some of the commercial applications of AI-based super-resolution imaging, prior to delving into the potentially promising future implications of computational nanoscopy. This is facilitated by recent advancements in the fields of AI, deep learning (DL) and convolutional neural network (CNN) architectures, coupled with the growing size of data sources and rapid improvements in computing hardware, such as multi-core CPUs and GPUs, low-latency RAM and hard-drive capacities.
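
    Since the review singles out nearest-neighbor deblurring as a traditional computational route to resolution improvement, here is a hedged sketch of that idea for a focal stack: the out-of-focus haze in each plane is approximated by blurring the two adjacent planes, and a fraction of it is subtracted. The Gaussian defocus model, the weight alpha = 0.45 and the clipping are illustrative assumptions, not values taken from the review.

        import numpy as np
        from scipy import ndimage

        def nearest_neighbor_deblur(stack, sigma=2.0, alpha=0.45):
            # stack: 3D focal stack with axis 0 as the z (focus) axis.
            stack = np.asarray(stack, dtype=float)
            out = np.empty_like(stack)
            for k in range(stack.shape[0]):
                below = stack[max(k - 1, 0)]
                above = stack[min(k + 1, stack.shape[0] - 1)]
                # Approximate the defocused contribution of the adjacent planes
                # with a Gaussian blur (illustrative defocus model).
                haze = (ndimage.gaussian_filter(below, sigma) +
                        ndimage.gaussian_filter(above, sigma))
                # Subtract a fraction of the estimated haze and clip negatives.
                out[k] = np.clip(stack[k] - alpha * 0.5 * haze, 0.0, None)
            return out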