153 research outputs found

    Efficient Methodologies for Single-Image Blind Deconvolution and Deblurring


    Development of bioinformatics tools to track cancer cell invasion using 3D in vitro invasion assays

    Integrated master's dissertation. Electrical and Computer Engineering. Faculdade de Engenharia, Universidade do Porto. 200

    Blind Image Deconvolution using Approximate Greatest Common Divisor and Approximate Polynomial Factorisation

    Images play a significant and important role in diverse areas of everyday modern life. Examples of the areas where the use of images is routine include medicine, forensic investigations, engineering applications and astronomical science. The procedures and methods that depend on image processing would benefit considerably from images that are free of blur. Most images are unfortunately affected by noise and blur that result from the practical limitations of image sourcing systems. The blurring and noise effects render the image less useful. An efficient method for image restoration is hence important for many applications. Restoration of true images from blurred images is the inverse of the naturally occurring problem of true image convolution through a blurring function. The deconvolution of images from blurred images is a non-trivial task. One challenge is that the computation of the mathematical function that represents the blurring process, which is known as the point spread function (PSF), is an ill-posed problem, i.e. an infinite number of solutions are possible for given inexact data. The blind image deconvolution (BID) problem is the central subject of this thesis. There are a number of approaches for solving the BID problem, including statistical methods and linear algebraic methods. The approach adopted in this research study for solving this problem falls within the class of linear algebraic methods. Polynomial linear algebra offers a way of computing the PSF size and its components without requiring any prior knowledge about the true image and the blurring PSF. This research study has developed a BID method for image restoration based on the approximate greatest common divisor (AGCD) algorithms, specifically, the approximate polynomial factorization (APF) algorithm of two polynomials. The developed method uses the Sylvester resultant matrix algorithm in the computation of the AGCD and the QR decomposition for computing the degree of the AGCD. It is shown that the AGCD is equal to the PSF and the deblurred image can be computed from the coprime polynomials. In practice, the PSF can be spatially variant or invariant. PSF spatial invariance means that the blurred image pixels are the convolution of the true image pixels and the same PSF. Some of the PSF bivariate functions, in particular, separable functions, can be further simplified as the multiplication of two univariate polynomials. This research study is focused on the invariant separable and non-separable PSF cases. The performance of state-of-the-art image restoration methods varies in terms of computational speed and accuracy. In addition, most of these methods require prior knowledge about the true image and the blurring function, which in a significant number of applications is an impractical requirement. The development of image restoration methods that require no prior knowledge about the true image and the blurring functions is hence desirable. Previous attempts at developing BID methods resulted in methods that have a robust performance against noise perturbations; however, their good performance is limited to blurring functions of small size. In addition, even for blurring functions of small size, these methods require the size of the blurring functions to be known and an estimate of the noise level to be present in the blurred image. 
The developed method outperforms the other state-of-the-art methods; in particular, it determines the correct size and coefficients of the PSF and then uses them to recover the original image. It does not require any prior knowledge about the PSF, which is a prerequisite for all the other methods.
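
The polynomial formulation behind this approach can be illustrated with a small one-dimensional example. The sketch below is hypothetical and noise-free: it uses sympy's exact polynomial GCD rather than the thesis's approximate GCD computed with the Sylvester resultant matrix and QR decomposition, and the signals and PSF coefficients are invented for the demonstration.

```python
# A minimal, hypothetical 1-D sketch of the polynomial-GCD idea behind this
# approach: two signals blurred by the same PSF share it as a common
# polynomial factor, so their GCD recovers it. Exact arithmetic only; the
# thesis instead computes an approximate GCD via the Sylvester resultant.
import numpy as np
import sympy as sp

z = sp.symbols("z")

def to_poly(seq):
    """Interpret a 1-D coefficient sequence as a polynomial in z."""
    return sp.Poly([int(c) for c in seq], z)

# Made-up true signals and an unknown PSF, chosen purely for illustration.
x1 = np.array([1, 3, 2, 5])
x2 = np.array([2, 1, 4, 1])
psf = np.array([1, 2, 1])            # the blur to be recovered blindly

# Blurring is convolution, i.e. multiplication of the coefficient polynomials.
b1 = np.convolve(x1, psf)
b2 = np.convolve(x2, psf)

# The GCD of the two blurred polynomials exposes the common PSF factor.
g = sp.gcd(to_poly(b1), to_poly(b2))
print("recovered PSF:", g.all_coeffs())          # -> [1, 2, 1]

# Deblurring: divide the recovered factor back out (exact here; with noise
# the AGCD/APF machinery replaces this exact division).
x1_rec, rem = sp.div(to_poly(b1), g)
print("recovered signal:", x1_rec.all_coeffs())  # -> [1, 3, 2, 5]
```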

    A Computational Framework for the Structural Change Analysis of 3D Volumes of Microscopic Specimens

    Glaucoma, commonly observed with an elevation in the intraocular pressure (IOP) level, is one of the leading causes of blindness. The lamina cribrosa is a mesh-like structure that provides axonal support for the optic nerves leaving the eye. Changes in the laminar structure under IOP elevations may result in the death of retinal ganglion cells, leading to vision degradation and loss. We have developed a comprehensive computational framework that can assist the study of structural changes in microscopic structures such as the lamina cribrosa. The optical sectioning property of a confocal microscope facilitates imaging thick microscopic specimens at various depths without physical sectioning; the resulting images are referred to as optical sections. The computational framework includes: 1) a multi-threaded system architecture for tracking a volume-of-interest within a microscopic specimen in a parallel computation environment, using reliable multicast for collective-communication operations; 2) a Karhunen-Loève (KL) expansion based adaptive noise prefilter for the restoration of the optical sections using an inverse restoration method; 3) a morphological-operator-based ringing metric to quantify the ringing artifacts introduced during iterative restoration of optical sections; 4) an l2-norm-based error metric to evaluate the performance of optical flow algorithms without a priori knowledge of the true motion field; and 5) a Compute-and-Propagate (CNP) framework for iterative optical flow algorithms. The real-time tracking architecture can convert a 2D confocal microscope into a 4D confocal microscope with tracking. The adaptive KL filter is suitable for real-time restoration of optical sections. The CNP framework significantly improves the speed and convergence of iterative optical flow algorithms, and it can reduce errors in the motion field estimates due to the aperture problem. The performance of the proposed framework is demonstrated on real-life image sequences and on z-stack datasets of random cotton fibers and the lamina cribrosa of a cow retina with experimentally induced glaucoma. The proposed framework can be used for routine laboratory and clinical investigation of microstructures such as cells and tissues and for the evaluation of complex structures such as the cornea, and it has potential use as a surgical guidance tool.
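
As an illustration of one listed component, the sketch below shows a generic Karhunen-Loève (PCA) patch denoiser: project patches onto the leading eigenvectors of their covariance and discard the noise-dominated remainder. The patch size and retained-variance fraction are assumptions, and this is a textbook construction rather than the framework's adaptive prefilter.

```python
# Generic Karhunen-Loeve (PCA) patch denoising sketch; the 8x8 patches and
# 90% retained variance are illustrative assumptions.
import numpy as np

def kl_denoise(image, patch=8, keep=0.90):
    """Truncate the KL expansion of non-overlapping patches of `image`."""
    h, w = (d - d % patch for d in image.shape)
    # Collect non-overlapping patches as row vectors.
    blocks = (image[:h, :w]
              .reshape(h // patch, patch, w // patch, patch)
              .transpose(0, 2, 1, 3)
              .reshape(-1, patch * patch))
    mean = blocks.mean(axis=0)
    centered = blocks - mean
    # Eigen-decomposition of the patch covariance gives the KL basis.
    evals, evecs = np.linalg.eigh(centered.T @ centered / len(blocks))
    evals, evecs = evals[::-1], evecs[:, ::-1]          # descending order
    # Keep just enough components to explain `keep` of the variance.
    k = int(np.searchsorted(np.cumsum(evals) / evals.sum(), keep)) + 1
    basis = evecs[:, :k]
    denoised = (centered @ basis) @ basis.T + mean
    # Reassemble the denoised patches into an image.
    return (denoised.reshape(h // patch, w // patch, patch, patch)
            .transpose(0, 2, 1, 3)
            .reshape(h, w))

noisy = np.random.rand(256, 256) + 0.1 * np.random.randn(256, 256)
restored = kl_denoise(noisy)
```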

    Motion blur in digital images - analysis, detection and correction of motion blur in photogrammetry

    Unmanned aerial vehicles (UAVs) have become an interesting and active research topic for photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution due to the low flight altitudes combined with a high-resolution camera. UAV image flights are also cost-effective and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time consuming and prone to error, particularly for large image sets. To increase the quality of data processing an automated process is necessary, which must be both reliable and quick. This thesis demonstrates the negative effect that blurred images have on photogrammetric processing. It shows that small amounts of blur have serious impacts on target detection and slow down processing due to the requirement of human intervention. Larger blur can make an image completely unusable, and such images need to be excluded from processing. To exclude images from large image datasets an algorithm was developed. The newly developed method makes it possible to detect blur caused by linear camera displacement. The method is based on how humans detect blur: an image is best judged by comparing it to other images in order to establish whether it is blurred or not. The developed algorithm simulates this procedure by creating an image for comparison using image processing. Creating a comparable image internally makes the method independent of additional images. However, the calculated blur value, named SIEDS (saturation image edge difference standard deviation), does not on its own provide an absolute number to judge whether an image is blurred or not. To achieve a reliable judgement of image sharpness, the SIEDS value has to be compared to other SIEDS values of the same dataset. This algorithm enables the exclusion of blurred images and subsequently allows photogrammetric processing without them. However, it is also possible to use deblurring techniques to restore blurred images. Deblurring of images is a widely researched topic and is often based on the Wiener or Richardson-Lucy deconvolution, which require precise knowledge of both the blur path and extent. Even with knowledge of the blur kernel, the correction causes errors such as ringing, and the deblurred image appears muddy and not completely sharp. In the study reported here, overlapping images are used to support the deblurring process. An algorithm based on the Fourier transform is presented. This works well in flat areas, but the need for geometrically correct sharp images for deblurring may limit the application. Another method to enhance the image is the unsharp mask method, which improves images significantly and makes photogrammetric processing more successful. However, deblurring of images needs to focus on geometrically correct deblurring to assure geometrically correct measurements.
Furthermore, a novel edge-shifting approach was developed which aims at geometrically correct deblurring. The idea of edge shifting appears promising but requires more advanced programming.
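
The comparison-based detection idea can be sketched as a simple no-reference blur score: re-blur the image and measure how much its edge response changes, since an already-blurred image changes far less than a sharp one. The Gaussian re-blur and Sobel edge measure below are assumptions for illustration, not the thesis's SIEDS computation.

```python
# Illustrative no-reference blur score in the spirit of comparing an image to
# a processed copy of itself; not the thesis's SIEDS algorithm.
import numpy as np
from scipy import ndimage

def blur_score(gray):
    """Higher score suggests a sharper image; compare only within one dataset."""
    reblurred = ndimage.gaussian_filter(gray, sigma=2.0)
    # Edge strength (gradient magnitude) of the original and re-blurred image.
    edges = np.hypot(ndimage.sobel(gray, axis=0), ndimage.sobel(gray, axis=1))
    edges_rb = np.hypot(ndimage.sobel(reblurred, axis=0),
                        ndimage.sobel(reblurred, axis=1))
    # A sharp image loses much more edge energy when re-blurred.
    return float(np.std(edges - edges_rb))
```

As with SIEDS, such a score is only meaningful relative to the scores of other images from the same dataset, not as an absolute threshold.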

    An Efficient Image Segmentation Approach through Enhanced Watershed Algorithm

    Image segmentation is a significant task in image analysis, sitting at the middle layer of image engineering. The purpose of segmentation is to decompose the image into parts that are meaningful with respect to a particular application. The proposed system boosts the morphological watershed method for degraded images. The proposed algorithm merges the morphological watershed result with an enhanced edge-detection result obtained by pre-processing the degraded images. As a post-processing step, a color histogram algorithm is applied to each of the segmented regions obtained, enhancing the overall performance of the watershed algorithm. Keywords – Segmentation, watershed, color histogram
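
A generic marker-based watershed pipeline along these lines (pre-filtering the degraded image, building a gradient relief, then flooding from morphological markers) might look like the sketch below; it is illustrative only, omits the color-histogram post-processing step, and the specific filters are assumptions rather than the paper's algorithm.

```python
# Illustrative marker-based watershed pipeline; filter choices are assumptions
# and the color-histogram post-processing described above is omitted.
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, morphology, segmentation

def watershed_segment(gray):
    # Suppress degradation/noise before computing the gradient relief.
    smoothed = ndi.median_filter(gray, size=3)
    gradient = filters.sobel(smoothed)
    # Markers from local minima of the gradient limit over-segmentation.
    markers, _ = ndi.label(morphology.local_minima(gradient))
    return segmentation.watershed(gradient, markers)
```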

    High-Throughput Image Analysis of Zebrafish Models of Parkinson’s Disease


    Modulation Transfer Function Compensation Through A Modified Wiener Filter For Spatial Image Quality Improvement.

    The usefulness of image data acquired from an imaging sensor critically depends on the ability of the sensor to resolve spatial details to an acceptable level.
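
The underlying (unmodified) Wiener-style MTF compensation can be sketched in the frequency domain as below; the Gaussian MTF model and the noise-to-signal constant K are illustrative assumptions, not the paper's modified filter.

```python
# Standard Wiener-style MTF compensation sketch; the Gaussian MTF and the
# constant K are assumptions, not the modified filter proposed in this work.
import numpy as np

def wiener_mtf_compensate(image, mtf_centered, K=0.01):
    """image: 2-D array; mtf_centered: MTF sampled with zero frequency at the centre."""
    G = np.fft.fft2(image)
    H = np.fft.ifftshift(mtf_centered)        # move zero frequency to the corner
    # Wiener filter: conj(H) / (|H|^2 + K); K limits noise amplification where H is small.
    W = np.conj(H) / (np.abs(H) ** 2 + K)
    return np.real(np.fft.ifft2(G * W))

# Example with a hypothetical Gaussian sensor MTF.
n = 256
fy, fx = np.meshgrid(np.linspace(-0.5, 0.5, n), np.linspace(-0.5, 0.5, n), indexing="ij")
mtf = np.exp(-(fx**2 + fy**2) / (2 * 0.15**2))
restored = wiener_mtf_compensate(np.random.rand(n, n), mtf)
```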

    PSF Sampling in Fluorescence Image Deconvolution

    All microscope imaging is largely affected by inherent resolution limitations caused by out-of-focus light and diffraction effects. The traditional approach to restoring image resolution is to use a deconvolution algorithm to “invert” the effect of convolving the volume with the point spread function. However, these algorithms fall short in several areas, such as noise amplification and the choice of stopping criterion. In this paper, we reconstruct an explicit volumetric representation of the fluorescence density in the sample and fit a neural network to the target z-stack to minimize a reconstruction cost function for an optimal result. Additionally, we use a weighted sampling of the point spread function to avoid unnecessary computations and to prioritize non-zero signals. In a baseline comparison against the Richardson-Lucy method, our algorithm outperforms RL for images affected by high levels of noise.
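
For reference, the Richardson-Lucy baseline mentioned above can be sketched with the textbook multiplicative update; this is not the paper's neural-network reconstruction, and the PSF normalisation and iteration count below are assumptions.

```python
# Textbook Richardson-Lucy deconvolution, used here only as the baseline the
# paper compares against; iteration count and PSF handling are assumptions.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    blurred = np.asarray(blurred, dtype=float)
    psf = psf / psf.sum()                      # normalised PSF
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)    # guard against division by zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

More iterations sharpen the estimate but also amplify noise, which is the stopping-criterion problem noted in the abstract.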