
    Recent Progress in Image Deblurring

    This paper comprehensively reviews recent developments in image deblurring, including non-blind/blind and spatially invariant/variant deblurring techniques. These techniques share the objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally derive an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must provide high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how the ill-posedness, a crucial issue in deblurring tasks, is handled, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite this progress, image deblurring, especially the blind case, remains limited by complex application conditions that make the blur kernel hard to obtain and spatially variant. We provide a holistic understanding of and deep insight into image deblurring in this review. An analysis of the empirical evidence for representative methods and practical issues, as well as a discussion of promising future directions, is also presented. (Comment: 53 pages, 17 figures)
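The simplest setting the review covers, non-blind and spatially invariant deblurring, reduces to deconvolution with a known kernel, and regularization is one standard way to tame the ill-posedness the abstract highlights. As a minimal sketch (not taken from the paper; the function name and the weight `lam` are illustrative), a Tikhonov-regularized inverse filter in the Fourier domain:

```python
import numpy as np

def tikhonov_deblur(blurred, kernel, lam=1e-2):
    """Non-blind deblurring via a Tikhonov-regularized inverse filter.

    blurred : 2-D observed image
    kernel  : blur kernel (PSF), assumed known (the non-blind setting)
    lam     : regularization weight countering the ill-posedness
    """
    H = np.fft.fft2(kernel, s=blurred.shape)   # kernel transfer function
    B = np.fft.fft2(blurred)
    # Regularized inverse filter: conj(H) / (|H|^2 + lam)
    X = np.conj(H) * B / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))
```

A larger `lam` suppresses noise amplification at frequencies where the kernel response is weak, at the cost of a smoother (more biased) estimate; the blind case the review emphasizes would additionally have to estimate `kernel` itself.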

    Segmentation-driven optimization for iterative reconstruction in optical projection tomography: an exploration

    Three-dimensional reconstruction of tomograms from optical projection microscopy is confronted with several drawbacks. In this paper we employ iterative reconstruction algorithms to avoid streak artefacts in the reconstruction and explore ways to optimize two parameters of the algorithms, i.e., iteration number and initialization, in order to improve reconstruction performance. As benchmarks for direct reconstruction evaluation in optical projection tomography are absent, we assess reconstruction quality through the performance of segmentation on the 3D reconstruction. In our explorative experiments we use the zebrafish model system, a typical and frequently used specimen in optical projection tomography, so that data from which a benchmark set can be built are easily obtained. For the segmentation approach we apply a two-dimensional U-net convolutional neural network, which is recognized for its good performance in biomedical image segmentation. To prevent the training from getting stuck in local minima, a novel learning rate schedule is proposed; it achieves a lower training loss during the training process than an optimal constant learning rate. Our experiments demonstrate that benchmarking iterative reconstruction via segmentation results is very useful and contributes an important tool to the development of computational tools for optical projection tomography.
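The abstract does not specify the novel learning rate schedule, so the following is only a generic illustration of the idea of helping an optimizer escape poor local minima: a cosine-annealed schedule with periodic warm restarts (all parameter names and values here are assumptions, not the paper's):

```python
import math

def lr_with_restarts(step, base_lr=1e-3, min_lr=1e-5, cycle=1000):
    """Cosine-annealed learning rate with periodic warm restarts.

    Resetting the rate to base_lr at the start of each cycle gives the
    optimizer repeated chances to jump out of a poor local minimum,
    while the cosine decay within a cycle allows fine convergence.
    """
    t = (step % cycle) / cycle            # position within the current cycle
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * t))
```

Such a schedule would be queried once per training step and fed to the optimizer; a constant learning rate is the special case the paper reports outperforming.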


    A Deconvolution Framework with Applications in Medical and Biological Imaging

    A deconvolution framework is presented in this thesis and applied to several problems in medical and biological imaging. The framework is designed to contain state-of-the-art deconvolution methods, to be easily expandable, and to allow different components to be combined arbitrarily. Deconvolution is an inverse problem, and to cope with its ill-posed nature, suitable regularization techniques and additional restrictions are required. A main objective of deconvolution methods is to restore degraded images acquired by fluorescence microscopy, which has become an important tool in the biological and medical sciences. Fluorescence microscopy images are degraded by out-of-focus blurring and noise, and the deconvolution algorithms used to restore them are usually called deblurring methods. Many deblurring methods proposed in the last decade to restore such images are part of the deconvolution framework. In addition, existing deblurring techniques are improved and new components for the deconvolution framework are developed. A considerable improvement is obtained by combining a state-of-the-art regularization technique with an additional non-negativity constraint. A real biological screen analysing a specific protein in human cells is presented and shows the need to analyse structural information of fluorescence images. Such an analysis requires good image quality, which the deblurring methods aim to restore where it is lacking. For a reliable understanding of cells and cellular processes, high-resolution 3D images of the investigated cells are necessary. However, the ability of fluorescence microscopes to image a cell in 3D is limited, since the resolution along the optical axis is a factor of three worse than the transversal resolution. Standard microscopy image deblurring techniques are able to improve the resolution, but the problem of lower resolution along the optical axis remains.
    It is, however, possible to overcome this problem using axial tomography, which provides tilted views of the object by rotating it under the microscope. The rotated images contain additional information about the object which can be used to improve the resolution along the optical axis. In this thesis, a sophisticated method to reconstruct a high-resolution axial tomography image on the basis of the developed deblurring methods is presented. The deconvolution methods are also used to reconstruct the dose distribution in proton therapy on the basis of measured PET images. Positron emitters are activated by proton beams, but a PET image is not directly proportional to the delivered radiation dose distribution. A PET signal can be predicted by a convolution of the planned dose with specific filter functions. In this thesis, a dose reconstruction method based on PET images which reverses the convolution approach is presented, and the potential to reconstruct the actually delivered dose distribution from measured PET images is investigated. Last but not least, a new denoising method using higher-order statistical information of a given Gaussian noise signal is presented and compared to state-of-the-art denoising methods.
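The thesis's own framework components are not given in this abstract; as an illustration of how a non-negativity constraint can be built into fluorescence-microscopy deblurring, the classic Richardson-Lucy iteration is sketched below (its multiplicative updates keep the estimate non-negative by construction; the function name and defaults are mine, not the thesis's):

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=30):
    """Richardson-Lucy deconvolution.

    Multiplicative updates from a positive start keep every iterate
    non-negative, matching the constraint discussed above.
    """
    H = np.fft.fft2(psf, s=observed.shape)

    def conv(x, transfer):                # circular convolution via FFT
        return np.real(np.fft.ifft2(np.fft.fft2(x) * transfer))

    est = np.full_like(observed, observed.mean())   # flat positive start
    for _ in range(n_iter):
        ratio = observed / np.maximum(conv(est, H), 1e-12)
        est = est * conv(ratio, np.conj(H))         # correlation step
    return est
```

In contrast, an unconstrained regularized inverse can produce negative intensities, which are unphysical for photon counts; this is one reason such constraints measurably improve fluorescence image restoration.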

    Hyperspectral Tomographic FTIR Imaging Using Two Illumination Geometries for Polymer Phantoms

    The purpose of this dissertation is to carry out non-destructive 3D imaging by applying Fourier Transform Infrared (FTIR) spectro-microtomographic techniques, and to develop corresponding methods of data analysis. This is done by collecting 3D synchrotron-based and lab-based (thermal) FTIR hyperspectral data at the Synchrotron Radiation Center (SRC) for the first time. Unlike 2D imaging techniques, this approach does not manipulate the sample, and it suppresses the need to microtome 3D biological, material, or biomedical samples into slices for study by spectroscopic imaging techniques. Spectro-microtomography is applicable to scientific, industrial, energy, and biomedical samples, such as stem cell characterization and materials such as polymers. Tomographic reconstruction methods are applied to the data to investigate chemical and morphological localization, and to obtain the average spectra of regions of interest as well as spectra for every voxel. It is assumed that the thermal light has cone geometry, so the data collected with it needs cone beam reconstruction, whereas the data collected using synchrotron light requires parallel beam reconstruction, since the beam waist created by the focus at IR wavelengths of the synchrotron beams can be approximated well by a parallel beam. While bright synchrotron light provides higher-SNR data, the capability of performing, processing, and analyzing FTIR spectro-microtomography using thermal light is highly significant, since thermal sources are more readily available. In this study the cone beam reconstruction is implemented and evaluated by applying it to phantoms such as centered and off-center polystyrene beads, and to samples of mixed polymers. The results show that the cone beam reconstruction does not improve the quality of the reconstruction, and the parallel beam reconstruction remains better.
    The cone beam is not capable of modelling the optical system of our imaging environment, and the half cone beam angle is small enough to be considered a parallel beam. Furthermore, the application of the cone beam is limited by the size of the sample. For further analysis of the 3D reconstructed volumes of the samples, specific signal processing tools are required. A deconvolution algorithm is applied to the 2D projections at all wavelengths before reconstruction to increase the image contrast and spectral fidelity, deblur the projections, and ultimately increase the contrast of the 3D images. Segmentation methods are implemented for defining the regions of interest in the 3D structures; these are used for average spectrum computation, a necessary tool of spectral analysis. The techniques developed here employ thresholding and k-means clustering, and are capable of calculating the average spectra of the components found in the data as well as their corresponding renderings.
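The segmentation-then-average-spectrum step can be illustrated with a small k-means sketch over voxel spectra (plain NumPy; the dissertation's actual implementation and parameters are not given in this abstract, so everything below is an assumed minimal version):

```python
import numpy as np

def kmeans_spectra(cube, k=2, n_iter=20, seed=0):
    """Cluster voxel spectra with plain k-means.

    cube : (n_voxels, n_bands) array of spectra
    Returns per-voxel labels and the cluster centers, which are exactly
    the average spectra of the segmented components.
    """
    rng = np.random.default_rng(seed)
    centers = cube[rng.choice(len(cube), k, replace=False)]
    for _ in range(n_iter):
        # assign each voxel spectrum to its nearest cluster center
        d = np.linalg.norm(cube[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():       # guard against empty clusters
                centers[j] = cube[labels == j].mean(axis=0)
    return labels, centers
```

Thresholding (mentioned alongside k-means) would instead segment on a single-band or integrated-intensity image before averaging spectra within each region.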

    TRIPs-Py: Techniques for Regularization of Inverse Problems in Python

    In this paper, we describe TRIPs-Py, a new Python package of linear discrete inverse problem solvers and test problems. The goal of the package is two-fold: 1) to provide tools for solving small- and large-scale inverse problems, and 2) to introduce test problems arising from a wide range of applications. The solvers available in TRIPs-Py include direct regularization methods (such as truncated singular value decomposition and Tikhonov) and iterative regularization techniques (such as Krylov subspace methods and recent solvers for ℓp-ℓq formulations, which enforce sparse or edge-preserving solutions and handle different noise types). All our solvers have built-in strategies to define the regularization parameter(s). Some of the test problems in TRIPs-Py arise from simulated image deblurring and computerized tomography, while other test problems model realistic problems in dynamic computerized tomography. Numerical examples are included to illustrate the usage as well as the performance of the described methods on the provided test problems. To the best of our knowledge, TRIPs-Py is the first Python software package of this kind, and it may serve both research and didactical purposes. (Comment: 27 pages, 10 figures, 3 tables)
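The two direct methods named in the abstract, truncated SVD and Tikhonov, can be sketched in a few lines of plain NumPy. These sketches are not TRIPs-Py's API (which is not reproduced here); they only illustrate the underlying linear algebra:

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated SVD regularization: discard all but the k largest
    singular values, filtering out noise-amplifying small ones."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

def tikhonov_solve(A, b, lam):
    """Tikhonov regularization: argmin ||Ax - b||^2 + lam^2 ||x||^2,
    solved via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
```

The package's built-in strategies for choosing `k` or `lam` (e.g. automatically rather than by hand, as done here) are precisely what distinguishes a usable solver library from these bare formulas; forming the normal equations explicitly, as above, is also only practical at small scale.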

    Non-Standard Imaging Techniques

    The first objective of the thesis is to investigate the problem of reconstructing a small-scale object (a few millimeters or smaller) in 3D. In Chapter 3, we show how this problem can be solved effectively by a new multifocus multiview 3D reconstruction procedure which includes a new fixed-lens multifocus image capture and a calibrated image registration technique using analytic homography transformation. Experimental results using real and synthetic images demonstrate the effectiveness of the proposed solutions by showing that both the fixed-lens image capture and multifocus stacking with calibrated image alignment significantly reduce the errors in the camera poses and produce more complete 3D reconstructed models compared with the conventional moving-lens image capture and multifocus stacking. The second objective of the thesis is modelling the dual-pixel (DP) camera. In Chapter 4, to understand the potential of the DP sensor for computer vision applications, we study the formation of the DP pair, which links the blur and the depth information. A mathematical DP model is proposed which can benefit depth estimation by the blur. These explorations motivate us to propose an end-to-end DDDNet (DP-based Depth and Deblur Network) to jointly estimate the depth and restore the image. Moreover, we define a reblur loss, which reflects the relationship of the DP image formation process with depth information, to regularize our depth estimate in training. To meet the requirement of a large amount of data for learning, we propose the first DP image simulator, which allows us to create datasets with DP pairs from any existing RGBD dataset. As a side contribution, we collect a real dataset for further research. Extensive experimental evaluation on both synthetic and real datasets shows that our approach achieves competitive performance compared to state-of-the-art approaches.
    Another (third) objective of this thesis is to tackle the multifocus image fusion problem, particularly for long multifocus image sequences. Multifocus image stacking/fusion produces an in-focus image of a scene from a number of partially focused images of that scene in order to extend the depth of field. One limitation of current state-of-the-art multifocus fusion methods is that they do not consider image registration/alignment before fusion; consequently, fusing unregistered multifocus images produces an in-focus image containing misalignment artefacts. In Chapter 5, we propose image registration by projective transformation before fusion to remove the misalignment artefacts. We also propose a method based on 3D deconvolution to retrieve the in-focus image by formulating the multifocus image fusion problem as a 3D deconvolution problem. The proposed method achieves superior performance compared to the state-of-the-art methods. It is also shown that the proposed projective transformation for image registration can improve the quality of the fused images. Moreover, we implement a multifocus simulator to generate synthetic multifocus data from any RGB-D dataset. The fourth objective of this thesis is to explore new ways to detect the polarization state of light. To achieve this objective, in Chapter 6, we investigate a new optical filter, namely an optical rotation filter, for detecting the polarization state with fewer images. The proposed method can estimate the polarization state using two images, one with the filter and another without. The accuracy of estimating the polarization parameters using the proposed method is comparable to that of the existing state-of-the-art method. In addition, the feasibility of detecting the polarization state using only one RGB image captured with the optical rotation filter is also demonstrated by estimating the image without the filter from the image with the filter using a generative adversarial network.
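Chapter 5 formulates fusion as a 3D deconvolution problem; a much simpler baseline, shown here only to make the multifocus fusion task concrete, picks for each pixel the image in the (already registered) stack with the highest local sharpness. The sharpness measure and function name are my choices, not the thesis's method:

```python
import numpy as np

def fuse_multifocus(stack):
    """Fuse a registered multifocus stack by selecting, per pixel, the
    image with the highest local sharpness (Laplacian magnitude).

    stack : (n_images, H, W) array, assumed already aligned
    """
    def laplacian(img):
        # 4-neighbour discrete Laplacian with edge padding
        p = np.pad(img, 1, mode="edge")
        return np.abs(p[:-2, 1:-1] + p[2:, 1:-1]
                      + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img)

    sharp = np.stack([laplacian(im) for im in stack])
    best = sharp.argmax(axis=0)            # sharpest image index per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```

Such per-pixel selection is exactly where misalignment artefacts arise when registration is skipped, which motivates the projective registration step proposed in the chapter.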

    Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data

    This dissertation addresses the components necessary for simulation of an image-based recovery of the position of a target using orbital image sensors. Each component is considered in detail, focusing on the effect that design choices and system parameters have on the accuracy of the position estimate. Changes in sensor resolution, varying amounts of blur, differences in image noise level, selection of algorithms used for each component, and lag introduced by excessive processing time all contribute to the accuracy of the recovered target coordinates. Using physical targets and sensors in this scenario would be cost-prohibitive in the exploratory setting posed; therefore, a simulated target path is generated using Bezier curves which approximate representative paths followed by the targets of interest. Orbital trajectories for the sensors are designed on an elliptical model representative of the motion of physical orbital sensors. Images from each sensor are simulated based on the position and orientation of the sensor, the position of the target, and the imaging parameters selected for the experiment (resolution, noise level, blur level, etc.). Post-processing of the simulated imagery seeks to reduce noise and blur and increase resolution. The only information available for calculating the target position by a fully implemented system are the sensor position and orientation vectors and the images from each sensor. From these data we develop a reliable method of recovering the target position and analyze the impact on near-realtime processing. We also discuss the influence of adjustments to system components on overall capabilities and address the potential system size, weight, and power requirements of realistic implementation approaches.
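Sampling a Bezier curve to produce a simulated target path, as described above, can be done with De Casteljau's repeated linear interpolation. This is a generic sketch of the standard construction, not the dissertation's code; the control points and sample count are whatever the experiment requires:

```python
import numpy as np

def bezier_path(control_points, n_samples=100):
    """Sample a Bezier curve of any degree via De Casteljau's algorithm.

    control_points : (m, dims) array; the curve starts at the first
    point, ends at the last, and is shaped by those in between.
    """
    pts = np.asarray(control_points, dtype=float)
    out = []
    for t in np.linspace(0.0, 1.0, n_samples):
        p = pts.copy()
        while len(p) > 1:                 # repeated linear interpolation
            p = (1 - t) * p[:-1] + t * p[1:]
        out.append(p[0])
    return np.array(out)
```

A handful of control points then yields a smooth, representative trajectory that the simulated sensors can image at each time step.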

    Digital Image Processing

    Newspapers and the popular scientific press today publish many examples of highly impressive images. These images range, for example, from those showing regions of star birth in the distant Universe to the extent of the stratospheric ozone depletion over Antarctica in springtime, and to those regions of the human brain affected by Alzheimer’s disease. Processed digitally to generate spectacular images, often in false colour, they all make an immediate and deep impact on the viewer’s imagination and understanding. Professor Jonathan Blackledge’s erudite but very useful new treatise Digital Image Processing: Mathematical and Computational Methods explains both the underlying theory and the techniques used to produce such images in considerable detail. It also provides many valuable example problems - and their solutions - so that the reader can test his/her grasp of the physical, mathematical and numerical aspects of the particular topics and methods discussed. As such, this magnum opus complements the author’s earlier work Digital Signal Processing. Both books are a wonderful resource for students who wish to make their careers in this fascinating and rapidly developing field which has an ever increasing number of areas of application. The strengths of this large book lie in:
    • an excellent explanatory introduction to the subject;
    • thorough treatment of the theoretical foundations, dealing with both electromagnetic and acoustic wave scattering and allied techniques;
    • comprehensive discussion of all the basic principles and the mathematical transforms (e.g. the Fourier and Radon transforms), their interrelationships and, in particular, Born scattering theory and its application to imaging systems modelling;
    • detailed discussion - including the assumptions and limitations - of optical imaging, seismic imaging, medical imaging (using ultrasound), X-ray computer-aided tomography, tomography when the wavelength of the probing radiation is of the same order as the dimensions of the scatterer, Synthetic Aperture Radar (airborne or spaceborne), digital watermarking and holography;
    • detail devoted to the methods of implementation of the analytical schemes in various case studies and also as numerical packages (especially in C/C++);
    • coverage of deconvolution, de-blurring (or sharpening) an image, maximum entropy techniques, Bayesian estimators, techniques for enhancing the dynamic range of an image, methods of filtering images and techniques for noise reduction;
    • discussion of thresholding, techniques for detecting edges in an image and for contrast stretching, stochastic scattering (random walk models) and models for characterizing an image statistically;
    • investigation of fractal images, fractal dimension segmentation, image texture, the coding and storing of large quantities of data, and image compression such as JPEG;
    • a valuable summary of the important results obtained in each chapter, given at its end;
    • suggestions for further reading at the end of each chapter.
    I warmly commend this text to all readers, and trust that they will find it to be invaluable. Professor Michael J Rycroft, Visiting Professor at the International Space University, Strasbourg, France, and at Cranfield University, England.