58 research outputs found

    Convolutional Deblurring for Natural Imaging

    In this paper, we propose a novel design of image deblurring in the form of one-shot convolution filtering that can directly convolve with naturally blurred images for restoration. Optical blurring is a common drawback in many imaging applications that suffer from optical imperfections. Numerous deconvolution methods blindly estimate the blur in either inclusive or exclusive forms, but they are practically challenging due to high computational cost and low image reconstruction quality. Both high accuracy and high speed are prerequisites for high-throughput imaging platforms in digital archiving, where deblurring is required after image acquisition and before images are stored, previewed, or processed for high-level interpretation. On-the-fly correction of such images is therefore important to avoid time delays, mitigate computational expense, and increase perceived image quality. We bridge this gap by synthesizing a deconvolution kernel as a linear combination of Finite Impulse Response (FIR) even-derivative filters that can be directly convolved with blurry input images to boost the frequency fall-off of the Point Spread Function (PSF) associated with the optical blur. We employ a Gaussian low-pass filter to decouple the image denoising problem from image edge deblurring. Furthermore, we propose a blind approach to estimate the PSF statistics for two models, Gaussian and Laplacian, that are common in many imaging pipelines. Thorough experiments are designed to test and validate the efficiency of the proposed method using 2054 naturally blurred images across six imaging applications and seven state-of-the-art deconvolution methods. Comment: 15 pages, for publication in IEEE Transactions on Image Processing.
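
    As a concrete illustration of the core idea, the sketch below synthesizes a deblurring kernel from even-derivative FIR filters, under my own simplifying assumptions rather than the paper's exact design: the PSF is Gaussian with known sigma, the inverse frequency response exp(sigma^2 |w|^2 / 2) is truncated to a few Taylor terms, and each |w|^(2k) factor is realised by iterating the discrete 5-point Laplacian.

        import numpy as np
        from math import factorial
        from scipy.signal import convolve2d
        from scipy.ndimage import gaussian_filter

        def even_derivative_deblur_kernel(sigma, K=4):
            """FIR approximation of exp(sigma^2 |w|^2 / 2) from even derivatives."""
            n = 2 * K + 1
            kernel = np.zeros((n, n)); kernel[K, K] = 1.0              # k = 0 term: delta
            term = kernel.copy()
            lap = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)  # ~ -|w|^2
            for k in range(1, K + 1):
                term = convolve2d(term, lap, mode="same")              # ~ (-|w|^2)^k
                kernel += (-sigma**2 / 2) ** k / factorial(k) * term
            return kernel

        # One-shot restoration: a single convolution with the synthesized kernel.
        sigma = 1.0
        sharp = np.kron(np.eye(8), np.ones((8, 8)))                    # toy test pattern
        blurred = gaussian_filter(sharp, sigma)
        restored = convolve2d(blurred, even_derivative_deblur_kernel(sigma), mode="same")
        print(np.abs(blurred - sharp).mean(), np.abs(restored - sharp).mean())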

    Coded exposure photography: motion deblurring using fluttered shutter

    In a conventional single-exposure photograph, moving objects or moving cameras cause motion blur. The exposure time defines a temporal box filter that smears the moving object across the image by convolution. This box filter destroys important high-frequency spatial details, so deblurring via deconvolution becomes an ill-posed problem. Rather than leaving the shutter open for the entire exposure duration, we "flutter" the camera's shutter open and closed during the chosen exposure time with a binary pseudo-random sequence. The flutter changes the box filter to a broad-band filter that preserves high-frequency spatial details in the blurred image, and the corresponding deconvolution becomes a well-posed problem. We demonstrate that manually-specified point spread functions are sufficient for several challenging cases of motion-blur removal including extremely large motions, textured backgrounds and partial occluders. ACM Transactions on Graphics (TOG).
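
    A small numerical check makes the well-posedness argument concrete: the sketch below compares the magnitude response of a plain box shutter with that of a binary pseudo-random flutter code of the same length. The random code and chop count here are illustrative; the paper selects its binary sequence deliberately rather than sampling it at random.

        import numpy as np

        n = 52                                         # number of shutter chops (illustrative)
        rng = np.random.default_rng(1)
        box = np.ones(n)                               # conventional exposure: temporal box filter
        flutter = rng.integers(0, 2, n).astype(float)  # pseudo-random open/close sequence

        for name, code in [("box", box), ("flutter", flutter)]:
            mag = np.abs(np.fft.rfft(code, 1024))
            # Near-zeros in |H(w)| are what make deconvolution ill-posed.
            print(f"{name:8s} min |H(w)| = {mag.min():.4f}")

    The box filter's spectrum has deep sinc nulls, while a well-chosen broad-band code keeps |H(w)| bounded away from zero, so the inverse filter does not explode at any frequency.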

    Partially Coherent Lab Based X-ray Micro Computed Tomography

    X-ray micro computed tomography (CT) is a useful tool for imaging 3-D internal structures, with many applications in geophysics, biology and materials science. Currently, micro-CT's capabilities are limited by the validity of the assumptions used to model the machines' physical properties, such as penumbral blurring due to a non-point source, and X-ray refraction. Much CT research into algorithms and models is therefore being carried out to overcome these limitations. This thesis presents methods to improve image resolution and noise, and to enable material property estimation, for the micro-CT machines developed and in use at the ANU CTLab. The thesis is divided into five chapters, as outlined below.

    The broad background topics of X-ray modelling and CT reconstruction are explored in Chapter 1, as required by later chapters. It describes each X-ray CT component, including the machines used at the ANU CTLab. The mathematical and statistical tools and the electromagnetic physical models are provided and used to characterise the scalar X-ray wave. This scalar wave equation is used to derive the projection operator through matter and free space, and basic reconstruction and phase retrieval algorithms. The chapter quantifies the four types of X-ray interaction with matter for X-ray energies between 1 and 1000 keV, and presents common assumptions used in the modelling of lab-based X-ray micro-CT.

    Chapter 2 addresses X-ray source deblurring, since penumbral source blurring limits the resolution of X-ray micro-CT systems. The chapter starts with a geometrical framework to model penumbral source blurring. I have simulated the effect of source blurring, assuming the geometry of the high-cone-angle CT system used at the ANU CTLab. I have also developed the Multislice Richardson-Lucy method, which overcomes the computational complexity of the conjugate gradient method while producing fewer artefacts than the standard Richardson-Lucy method. Its performance is demonstrated on both simulated and real experimental data.

    X-ray refraction, phase contrast and phase retrieval (PR) are investigated in Chapter 3. For weakly attenuating samples, intensity variation due to phase contrast is a significant fraction of the total signal. If phase contrast is modelled incorrectly, the reconstruction will not correctly account for it, contributing undesirable artefacts to the reconstruction volume. Here I present a novel Linear Iterative multi-energy PR algorithm. It enables material property estimation for the near-field submicron X-ray CT system and reduces noise and artefacts. This PR algorithm expands the validity range in comparison to the single-material and data-constrained-modelling methods. I have also extended this novel PR algorithm to assume a polychromatic incident spectrum for a non-weakly absorbing object.

    Chapter 4 outlines the space-filling X-ray source trajectory and its reconstruction, to which I contributed in a minor capacity. The space-filling trajectory reconstruction improves detector utilisation and reduces nonuniform resolution compared to the state-of-the-art 3-D Katsevich helical reconstruction; this patented work was done in collaboration with FEI Company. Chapter 5 concludes my PhD research and provides future directions revealed by the present research.
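
    For context on Chapter 2, here is a minimal sketch of the standard Richardson-Lucy deconvolution that the Multislice variant builds on; the multislice extension itself is not reproduced. It assumes a known, shift-invariant source-blur PSF and non-negative images.

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
            """Classic Richardson-Lucy: multiplicative updates under Poisson noise."""
            psf_flip = psf[::-1, ::-1]                    # adjoint of the blur operator
            estimate = np.full_like(blurred, blurred.mean())
            for _ in range(n_iter):
                predicted = fftconvolve(estimate, psf, mode="same")
                ratio = blurred / np.maximum(predicted, eps)
                estimate *= fftconvolve(ratio, psf_flip, mode="same")
            return estimate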

    Trajectory Optimization of a Mobile Camera System for Maximizing Optical Character Recognition

    Camera systems in motion are subject to significant blurring effects that lead to a loss of information during image capture. This is especially damaging for optical character recognition (OCR), for which edge preservation is critical to achieving a high recognition rate. Using non-blind motion deblurring, a trajectory and point spread function can be designed to maximize the recognition rate while meeting endpoint constraints. Optimization with radial basis function networks can therefore be used to find trajectories that reduce blurring effects and preserve text sharpness. This work investigates the problem using a simulation of a blurred image capture process. The simulation is automated using radial basis function network optimization and a genetic algorithm to determine the trajectories with the best recognition rate. Optimized trajectories yielded recognition scores up to 57.3% better in simulation than an analogous linear profile. These results were then verified through physical experimentation with a real-world, controlled-blur image capture process, which yielded up to 29.4% improvement across the same comparison. The results were analyzed using spectral analysis to understand why the chosen trajectories preserve text edges. These findings can be applied to a wide variety of controlled mobile camera platforms, such as autonomous automobiles or unmanned aerial vehicles, to improve their ability to gather information from their environment. M.S.
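
    A minimal sketch of the trajectory parameterisation follows: a 1-D camera path expressed as a fixed endpoint-to-endpoint ramp plus a radial basis function perturbation that vanishes at the endpoints, with the motion-blur PSF obtained as the camera's dwell time per position. The weights, centres and bin count are placeholders of mine; in the work above they would be chosen by the genetic algorithm to maximise the OCR score.

        import numpy as np

        def rbf_trajectory(t, w, centers, width=0.1):
            """Position x(t) on [0, 1] with the endpoints x(0) = 0, x(1) = 1 enforced."""
            phi = np.exp(-((t[:, None] - centers[None, :]) / width) ** 2)
            bump = phi @ w
            bump -= t * bump[-1] + (1 - t) * bump[0]   # zero perturbation at endpoints
            return t + bump

        t = np.linspace(0.0, 1.0, 2000)
        centers = np.linspace(0.1, 0.9, 8)
        w = np.array([0.02, -0.01, 0.03, 0.0, -0.02, 0.01, 0.02, -0.01])  # placeholder
        x = rbf_trajectory(t, w, centers)
        psf, _ = np.histogram(x, bins=32, range=(0, 1), density=True)     # dwell-time PSF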

    Improved Quantification of Arterial Spin Labelling Images using Partial Volume Correction Techniques

    Arterial Spin Labelling (ASL) MRI suffers from the partial volume effect (PVE), which degrades the accuracy of quantitative perfusion estimates. The effect is caused by inadequate spatial resolution of the imaging system, which is determined by the point spread function (PSF) of the imaging process and the voxel grid on which the image is sampled. ASL voxels are comparatively large, which leads to tissue signal mixing within an individual voxel and results in an underestimation of grey matter (GM) and an overestimation of white matter (WM) perfusion. Partial volume correction (PVC) of ASL images is not routinely applied; when it is, it usually corrects for tissue fraction only, often by masking voxels with a partial volume fraction below a certain threshold. Recent efforts correct for the tissue fraction effect through linear regression or Bayesian inference, using high-resolution tissue posterior probability maps to estimate tissue concentration. This thesis reports an investigation into techniques for PVC of ASL images. An extension to the linear regression method is described, using a 3D kernel to reduce the inherent blurring of this method and preserve spatial detail. An investigation into the application of a Bayesian inferencing toolkit (BASIL) to single-timepoint ASL data, to estimate GM and WM perfusion in the absence of kinetic information, is described. BASIL is found to rely heavily on the spatial prior for perfusion when the number of signal averages is less than three, and is outperformed by linear regression in terms of spatial smoothing until five or more averages are used. An existing method of creating partial volume estimates from low-resolution data is modified to use a voxelwise estimate of the longitudinal relaxation of GM, which improves segmentation estimates in the deep GM structures and improves GM perfusion estimates. An estimate of the width of the PSF for the 3D GRASE imaging sequence used in these studies is made and incorporated into a complete solution for PVC of ASL data, which deblurs the data by deconvolution of the PSF prior to a correction for the tissue fraction effect. This is found to elevate GM and reduce WM perfusion to a greater extent than correcting for tissue fraction alone, even in the case of a segmented acquisition. The new PVC method is applied to two clinical cohorts: a Frontotemporal Dementia group and a Posterior Cortical Atrophy group. These two populations exhibit differential patterns of cortical atrophy and reduced tissue metabolism, which remain after PV correction.
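
    A minimal sketch of the baseline linear-regression correction that the thesis extends (before the 3D-kernel refinement): within a small window around each voxel, the ASL difference signal is modelled as dM = P_gm * m_gm + P_wm * m_wm and solved by least squares. A single 2-D slice and a flat 5x5 kernel are my simplifying assumptions.

        import numpy as np

        def lr_pvc(dM, p_gm, p_wm, k=5):
            """Least-squares tissue-fraction correction over a k x k neighbourhood."""
            h = k // 2
            m_gm = np.zeros_like(dM)
            for i in range(h, dM.shape[0] - h):
                for j in range(h, dM.shape[1] - h):
                    win = (slice(i - h, i + h + 1), slice(j - h, j + h + 1))
                    A = np.stack([p_gm[win].ravel(), p_wm[win].ravel()], axis=1)
                    coef, *_ = np.linalg.lstsq(A, dM[win].ravel(), rcond=None)
                    m_gm[i, j] = coef[0]         # pure grey-matter signal estimate
            return m_gm

    The window averaging is exactly what introduces the inherent blurring mentioned above; the 3D-kernel extension described in the thesis is aimed at reducing it.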

    Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data

    This dissertation addresses the components necessary for simulation of an image-based recovery of the position of a target using orbital image sensors. Each component is considered in detail, focusing on the effect that design choices and system parameters have on the accuracy of the position estimate. Changes in sensor resolution, varying amounts of blur, differences in image noise level, the selection of algorithms used for each component, and lag introduced by excessive processing time all contribute to the accuracy of the recovered target coordinates. Using physical targets and sensors in this scenario would be cost-prohibitive in the exploratory setting posed, so a simulated target path is generated using Bezier curves, which approximate representative paths followed by the targets of interest. Orbital trajectories for the sensors are designed on an elliptical model representative of the motion of physical orbital sensors. Images from each sensor are simulated based on the position and orientation of the sensor, the position of the target, and the imaging parameters selected for the experiment (resolution, noise level, blur level, etc.). Post-processing of the simulated imagery seeks to reduce noise and blur and increase resolution. The only information available to a fully implemented system for calculating the target position is the sensor position and orientation vectors and the images from each sensor. From these data we develop a reliable method of recovering the target position and analyze the impact on near-real-time processing. We also discuss the influence of adjustments to system components on overall capabilities and address potential system size, weight, and power requirements for realistic implementation approaches.
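
    The final geometric step reduces to triangulation. The sketch below recovers a target point from two sensor positions and unit line-of-sight directions by least squares, minimising the summed squared distance to both rays; the numbers are illustrative, not from the dissertation.

        import numpy as np

        def triangulate(origins, directions):
            """Least-squares intersection of rays p_i + t * d_i (d_i unit vectors)."""
            A = np.zeros((3, 3)); b = np.zeros(3)
            for p, d in zip(origins, directions):
                P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
                A += P
                b += P @ p
            return np.linalg.solve(A, b)

        origins = [np.array([7000.0, 0.0, 0.0]), np.array([0.0, 7000.0, 0.0])]
        dirs = [np.array([-1.0, 0.0, 0.0]), np.array([0.0, -1.0, 0.0])]
        print(triangulate(origins, dirs))        # -> approximately the origin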

    Data-Driven Image Restoration

    Every day many images are taken by digital cameras, and people demand visually accurate and pleasing results. Noise and blur degrade images captured by modern cameras, and high-level vision tasks (such as segmentation, recognition, and tracking) require high-quality images. Image restoration, specifically image deblurring and image denoising, is therefore a critical preprocessing step. A fundamental problem in image deblurring is to reliably recover distinct spatial frequencies that have been suppressed by the blur kernel. Existing image deblurring techniques often rely on generic image priors that only help recover part of the frequency spectrum, such as the frequencies near the high end. To this end, we pose the following specific questions: (i) Does class-specific information offer an advantage over existing generic priors for image quality restoration? (ii) If a class-specific prior exists, how should it be encoded into a deblurring framework to recover attenuated image frequencies? Throughout this work, we devise a class-specific prior based on band-pass filter responses and incorporate it into a deblurring strategy. Specifically, we show that the subspace of band-pass filtered images and their intensity distributions serve as useful priors for recovering image frequencies. Next, we present a novel image denoising algorithm that uses an external, category-specific image database. In contrast to existing noisy image restoration algorithms, our method selects clean "support patches" similar to the noisy patch from an external database. We employ a content-adaptive distribution model for each patch, deriving the parameters of the distribution from the support patches. Our objective function is composed of a Gaussian fidelity term that imposes category-specific information, and a low-rank term that encourages similarity between the noisy and support patches in a robust manner. Finally, we propose to learn a fully-convolutional network model that consists of a Chain of Identity Mapping Modules (CIMM) for image denoising. The CIMM structure possesses two distinctive features that are important for the noise removal task. Firstly, each residual unit employs identity mappings as the skip connections and receives pre-activated input, preserving the gradient magnitude propagated in both the forward and backward directions. Secondly, by utilizing dilated kernels for the convolution layers in the residual branch, each neuron in the last convolution layer of each module can observe the full receptive field of the first layer.
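
    A minimal sketch of one identity-mapping module in the spirit of CIMM is given below: a pre-activated residual branch with dilated convolutions and a pure identity skip. The channel width, layer count and dilation rates are my assumptions, not the paper's configuration.

        import torch
        import torch.nn as nn

        class IdentityMappingModule(nn.Module):
            def __init__(self, channels=64, dilations=(1, 2, 4)):
                super().__init__()
                layers = []
                for d in dilations:
                    layers += [nn.ReLU(),        # pre-activation before each convolution
                               nn.Conv2d(channels, channels, 3, padding=d, dilation=d)]
                self.branch = nn.Sequential(*layers)

            def forward(self, x):
                # Identity skip: gradients pass through unscaled in both directions.
                return x + self.branch(x)

        x = torch.randn(1, 64, 32, 32)
        print(IdentityMappingModule()(x).shape)  # torch.Size([1, 64, 32, 32])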

    Investigation of Personalised Post-Reconstruction Positron Range Correction in 68Ga Positron Emission Tomography Imaging

    Positron range limits the spatial resolution of Positron Emission Tomography (PET), reducing image quality and accuracy. This thesis investigated factors affecting the magnitude of positron range, developed a personalised approach to range correction, and demonstrated the approach using simulated, phantom and patient data. The Geant4 Application for Emission Tomography software was used to model the range of positrons emitted by radionuclides, namely 18F and 68Ga, in water, bone and lung. The impact of range blurring in lung was found to be ten times larger than in bone and four times larger than in water or soft tissue, regardless of the positron energy. Range effects for the different isotopes (18F and 68Ga) were evaluated across measured and reconstructed spatial resolutions. Range correction was found to be unnecessary when using 18F for voxel sizes larger than 4 mm; in contrast, it was required for images generated using 68Ga, particularly within or adjacent to the lung. An iterative, post-reconstruction range correction method was developed which relies only on the measured data. The correction method was validated in both simulation and phantom studies. Image quality and quantification accuracy of corrected images were shown to be superior when imaging with 68Ga. Importantly, the range correction suppressed and controlled image noise at high iteration numbers. Finally, in a patient study, image noise in regions of uniform uptake was significantly increased, by ~2% (p<0.05), yet mean standardised uptake values remained unchanged after correction, showing the same uptake for normal radionuclide distributions. Lesion contrast and maximum uptake values were improved by 20% and 45%, respectively, with statistical significance (p<0.05). Although these promising results show that the proposed range correction can be generalised to reconstructed images regardless of the measurement system, acquisition parameters and radionuclides used, further research is warranted to improve the method, particularly with respect to removing or reducing the artefacts that were shown to impact reader preference.
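
    The thesis's correction scheme is not reproduced here; the sketch below shows only a generic iterative post-reconstruction deblurring step of the same family, namely a relaxed Van Cittert update with a stationary Gaussian stand-in for the 68Ga annihilation-distance kernel and a non-negativity constraint as crude noise control. The kernel width, relaxation factor and iteration count are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def range_correct(img, fwhm_mm=2.9, voxel_mm=2.0, n_iter=10, relax=0.5):
            """Iterative unblurring: est <- est + relax * (img - blur(est))."""
            sigma = fwhm_mm / 2.355 / voxel_mm   # FWHM -> sigma, in voxel units
            est = img.copy()
            for _ in range(n_iter):
                est = est + relax * (img - gaussian_filter(est, sigma))
                est = np.clip(est, 0.0, None)    # activity cannot be negative
            return est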
    • …