
    Polarimeter Blind Deconvolution Using Image Diversity

    This research presents an algorithm that improves the ability to view objects using an electro-optical imaging system with at least one polarization-sensitive channel in addition to the primary channel. An innovative algorithm for detection and estimation of the defocus aberration present in an image is also developed. Using a known defocus aberration, an iterative polarimeter deconvolution algorithm is developed using a generalized expectation-maximization (GEM) model. The polarimeter deconvolution algorithm is then extended to an iterative polarimeter multiframe blind deconvolution (PMFBD) algorithm with an unknown aberration. On both simulated and laboratory images, the new PMFBD algorithm clearly outperforms an RL-based MFBD algorithm: its convergence rate is significantly faster, with better fidelity of reproduction of the targets. Clearly, leveraging polarization data in electro-optical imaging systems has the potential to significantly improve the ability to resolve objects and, thus, improve Space Situational Awareness.
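The polarimeter GEM algorithm itself is not reproduced here, but the classical Richardson-Lucy (RL) iteration underlying the comparison baseline can be sketched as follows. This is a minimal single-channel sketch with a known PSF; the function name and parameters are illustrative, not taken from the paper, and the actual algorithm additionally exploits the polarization-sensitive channels.

```python
import numpy as np

def richardson_lucy(image, psf, n_iter=50):
    """Classical Richardson-Lucy deconvolution with a known, centred PSF.

    A minimal single-channel sketch; the polarimeter GEM algorithm in the
    abstract jointly uses multiple polarization channels.
    """
    # Transfer function of the PSF, normalized to unit energy.
    H = np.fft.fft2(np.fft.ifftshift(psf / psf.sum()))
    estimate = np.full(image.shape, image.mean(), dtype=float)
    for _ in range(n_iter):
        # Forward model: blur the current estimate.
        blurred = np.real(np.fft.ifft2(H * np.fft.fft2(estimate)))
        # Multiplicative RL correction, back-projected with the adjoint PSF.
        ratio = image / np.maximum(blurred, 1e-12)
        estimate *= np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(ratio)))
    return estimate
```

Under Poisson noise this iteration is the EM update for the image; blind and multiframe variants alternate similar updates for the unknown PSF.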

    Image Deblurring and Near-real-time Atmospheric Seeing Estimation through the Employment of Convergence of Variance

    A new image reconstruction algorithm is presented that removes the effect of atmospheric turbulence on motion-compensated frame-average images. The primary focus of this research was to develop a blind deconvolution technique that could be employed in a tactical military environment where both time and computational power are limited. Additionally, the technique can be used to measure atmospheric seeing conditions. In a blind deconvolution fashion, the algorithm simultaneously computes a high-resolution image and an average model for the atmospheric blur parameterized by Fried's seeing parameter. What distinguishes this approach is that it does not assume a prior distribution for the seeing parameter; rather, it uses the convergence of the image's variance as the stopping criterion and as the means of identifying the proper seeing parameter from a range of candidate values. Experimental results show that the convergence-of-variance technique allows estimation of the seeing parameter to within 0.5 cm, and often better, depending on the signal-to-noise ratio.
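The convergence-of-variance stopping rule described above can be sketched as a simple test on the history of image variances. The helper and its tolerance are hypothetical; the paper's exact threshold and its parameterized atmospheric-blur deconvolution step are not specified here.

```python
import numpy as np

def variance_converged(variance_history, tol=1e-3):
    """Stopping test: the image variance has converged when its relative
    change between successive iterations drops below tol."""
    if len(variance_history) < 2:
        return False
    prev, curr = variance_history[-2], variance_history[-1]
    return abs(curr - prev) <= tol * max(abs(prev), 1e-12)

# Sketch of the outer loop: deconvolve with each candidate seeing
# parameter r0 and keep the one whose variance history converges.
# (deconvolve_with_seeing is hypothetical, standing in for the paper's
# Fried-parameterized blur model.)
#
# for r0 in candidate_r0_values:
#     history = []
#     for estimate in deconvolve_with_seeing(frames, r0):
#         history.append(float(np.var(estimate)))
#         if variance_converged(history):
#             break
```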

    Inverse problems in astronomical and general imaging

    The resolution and the quality of an imaged object are limited by four contributing factors. Firstly, the primary resolution limit of a system is imposed by the aperture of an instrument due to the effects of diffraction. Secondly, the finite sampling frequency, the finite measurement time and the mechanical limitations of the equipment also affect the resolution of the images captured. Thirdly, the images are corrupted by noise, a process inherent to all imaging systems. Finally, a turbulent imaging medium introduces random degradations to the signals before they are measured. In astronomical imaging, it is the atmosphere which distorts the wavefronts of the objects, severely limiting the resolution of the images captured by ground-based telescopes. These four factors affect all real imaging systems to varying degrees. All the limitations imposed on an imaging system result in the need to deduce or reconstruct the underlying object distribution from the distorted measured data. This class of problems is called inverse problems. The key to the success of solving an inverse problem is the correct modelling of the physical processes which give rise to the corresponding forward problem. However, the physical processes have an infinite amount of information, but only a finite number of parameters can be used in the model. Information loss is therefore inevitable. As a result, the solution to many inverse problems requires additional information or prior knowledge. The application of prior information to inverse problems is a recurrent theme throughout this thesis. An inverse problem that has been an active research area for many years is interpolation, and there exist numerous techniques for solving this problem. However, many of these techniques neither account for the sampling process of the instrument nor include prior information in the reconstruction. These factors are taken into account in the proposed optimal Bayesian interpolator. 
The process of interpolation is also examined from the point of view of superresolution, as these processes can be viewed as being complementary. Since the principal effect of atmospheric turbulence on an incoming wavefront is a phase distortion, most of the inverse problem techniques devised for this seek to either estimate or compensate for this phase component. These techniques are classified into computer post-processing methods, adaptive optics (AO) and hybrid techniques. Blind deconvolution is a post-processing technique which uses the speckle images to estimate both the object distribution and the point spread function (PSF), the latter of which is directly related to the phase. The most successful approaches are based on characterising the PSF as the aberrations over the aperture. Since the PSF is also dependent on the atmosphere, it is possible to constrain the solution using the statistics of the atmosphere. An investigation shows the feasibility of this approach. The bispectrum method is also a post-processing technique which reconstructs the spectrum of the object. The key component for phase preservation is the property of phase closure, and its application as prior information for blind deconvolution is examined. Blind deconvolution techniques utilise only the information in the image channel to estimate the phase, which is difficult. An alternative method for phase estimation is from a Shack-Hartmann (SH) wavefront sensing channel. However, since phase information is present in both the wavefront sensing and the image channels simultaneously, both of these approaches suffer from the problem that phase information from only one channel is used. An improved estimate of the phase is achieved by a combination of these methods, ensuring that the phase estimation is made jointly from the data in both the image and the wavefront sensing measurements. This formulation, posed as a blind deconvolution framework, is investigated in this thesis. 
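The idea of characterising the PSF by the aberrations over the aperture can be illustrated with the standard Fourier-optics relation PSF = |F{A e^(i*phi)}|^2, where phi is the pupil-plane phase. This is a generic sketch of that relation, not the thesis's particular parameterisation.

```python
import numpy as np

def psf_from_pupil(phase, aperture):
    """Incoherent PSF from a pupil-plane phase aberration:
    PSF = |FFT(aperture * exp(i * phase))|^2, normalized to unit energy."""
    field = aperture * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()
```

With zero phase this yields the diffraction-limited PSF of the aperture; an atmospheric or defocus phase screen broadens and distorts it, which is exactly the degradation the blind deconvolution techniques above try to undo.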
An additional advantage of this approach is that since speckle images are imaged in a narrowband, while wavefront sensing images are captured by a charge-coupled device (CCD) camera at all wavelengths, the splitting of the light does not compromise the light level for either channel. This provides a further incentive for using simultaneous data sets. The effectiveness of using Shack-Hartmann wavefront sensing data for phase estimation relies on the accuracy of locating the data spots. The commonly used method, which calculates the centre of gravity of the image, is in fact prone to noise and is suboptimal. An improved method for spot location based on blind deconvolution is demonstrated. Ground-based adaptive optics (AO) technologies aim to correct for atmospheric turbulence in real time. Although much success has been achieved, the space- and time-varying nature of the atmosphere renders the accurate measurement of atmospheric properties difficult. It is therefore usual to perform additional post-processing on the AO data. As a result, some of the techniques developed in this thesis are applicable to adaptive optics. One of the methods which utilises elements of both adaptive optics and post-processing is the hybrid technique of deconvolution from wavefront sensing (DWFS). Here, both the speckle images and the SH wavefront sensing data are used. The original proposal of DWFS is simple to implement but suffers from the problem that the magnitude of the object spectrum cannot be reconstructed accurately. The solution proposed for overcoming this is to use an additional set of reference star measurements. This, however, does not completely remove the original problem; in addition, it introduces other difficulties associated with reference star measurements, such as anisoplanatism and reduction of valuable observing time. 
In this thesis a parameterised solution is examined which removes the need for a reference star, as well as offering the potential to overcome the problem of estimating the magnitude of the object.
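For reference, the centre-of-gravity spot estimator that the thesis identifies as noise-prone can be sketched as follows. This is the simple baseline, not the improved blind-deconvolution spot-location method.

```python
import numpy as np

def centre_of_gravity(spot):
    """Centre-of-gravity location of a Shack-Hartmann subaperture spot.

    Simple and fast, but biased by background noise in the wings of the
    spot, which is why it is suboptimal for wavefront sensing."""
    total = spot.sum()
    ys, xs = np.indices(spot.shape)
    return (ys * spot).sum() / total, (xs * spot).sum() / total
```

Each subaperture's spot displacement from its reference position is proportional to the local wavefront slope, so errors in spot location propagate directly into the reconstructed phase.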

    Computational Imaging and its Application

    Traditional optical imaging systems have constrained angular and spatial resolution, depth of field, field of view, tolerance to aberrations and environmental conditions, and other image quality limitations. Computational imaging provides an opportunity to create new functionality and improve the performance of imaging systems by encoding the information optically and decoding it computationally. The design of a computational imaging system balances hardware costs against the accuracy and complexity of the algorithms. In this thesis, two computational imaging systems are presented: Randomized Aperture Imaging and Laser Suppression Imaging. The former increases the angular resolution of telescopes by replacing a continuous primary mirror with an array of lightweight small mirror elements, potentially allowing telescopes of very large diameter at reduced cost. The latter protects camera sensors from laser effects such as dazzle by use of a phase-coded pupil-plane mask. Machine-learning and deep-learning-based algorithms were investigated to restore high-fidelity images from the coded acquisitions. The proposed imaging systems are verified by experiment and numerical modeling, and improved performance is demonstrated in comparison with the state of the art.
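The encode-optically/decode-computationally pipeline can be illustrated with the simplest linear decoder: given the coded system's PSF, a Wiener filter inverts the optical encoding. This is a generic baseline for intuition only; the thesis uses machine-learning and deep-learning restorers, and the snr parameter here is an assumption.

```python
import numpy as np

def wiener_decode(coded_image, psf, snr=1e3):
    """Wiener-filter decode of a coded acquisition.

    G = conj(H) / (|H|^2 + 1/snr) regularizes the inversion wherever the
    coded transfer function H is weak, trading noise for resolution."""
    H = np.fft.fft2(np.fft.ifftshift(psf / psf.sum()))
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(G * np.fft.fft2(coded_image)))
```

A phase-coded pupil mask is typically designed so that |H| stays bounded away from zero across the spectrum, which is what makes such a linear (or learned) decode well conditioned.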

    Three Channel Polarimetric Based Data Deconvolution

    A three-channel polarimetric deconvolution algorithm was developed to mitigate the degrading effects of atmospheric turbulence in astronomical imagery. Tests were executed using both simulation and laboratory data. The efficacy of the three-channel algorithm was compared to that of a recently developed two-channel approach under identical conditions, ensuring a fair comparison between the algorithms. Two types of simulations were performed. The first was a binary star simulation comparing the resolutions achieved by the three- and two-channel algorithms. The second measured how effectively each algorithm could deconvolve a blurred satellite image. The simulation environment assumed fixed values for Fried's seeing parameter and for the telescope lens diameter. The simulation results showed that the three-channel algorithm always reconstructed the true image as well as or better than the two-channel approach, while its total squared error was always significantly lower. The two algorithms were then compared in the laboratory environment. The laboratory imagery was not actually blurred by atmospheric turbulence; instead, camera defocusing was used to simulate the blurring that atmospheric turbulence would cause. The results show that the three-channel algorithm significantly outperforms the two-channel algorithm in visual reconstruction of the true image.
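The paper's two- and three-channel polarimetric algorithms are not reproduced here, but the structure they share — one object observed through several channels, each with its own PSF — can be sketched with a multichannel Richardson-Lucy update. This is an illustrative stand-in under that shared-object assumption, not the algorithm of the paper.

```python
import numpy as np

def multichannel_rl(frames, psfs, n_iter=50):
    """Multichannel Richardson-Lucy sketch: a single shared object is
    observed through several channels (e.g. polarization channels),
    each with its own centred PSF; the per-channel corrections are
    averaged into one update of the common object estimate."""
    Hs = [np.fft.fft2(np.fft.ifftshift(p / p.sum())) for p in psfs]
    estimate = np.full(frames[0].shape, np.mean(frames), dtype=float)
    for _ in range(n_iter):
        update = np.zeros_like(estimate)
        for frame, H in zip(frames, Hs):
            blurred = np.real(np.fft.ifft2(H * np.fft.fft2(estimate)))
            ratio = frame / np.maximum(blurred, 1e-12)
            update += np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(ratio)))
        estimate *= update / len(frames)
    return estimate
```

Each additional channel adds an independent measurement of the same object, which is consistent with the abstract's finding that three channels reconstruct at least as well as two.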