
    Multi-frame blind deconvolution of atmospheric turbulence degraded images with mixed noise models

    This paper proposes a mixed noise model and uses multi-frame blind deconvolution, within a Bayesian inference framework, to restore images of space objects. To minimize the cost function, an algorithm based on iterative recursion is proposed. In addition, three limited-bandwidth constraints on the point spread functions are imposed during the solution process to avoid convergence to local minima. Experimental results show that the proposed algorithm can effectively restore turbulence-degraded images and alleviate the distortion caused by noise.
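    As a rough illustration only (not the authors' algorithm), the alternating structure of a multi-frame blind deconvolution with a bandwidth constraint on each point spread function can be sketched as follows; the Richardson-Lucy-style multiplicative updates, the frequency cutoff, and all parameter values are assumptions made for the sketch.

```python
import numpy as np
from scipy.signal import fftconvolve

def band_limit(psf, cutoff=0.25):
    """Crude limited-bandwidth constraint: discard PSF spectral content beyond a cutoff."""
    H = np.fft.fft2(psf)
    fy = np.fft.fftfreq(psf.shape[0])[:, None]
    fx = np.fft.fftfreq(psf.shape[1])[None, :]
    H[np.hypot(fy, fx) > cutoff] = 0.0
    psf = np.clip(np.real(np.fft.ifft2(H)), 0, None)
    return psf / psf.sum()

def multiframe_blind_deconv(frames, psfs, n_iter=30, eps=1e-9):
    """Alternate multiplicative (Richardson-Lucy-style) updates of the object estimate
    and of each frame's PSF; frames and psfs are lists of same-sized 2-D arrays."""
    obj = np.mean(frames, axis=0)
    for _ in range(n_iter):
        for k, data in enumerate(frames):
            h = psfs[k]
            est = fftconvolve(obj, h, mode="same") + eps
            ratio = data / est
            # object update: correlation implemented as convolution with the flipped kernel
            obj = obj * fftconvolve(ratio, h[::-1, ::-1], mode="same")
            # PSF update, then positivity, unit sum and the bandwidth constraint
            h = h * fftconvolve(ratio, obj[::-1, ::-1], mode="same")
            psfs[k] = band_limit(np.clip(h, 0, None) + eps)
    return obj, psfs
```

    In practice the object update would accumulate the ratio over all frames before being applied; the per-frame form above is kept only for brevity.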

    Block Matching and Wiener Filtering Approach to Optical Turbulence Mitigation and Its Application to Simulated and Real Imagery with Quantitative Error Analysis

    We present a block-matching and Wiener filtering approach to atmospheric turbulence mitigation for long-range imaging of extended scenes. We evaluate the proposed method, along with some benchmark methods, using simulated and real image sequences. The simulated data are generated with a simulation tool developed by one of the authors. These data provide objective truth and allow for quantitative error analysis. The proposed turbulence mitigation method takes a sequence of short-exposure frames of a static scene and outputs a single restored image. A block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged, and the average image is processed with a Wiener filter to provide deconvolution. An important aspect of the proposed method lies in how we model the degradation point spread function (PSF) for the purposes of Wiener filtering. We use a parametric model that takes into account the level of geometric correction achieved during image registration. This is unlike any method we are aware of in the literature. By matching the PSF to the level of registration in this way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. We also describe a method for estimating the atmospheric coherence diameter (or Fried parameter) from the estimated motion vectors. We provide a detailed performance analysis that illustrates how the key tuning parameters impact system performance. The proposed method is relatively simple computationally, yet it has excellent performance in comparison with state-of-the-art benchmark methods in our study.
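    A minimal sketch of the register-average-deconvolve pipeline described above, assuming a simple per-block phase-correlation shift estimate for the block matching and a user-supplied blur kernel for the Wiener step; the paper's parametric PSF model tied to the achieved registration level is not reproduced here.

```python
import numpy as np
from numpy.fft import fft2, ifft2
from scipy.ndimage import shift as warp_shift

def block_shift(ref_blk, blk):
    """Estimate the (dy, dx) translation of one block by phase correlation."""
    xc = np.real(ifft2(fft2(ref_blk) * np.conj(fft2(blk))))
    dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
    if dy > blk.shape[0] // 2: dy -= blk.shape[0]
    if dx > blk.shape[1] // 2: dx -= blk.shape[1]
    return dy, dx

def register_frame(ref, frame, bs=32):
    """Crude block-matching geometric correction: shift each block back toward the reference."""
    out = np.empty_like(frame, dtype=float)
    for y in range(0, ref.shape[0], bs):
        for x in range(0, ref.shape[1], bs):
            blk = frame[y:y+bs, x:x+bs]
            dy, dx = block_shift(ref[y:y+bs, x:x+bs], blk)
            out[y:y+bs, x:x+bs] = warp_shift(blk, (dy, dx), mode="nearest")
    return out

def wiener_deconvolve(img, psf, nsr=1e-2):
    """Classic Wiener filter with a constant noise-to-signal ratio regulariser."""
    pad = np.zeros_like(img, dtype=float)
    ky, kx = psf.shape
    pad[:ky, :kx] = psf
    pad = np.roll(pad, (-(ky // 2), -(kx // 2)), axis=(0, 1))  # centre kernel at the origin
    H = fft2(pad)
    return np.real(ifft2(fft2(img) * np.conj(H) / (np.abs(H) ** 2 + nsr)))

def bmwf(frames, psf):
    """Block matching + averaging + Wiener filtering of a short-exposure sequence."""
    ref = frames[0]
    avg = np.mean([register_frame(ref, f) for f in frames], axis=0)
    return wiener_deconvolve(avg, psf)
```

    For real use the block shifts would be estimated to sub-pixel accuracy and blended across block boundaries to avoid seams.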

    Binary Classification of an Unknown Object through Atmospheric Turbulence Using a Polarimetric Blind-Deconvolution Algorithm Augmented with Adaptive Degree of Linear Polarization Priors

    This research develops an enhanced material-classification algorithm to discriminate between metals and dielectrics using passive polarimetric imagery degraded by atmospheric turbulence. To improve the performance of the existing technique for near-normal collection geometries, the proposed algorithm adaptively updates the degree of linear polarization (DoLP) priors as more information becomes available about the scene. Three adaptive approaches are presented. The higher-order super-Gaussian method fits the distribution of DoLP estimates with a sum of two super-Gaussian functions to update the priors. The Gaussian method computes the classification threshold value, from which the priors are updated, by fitting the distribution of DoLP estimates with a sum of two Gaussian functions. Lastly, the distribution-averaging method approximates the threshold value by finding the mean of the DoLP distribution. The experimental results confirm that the new adaptive method significantly extends the collection geometry range of validity for the existing technique.
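    The "Gaussian method" above can be pictured with a short sketch: fit a two-component Gaussian mixture to the pixel-wise DoLP estimates and place the classification threshold where the component posteriors cross between the two means. The use of scikit-learn's GaussianMixture and the posterior-crossing rule are assumptions of the sketch, not the paper's fitting procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def dolp_threshold(dolp_estimates):
    """Fit a sum of two Gaussians to DoLP values and return the threshold between them."""
    x = np.asarray(dolp_estimates, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
    means = gmm.means_.ravel()
    lo, hi = np.sort(means)
    low_comp = int(np.argmin(means))
    # scan between the two means for the point where the low-DoLP component
    # stops being the more probable explanation of the data
    grid = np.linspace(lo, hi, 1000).reshape(-1, 1)
    post = gmm.predict_proba(grid)[:, low_comp]
    cross = int(np.argmax(post < 0.5)) if np.any(post < 0.5) else len(grid) - 1
    return float(grid[cross, 0])

# Example use on a DoLP map (which class lies on which side of the threshold
# depends on the collection geometry, so no metal/dielectric label is assumed here):
# threshold = dolp_threshold(dolp_map.ravel())
# class_map = dolp_map > threshold
```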

    Blind Deconvolution of Anisoplanatic Images Collected by a Partially Coherent Imaging System

    Coherent imaging systems offer unique benefits to system operators in terms of resolving power, range gating, selective illumination and utility for applications where passively illuminated targets have limited emissivity or reflectivity. This research proposes a novel blind deconvolution algorithm that is based on a maximum a posteriori Bayesian estimator constructed upon a physically based statistical model for the intensity of the partially coherent light at the imaging detector. The estimator is initially constructed using a shift-invariant system model, and is later extended to the case of a shift-variant optical system by the addition of a transfer function term that quantifies optical blur for wide fields-of-view and atmospheric conditions. The estimators are evaluated using both synthetically generated imagery and experimentally collected image data from an outdoor optical range. The research is extended to consider the effects of weighted frame averaging for the individual short-exposure frames collected by the imaging system. It was found that binary weighting of ensemble frames significantly increases spatial resolution.
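    The closing observation about binary frame weighting is essentially a lucky-imaging-style frame selection. A minimal sketch under assumed criteria (a gradient-energy sharpness metric and a fixed keep fraction, neither taken from the work) is:

```python
import numpy as np

def sharpness(frame):
    """Gradient-energy sharpness score for one short-exposure frame."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(gy ** 2 + gx ** 2))

def binary_weighted_average(frames, keep_fraction=0.3):
    """Binary frame weighting: average only the sharpest frames (weight 1), drop the rest (weight 0)."""
    scores = np.array([sharpness(f) for f in frames])
    n_keep = max(1, int(round(keep_fraction * len(frames))))
    keep = np.argsort(scores)[::-1][:n_keep]
    return np.mean([frames[i] for i in keep], axis=0), keep
```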

    On the Simulation and Mitigation of Anisoplanatic Optical Turbulence for Long Range Imaging

    We describe a numerical wave propagation method for simulating long range imaging of an extended scene under anisoplanatic conditions. Our approach computes an array of point spread functions (PSFs) for a 2D grid on the object plane. The PSFs are then used in a spatially varying weighted sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. To validate the simulation we compare simulated outputs with the theoretical anisoplanatic tilt correlation and differential tilt variance. This is in addition to comparing the long- and short-exposure PSFs, and isoplanatic angle. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. The simulation tool is also used here to quantitatively evaluate a recently proposed block-matching and Wiener filtering (BMWF) method for turbulence mitigation. In this method, a block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged and processed with a Wiener filter for restoration. A novel aspect of the proposed BMWF method is that the PSF model used for restoration takes into account the level of geometric correction achieved during image registration. This way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. The BMWF method is relatively simple computationally, and yet it has excellent performance in comparison to state-of-the-art benchmark methods.
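    The spatially varying weighted-sum operation can be sketched by convolving the ideal image with each PSF on the grid and blending the results with interpolation weights; the bilinear blending used below is an assumption, not necessarily the weighting used by the simulation tool.

```python
import numpy as np
from scipy.signal import fftconvolve

def anisoplanatic_blur(ideal, psf_grid):
    """Blend per-grid-point convolutions with bilinear weights to approximate a
    spatially varying PSF; psf_grid[i][j] is the PSF for grid node (i, j)."""
    H, W = ideal.shape
    gy, gx = len(psf_grid), len(psf_grid[0])
    ys = np.linspace(0, gy - 1, H)
    xs = np.linspace(0, gx - 1, W)
    wy, wx = ys - np.floor(ys), xs - np.floor(xs)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, gy - 1), np.minimum(x0 + 1, gx - 1)
    out = np.zeros_like(ideal, dtype=float)
    # convolve once per grid PSF, then accumulate with separable bilinear weights
    for i in range(gy):
        ry = np.where(y0 == i, 1 - wy, 0) + np.where(y1 == i, wy, 0)
        if not ry.any():
            continue
        for j in range(gx):
            rx = np.where(x0 == j, 1 - wx, 0) + np.where(x1 == j, wx, 0)
            if not rx.any():
                continue
            blurred = fftconvolve(ideal, psf_grid[i][j], mode="same")
            out += ry[:, None] * rx[None, :] * blurred
    return out
```

    Convolving once per grid PSF keeps the cost at gy x gx FFT convolutions regardless of image size.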

    Superresolution imaging: A survey of current techniques

    Cristóbal, G., Gil, E., Šroubek, F., Flusser, J., Miravet, C., Rodríguez, F. B., "Superresolution imaging: A survey of current techniques", Proceedings of SPIE - The International Society for Optical Engineering, 7074, 2008. Copyright 2008 Society of Photo Optical Instrumentation Engineers.
    Imaging plays a key role in many diverse areas of application, such as astronomy, remote sensing, microscopy, and tomography. Owing to imperfections of measuring devices (e.g., optical degradations, limited size of sensors) and instability of the observed scene (e.g., object motion, media turbulence), acquired images can be indistinct, noisy, and may exhibit insufficient spatial and temporal resolution. In particular, several external effects blur images. Techniques for recovering the original image include blind deconvolution (to remove blur) and superresolution (SR). The stability of these methods depends on having more than one image of the same frame. Differences between images are necessary to provide new information, but they can be almost imperceptible. State-of-the-art SR techniques achieve remarkable results in resolution enhancement by estimating the subpixel shifts between images, but they lack any apparatus for calculating the blurs. In this paper, after a review of current SR techniques, we describe two SR methods recently developed by the authors. First, we introduce a variational method that minimizes a regularized energy function with respect to the high resolution image and blurs. In this way we establish a unifying way to simultaneously estimate the blurs and the high resolution image. By estimating blurs we automatically estimate shifts with subpixel accuracy, which is essential for good SR performance. Second, an innovative learning-based algorithm using a neural architecture for SR is described. Comparative experiments on real data illustrate the robustness and utility of both methods.
    This research has been partially supported by grants TEC2007-67025/TCM, TEC2006-28009-E, BFI-2003-07276, and TIN-2004-04363-C03-03 from the Spanish Ministry of Science and Innovation, and by PROFIT projects FIT-070000-2003-475 and FIT-330100-2004-91. This work has also been partially supported by the Czech Ministry of Education under project No. 1M0572 (Research Center DAR), by the Czech Science Foundation under project No. GACR 102/08/1593, and by the CSIC-CAS bilateral project 2006CZ002.
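    As a toy companion to the variational method mentioned above, the data-plus-smoothness energy can be minimized by plain gradient descent when the blurs and sub-pixel shifts are treated as known; the joint estimation of the blurs, which is the point of the paper's method, is omitted, and the operators, step size, and regularization weight below are assumptions.

```python
import numpy as np
from scipy.ndimage import shift as subpix_shift, laplace
from scipy.signal import fftconvolve

def sr_gradient_descent(lr_images, shifts, blur, factor, n_iter=100, lam=0.01, step=0.1):
    """Minimize sum_k ||D B S_k x - y_k||^2 + lam ||grad x||^2 over the high-resolution
    image x, with blur B and sub-pixel shifts S_k assumed known (D = downsampling)."""
    h, w = lr_images[0].shape
    H, W = h * factor, w * factor
    # start from the average of naively upsampled low-resolution observations
    x = np.mean([np.kron(y, np.ones((factor, factor))) for y in lr_images], axis=0)
    flipped = blur[::-1, ::-1]
    for _ in range(n_iter):
        grad = -2.0 * lam * laplace(x)                      # gradient of the smoothness term
        for y, (dy, dx) in zip(lr_images, shifts):
            sim = fftconvolve(subpix_shift(x, (dy, dx)), blur, mode="same")[::factor, ::factor]
            resid = sim - y
            up = np.zeros((H, W)); up[::factor, ::factor] = resid   # adjoint of D
            back = fftconvolve(up, flipped, mode="same")            # adjoint of B
            grad += 2.0 * subpix_shift(back, (-dy, -dx))            # adjoint of S_k
        x = x - step * grad                                 # fixed step size, chosen ad hoc
    return x
```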

    Three Channel Polarimetric Based Data Deconvolution

    A three channel polarimetric deconvolution algorithm was developed to mitigate the degrading effects of atmospheric turbulence in astronomical imagery. Tests were executed using both simulation and laboratory data. The resulting efficacy of the three channel algorithm was compared to a recently developed two channel approach under identical conditions, ensuring a fair comparison between the two algorithms. Two types of simulations were performed. The first was a binary star simulation to compare the resulting resolutions of the three and two channel algorithms. The second simulation measured how effectively both algorithms could deconvolve a blurred satellite image. The simulation environment assumed fixed values for Fried's seeing parameter and for two telescope lens diameters. The simulation results showed that the three channel algorithm always reconstructed the true image as well as or better than the two channel approach, while the total squared error was always significantly better for the three channel algorithm. The next step was comparing the two algorithms in the laboratory environment. The laboratory imagery was not actually blurred by atmospheric turbulence; instead, camera defocusing was used to simulate the blurring that atmospheric turbulence would cause. The results show that the three channel algorithm significantly outperforms the two channel algorithm in a visual reconstruction of the true image.

    Post-Processing Resolution Enhancement of Open Skies Photographic Imagery

    The Treaty on Open Skies allows any signatory nation to fly a specifically equipped reconnaissance aircraft anywhere over the territory of any other signatory nation. For photographic images, the treaty allows a maximum ground resolution of 30 cm. The National Air Intelligence Center (NAIC), which manages implementation of the Open Skies Treaty for the US Air Force, wants to determine whether post-processing of the photographic images can improve spatial resolution beyond 30 cm and, if so, the improvement achievable. Results presented in this thesis show that standard linear filters (edge and sharpening) do not improve resolution significantly and that super-resolution techniques are necessary. Most importantly, this thesis describes a prior-knowledge model fitting technique that improves resolution beyond the 30 cm treaty limit. The capabilities of this technique are demonstrated for a standard 3-Bar target, an optically degraded 2-Bar target, and the USAF airstar emblem.

    Inverse problems in astronomical and general imaging

    The resolution and the quality of an imaged object are limited by four contributing factors. Firstly, the primary resolution limit of a system is imposed by the aperture of an instrument due to the effects of diffraction. Secondly, the finite sampling frequency, the finite measurement time and the mechanical limitations of the equipment also affect the resolution of the images captured. Thirdly, the images are corrupted by noise, a process inherent to all imaging systems. Finally, a turbulent imaging medium introduces random degradations to the signals before they are measured. In astronomical imaging, it is the atmosphere which distorts the wavefronts of the objects, severely limiting the resolution of the images captured by ground-based telescopes. These four factors affect all real imaging systems to varying degrees. All the limitations imposed on an imaging system result in the need to deduce or reconstruct the underlying object distribution from the distorted measured data. This class of problems is called inverse problems. The key to the success of solving an inverse problem is the correct modelling of the physical processes which give rise to the corresponding forward problem. However, the physical processes contain an infinite amount of information, while only a finite number of parameters can be used in the model. Information loss is therefore inevitable. As a result, the solution to many inverse problems requires additional information or prior knowledge. The application of prior information to inverse problems is a recurrent theme throughout this thesis.
    An inverse problem that has been an active research area for many years is interpolation, and there exist numerous techniques for solving this problem. However, many of these techniques neither account for the sampling process of the instrument nor include prior information in the reconstruction. These factors are taken into account in the proposed optimal Bayesian interpolator. The process of interpolation is also examined from the point of view of superresolution, as these processes can be viewed as being complementary.
    Since the principal effect of atmospheric turbulence on an incoming wavefront is a phase distortion, most of the inverse problem techniques devised for this seek to either estimate or compensate for this phase component. These techniques are classified into computer post-processing methods, adaptive optics (AO) and hybrid techniques. Blind deconvolution is a post-processing technique which uses the speckle images to estimate both the object distribution and the point spread function (PSF), the latter of which is directly related to the phase. The most successful approaches are based on characterising the PSF as the aberrations over the aperture. Since the PSF is also dependent on the atmosphere, it is possible to constrain the solution using the statistics of the atmosphere. An investigation shows the feasibility of this approach. The bispectrum is another post-processing method, which reconstructs the spectrum of the object. The key component for phase preservation is the property of phase closure, and its application as prior information for blind deconvolution is examined.
    Blind deconvolution techniques utilise only the information in the image channel to estimate the phase, which is difficult. An alternative method for phase estimation is from a Shack-Hartmann (SH) wavefront sensing channel. However, since phase information is present in both the wavefront sensing and the image channels simultaneously, both of these approaches suffer from the problem that phase information from only one channel is used. An improved estimate of the phase is achieved by a combination of these methods, ensuring that the phase estimation is made jointly from the data in both the image and the wavefront sensing measurements. This formulation, posed as a blind deconvolution framework, is investigated in this thesis. An additional advantage of this approach is that, since speckle images are recorded in a narrow band while wavefront sensing images are captured by a charge-coupled device (CCD) camera at all wavelengths, the splitting of the light does not compromise the light level for either channel. This provides a further incentive for using simultaneous data sets. The effectiveness of using Shack-Hartmann wavefront sensing data for phase estimation relies on the accuracy of locating the data spots. The commonly used method, which calculates the centre of gravity of the image, is in fact prone to noise and suboptimal. An improved method for spot location based on blind deconvolution is demonstrated.
    Ground-based adaptive optics technologies aim to correct for atmospheric turbulence in real time. Although much success has been achieved, the space- and time-varying nature of the atmosphere renders the accurate measurement of atmospheric properties difficult. It is therefore usual to perform additional post-processing on the AO data. As a result, some of the techniques developed in this thesis are applicable to adaptive optics. One of the methods which utilise elements of both adaptive optics and post-processing is the hybrid technique of deconvolution from wavefront sensing (DWFS). Here, both the speckle images and the SH wavefront sensing data are used. The original proposal of DWFS is simple to implement but suffers from the problem that the magnitude of the object spectrum cannot be reconstructed accurately. The solution proposed for overcoming this is to use an additional set of reference star measurements. This, however, does not completely remove the original problem, and in addition it introduces other difficulties associated with reference star measurements, such as anisoplanatism and the reduction of valuable observing time. In this thesis a parameterised solution is examined which removes the need for a reference star, as well as offering the potential to overcome the problem of estimating the magnitude of the object spectrum.
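    For reference, the centre-of-gravity spot location that the thesis identifies as noise-prone can be sketched as below; the subaperture grid layout and the optional bias subtraction are assumptions, and the thesis's blind-deconvolution-based replacement is not shown.

```python
import numpy as np

def centroid_spot(subimage, bias=0.0):
    """Centre-of-gravity spot location within one Shack-Hartmann subaperture.
    Noise-prone: every pixel, however faint, pulls on the estimate."""
    img = np.clip(subimage.astype(float) - bias, 0.0, None)
    total = img.sum()
    if total == 0.0:
        return np.nan, np.nan
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return float((yy * img).sum() / total), float((xx * img).sum() / total)

def sh_slopes(frame, n_sub, ref_spots):
    """Wavefront slopes as spot displacements from reference positions, one per subaperture."""
    sy, sx = frame.shape[0] // n_sub, frame.shape[1] // n_sub
    slopes = np.zeros((n_sub, n_sub, 2))
    for i in range(n_sub):
        for j in range(n_sub):
            cy, cx = centroid_spot(frame[i * sy:(i + 1) * sy, j * sx:(j + 1) * sx])
            slopes[i, j] = (cy - ref_spots[i, j, 0], cx - ref_spots[i, j, 1])
    return slopes
```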