88 research outputs found

    Polarimeter Blind Deconvolution Using Image Diversity

    This research presents an algorithm that improves the ability to view objects using an electro-optical imaging system with at least one polarization-sensitive channel in addition to the primary channel. An innovative algorithm for detecting and estimating the defocus aberration present in an image is also developed. Given a known defocus aberration, an iterative polarimeter deconvolution algorithm is developed using a generalized expectation-maximization (GEM) model. The polarimeter deconvolution algorithm is then extended to an iterative polarimeter multiframe blind deconvolution (PMFBD) algorithm with an unknown aberration. On both simulated and laboratory images, the new PMFBD algorithm clearly outperforms an RL-based MFBD algorithm: it converges significantly faster and reproduces the targets with better fidelity. Clearly, leveraging polarization data in electro-optical imaging systems has the potential to significantly improve the ability to resolve objects and, thus, improve Space Situational Awareness.
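    To illustrate the iterative structure that GEM-based estimators of this kind build on, the sketch below shows a generic multichannel EM (Richardson-Lucy-type) deconvolution update with a known, shift-invariant PSF per channel. It is not the authors' PMFBD algorithm; the function names, flat initialization, and iteration count are illustrative assumptions.

```python
# Generic multichannel EM (Richardson-Lucy-style) deconvolution sketch.
# Assumes a known, shift-invariant PSF per channel; NOT the authors' PMFBD code.
import numpy as np
from scipy.signal import fftconvolve

def rl_deconvolve(frames, psfs, n_iter=30, eps=1e-12):
    """frames, psfs: lists of 2-D arrays (one per polarization channel)."""
    obj = np.full(frames[0].shape, float(frames[0].mean()))  # flat initial object
    for _ in range(n_iter):
        ratio_sum = np.zeros_like(obj)
        for y, h in zip(frames, psfs):
            blur = fftconvolve(obj, h, mode="same") + eps        # model image
            ratio_sum += fftconvolve(y / blur, h[::-1, ::-1], mode="same")
        obj *= ratio_sum / len(frames)                           # multiplicative EM update
    return obj
```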

    Block Matching and Wiener Filtering Approach to Optical Turbulence Mitigation and Its Application to Simulated and Real Imagery with Quantitative Error Analysis

    We present a block-matching and Wiener filtering approach to atmospheric turbulence mitigation for long-range imaging of extended scenes. We evaluate the proposed method, along with some benchmark methods, using simulated and real image sequences. The simulated data are generated with a simulation tool developed by one of the authors. These data provide objective truth and allow for quantitative error analysis. The proposed turbulence mitigation method takes a sequence of short-exposure frames of a static scene and outputs a single restored image. A block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged, and the average image is processed with a Wiener filter to provide deconvolution. An important aspect of the proposed method lies in how we model the degradation point spread function (PSF) for the purposes of Wiener filtering. We use a parametric model that takes into account the level of geometric correction achieved during image registration. This is unlike any method we are aware of in the literature. By matching the PSF to the level of registration in this way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. We also describe a method for estimating the atmospheric coherence diameter (or Fried parameter) from the estimated motion vectors. We provide a detailed performance analysis that illustrates how the key tuning parameters impact system performance. The proposed method is relatively simple computationally, yet it has excellent performance in comparison with state-of-the-art benchmark methods in our study.
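    As a rough illustration of the "average, then Wiener-filter" stage only, the sketch below assumes the frames have already been registered (the block-matching step and the paper's registration-aware parametric PSF model are not reproduced) and uses an illustrative Gaussian PSF with a fixed noise-to-signal ratio.

```python
# Frame fusion plus classical Wiener deconvolution; the Gaussian PSF and fixed
# noise-to-signal ratio stand in for the paper's registration-aware PSF model.
import numpy as np

def gaussian_psf(shape, sigma):
    y, x = np.indices(shape)
    cy, cx = (np.array(shape) - 1) / 2.0
    h = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    return h / h.sum()

def wiener_restore(registered_frames, psf, nsr=1e-2):
    """psf: same shape as a frame, centered at the array center."""
    avg = np.mean(registered_frames, axis=0)          # fuse short-exposure frames
    H = np.fft.fft2(np.fft.ifftshift(psf))            # move PSF center to the origin
    G = np.fft.fft2(avg)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)           # classical Wiener filter
    return np.real(np.fft.ifft2(W * G))

# Example usage (frames: list of registered 2-D arrays):
#   restored = wiener_restore(frames, gaussian_psf(frames[0].shape, sigma=2.0))
```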

    Deep learning for anisoplanatic optical turbulence mitigation in long-range imaging

    We present a deep learning approach for restoring images degraded by atmospheric optical turbulence. We consider the case of terrestrial imaging over long ranges with a wide field of view. This produces an anisoplanatic imaging scenario where turbulence warping and blurring vary spatially across the image. The proposed turbulence mitigation (TM) method assumes that a sequence of short-exposure images is acquired. A block matching (BM) registration algorithm is applied to the observed frames for dewarping, and the resulting images are averaged. A convolutional neural network (CNN) is then employed to perform spatially adaptive restoration. We refer to the proposed TM algorithm as the block matching and CNN (BM-CNN) method. Training the CNN is accomplished using simulated data from a fast turbulence simulation tool capable of rapidly producing a large amount of degraded imagery from declared truth images. Testing is done using independent data simulated with a different well-validated numerical wave-propagation simulator. Our proposed BM-CNN TM method is evaluated in a number of experiments using quantitative metrics. The quantitative analysis is made possible by virtue of having truth imagery from the simulations. A number of restored images are provided for subjective evaluation. We demonstrate that the BM-CNN TM method outperforms the benchmark methods in the scenarios tested.
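    Purely as a stand-in for the restoration network, the sketch below defines a small residual CNN that maps a fused (dewarped-and-averaged) frame to a restored image. The layer widths, depth, and residual design are assumptions; the BM-CNN architecture and its training data are not reproduced here.

```python
# Illustrative small residual CNN for single-channel image restoration (PyTorch).
import torch
import torch.nn as nn

class TinyRestorationCNN(nn.Module):
    def __init__(self, channels=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):          # x: (N, 1, H, W) fused frame
        return x + self.body(x)    # residual learning: the network predicts the correction

# Training against simulated truth imagery would minimize, e.g.:
#   loss = nn.functional.mse_loss(model(fused), truth)
```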

    Single-shot blind deconvolution with coded aperture

    In this paper, we present a method for single-shot blind deconvolution incorporating a coded aperture (CA). In this method, we utilize the CA, inserted on the pupil plane, as a support constraint in blind deconvolution. Both the object and the point spread function of the turbulence are estimated from a single captured image by a reconstruction algorithm with the CA support. The proposed method is demonstrated by a simulation and an experiment in which point sources are recovered under severe turbulence.
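    To show how a pupil-plane coded aperture acts as a support constraint on the estimated PSF, the sketch below forms a PSF from a binary CA mask and a candidate pupil phase; any phase estimate is simply re-masked by the known CA each iteration. The random mask, grid size, and phase screen are illustrative assumptions, not the paper's reconstruction algorithm.

```python
# PSF formation under a coded-aperture (CA) support constraint on the pupil field.
import numpy as np

def ca_psf(ca_mask, pupil_phase):
    """ca_mask: binary CA transmission; pupil_phase: candidate pupil phase (rad)."""
    pupil = ca_mask * np.exp(1j * pupil_phase)        # CA support constrains the field
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()

rng = np.random.default_rng(0)
n = 128
yy, xx = np.indices((n, n)) - n // 2
aperture = (xx**2 + yy**2) < (0.4 * n) ** 2           # circular pupil
ca_mask = aperture * (rng.random((n, n)) > 0.5)       # random binary code (illustrative)
psf = ca_psf(ca_mask, rng.normal(0.0, 1.0, (n, n)))   # PSF under a random phase screen
```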

    Binary Classification of an Unknown Object through Atmospheric Turbulence Using a Polarimetric Blind-Deconvolution Algorithm Augmented with Adaptive Degree of Linear Polarization Priors

    This research develops an enhanced material-classification algorithm to discriminate between metals and dielectrics using passive polarimetric imagery degraded by atmospheric turbulence. To improve the performance of the existing technique for near-normal collection geometries, the proposed algorithm adaptively updates the degree of linear polarization (DoLP) priors as more information becomes available about the scene. Three adaptive approaches are presented. The higher-order super-Gaussian method fits the distribution of DoLP estimates with a sum of two super-Gaussian functions to update the priors. The Gaussian method computes the classification threshold value, from which the priors are updated, by fitting the distribution of DoLP estimates with a sum of two Gaussian functions. Lastly, the distribution-averaging method approximates the threshold value by finding the mean of the DoLP distribution. The experimental results confirm that the new adaptive method significantly extends the collection geometry range of validity for the existing technique.
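    A minimal sketch of the Gaussian-method idea follows: fit the histogram of DoLP estimates with a sum of two Gaussians (one per material population) and place a classification threshold between them. The initial guesses and the midpoint threshold rule are assumptions; the paper's prior-update step is not shown.

```python
# Fit a two-Gaussian mixture to the DoLP histogram and derive a threshold.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, m1, s1, a2, m2, s2):
    return (a1 * np.exp(-0.5 * ((x - m1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - m2) / s2) ** 2))

def dolp_threshold(dolp_estimates, bins=100):
    hist, edges = np.histogram(dolp_estimates, bins=bins, range=(0.0, 1.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [hist.max(), 0.1, 0.05, hist.max() / 2, 0.5, 0.1]   # assumed initial guesses
    params, _ = curve_fit(two_gaussians, centers, hist, p0=p0, maxfev=10000)
    m1, m2 = params[1], params[4]
    return 0.5 * (m1 + m2)    # midpoint between the two fitted modes (assumed rule)
```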

    Blind Deconvolution of Anisoplanatic Images Collected by a Partially Coherent Imaging System

    Coherent imaging systems offer unique benefits to system operators in terms of resolving power, range gating, selective illumination, and utility for applications where passively illuminated targets have limited emissivity or reflectivity. This research proposes a novel blind deconvolution algorithm that is based on a maximum a posteriori Bayesian estimator constructed upon a physically based statistical model for the intensity of the partially coherent light at the imaging detector. The estimator is initially constructed using a shift-invariant system model and is later extended to the case of a shift-variant optical system by the addition of a transfer function term that quantifies optical blur for wide fields of view and atmospheric conditions. The estimators are evaluated using both synthetically generated imagery and experimentally collected image data from an outdoor optical range. The research is extended to consider the effects of weighted frame averaging for the individual short-exposure frames collected by the imaging system. It was found that binary weighting of ensemble frames significantly increases spatial resolution.
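    To make the binary frame-weighting idea concrete, the sketch below keeps only the sharpest short-exposure frames (scored by a simple gradient-energy metric) before averaging. Both the metric and the keep fraction are assumptions, not the dissertation's selection rule.

```python
# Binary weighted frame averaging: average only the sharpest frames.
import numpy as np

def sharpness(frame):
    gy, gx = np.gradient(frame.astype(float))
    return np.mean(gx**2 + gy**2)                 # gradient-energy sharpness proxy

def binary_weighted_average(frames, keep_fraction=0.3):
    scores = np.array([sharpness(f) for f in frames])
    n_keep = max(1, int(keep_fraction * len(frames)))
    keep = np.argsort(scores)[::-1][:n_keep]      # indices of the sharpest frames
    return np.mean([frames[i] for i in keep], axis=0)
```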

    Zernike Integrated Partial Phase Error Reduction Algorithm

    A modification to the error reduction algorithm is reported in this paper for determining the prescription of an imaging system in terms of Zernike polynomials. The technique estimates the Zernike coefficients of the optical prescription as part of a modified Gerchberg-Saxton iteration combined with a new gradient-based phase unwrapping algorithm. Zernike coefficients are updated gradually as the error reduction algorithm converges by recovering the partial pupil phase that differs from the last known pupil phase estimate. In this way, the wrapped phase emerging during each iteration of the error reduction algorithm does not represent the entire wrapped phase of the pupil electric field and can be unwrapped with greater ease. The algorithm is tested in conjunction with a blind deconvolution algorithm using measured laboratory data with a known optical prescription and is compared to a baseline approach utilizing a combination of the error reduction algorithm and a least-squares phase unwrapper previously reported in the literature. The combination of the modified error reduction algorithm and the new least-squares Zernike phase unwrapper is shown to produce superior performance for applications where it is desirable that Zernike coefficients be estimated during each iteration of the blind deconvolution procedure.
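    For context on the Zernike parameterization itself, the sketch below fits a few low-order Zernike coefficients to an unwrapped pupil-phase map by linear least squares. Only piston, tilt, and defocus terms are included, and the grid and normalization are assumptions; the paper's modified error-reduction and partial-phase machinery is not reproduced.

```python
# Least-squares fit of low-order Zernike coefficients to a pupil-phase map.
import numpy as np

def zernike_basis(n_grid):
    y, x = (np.indices((n_grid, n_grid)) - (n_grid - 1) / 2) / ((n_grid - 1) / 2)
    r, t = np.hypot(x, y), np.arctan2(y, x)
    mask = r <= 1.0
    modes = [np.ones_like(r),               # piston
             2 * r * np.cos(t),             # tilt x
             2 * r * np.sin(t),             # tilt y
             np.sqrt(3) * (2 * r**2 - 1)]   # defocus (Noll normalization)
    return np.array([m[mask] for m in modes]).T, mask

def fit_zernikes(phase_map, n_grid):
    A, mask = zernike_basis(n_grid)
    coeffs, *_ = np.linalg.lstsq(A, phase_map[mask], rcond=None)
    return coeffs                            # [piston, tilt-x, tilt-y, defocus]
```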

    Improving Closely Spaced Dim Object Detection Through Improved Multiframe Blind Deconvolution

    This dissertation focuses on improving the ability to detect dim stellar objects that are in close proximity to a bright one, through statistical image processing of short-exposure images. The goal is to improve space domain awareness capabilities with the existing infrastructure. Two new algorithms are developed. The first is Neighborhood System Blind Deconvolution, in which the data functions are separated into the bright-object, neighborhood-system, and background functions. The second is Dimension Reduction Blind Deconvolution, in which the object function is represented by the product of two matrices. Both are designed to overcome photon counting noise and random, turbulent atmospheric conditions. The performance of the algorithms is compared with that of Multi-Frame Blind Deconvolution. The new algorithms are tested and validated with computer-generated data. The Neighborhood System Blind Deconvolution is also modified to overcome undersampling effects, since it is validated on undersampled, laboratory-collected data. Even though the algorithms are designed for ground-to-space imaging systems, the same concept can be extended to space-to-space imaging. This research provides two improved techniques for closely spaced dim object detection.
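    To illustrate the dimension-reduction parameterization (representing the object function as a product of two smaller matrices, so far fewer unknowns are optimized), the sketch below factors an object array with a truncated SVD. The SVD is purely illustrative; the dissertation's blind-deconvolution update rules are not reproduced.

```python
# Low-rank object representation O ≈ A @ B via truncated SVD (illustrative only).
import numpy as np

def low_rank_object(obj, rank=8):
    U, s, Vt = np.linalg.svd(obj, full_matrices=False)
    A = U[:, :rank] * s[:rank]     # (H, rank)
    B = Vt[:rank, :]               # (rank, W)
    return A, B                    # obj is approximated by A @ B

obj = np.random.default_rng(1).random((64, 64))
A, B = low_rank_object(obj, rank=8)
relative_error = np.linalg.norm(obj - A @ B) / np.linalg.norm(obj)
```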

    Saturation Behaviors in Deep Turbulence

    Distributed-volume atmospheric turbulence near the ground significantly limits the performance of incoherent imaging and coherent beam projection systems operating over long horizontal paths. Defense, military and civilian surveillance, border security, and target identification applications call for terrestrial imaging and beam projection over very long horizontal paths, but atmospheric turbulence can blur the imagery and aberrate the laser beam to the point of uselessness. While many post-processing and adaptive optics techniques have been developed to mitigate the effects of turbulence, many of these techniques do not work as expected in stronger volumetric turbulence, or in many cases don't work at all. For these techniques to be effective, or for next-generation techniques to be developed, a better theoretical understanding of deep turbulence is necessary. In an attempt to improve that understanding, this work explores the saturation behavior of two features of deep turbulence: the anisoplanatic error and the branch-point density. The behavior of the anisoplanatic error over long horizontal and slant paths, where the angular extent of the scene is many times greater than the isoplanatic angle, is characterized by developing generalized expressions for the total, piston-removed, and piston-and-tilt-removed anisoplanatic error in non-Kolmogorov turbulence with a finite outer scale. An outcome of this work is that in many cases the anisoplanatic error saturates to a value less than 1 rad^2. This means that, while not actually infinite, the piston-removed and piston-and-tilt-removed isoplanatic angle is often significantly larger than expected. Additionally, the power-law exponent, outer-scale size, scene geometry, and source model play a large part in determining the effective isoplanatic angle. The limit imposed on the system by the anisoplanatic error is much less severe than predicted by the classical isoplanatic-angle expression, but only if we include the interplay of piston and/or global tilt removal, a finite outer scale, accurate image formation models, and realistic turbulence profiles. Additionally, wave-optics simulations are used to model the branch-point density as a function of turbulence strength, sampling grid resolution, and inner scale. Another outcome of this work is that increasing grid resolution and turbulence strength cause the branch-point density to grow without bound when no inner scale is used. When a non-zero inner scale is used, via a Hill spectrum, the growth of the branch-point density is significantly reduced as a function of increasing Rytov variance and saturates as a function of increasing inner scale.
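    For reference, the sketch below evaluates the classical Kolmogorov, infinite-outer-scale path integrals that the generalized expressions in this work extend: the Fried parameter r0 and the isoplanatic angle theta0 for a plane wave. The constant-Cn2 profile, path length, and wavelength are illustrative assumptions.

```python
# Classical Fried parameter r0 and isoplanatic angle theta0 from a Cn^2 profile.
import numpy as np

def r0_theta0(cn2, z, wavelength=1.55e-6):
    """cn2: Cn^2 profile [m^(-2/3)] sampled at path positions z [m] from the aperture."""
    k = 2 * np.pi / wavelength
    r0 = (0.423 * k**2 * np.trapz(cn2, z)) ** (-3 / 5)
    theta0 = (2.914 * k**2 * np.trapz(cn2 * z ** (5 / 3), z)) ** (-3 / 5)
    return r0, theta0

z = np.linspace(0.0, 5e3, 1000)        # 5 km horizontal path (assumption)
cn2 = np.full_like(z, 1e-15)           # constant-Cn2 profile (assumption)
r0, theta0 = r0_theta0(cn2, z)
print(f"r0 = {r0 * 100:.1f} cm, theta0 = {theta0 * 1e6:.1f} microrad")
```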