
    Restoration of Atmospheric Turbulence Degraded Video using Kurtosis Minimization and Motion Compensation

    In this thesis work, the background of atmospheric turbulence degradation in imaging was reviewed and two aspects were highlighted: blurring and geometric distortion. The turbulence blurring parameter is determined by the atmospheric turbulence condition, which is often unknown; therefore, a blur identification technique was developed based on higher-order statistics (HOS). It was observed that the kurtosis generally increases as an image becomes blurred (smoothed). This observation was interpreted in the frequency domain in terms of phase correlation, and kurtosis-minimization-based blur identification is built upon it. It was shown that kurtosis minimization is effective in identifying the blurring parameter directly from the degraded image. Kurtosis minimization is a general method for blur identification; it has been tested on a variety of blurs such as Gaussian blur, out-of-focus blur, and motion blur. To compensate for the geometric distortion, earlier work on turbulent motion compensation was extended to deal with situations in which there is camera/object motion. Trajectory smoothing is used to suppress the turbulent motion while preserving the real motion. Though the scintillation effect of atmospheric turbulence is not considered separately, it can be handled in the same way as multiple-frame denoising while the motion trajectories are built.
    Ph.D. Committee Chair: Mersereau, Russell; Committee Co-Chair: Smith, Mark; Committee Member: Lanterman, Aaron; Committee Member: Wang, May; Committee Member: Tannenbaum, Allen; Committee Member: Williams, Douglas
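    As a rough illustration of the kurtosis-minimization idea described above, the sketch below assumes a Gaussian blur model, a simple Wiener-style inverse filter, and a coarse grid of candidate blur widths; the function names, the noise-to-signal constant, and the synthetic test image are illustrative and not taken from the thesis.

```python
# Sketch: identify a blur parameter by restoring with candidate widths and
# keeping the one whose restoration has minimal kurtosis. Assumptions (not
# from the thesis): Gaussian PSF, crude Wiener-style inverse filter.
import numpy as np
from scipy.stats import kurtosis
from scipy.ndimage import gaussian_filter

def gaussian_otf(shape, sigma):
    """Frequency response of a Gaussian blur with std `sigma` (pixels)."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))

def wiener_deblur(image, sigma, nsr=1e-2):
    """Crude Wiener-style deconvolution assuming a Gaussian blur of width `sigma`."""
    H = gaussian_otf(image.shape, sigma)
    G = np.fft.fft2(image)
    return np.real(np.fft.ifft2(G * H / (H ** 2 + nsr)))

def identify_blur(degraded, sigmas=np.linspace(0.5, 4.0, 15)):
    """Return the candidate sigma whose restoration has minimal kurtosis."""
    scores = [kurtosis(wiener_deblur(degraded, s), axis=None) for s in sigmas]
    return sigmas[int(np.argmin(scores))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sharp = np.kron(rng.random((16, 16)), np.ones((8, 8)))  # blocky test image
    degraded = gaussian_filter(sharp, sigma=2.0)             # "unknown" blur
    print("estimated blur sigma:", identify_blur(degraded))
```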

    BATUD: Blind Atmospheric TUrbulence Deconvolution

    A new blind image deconvolution technique is developed for atmospheric turbulence deblurring. The originality of the proposed approach lies in its use of an actual physical model, known as the Fried kernel, that quantifies the impact of the atmospheric turbulence on the optical resolution of images. While the original expression of the Fried kernel can seem cumbersome at first sight, we show that it can be reparameterized in a much simpler form. This simple expression allows us to efficiently embed the kernel in the proposed Blind Atmospheric TUrbulence Deconvolution (BATUD) algorithm. BATUD is an iterative algorithm that alternately performs deconvolution and estimates the Fried kernel by jointly relying on a Gaussian Mixture Model prior of natural image patches and controlling the squared Euclidean norm of the Fried kernel. Numerical experiments show that our proposed blind deconvolution algorithm behaves well in different simulated turbulence scenarios as well as on real images. Not only does BATUD outperform state-of-the-art approaches used in atmospheric turbulence deconvolution in terms of image quality metrics, but it is also faster.
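    The abstract does not reproduce the kernel itself; for orientation, one widely used long-exposure turbulence OTF attributed to Fried is H(f) = exp(-3.44 (lambda f / r0)^(5/3)). The sketch below samples that form on an FFT grid with the wavelength and Fried parameter folded into a single normalized constant; it is not BATUD's reparameterization, and the parameter names are illustrative.

```python
# Sketch: the long-exposure atmospheric turbulence OTF often credited to Fried,
# H(f) = exp(-3.44 * (lambda * f / r0)**(5/3)), sampled on an FFT grid.
# `lam_over_r0` folds wavelength and Fried parameter into one normalized
# constant (cycles-per-pixel units); this is a simplification for illustration.
import numpy as np

def fried_long_exposure_otf(shape, lam_over_r0):
    """Long-exposure turbulence OTF on an FFT frequency grid."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    f = np.hypot(fx, fy)
    return np.exp(-3.44 * (lam_over_r0 * f) ** (5.0 / 3.0))

def turbulence_blur(image, lam_over_r0=4.0):
    """Apply the turbulence OTF to an image (circular convolution)."""
    H = fried_long_exposure_otf(image.shape, lam_over_r0)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))
```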

    Restoration of Videos Degraded by Local Isoplanatism Effects in the Near-Infrared Domain

    When observing a scene horizontally at a long distance in the near-infrared domain, degradations due to atmospheric turbulence often occur. In our previous work, we presented two hybrid methods to restore videos degraded by such local perturbations. These restoration algorithms take advantage of a space-time Wiener filter and a space-time regularization by the Laplacian operator. The Wiener and Laplacian regularization results are mixed differently depending on the distance between the current pixel and the nearest edge point. It was shown that a gradation between Wiener and Laplacian areas improves the quality of the results, so only the algorithm using a gradation is used in this article. Despite a significant improvement in the quality of the obtained images, our restoration results depend greatly on the segmentation image used in the video processing. We therefore propose a method to automatically select the best segmentation image.
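    A minimal sketch of the distance-to-edge gradation is given below, assuming edges are extracted from the segmentation image with a Canny detector and the blending weight decays exponentially with the distance transform; which restoration should dominate near edges, and the decay constant `tau`, are illustrative choices rather than the authors' settings.

```python
# Sketch of a distance-to-edge gradation: blend two restorations with a weight
# derived from the distance to the nearest edge point of a segmentation image.
# The weighting function and which estimator dominates near edges are
# assumptions for illustration, not the published method.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.feature import canny

def gradation_blend(restored_a, restored_b, segmentation_image, tau=5.0):
    """Blend restoration A (favoured near edges) with B (favoured far away)."""
    edges = canny(segmentation_image.astype(float))
    dist = distance_transform_edt(~edges)   # distance to the nearest edge pixel
    w = np.exp(-dist / tau)                 # 1 on edges, decays away from them
    return w * restored_a + (1.0 - w) * restored_b
```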

    Patch-based Gaussian mixture model for scene motion detection in the presence of atmospheric optical turbulence

    In long-range imaging regimes, atmospheric turbulence degrades image quality. In addition to blurring, the turbulence causes geometric distortion effects that introduce apparent motion in acquired video. This is problematic for image processing tasks, including image enhancement and restoration (e.g., superresolution) and aided target recognition (e.g., vehicle trackers). To mitigate these warping effects from turbulence, it is necessary to distinguish between actual in-scene motion and apparent motion caused by atmospheric turbulence. Previously, the current authors generated a synthetic video by injecting moving objects into a static scene and then applying a well-validated anisoplanatic atmospheric optical turbulence simulator. With known per-pixel truth of all moving objects, a per-pixel Gaussian mixture model (GMM) was developed as a baseline technique. In this paper, the baseline technique has been modified to improve performance while decreasing computational complexity. Additionally, the technique is extended to patches such that spatial correlations are captured, which results in further performance improvement.
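    The sketch below illustrates one way a patch-based GMM detector of this kind could be organized, assuming a temporal stack of registered frames and using scikit-learn's GaussianMixture; the patch size, number of components, and log-likelihood threshold are illustrative and not the paper's settings.

```python
# Sketch of patch-based GMM motion detection: fit a Gaussian mixture to each
# patch location's history and flag patches whose current appearance is
# unlikely under the model. Patch size, component count, and threshold are
# illustrative; the frames are assumed coarsely registered.
import numpy as np
from sklearn.mixture import GaussianMixture

def patch_stack(frames, i, j, p):
    """Vectorized p x p patch at (i, j) from every frame: shape (T, p*p)."""
    return np.stack([f[i:i + p, j:j + p].ravel() for f in frames])

def detect_motion_patches(frames, current, p=8, n_components=3, thresh=-50.0):
    """Boolean grid of patches flagged as containing real scene motion."""
    H, W = current.shape
    flags = np.zeros((H // p, W // p), dtype=bool)
    for bi, i in enumerate(range(0, H - p + 1, p)):
        for bj, j in enumerate(range(0, W - p + 1, p)):
            gmm = GaussianMixture(n_components=n_components,
                                  covariance_type="diag").fit(
                patch_stack(frames, i, j, p))
            score = gmm.score_samples(current[i:i + p, j:j + p].ravel()[None, :])
            flags[bi, bj] = score[0] < thresh  # unlikely patch => real motion
    return flags
```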

    Differential Zernike filter for phasing of segmented mirror and image processing

    The major objective of this thesis is to study the differential Zernike filter and its applications in segmented-mirror phasing and image processing. In terms of phasing, we provide both theoretical analysis and simulations for a differential Zernike filter based phasing technique, and find that the differential Zernike filter consistently performs better than its counterpart, the traditional Zernike filter. We also combine the differential Zernike filter with a feedback loop to obtain a gradient-flow optimization dynamic system. This system is shown to be capable of separating (static) misalignment errors of segmented mirrors from (dynamic) atmospheric turbulence, and therefore of suppressing the effects of atmospheric turbulence. Beyond segmented-mirror phasing, we also apply the Zernike feedback system to image processing. With the same system dynamics as in segment phasing, the Zernike filter feedback system is capable of suppressing a static noisy background, allowing the single-particle tracking algorithm to work even at very low signal-to-noise ratios. Finally, we apply an efficient multiple-particle tracking algorithm to a living-cell image sequence. This algorithm is shown to handle higher particle densities, under which single-particle tracking methods fail.
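    As background for the Zernike-filter machinery, the sketch below shows a common phase-contrast formulation in which the focal-plane core receives a phase shift and a "differential" signal is formed as the difference of two exposures with opposite shifts; the core radius, shift value, and small-phase interpretation are assumptions for illustration, not the thesis's exact filter or feedback dynamics.

```python
# Sketch of a Zernike-type phase-contrast measurement and a differential
# variant formed as the difference of two exposures with opposite focal-plane
# phase shifts. Core radius and shift are illustrative assumptions.
import numpy as np

def zernike_intensity(pupil_phase, pupil_mask, shift, core_radius=2):
    """Image-plane intensity after phase-shifting the focal-plane core by `shift` rad."""
    field = pupil_mask * np.exp(1j * pupil_phase)
    focal = np.fft.fftshift(np.fft.fft2(field))
    ny, nx = focal.shape
    y, x = np.ogrid[:ny, :nx]
    core = (x - nx // 2) ** 2 + (y - ny // 2) ** 2 <= core_radius ** 2
    focal = np.where(core, focal * np.exp(1j * shift), focal)
    return np.abs(np.fft.ifft2(np.fft.ifftshift(focal))) ** 2

def differential_zernike(pupil_phase, pupil_mask, shift=np.pi / 2):
    """Difference of +shift and -shift exposures; ~linear in small phase errors."""
    return (zernike_intensity(pupil_phase, pupil_mask, +shift)
            - zernike_intensity(pupil_phase, pupil_mask, -shift))
```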

    Motion Segmentation Aided Super Resolution Image Reconstruction

    This dissertation addresses Super Resolution (SR) Image Reconstruction with a focus on motion segmentation. The main thrust is Information Complexity guided Gaussian Mixture Models (GMMs) for statistical background modeling. In the process of developing our framework we also focus on two other topics: motion trajectory estimation toward global and local scene change detection, and image reconstruction to obtain high resolution (HR) representations of the moving regions. Such a framework is used for dynamic scene understanding and for recognition of individuals and threats with the help of image sequences recorded with either stationary or non-stationary camera systems. We introduce a new technique called Information Complexity guided Statistical Background Modeling, employing GMMs that are optimal with respect to information complexity criteria. Moving objects are segmented out through background subtraction, which utilizes the computed background model. This technique produces results superior to competing background modeling strategies. State-of-the-art SR Image Reconstruction studies combine the information from a set of only slightly different low resolution (LR) images of a static scene to construct an HR representation. The crucial challenge not handled in these studies is accumulating the corresponding information from highly displaced moving objects. In this respect, a framework for SR Image Reconstruction of moving objects with such large displacements is developed. Our assumption is that the LR images differ from each other due to local motion of the objects and the global motion of the scene imposed by a non-stationary imaging system. Contrary to traditional SR approaches, we employ several steps: suppression of the global motion, motion segmentation accompanied by background subtraction to extract moving objects, suppression of the local motion of the segmented regions, and super-resolving the accumulated information coming from the moving objects rather than from the whole scene. This results in a reliable offline SR Image Reconstruction tool which handles several types of dynamic scene changes, compensates for the impact of the camera system, and provides data redundancy through background removal. The framework proved superior to state-of-the-art algorithms, which make no significant effort toward dynamic scene representation for non-stationary camera systems.
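    A minimal sketch of criterion-guided GMM background modeling is shown below, with scikit-learn's BIC standing in purely for the information-complexity criterion used in the dissertation; the per-pixel intensity model, component range, and likelihood threshold are illustrative.

```python
# Sketch: fit mixtures with an increasing number of components to each pixel's
# intensity history and keep the model minimizing an information criterion
# (BIC here, as a stand-in), then flag unlikely pixels as foreground.
import numpy as np
from sklearn.mixture import GaussianMixture

def best_gmm(samples, max_components=5):
    """Pick the mixture whose criterion (here BIC) is smallest."""
    X = samples.reshape(-1, 1)
    models = [GaussianMixture(n_components=k).fit(X)
              for k in range(1, max_components + 1)]
    return min(models, key=lambda m: m.bic(X))

def foreground_mask(history, frame, thresh=-8.0):
    """history: (T, H, W) intensity stack (T >= max_components); frame: (H, W)."""
    T, H, W = history.shape
    mask = np.zeros((H, W), dtype=bool)
    for i in range(H):
        for j in range(W):
            gmm = best_gmm(history[:, i, j].astype(float))
            mask[i, j] = gmm.score_samples([[float(frame[i, j])]])[0] < thresh
    return mask
```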

    Foreground segmentation in atmospheric turbulence degraded video sequences to aid in background stabilization

    Video sequences captured over a long range through the turbulent atmosphere contain some degree of atmospheric turbulence degradation (ATD). Stabilizing the geometric distortions present in video sequences that contain both ATD and objects undergoing real motion is a challenging task, because it is difficult to discriminate which visible motion is real and which is caused by ATD warping. As a result, most stabilization techniques applied to ATD sequences distort the real motion in the sequence. In this study we propose a new method to classify foreground regions in ATD video sequences. This classification is used to stabilize the background of the scene while preserving objects undergoing real motion by compositing them back into the sequence. A hand-annotated dataset of three ATD sequences is produced, with which the performance of this approach can be quantitatively measured and compared against the current state-of-the-art.
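    The sketch below illustrates the stabilize-then-composite idea in the simplest terms, using a temporal median as a geometry-stabilized background and a generic OpenCV background subtractor standing in for the proposed foreground classifier; the history length and dilation size are illustrative.

```python
# Sketch: stabilize the background with a temporal median and composite
# detected foreground (real motion) back on top. The foreground classifier is
# a generic OpenCV subtractor standing in for the paper's method.
import cv2
import numpy as np

def stabilize_with_foreground(frames):
    """frames: list of grayscale uint8 frames; returns composited frames."""
    stack = np.stack(frames)
    background = np.median(stack, axis=0).astype(np.uint8)  # geometry-stable
    subtractor = cv2.createBackgroundSubtractorMOG2(history=len(frames),
                                                    detectShadows=False)
    for f in frames:            # warm up the subtractor on the whole sequence
        subtractor.apply(f)
    out = []
    for f in frames:
        mask = subtractor.apply(f, learningRate=0)          # foreground mask
        mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))  # pad object borders
        out.append(np.where(mask > 0, f, background))       # composite
    return out
```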