
    Extended object reconstruction in adaptive-optics imaging: the multiresolution approach

    We propose the application of multiresolution transforms, such as wavelets (WT) and curvelets (CT), to the reconstruction of images of extended objects acquired with adaptive-optics (AO) systems. Such multichannel approaches normally use probabilistic tools to distinguish significant structures from noise and reconstruction residuals. We also test the long-standing assumption that image-reconstruction algorithms using static PSFs are unsuitable for AO imaging. We convolve an image of Saturn taken with the Hubble Space Telescope (HST) with AO PSFs from the 5-m Hale telescope at the Palomar Observatory and add both shot and readout noise. We then apply different approaches to the blurred and noisy data to recover the original object: multi-frame blind deconvolution (the IDAC algorithm), myopic deconvolution with regularization (MISTRAL), and wavelet- or curvelet-based static-PSF deconvolution (the AWMLE and ACMLE algorithms). We compare the results using the mean squared error (MSE) and the structural similarity index (SSIM), and discuss the strengths and weaknesses of the two metrics. We find that CT produces better results than WT, as measured by both MSE and SSIM. Multichannel deconvolution with a static PSF generally produces better results than the myopic/blind approaches on the images we tested, showing that a method's ability to suppress noise and to track the underlying iterative process is just as critical as the myopic/blind approaches' capability to update the PSF.
    Comment: In revision in Astronomy & Astrophysics. 19 pages, 13 figures.
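The two comparison metrics named above can be sketched directly. This is a minimal, illustrative version: the SSIM here is computed globally over the whole image, whereas the standard index averages it over local Gaussian windows; the constants follow the common defaults C1 = (0.01 L)^2 and C2 = (0.03 L)^2 for dynamic range L.

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two images of equal shape."""
    return float(np.mean((x - y) ** 2))

def ssim_global(x, y, data_range=1.0):
    """Single-window (global) SSIM. The standard metric averages this
    quantity over local windows; computed globally here for brevity."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

An identical pair of images gives MSE 0 and SSIM 1; any degradation pushes MSE up and SSIM below 1, which is the sense in which the two metrics rank the deconvolution results.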

    Superresolution imaging: A survey of current techniques

    Cristóbal, G., Gil, E., Šroubek, F., Flusser, J., Miravet, C., Rodríguez, F. B., “Superresolution imaging: A survey of current techniques”, Proceedings of SPIE - The International Society for Optical Engineering, 7074, 2008. Copyright 2008, Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.
    Imaging plays a key role in many diverse areas of application, such as astronomy, remote sensing, microscopy, and tomography. Owing to imperfections of measuring devices (e.g., optical degradations, limited sensor size) and instability of the observed scene (e.g., object motion, media turbulence), acquired images can be indistinct, noisy, and may exhibit insufficient spatial and temporal resolution. In particular, several external effects blur images. Techniques for recovering the original image include blind deconvolution (to remove blur) and superresolution (SR). The stability of these methods depends on having more than one image of the same scene; differences between images are necessary to provide new information, but they can be almost imperceptible. State-of-the-art SR techniques achieve remarkable resolution enhancement by estimating the subpixel shifts between images, but they lack any apparatus for estimating the blurs. In this paper, after reviewing current SR techniques, we describe two SR methods recently developed by the authors. First, we introduce a variational method that minimizes a regularized energy function with respect to the high-resolution image and the blurs, establishing a unified way to estimate both simultaneously. Estimating the blurs automatically yields shift estimates with subpixel accuracy, which is essential for good SR performance. Second, we describe an innovative learning-based SR algorithm using a neural architecture. Comparative experiments on real data illustrate the robustness and utility of both methods.
    This research has been partially supported by grants TEC2007-67025/TCM, TEC2006-28009-E, BFI-2003-07276, and TIN-2004-04363-C03-03 from the Spanish Ministry of Science and Innovation; by PROFIT projects FIT-070000-2003-475 and FIT-330100-2004-91; by the Czech Ministry of Education under project No. 1M0572 (Research Center DAR); by the Czech Science Foundation under project No. GACR 102/08/1593; and by the CSIC-CAS bilateral project 2006CZ002.
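As a rough illustration of the variational idea described above, the sketch below runs gradient descent on a regularized energy E(x) = ½||DHx − y||² + ½λ||∇x||² with the blur H held fixed as a 3×3 box filter. The paper's actual contribution is to estimate the blurs jointly with the high-resolution image; the fixed PSF, the step size, and all function names here are illustrative assumptions.

```python
import numpy as np

def box_blur(x):
    # 3x3 average under periodic boundaries; symmetric, so it is its own adjoint
    return sum(np.roll(np.roll(x, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0

def down(x, s=2):
    return x[::s, ::s]              # decimation operator D

def up(y, shape, s=2):
    x = np.zeros(shape)             # adjoint D^T: zero-filled upsampling
    x[::s, ::s] = y
    return x

def laplacian(x):
    return (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
            np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)

def sr_restore(y, shape, lam=0.01, step=0.5, iters=300):
    """Gradient descent on E(x) = 0.5*||D H x - y||^2 + 0.5*lam*||grad x||^2."""
    x = np.kron(y, np.ones((2, 2)))            # nearest-neighbour initial guess
    for _ in range(iters):
        r = down(box_blur(x)) - y              # data residual, D H x - y
        g = box_blur(up(r, shape)) - lam * laplacian(x)   # H^T D^T r - lam * Lap(x)
        x -= step * g
    return x
```

With the blur known, estimating the high-resolution image reduces to this smooth minimization; the paper's joint formulation alternates or jointly descends over the blurs as well, which is what yields the subpixel shift estimates for free.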

    AT-DDPM: Restoring Faces degraded by Atmospheric Turbulence using Denoising Diffusion Probabilistic Models

    Although many long-range imaging systems are designed to support extended-vision applications, a natural obstacle to their operation is degradation due to atmospheric turbulence, which significantly degrades image quality by introducing blur and geometric distortion. In recent years, various deep-learning-based single-image atmospheric turbulence mitigation methods, including CNN-based and GAN-inversion-based approaches, have been proposed to remove the distortion in the image. However, some of these methods are difficult to train and often fail to reconstruct facial features, producing unrealistic results, especially in the case of high turbulence. Denoising Diffusion Probabilistic Models (DDPMs) have recently gained traction because of their stable training process and their ability to generate high-quality images. In this paper, we propose the first DDPM-based solution for the problem of atmospheric turbulence mitigation. We also propose a fast sampling technique that reduces inference times for conditional DDPMs. Extensive experiments are conducted on synthetic and real-world data to show the significance of our model. To facilitate further research, all code and pretrained models are publicly available at http://github.com/Nithin-GK/AT-DDPM
    Comment: Accepted to IEEE WACV 2023
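For context, the closed-form forward (noising) process that any DDPM, including a conditional restoration model like the one proposed here, is trained to invert can be sketched as follows. The linear β schedule and its endpoints are the common defaults from the DDPM literature, not values taken from this paper.

```python
import numpy as np

def linear_beta_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    # per-step noise variances; defaults follow the original DDPM setup
    return np.linspace(beta_start, beta_end, T)

def q_sample(x0, t, alpha_bar, rng):
    """Draw x_t ~ q(x_t | x_0) = N(sqrt(ab_t) * x0, (1 - ab_t) * I)
    in closed form, skipping the t intermediate noising steps."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

betas = linear_beta_schedule()
alpha_bar = np.cumprod(1.0 - betas)  # abar_t = prod_{s<=t} (1 - beta_s)
```

The reverse (denoising) network and the fast conditional sampler are the paper's contributions and are not reproduced here; the point is only that x_t interpolates from the clean image at t = 0 to nearly pure Gaussian noise at t = T.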

    Digital Signal Processing

    Contains summary of research and reports on sixteen research projects.
    Supported by: U.S. Navy - Office of Naval Research (Contract N00014-75-C-0852); National Science Foundation Fellowship; NATO Fellowship; U.S. Navy - Office of Naval Research (Contract N00014-75-C-0951); National Science Foundation (Grant ECS79-15226); U.S. Navy - Office of Naval Research (Contract N00014-77-C-0257); Bell Laboratories; National Science Foundation (Grant ECS80-07102); Schlumberger-Doll Research Center Fellowship; Hertz Foundation Fellowship; Government of Pakistan Scholarship; U.S. Navy - Office of Naval Research (Contract N00014-77-C-0196); U.S. Air Force (Contract F19628-81-C-0002); Hughes Aircraft Company Fellowship.

    Physics-Driven Turbulence Image Restoration with Stochastic Refinement

    Image distortion by atmospheric turbulence is a stochastic degradation and a critical problem in long-range optical imaging systems. A substantial body of research has been conducted over the past decades, including model-based and, more recently, deep-learning solutions aided by synthetic data. Although fast, physics-grounded simulation tools have been introduced to help deep-learning models adapt to real-world turbulence conditions, the training of such models relies only on synthetic data and ground-truth pairs. This paper proposes the Physics-integrated Restoration Network (PiRN), which brings the physics-based simulator directly into the training process to help the network disentangle the stochasticity from the degradation and the underlying image. Furthermore, to overcome the "average effect" introduced by deterministic models and the domain gap between synthetic and real-world degradation, we introduce PiRN with Stochastic Refinement (PiRN-SR) to boost perceptual quality. Overall, PiRN and PiRN-SR improve generalization to real-world unknown turbulence conditions and provide state-of-the-art restoration in both pixel-wise accuracy and perceptual quality. Our code is available at https://github.com/VITA-Group/PiRN
    Comment: Accepted by ICCV 2023
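A toy stand-in for the stochastic physics-based degradation that PiRN places inside its training loop might look like the following: a fresh random tilt (global shift) plus a Gaussian blur on every call. The real simulator uses phase-screen/Zernike turbulence models; every parameter name and default here is an illustrative assumption, and the image is assumed square.

```python
import numpy as np

def degrade(x, rng, blur_sigma=1.5, max_shift=2):
    """Toy stochastic 'turbulence': a random global tilt followed by a
    Gaussian blur, redrawn at every call, mimicking how a physics
    simulator injects fresh randomness into each training step."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    x = np.roll(np.roll(x, dy, 0), dx, 1)          # geometric distortion
    # Gaussian blur applied in the Fourier domain (periodic boundaries);
    # the transfer function of a Gaussian of std sigma is exp(-2 pi^2 sigma^2 f^2)
    n = x.shape[0]
    f = np.fft.fftfreq(n)
    g = np.exp(-2.0 * (np.pi * blur_sigma) ** 2 * (f[:, None] ** 2 + f[None, :] ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(x) * g))
```

Because the degradation is resampled per call, a network trained against it sees many distorted versions of each clean image, which is the one-to-many setting in which a purely deterministic regressor collapses to the blurry conditional mean that the stochastic refinement stage is designed to avoid.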

    Digital Signal Processing

    Contains an introduction and reports on seventeen research projects.
    Supported by: U.S. Navy - Office of Naval Research (Contract N00014-77-C-0266); Amoco Foundation Fellowship; U.S. Navy - Office of Naval Research (Contract N00014-81-K-0742); National Science Foundation (Grant ECS80-07102); U.S. Army Research Office (Contract DAAG29-81-K-0073); Hughes Aircraft Company Fellowship; American Edwards Labs. Grant; Whitaker Health Sciences Fund; Pfeiffer Foundation Grant; Schlumberger-Doll Research Center Fellowship; Government of Pakistan Scholarship; U.S. Navy - Office of Naval Research (Contract N00014-77-C-0196); National Science Foundation (Grant ECS79-15226); Hertz Foundation Fellowship.

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, including computer vision (CV), speech recognition, and natural language processing. Whereas remote sensing (RS) poses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities related to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial, and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL models.
    Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing