
    A Fast Image Super-Resolution Algorithm Using an Adaptive Wiener Filter

    A computationally simple super-resolution algorithm using a type of adaptive Wiener filter is proposed. The algorithm produces an improved resolution image from a sequence of low-resolution (LR) video frames with overlapping field of view. The algorithm uses subpixel registration to position each LR pixel value on a common spatial grid that is referenced to the average position of the input frames. The positions of the LR pixels are not quantized to a finite grid as with some previous techniques. The output high-resolution (HR) pixels are obtained using a weighted sum of LR pixels in a local moving window. Using a statistical model, the weights for each HR pixel are designed to minimize the mean squared error, and they depend on the relative positions of the surrounding LR pixels. Thus, these weights adapt spatially and temporally to changing distributions of LR pixels due to varying motion. Both a global and a spatially varying statistical model are considered here. Since the weights adapt to the distribution of LR pixels, the algorithm is quite robust and will not become unstable when an unfavorable distribution of LR pixels is observed. For translational motion, the algorithm has a low computational complexity and may be readily suitable for real-time and/or near real-time processing applications. With other motion models, the computational complexity goes up significantly. However, regardless of the motion model, the algorithm lends itself to parallel implementation. The efficacy of the proposed algorithm is demonstrated here in a number of experimental results using simulated and real video sequences. A computational analysis is also presented.
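The weight design described above can be sketched as follows: under an assumed parametric autocorrelation model for the underlying image (the exponential model and its parameter values here are illustrative, not taken from the paper), the MMSE weights for one HR pixel are found by solving a small linear system built from the relative positions of the nearby LR samples.

```python
import numpy as np

def awf_weights(sample_pos, hr_pos, rho=0.75, sigma2=1.0, noise_var=0.01):
    """MMSE weights for one HR pixel from nearby LR samples.

    sample_pos: (N, 2) coordinates of LR samples on the common grid.
    hr_pos:     (2,) coordinate of the HR pixel being estimated.
    Assumes an isotropic exponential autocorrelation r(d) = sigma2 * rho**d
    (an illustrative model choice) plus white observation noise.
    """
    sample_pos = np.asarray(sample_pos, dtype=float)
    # Pairwise distances between LR samples -> sample autocorrelation matrix
    d = np.linalg.norm(sample_pos[:, None, :] - sample_pos[None, :, :], axis=-1)
    R = sigma2 * rho ** d + noise_var * np.eye(len(sample_pos))
    # Distances from each LR sample to the HR pixel -> cross-correlation vector
    p = sigma2 * rho ** np.linalg.norm(sample_pos - hr_pos, axis=-1)
    return np.linalg.solve(R, p)  # weights w such that hr_est = w @ lr_values
```

The HR estimate is the dot product of these weights with the LR sample values in the window; because the weights depend only on the sample geometry, they adapt automatically as motion changes the local distribution of LR samples.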

    Scene-based nonuniformity correction with video sequences and registration

    We describe a new, to our knowledge, scene-based nonuniformity correction algorithm for array detectors. The algorithm relies on the ability to register a sequence of observed frames in the presence of the fixed-pattern noise caused by pixel-to-pixel nonuniformity. In low-to-moderate levels of nonuniformity, sufficiently accurate registration may be possible with standard scene-based registration techniques. If the registration is accurate, and motion exists between the frames, then groups of independent detectors can be identified that observe the same irradiance (or true scene value). These detector outputs are averaged to generate estimates of the true scene values. With these scene estimates, and the corresponding observed values through a given detector, a curve-fitting procedure is used to estimate the individual detector response parameters. These can then be used to correct for detector nonuniformity. The strength of the algorithm lies in its simplicity and low computational complexity. Experimental results illustrating the performance of the algorithm include visible-range imagery with simulated nonuniformity and infrared imagery with real nonuniformity.
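The per-detector curve fit can be sketched for a linear gain/bias response; the variable names and simulation values below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single detector observed across 200 registered frames.
# In the actual algorithm the "true scene" estimates come from averaging
# the outputs of independent detectors that saw the same irradiance.
scene = rng.uniform(0.0, 1.0, 200)        # estimated true scene values
gain, bias = 1.3, 0.2                     # unknown detector response
observed = gain * scene + bias + rng.normal(0.0, 0.01, scene.size)

# A least-squares line fit recovers the detector's response parameters,
# which are then inverted to correct the nonuniformity.
g_hat, b_hat = np.polyfit(scene, observed, 1)
corrected = (observed - b_hat) / g_hat
```

Repeating this fit independently for every detector in the array yields a full nonuniformity correction map.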

    Robust Super-resolution by Fusion of Interpolated Frames for Color and Grayscale Images

    Multi-frame super-resolution (SR) processing seeks to overcome undersampling issues that can lead to undesirable aliasing artifacts in imaging systems. A key factor in effective multi-frame SR is accurate subpixel inter-frame registration. Accurate registration is more difficult when frame-to-frame motion does not contain simple global translation and includes locally moving scene objects. SR processing is further complicated when the camera captures full color by using a Bayer color filter array (CFA). Various aspects of these SR challenges have been previously investigated. Fast SR algorithms tend to have difficulty accommodating complex motion and CFA sensors. Furthermore, methods that can tolerate these complexities tend to be iterative in nature and may not be amenable to real-time processing. In this paper, we present a new fast approach for performing SR in the presence of these challenging imaging conditions. We refer to the new approach as Fusion of Interpolated Frames (FIF) SR. The FIF SR method decouples the demosaicing, interpolation, and restoration steps to simplify the algorithm. Frames are first individually demosaiced and interpolated to the desired resolution. Next, FIF uses a novel weighted sum of the interpolated frames to fuse them into an improved resolution estimate. Finally, restoration is applied to mitigate degrading camera effects. The proposed FIF approach has a lower computational complexity than many iterative methods, making it a candidate for real-time implementation. We provide a detailed description of the FIF SR method and show experimental results using synthetic and real datasets in both constrained and complex imaging scenarios. Experiments include airborne grayscale imagery and Bayer CFA image sets with affine background motion plus local motion.
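The fusion step can be sketched as a per-pixel weighted average of the registered, interpolated frames; the weighting scheme suggested in the comment is a hypothetical stand-in for the paper's weights.

```python
import numpy as np

def fuse_interpolated_frames(frames, weights):
    """Fuse frames that are already demosaiced, interpolated to the
    desired resolution, and registered to a common reference.

    frames:  (K, H, W) aligned frame stack.
    weights: (K, H, W) per-pixel confidence weights (hypothetically,
             larger where an output pixel lies near an original LR sample).
    """
    frames = np.asarray(frames, dtype=float)
    weights = np.asarray(weights, dtype=float)
    # Normalized weighted sum across the frame axis
    return (weights * frames).sum(axis=0) / weights.sum(axis=0)
```

Restoration (deconvolving the camera blur) would then be applied to this fused estimate as a separate, final step.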

    Fast Super-Resolution Using an Adaptive Wiener Filter with Robustness to Local Motion

    We present a new adaptive Wiener filter (AWF) super-resolution (SR) algorithm that employs a global background motion model but is also robust to limited local motion. The AWF relies on registration to populate a common high resolution (HR) grid with samples from several frames. A weighted sum of local samples is then used to perform nonuniform interpolation and image restoration simultaneously. To achieve accurate subpixel registration, we employ a global background motion model with relatively few parameters that can be estimated accurately. However, local motion may be present that includes moving objects, motion parallax, or other deviations from the background motion model. In our proposed robust approach, pixels from frames other than the reference that are inconsistent with the background motion model are detected and excluded from populating the HR grid. Here we propose and compare several local motion detection algorithms. We also propose a modified multiscale background registration method that incorporates pixel selection at each scale to minimize the impact of local motion. We demonstrate the efficacy of the new robust SR methods using several datasets, including airborne infrared data with moving vehicles and a ground resolution pattern for objective resolution analysis.

    Rank Conditioned Rank Selection Filters for Signal Restoration

    A class of nonlinear filters called rank conditioned rank selection (RCRS) filters is developed and analyzed in this paper. The RCRS filters are developed within the general framework of rank selection (RS) filters, which are filters constrained to output an order statistic from the observation set. Many previously proposed rank-order-based filters can be formulated as RS filters; the only difference between such filters is in the information used to decide which order statistic to output. The information used by RCRS filters is the ranks of selected input samples, hence the name rank conditioned rank selection filters. The number of input sample ranks used is referred to as the order of the RCRS filter. The order can range from zero to the number of samples in the observation window, giving the filters valuable flexibility. Low-order filters can give good performance and are relatively simple to optimize and implement. If improved performance is demanded, the order can be increased, but at the expense of filter simplicity. In this paper, many statistical and deterministic properties of the RCRS filters are presented. A procedure for optimizing over the class of RCRS filters is also presented. Finally, extensive computer simulation results are presented that illustrate the performance of RCRS filters in comparison with other techniques in image restoration applications.
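A minimal first-order RCRS filter conditions the selection on the rank of a single sample, here the window's center: keep the center sample unless its rank is extreme, in which case output the median. This is a common impulse-removal rule used for illustration, not a rule taken from the paper.

```python
import numpy as np

def rcrs_center_conditioned(window, guard=1):
    """First-order RCRS example: selection conditioned on one rank.

    If the center sample's rank lies within [guard, N-1-guard], output
    the center itself (identity behavior); otherwise the center is
    likely an impulse, so output the median order statistic instead.
    """
    window = np.asarray(window, dtype=float)
    center = window[window.size // 2]
    srt = np.sort(window)                      # order statistics
    rank = np.searchsorted(srt, center)        # rank of the center sample
    if guard <= rank <= window.size - 1 - guard:
        return center
    return srt[window.size // 2]               # median
```

Higher-order RCRS filters condition on the ranks of several samples, trading simplicity for performance as the abstract describes.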

    Digital Image Processing

    In recent years, digital images and digital image processing have become part of everyday life. This growth has been fueled primarily by advances in digital computers and the advent and growth of the Internet. Furthermore, commercially available digital cameras, scanners, and other equipment for acquiring, storing, and displaying digital imagery have become inexpensive and increasingly powerful. An excellent treatment of digital images and digital image processing can be found in Ref. [1]. A digital image is simply a two-dimensional array of finite-precision numerical values called picture elements (or pixels). Thus a digital image is a spatially discrete (or discrete-space) signal. In visible grayscale images, for example, each pixel represents the intensity of a corresponding region in the scene. The grayscale values must be quantized into a finite-precision format. Typical bit depths include 8 bit (256 gray levels), 12 bit (4096 gray levels), and 16 bit (65536 gray levels). Color visible images are most frequently represented by tristimulus values: the quantities of red, green, and blue light required, in the additive color system, to produce the desired color. Thus a so-called "RGB" color image can be thought of as a set of three "grayscale" images, the first representing the red component, the second the green, and the third the blue. Digital images can also be nonvisible in nature, meaning that the physical quantity represented by the pixel values is something other than visible light intensity or color. Examples include the radar cross-section of an object, temperature profiles (infrared imaging), X-ray images, gravitational fields, etc. In general, any two-dimensional array of information can be the basis for a digital image. As with any digital data, the advantage of this representation is the ability to manipulate the pixel values using a digital computer or digital hardware, which offers great power and flexibility. Furthermore, digital images can be stored and transmitted far more reliably than their analog counterparts. Error-protection coding of digital imagery, for example, allows for virtually error-free transmission.
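The pixel-array view described above can be illustrated directly: an 8-bit RGB image is three co-registered "grayscale" planes, and quantization maps a continuous intensity onto one of a finite number of levels.

```python
import numpy as np

# An 8-bit RGB image as three co-registered "grayscale" planes.
h, w = 4, 4
red   = np.full((h, w), 255, dtype=np.uint8)   # pure red patch
green = np.zeros((h, w), dtype=np.uint8)
blue  = np.zeros((h, w), dtype=np.uint8)
rgb = np.stack([red, green, blue], axis=-1)    # shape (4, 4, 3)

# 8-bit quantization maps a continuous intensity in [0, 1) to 256 levels.
level = np.uint8(np.floor(0.5 * 256))          # intensity 0.5 -> level 128
```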

    A MAP Estimator for Simultaneous Superresolution and Detector Nonuniformity Correction

    During digital video acquisition, imagery may be degraded by a number of phenomena including undersampling, blur, and noise. Many systems, particularly those containing infrared focal plane array (FPA) sensors, are also subject to detector nonuniformity. Nonuniformity, or fixed-pattern noise, results from nonuniform responsivity of the photodetectors that make up the FPA. Here we propose a maximum a posteriori (MAP) estimation framework for simultaneously addressing undersampling, linear blur, additive noise, and bias nonuniformity. In particular, we jointly estimate a superresolution (SR) image and detector bias nonuniformity parameters from a sequence of observed frames. This algorithm can be applied to video in a variety of ways, including using a moving temporal window of frames to process successive groups of frames. By combining SR and nonuniformity correction (NUC) in this fashion, we demonstrate that superior results are possible compared with the more conventional approach of performing scene-based NUC followed by independent SR. The proposed MAP algorithm can be applied with or without SR, depending on the application and computational resources available. Even without SR, we believe that the proposed algorithm represents a novel and promising scene-based NUC technique. We present a number of experimental results to demonstrate the efficacy of the proposed algorithm. These include simulated imagery for quantitative analysis and real infrared video for qualitative analysis.
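The joint criterion can be sketched as a data-fidelity term over all observed frames plus priors on the SR image and the bias field. The quadratic smoothness/bias priors and the forward-model interface below are simplifying assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def map_cost(x, b, frames, forward, lam_x=0.01, lam_b=0.01):
    """Simplified MAP criterion for a joint SR image x and bias field b.

    forward(x, k) models registration, blur, and downsampling for frame k;
    b is the per-detector bias nonuniformity added to every frame.
    The quadratic priors here are placeholder choices.
    """
    # Data fidelity: each observed frame should match the degraded,
    # bias-offset rendering of the SR image.
    fid = sum(np.sum((y - (forward(x, k) + b)) ** 2)
              for k, y in enumerate(frames))
    # Smoothness prior on the image, energy prior on the bias field.
    gy, gx = np.gradient(x)
    prior = lam_x * np.sum(gy ** 2 + gx ** 2) + lam_b * np.sum(b ** 2)
    return fid + prior
```

Minimizing this cost jointly over x and b (e.g., by gradient descent) yields the SR image and bias estimates simultaneously; with an identity forward operator the same criterion reduces to a bias-only, NUC-without-SR estimator.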

    A Collaborative Adaptive Wiener Filter for Image Restoration Using a Spatial-Domain Multi-patch Correlation Model

    We present a new patch-based image restoration algorithm using an adaptive Wiener filter (AWF) with a novel spatial-domain multi-patch correlation model. The new filter structure is referred to as a collaborative adaptive Wiener filter (CAWF). The CAWF employs a finite size moving window. At each position, the current observation window represents the reference patch. We identify the most similar patches in the image within a given search window about the reference patch. A single-stage weighted sum of all of the pixels in the similar patches is used to estimate the center pixel in the reference patch. The weights are based on a new multi-patch correlation model that takes into account each pixel's spatial distance to the center of its corresponding patch, as well as the intensity vector distances among the similar patches. One key advantage of the CAWF approach, compared with many other patch-based algorithms, is that it can jointly handle blur and noise. Furthermore, it can also readily treat spatially varying signal and noise statistics. To the best of our knowledge, this is the first multi-patch algorithm to use a single spatial-domain weighted sum of all pixels within multiple similar patches to form its estimate and the first to use a spatial-domain multi-patch correlation model to determine the weights. The experimental results presented show that the proposed method delivers high performance in image restoration in a variety of scenarios.
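A much-simplified sketch of the single-stage multi-patch weighted sum follows; separable Gaussian spatial and similarity factors stand in for the paper's correlation-model weights, which would be obtained by solving a Wiener system.

```python
import numpy as np

def cawf_like_estimate(patches, h_int=10.0, h_sp=2.0):
    """Estimate the reference patch's center pixel from similar patches.

    patches: (M, P) array of vectorized square patches; row 0 is the
    reference patch and P = side*side with odd side. Each pixel's weight
    combines its spatial distance to its patch center with its patch's
    intensity distance to the reference (an illustrative simplification).
    """
    patches = np.asarray(patches, dtype=float)
    M, P = patches.shape
    side = int(np.sqrt(P))
    yy, xx = np.mgrid[:side, :side] - side // 2
    # Spatial factor: closer to the patch center -> larger weight
    spatial = np.exp(-(yy ** 2 + xx ** 2).ravel() / (2.0 * h_sp ** 2))
    # Similarity factor: patches closer to the reference -> larger weight
    sim = np.exp(-np.sum((patches - patches[0]) ** 2, axis=1) / h_int ** 2)
    w = sim[:, None] * spatial[None, :]        # one weight per pixel, per patch
    return float(np.sum(w * patches) / np.sum(w))
```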

    Recursive Non-Local Means Filter for Video Denoising with Poisson-Gaussian Noise

    In this paper, we describe a new recursive Non-Local Means (RNLM) algorithm for video denoising developed by the current authors. Furthermore, we extend this work by incorporating a Poisson-Gaussian noise model. Our new RNLM method provides a computationally efficient means for video denoising and yields improved performance compared with the single-frame NLM and BM3D benchmark methods. Non-Local Means (NLM) based denoising methods have been applied successfully in various image and video sequence denoising applications. However, direct extension of this method from 2D to 3D for video processing can be computationally demanding. The RNLM approach takes advantage of recursion for computational savings and of spatio-temporal correlations for improved performance. In our approach, the first frame is processed with single-frame NLM. Subsequent frames are estimated using a weighted combination of the current frame's NLM output and the previous frame's estimate. Block-matching registration with the prior estimate is performed for each current pixel estimate to maximize the temporal correlation. To address the Poisson-Gaussian noise model, we apply the Anscombe transformation prior to filtering to stabilize the noise variance. Experimental results are presented that demonstrate the effectiveness of our proposed method. We show that the new method outperforms single-frame NLM and BM3D.
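The recursion and the variance-stabilizing step can be sketched as follows; the fixed blending weight is a hypothetical simplification, since the actual method weights the two terms adaptively per pixel.

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: approximately stabilizes Poisson noise to
    unit variance, so a Gaussian-noise denoiser can be applied after it."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def rnlm_step(nlm_current, prev_estimate, alpha=0.5):
    """One recursive step: blend the current frame's single-frame NLM
    output with the (block-matching registered) previous estimate.
    alpha is a hypothetical fixed blending weight for illustration."""
    return alpha * np.asarray(nlm_current) + (1.0 - alpha) * np.asarray(prev_estimate)
```

The first frame is denoised with single-frame NLM alone; each later frame then reuses the previous estimate through `rnlm_step`, which is what keeps the per-frame cost far below a full 3D NLM search.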