Fast Super-Resolution Using an Adaptive Wiener Filter with Robustness to Local Motion
We present a new adaptive Wiener filter (AWF) super-resolution (SR) algorithm that employs a global background motion model but is also robust to limited local motion. The AWF relies on registration to populate a common high resolution (HR) grid with samples from several frames. A weighted sum of local samples is then used to perform nonuniform interpolation and image restoration simultaneously. To achieve accurate subpixel registration, we employ a global background motion model with relatively few parameters that can be estimated accurately. However, local motion may be present that includes moving objects, motion parallax, or other deviations from the background motion model. In our proposed robust approach, pixels from frames other than the reference that are inconsistent with the background motion model are detected and excluded from populating the HR grid. Here we propose and compare several local motion detection algorithms. We also propose a modified multiscale background registration method that incorporates pixel selection at each scale to minimize the impact of local motion. We demonstrate the efficacy of the new robust SR methods using several datasets, including airborne infrared data with moving vehicles and a ground resolution pattern for objective resolution analysis.
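The weighted-sum fusion step this abstract describes can be sketched minimally: registered samples from several frames land at non-integer positions on the HR grid, and each output pixel is a weighted sum of the samples nearby. The Gaussian distance weights below are an illustrative stand-in for the actual Wiener weights, which the paper derives from local signal statistics; `awf_fuse` and its parameters are hypothetical names, not the authors' API.

```python
import numpy as np

def awf_fuse(samples, hr_shape, sigma=0.7, radius=2):
    """Fuse irregularly placed LR samples onto an HR grid with a
    distance-weighted sum (stand-in for the adaptive Wiener weights)."""
    # samples: list of (y, x, value) with (y, x) in HR-grid coordinates
    out = np.zeros(hr_shape)
    for i in range(hr_shape[0]):
        for j in range(hr_shape[1]):
            wsum = vsum = 0.0
            for (y, x, v) in samples:
                d2 = (y - i) ** 2 + (x - j) ** 2
                if d2 <= radius ** 2:          # only local samples contribute
                    w = np.exp(-d2 / (2 * sigma ** 2))
                    wsum += w
                    vsum += w * v
            out[i, j] = vsum / wsum if wsum > 0 else 0.0
    return out
```

The robustness mechanism in the paper then amounts to dropping, from the `samples` list, any pixel from a non-reference frame flagged as inconsistent with the background motion model before this fusion runs.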
Real Time Turbulent Video Perfecting by Image Stabilization and Super-Resolution
Image and video quality in Long Range Observation Systems (LOROS) suffer from
atmospheric turbulence that causes small neighbourhoods in image frames to
chaotically move in different directions and substantially hampers visual
analysis of such image and video sequences. The paper presents a real-time
algorithm for perfecting turbulence degraded videos by means of stabilization
and resolution enhancement. The latter is achieved by exploiting the turbulent
motion. The algorithm involves generation of a reference frame and estimation,
for each incoming video frame, of a local image displacement map with respect
to the reference frame; segmentation of the displacement map into two classes:
stationary and moving objects and resolution enhancement of stationary objects,
while preserving real motion. Experiments with synthetic and real-life
sequences have shown that the enhanced videos, generated in real time, exhibit
substantially better resolution and complete stabilization for stationary
objects while retaining real motion. Comment: Submitted to The Seventh IASTED
International Conference on Visualization, Imaging, and Image Processing
(VIIP 2007), August 2007, Palma de Mallorca, Spain
Investigation of a new method for improving image resolution for camera tracking applications
Camera based systems have been a preferred choice in many motion tracking applications due to the ease of installation and the ability to work in unprepared environments. The concept of these systems is based on extracting image information (colour and shape properties) to detect the object location. However, the resolution of the image and the camera field-of-view (FOV) are two main factors that can restrict the tracking applications for which these systems can be used. Resolution can be addressed partially by using higher resolution cameras but this may not always be possible or cost effective.
This research paper investigates a new method utilising averaging of offset images to improve the effective resolution using a standard camera. The initial results show that the minimum detectable position change of a tracked object could be improved by up to 4 times.
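The rough scale of the claimed gain can be reasoned about with a toy model: averaging N frames suppresses noise, which tightens sub-pixel localization of a tracked feature by roughly the square root of N, so 16 averaged frames give about a 4x improvement. The sketch below simplifies the paper's offset images to identically positioned noisy frames, and every number in it is hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(64, dtype=float)
spot = np.exp(-0.5 * ((x - 31.7) / 3.0) ** 2)   # 1-D spot centred at 31.7 px

def centroid(img):
    """Sub-pixel position estimate via intensity-weighted centroid."""
    return (x * img).sum() / img.sum()

def centroid_std(n_avg, trials=400, sigma=0.05):
    """Spread of the centroid estimate when n_avg noisy frames are averaged."""
    ests = []
    for _ in range(trials):
        frames = spot + sigma * rng.standard_normal((n_avg, x.size))
        ests.append(centroid(frames.mean(axis=0)))
    return float(np.std(ests))

# centroid_std(16) comes out several times smaller than centroid_std(1),
# i.e. the minimum detectable position change shrinks with averaging.
```

This only illustrates the noise-averaging contribution; the paper's use of deliberately offset captures is what lets the gain apply to effective resolution rather than localization alone.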
Super Resolution of Wavelet-Encoded Images and Videos
In this dissertation, we address the multiframe super resolution reconstruction problem for wavelet-encoded images and videos. The goal of multiframe super resolution is to obtain one or more high resolution images by fusing a sequence of degraded or aliased low resolution images of the same scene. Since the low resolution images may be unaligned, a registration step is required before super resolution reconstruction. Therefore, we first explore in-band (i.e. in the wavelet-domain) image registration; then, investigate super resolution. Our motivation for analyzing the image registration and super resolution problems in the wavelet domain is the growing trend in wavelet-encoded imaging, and wavelet-encoding for image/video compression. Due to drawbacks of the widely used discrete cosine transform in image and video compression, a considerable amount of literature is devoted to wavelet-based methods. However, since wavelets are shift-variant, existing methods cannot utilize wavelet subbands efficiently. In order to overcome this drawback, we establish and explore the direct relationship between the subbands under a translational shift, for image registration and super resolution. We then employ our devised in-band methodology, in a motion compensated video compression framework, to demonstrate the effective usage of wavelet subbands. Super resolution can also be used as a post-processing step in video compression in order to decrease the size of the video files to be compressed, with downsampling added as a pre-processing step. Therefore, we present a video compression scheme that utilizes super resolution to reconstruct the high frequency information lost during downsampling. In addition, super resolution is a crucial post-processing step for satellite imagery, due to the fact that it is hard to update imaging devices after a satellite is launched. Thus, we also demonstrate the usage of our devised methods in enhancing resolution of pansharpened multispectral images.
Deep Burst Denoising
Noise is an inherent issue of low-light image capture, one which is
exacerbated on mobile devices due to their narrow apertures and small sensors.
One strategy for mitigating noise in a low-light situation is to increase the
shutter time of the camera, thus allowing each photosite to integrate more
light and decrease noise variance. However, there are two downsides of long
exposures: (a) bright regions can exceed the sensor range, and (b) camera and
scene motion will result in blurred images. Another way of gathering more light
is to capture multiple short (thus noisy) frames in a "burst" and intelligently
integrate the content, thus avoiding the above downsides. In this paper, we use
the burst-capture strategy and implement the intelligent integration via a
recurrent fully convolutional deep neural net (CNN). We build our novel,
multiframe architecture to be a simple addition to any single frame denoising
model, and design to handle an arbitrary number of noisy input frames. We show
that it achieves state of the art denoising results on our burst dataset,
improving on the best published multi-frame techniques, such as VBM4D and
FlexISP. Finally, we explore other applications of image enhancement by
integrating content from multiple frames and demonstrate that our DNN
architecture generalizes well to image super-resolution.
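The premise behind burst capture, that integrating N short noisy frames cuts noise variance roughly N-fold for independent noise, can be checked numerically. The plain average below is only that sanity check; the paper's recurrent CNN learns a far more sophisticated, content-aware integration, and all the numbers here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 0.5        # true photosite value
sigma = 0.1         # per-frame noise standard deviation
n_frames = 16       # burst length

# n_frames noisy captures of 10,000 independent photosites
frames = signal + sigma * rng.standard_normal((n_frames, 10_000))

single_var = float(frames[0].var())             # noise variance of one frame
burst_var = float(frames.mean(axis=0).var())    # variance after averaging

# Averaging N independent frames reduces noise variance by about a factor N,
# which is why a burst of short exposures can substitute for one long one
# without the blown highlights or motion blur of a long shutter time.
```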
Super-resolution Using Adaptive Wiener Filters
The spatial sampling rate of an imaging system is determined by the spacing of the detectors in the focal plane array (FPA). The spatial frequencies present in the image on the focal plane are band-limited by the optics. This is due to diffraction through a finite aperture. To guarantee that there will be no aliasing during image acquisition, the Nyquist criterion dictates that the sampling rate must be greater than twice the cut-off frequency of the optics. However, optical designs involve a number of trade-offs and typical imaging systems are designed with some level of aliasing. We will refer to such systems as detector limited, as opposed to optically limited. Furthermore, with or without aliasing, imaging systems invariably suffer from diffraction blur, optical aberrations, and noise. Multiframe super-resolution (SR) processing has proven to be successful in reducing aliasing and enhancing the resolution of images from detector limited imaging systems.
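The Nyquist condition invoked here is easy to check numerically: a diffraction-limited incoherent system has optical cutoff f_c = 1/(lambda * F#), and the system is detector limited when the sampling rate 1/pitch falls below 2 * f_c. A small sketch, with example MWIR numbers that are illustrative rather than taken from the abstract:

```python
def is_detector_limited(pixel_pitch_um, wavelength_um, f_number):
    """True if the sensor undersamples its diffraction-limited optics.

    Incoherent optical cutoff: f_c = 1 / (lambda * F#), in cycles/um.
    Detector sampling rate:    f_s = 1 / pitch,         in cycles/um.
    Nyquist requires f_s >= 2 * f_c to avoid aliasing.
    """
    f_cutoff = 1.0 / (wavelength_um * f_number)
    f_sample = 1.0 / pixel_pitch_um
    return f_sample < 2.0 * f_cutoff

# Example: a 15 um pitch MWIR sensor (4 um wavelength) behind f/2 optics
# has f_c = 0.125 cycles/um, so it would need a pitch of 2 um or less to
# satisfy Nyquist -- it is detector limited, hence a candidate for SR.
```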
Multiframe Super-Resolution of Color Image Sequences Using a Global Motion Model
The development of efficient software tools capable of super-resolving multi-spectral image sequences on-the-fly is an important step toward the production of imaging systems capable of acquiring vital imagery of hostile environments at an affordable price. A number of image processing tools already available for use in target recognition and identification rely on the availability of high-resolution imagery which cannot be safely acquired at a reasonable price. This thesis investigates the use of multiframe super-resolution as a tool to increase the spatial resolution of image sequences acquired with sensors commonly used in consumer video cameras. Multiframe super-resolution is the branch of imaging science which tries to restore high-resolution estimates of a scene utilizing a sequence of under-sampled images of that scene. Although a number of algorithms have already been developed to deal with this problem, they have unfortunately not been extended to deal with multi-spectral images acquired from moving imaging platforms. This thesis performs such an extension for one of the most successful super-resolution algorithms and demonstrates that it can be used to improve the performance of common multi-spectral imaging systems utilizing Color Filter Arrays to acquire spectral data.