29 research outputs found

    Superresolution Enhancement of Hyperspectral CHRIS/Proba Images With a Thin-Plate Spline Nonrigid Transform Model

    Given the hyperspectral-oriented waveband configuration of multiangular CHRIS/Proba imagery, the scope of its application could widen if the present 18-m resolution were improved. The multiangular images of CHRIS can be used as input for superresolution (SR) image reconstruction. A critical procedure in SR is an accurate registration of the low-resolution (LR) images. Conventional methods based on affine transformation may not be effective given the local geometric distortion in high off-nadir angular images. This paper examines the use of a nonrigid transform to improve the result of a nonuniform interpolation and deconvolution SR method. A scale-invariant feature transform is used to collect control points (CPs). To ensure the quality of CPs, a rigorous screening procedure is designed: 1) an ambiguity test; 2) the m-estimator sample consensus method; and 3) an iterative method using statistical characteristics of the distribution of random errors. A thin-plate spline (TPS) nonrigid transform is then used for the registration. The proposed registration method is examined with a Delaunay triangulation-based nonuniform interpolation and reconstruction SR method. Our results show that the TPS nonrigid transform allows accurate registration of angular images. SR results obtained from simulated LR images are evaluated using three quantitative measures, namely, relative mean-square error, structural similarity, and edge stability. Compared to SR methods that use an affine transform, our proposed method performs better on all three evaluation measures. With a higher level of spatial detail, SR-enhanced CHRIS images might be more effective than the original data in various applications. (JRC.H.7 - Climate Risk Management)
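The TPS fit at the heart of such a registration reduces to one bordered linear solve. Below is a minimal numpy sketch of that step only (the SIFT matching and CP screening are omitted, and the control points and affine test case are invented for illustration):

```python
import numpy as np

def tps_kernel(r):
    """Thin-plate spline radial basis U(r) = r^2 log r, with U(0) = 0."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def fit_tps(src, dst):
    """Fit a 2-D TPS warp mapping control points src -> dst ((n, 2) arrays)."""
    n = src.shape[0]
    K = tps_kernel(np.linalg.norm(src[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])          # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T    # bordered TPS system
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    params = np.linalg.solve(A, b)
    w, a = params[:n], params[n:]                  # RBF weights, affine coeffs

    def warp(pts):
        d = np.linalg.norm(pts[:, None] - src[None, :], axis=-1)
        return tps_kernel(d) @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a

    return warp

# Control points that happen to follow a pure affine map: the fitted TPS
# must interpolate them exactly and reproduce the map elsewhere.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
dst = 1.5 * src + 0.2
warp = fit_tps(src, dst)
print(np.abs(warp(src) - dst).max())
```

Because the solved system interpolates the control points exactly, a TPS will faithfully reproduce bad control points too, which is why a rigorous CP screening procedure matters.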

    Super Resolution of Wavelet-Encoded Images and Videos

    In this dissertation, we address the multiframe super resolution reconstruction problem for wavelet-encoded images and videos. The goal of multiframe super resolution is to obtain one or more high resolution images by fusing a sequence of degraded or aliased low resolution images of the same scene. Since the low resolution images may be unaligned, a registration step is required before super resolution reconstruction. Therefore, we first explore in-band (i.e., in the wavelet domain) image registration and then investigate super resolution. Our motivation for analyzing the image registration and super resolution problems in the wavelet domain is the growing trend toward wavelet-encoded imaging and wavelet encoding for image/video compression. Due to the drawbacks of the widely used discrete cosine transform in image and video compression, a considerable amount of literature is devoted to wavelet-based methods. However, since wavelets are shift-variant, existing methods cannot utilize wavelet subbands efficiently. To overcome this drawback, we establish and exploit the direct relationship between the subbands under a translational shift, for both image registration and super resolution. We then employ our devised in-band methodology in a motion-compensated video compression framework to demonstrate the effective usage of wavelet subbands. Super resolution can also be used as a post-processing step in video compression to decrease the size of the video files to be compressed, with downsampling added as a pre-processing step. Therefore, we present a video compression scheme that utilizes super resolution to reconstruct the high frequency information lost during downsampling. In addition, super resolution is a crucial post-processing step for satellite imagery, since it is hard to update imaging devices after a satellite is launched. Thus, we also demonstrate the usage of our devised methods in enhancing the resolution of pansharpened multispectral images.
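The shift-variance of critically sampled wavelets, which motivates the in-band approach, can be seen with a toy numpy example (a single unnormalised Haar level on an invented signal, not the dissertation's method):

```python
import numpy as np

def haar_level1(x):
    """One level of a (critically sampled) Haar analysis of a 1-D signal."""
    approx = (x[0::2] + x[1::2]) / 2.0
    detail = (x[0::2] - x[1::2]) / 2.0
    return approx, detail

x = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
x_shifted = np.roll(x, 1)        # the same signal, translated by one sample

a0, d0 = haar_level1(x)
a1, d1 = haar_level1(x_shifted)

# The step edges of x are aligned with the decimation grid, so d0 is all
# zeros; after the shift the edges straddle the grid and the detail subband
# changes completely instead of translating.
print(d0)
print(d1)
```

This is exactly the behaviour that prevents naive subband-by-subband processing: the subbands of a shifted image are not shifted subbands, so a direct inter-subband relationship has to be derived instead.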

    An Efficient Algorithm for Video Super-Resolution Based On a Sequential Model

    In this work, we propose a novel procedure for video super-resolution, that is, the recovery of a sequence of high-resolution images from its low-resolution counterpart. Our approach is based on a "sequential" model (i.e., each high-resolution frame is supposed to be a displaced version of the preceding one) and considers the use of sparsity-enforcing priors. Both the recovery of the high-resolution images and of the motion fields relating them are tackled. This leads to a large-dimensional, non-convex and non-smooth problem. We propose an algorithmic framework to address it. Our approach relies on fast gradient evaluation methods and modern optimization techniques for non-differentiable/non-convex problems. Unlike some previous works, we show that there exists a provably convergent method with a complexity linear in the problem dimensions. We assess the proposed optimization method on several video benchmarks and emphasize its good performance with respect to the state of the art. Comment: 37 pages, SIAM Journal on Imaging Sciences, 201
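Sparsity-enforcing priors in non-smooth problems like this are typically handled with proximal gradient steps. As a generic illustration only (this is plain ISTA on an invented toy sparse recovery problem, not the paper's provably convergent algorithm for the coupled motion/image problem):

```python
import numpy as np

def ista(A, y, lam, n_iter=1000):
    """Iterative soft-thresholding for min_x 0.5 * ||Ax - y||^2 + lam * ||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * (A.T @ (A @ x - y))       # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # L1 prox
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 80))                # underdetermined operator
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]           # sparse ground truth
y = A @ x_true

x_hat = ista(A, y, lam=0.05)
print(np.flatnonzero(np.abs(x_hat) > 0.5))       # indices of recovered support
```

Each iteration costs two matrix-vector products and one thresholding pass, which is the kind of per-iteration cost that makes linear-in-dimension complexity plausible for large video problems.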

    Inverse problems in astronomical and general imaging

    The resolution and the quality of an imaged object are limited by four contributing factors. Firstly, the primary resolution limit of a system is imposed by the aperture of an instrument due to the effects of diffraction. Secondly, the finite sampling frequency, the finite measurement time and the mechanical limitations of the equipment also affect the resolution of the images captured. Thirdly, the images are corrupted by noise, a process inherent to all imaging systems. Finally, a turbulent imaging medium introduces random degradations to the signals before they are measured. In astronomical imaging, it is the atmosphere which distorts the wavefronts of the objects, severely limiting the resolution of the images captured by ground-based telescopes. These four factors affect all real imaging systems to varying degrees. All the limitations imposed on an imaging system result in the need to deduce or reconstruct the underlying object distribution from the distorted measured data. This class of problems is called inverse problems. The key to the success of solving an inverse problem is the correct modelling of the physical processes which give rise to the corresponding forward problem. However, the physical processes have an infinite amount of information, but only a finite number of parameters can be used in the model. Information loss is therefore inevitable. As a result, the solution to many inverse problems requires additional information or prior knowledge. The application of prior information to inverse problems is a recurrent theme throughout this thesis. An inverse problem that has been an active research area for many years is interpolation, and there exist numerous techniques for solving this problem. However, many of these techniques neither account for the sampling process of the instrument nor include prior information in the reconstruction. These factors are taken into account in the proposed optimal Bayesian interpolator. 
The process of interpolation is also examined from the point of view of superresolution, as these processes can be viewed as complementary. Since the principal effect of atmospheric turbulence on an incoming wavefront is a phase distortion, most of the inverse problem techniques devised for it seek either to estimate or to compensate for this phase component. These techniques are classified into computer post-processing methods, adaptive optics (AO) and hybrid techniques. Blind deconvolution is a post-processing technique which uses the speckle images to estimate both the object distribution and the point spread function (PSF), the latter of which is directly related to the phase. The most successful approaches are based on characterising the PSF as the aberrations over the aperture. Since the PSF is also dependent on the atmosphere, it is possible to constrain the solution using the statistics of the atmosphere. An investigation shows the feasibility of this approach. The bispectrum method is also a post-processing technique, one which reconstructs the spectrum of the object. The key component for phase preservation is the property of phase closure, and its application as prior information for blind deconvolution is examined. Blind deconvolution techniques utilise only the information in the image channel to estimate the phase, which is difficult. An alternative method for phase estimation uses a Shack-Hartmann (SH) wavefront sensing channel. However, since phase information is present in both the wavefront sensing and the image channels simultaneously, both of these approaches suffer from the problem that phase information from only one channel is used. An improved estimate of the phase is achieved by a combination of these methods, ensuring that the phase estimate is made jointly from the data in both the image and the wavefront sensing measurements. This formulation, posed as a blind deconvolution framework, is investigated in this thesis.
An additional advantage of this approach is that, since speckle images are imaged in a narrowband while wavefront sensing images are captured by a charge-coupled device (CCD) camera at all wavelengths, the splitting of the light does not compromise the light level for either channel. This provides a further incentive for using simultaneous data sets. The effectiveness of using Shack-Hartmann wavefront sensing data for phase estimation relies on the accuracy of locating the data spots. The commonly used method, which calculates the centre of gravity of the image, is in fact prone to noise and suboptimal. An improved method for spot location based on blind deconvolution is demonstrated. Ground-based adaptive optics (AO) technologies aim to correct for atmospheric turbulence in real time. Although much success has been achieved, the space- and time-varying nature of the atmosphere renders the accurate measurement of atmospheric properties difficult. It is therefore usual to perform additional post-processing on the AO data. As a result, some of the techniques developed in this thesis are applicable to adaptive optics. One of the methods which utilise elements of both adaptive optics and post-processing is the hybrid technique of deconvolution from wavefront sensing (DWFS). Here, both the speckle images and the SH wavefront sensing data are used. The original proposal of DWFS is simple to implement but suffers from the problem that the magnitude of the object spectrum cannot be reconstructed accurately. The solution proposed for overcoming this is to use an additional set of reference star measurements. This, however, does not completely remove the original problem; in addition, it introduces other difficulties associated with reference star measurements, such as anisoplanatism and the reduction of valuable observing time.
In this thesis a parameterised solution is examined which removes the need for a reference star, as well as offering the potential to overcome the problem of estimating the magnitude of the object spectrum.
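The weakness of centre-of-gravity spot location noted above is easy to reproduce. A numpy sketch on a synthetic Shack-Hartmann spot (the spot position, width and background level are invented for illustration):

```python
import numpy as np

def centre_of_gravity(img):
    """Centre-of-gravity spot location, the standard Shack-Hartmann estimator."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

# A synthetic Gaussian spot at (12.3, 18.7) in a 32 x 32 subaperture.
ys, xs = np.indices((32, 32))
spot = np.exp(-((ys - 12.3) ** 2 + (xs - 18.7) ** 2) / (2.0 * 2.0 ** 2))

cy, cx = centre_of_gravity(spot)
print(cy, cx)                        # essentially the true spot position

# A small uniform background (e.g. sky or readout bias) drags the estimate
# toward the frame centre, which is why the plain centroid is noise-prone.
cyb, cxb = centre_of_gravity(spot + 0.05)
print(cyb, cxb)
```

On the clean spot the centroid is essentially exact; adding the constant background pulls the estimate by more than a pixel, which motivates the improved deconvolution-based spot locator described in the abstract.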

    Super-resolution: A comprehensive survey


    Development Of A High Performance Mosaicing And Super-Resolution Algorithm

    In this dissertation, a high-performance mosaicing and super-resolution algorithm is described. The scale-invariant feature transform (SIFT)-based mosaicing algorithm builds an initial mosaic which is iteratively updated by the robust super-resolution algorithm to achieve the final high-resolution mosaic. Two different types of datasets are used for testing: high-altitude balloon data and unmanned aerial vehicle data. To evaluate our algorithm, five performance metrics are employed: mean square error, peak signal-to-noise ratio, singular value decomposition, slope of the reciprocal singular value curve, and cumulative probability of blur detection. Extensive testing shows that the proposed algorithm is effective in improving the captured aerial data and that the performance metrics accurately quantify that improvement.
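Of the five metrics, MSE and PSNR are standard enough to sketch directly (the usual definitions on an invented test image, not code from the dissertation):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between reference and test images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = np.clip(ref + rng.normal(0.0, 5.0, size=ref.shape), 0, 255)

print(round(psnr(ref, noisy), 1))    # sigma = 5 noise lands near 34 dB
```

Higher PSNR means lower MSE on a logarithmic scale; the remaining three metrics (SVD-based sharpness, reciprocal singular value slope, blur-detection probability) are no-reference measures and need the dissertation's definitions.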

    Super-Resolution of Unmanned Airborne Vehicle Images with Maximum Fidelity Stochastic Restoration

    Super-resolution (SR) refers to reconstructing a single high-resolution (HR) image from a set of subsampled, blurred and noisy low-resolution (LR) images. One may then envision a scenario where a set of LR images is acquired with sensors on a moving platform such as an unmanned airborne vehicle (UAV). Due to wind, the UAV may encounter altitude changes or rotational effects, which can distort the acquired as well as the processed images. The visual quality of the SR image is also affected by image acquisition degradations, the available number of LR images and their relative positions. This dissertation seeks to develop a novel fast stochastic algorithm to reconstruct a single SR image from UAV-captured images in two steps. First, the UAV LR images are aligned using a new hybrid registration algorithm to within subpixel accuracy. In the second step, the proposed approach develops a new fast stochastic minimum square constrained Wiener restoration filter for SR reconstruction and restoration using a fully detailed continuous-discrete-continuous (CDC) model. A new parameter that accounts for LR image registration and fusion errors is added to the SR CDC model, in addition to a multi-response restoration and reconstruction. Finally, to assess the visual quality of the resultant images, two figures of merit are introduced: information rate and maximum realizable fidelity. Experimental results show that quantitative assessment using the proposed figures coincides with visual qualitative assessment. We evaluated our filter against other SR techniques and found its results to be competitive in terms of speed and visual quality.
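To give the flavour of Wiener restoration, here is a deliberately simplified numpy sketch: a scalar noise-to-signal ratio, a known PSF and circular convolution, rather than the dissertation's CDC-model filter with registration/fusion error parameters. The scene, PSF and NSR value are invented:

```python
import numpy as np

def wiener_deblur(blurred, psf, nsr):
    """Frequency-domain Wiener filter H* / (|H|^2 + NSR) with a scalar NSR."""
    H = np.fft.fft2(psf, s=blurred.shape)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + nsr) * np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(F_hat))

# Simulate a circular blur with a 5 x 5 box PSF (noiseless for simplicity).
rng = np.random.default_rng(2)
img = rng.random((64, 64))
psf = np.full((5, 5), 1.0 / 25.0)
H = np.fft.fft2(psf, s=img.shape)
blurred = np.real(np.fft.ifft2(H * np.fft.fft2(img)))

# With no noise, a tiny NSR makes the filter approach the inverse filter.
restored = wiener_deblur(blurred, psf, nsr=1e-6)
mse_blur = np.mean((blurred - img) ** 2)
mse_rest = np.mean((restored - img) ** 2)
print(mse_rest < mse_blur)
```

The NSR term regularises the near-zeros of the blur spectrum; with real noisy data it would be raised to trade residual blur against noise amplification.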

    Video modeling via implicit motion representations

    Video modeling refers to the development of analytical representations for explaining the intensity distribution in video signals. Based on the analytical representation, we can develop algorithms for accomplishing particular video-related tasks. Video modeling therefore provides us with a foundation to bridge video data and related tasks. Although many video models have been proposed in the past decades, the rise of new applications calls for more efficient and accurate video modeling approaches.

    Most existing video modeling approaches are based on explicit motion representations, where motion information is explicitly expressed by correspondence-based representations (i.e., motion velocity or displacement). Although conceptually simple, the limitations of those representations and the suboptimality of motion estimation techniques can degrade such video modeling approaches, especially when handling complex motion or non-ideal observation data. In this thesis, we propose to investigate video modeling without explicit motion representation. Motion information is implicitly embedded in the spatio-temporal dependency among pixels or patches instead of being explicitly described by motion vectors.

    First, we propose a parametric model based on spatio-temporal adaptive localized learning (STALL). We formulate video modeling as a linear regression problem, in which motion information is embedded within the regression coefficients. The coefficients are adaptively learned within a local space-time window based on the LMMSE criterion. By incorporating spatio-temporal resampling and a Bayesian fusion scheme, we can enhance the modeling capability of STALL on more general videos. Under the framework of STALL, we can develop video processing algorithms for a variety of applications by adjusting model parameters (i.e., the size and topology of the model support and training window). We apply STALL to three video processing problems. The simulation results show that motion information can be efficiently exploited by our implicit motion representation and that the resampling and fusion help to enhance the modeling capability of STALL.

    Second, we propose a nonparametric video modeling approach that does not depend on explicit motion estimation. Assuming the video sequence is composed of many overlapping space-time patches, we propose to embed motion-related information into the relationships among video patches and develop a generic sparsity-based prior for typical video sequences. First, we extend block matching to more general kNN-based patch clustering, which provides an implicit and distributed representation of motion information. We propose to enforce the sparsity constraint on a higher-dimensional data array signal, generated by packing the patches in the similar-patch set. We then solve the inference problem by updating the kNN array and the desired signal iteratively. Finally, we present a Bayesian fusion approach to fuse multiple-hypothesis inferences. Simulation results in video error concealment, denoising, and deartifacting are reported to demonstrate its modeling capability.

    Finally, we summarize the two proposed video modeling approaches and point out the prospects of implicit motion representations in applications ranging from low- to high-level problems.
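The claim that motion can live implicitly in learned regression coefficients is easy to demonstrate. A toy numpy sketch (global least squares over one invented frame pair with a one-pixel translation, rather than STALL's adaptive local space-time windows):

```python
import numpy as np

def patches(frame, k=3):
    """All k x k patches of a 2-D frame, flattened, at valid positions."""
    h, w = frame.shape
    return np.array([frame[i:i + k, j:j + k].ravel()
                     for i in range(h - k + 1) for j in range(w - k + 1)])

rng = np.random.default_rng(3)
prev = rng.random((20, 20))
curr = np.roll(prev, 1, axis=1)      # next frame: one-pixel shift to the right

# Least-squares coefficients mapping each 3 x 3 patch of `prev` to the
# co-located centre pixel of `curr`.
X = patches(prev)                    # (324, 9) design matrix
y = curr[1:-1, 1:-1].ravel()         # centre pixels at the same positions
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# The motion is never estimated explicitly, yet the coefficients become a
# one-hot selector for the displaced pixel (index 3 = patch row 1, col 0).
print(np.round(coef, 3))
pred = X @ coef
print(np.max(np.abs(pred - y)))
```

For sub-pixel or locally varying motion the coefficients spread over several taps instead of being one-hot, which is why learning them adaptively in local windows, as STALL does, is the interesting case.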

    Novel Methods in Computational Imaging with Applications in Remote Sensing

    This dissertation is devoted to novel computational imaging methods with applications in remote sensing. Computational imaging methods are applied to three distinct applications: imaging and detection of buried explosive hazards utilizing array radar, high-resolution imaging of satellites in geosynchronous orbit utilizing optical hypertelescope arrays, and characterization of atmospheric turbulence through multi-frame blind deconvolution utilizing conventional optical digital sensors. The first application utilizes a radar array employed as a forward-looking ground penetrating radar system with applications in explosive hazard detection. A penalized least squares technique with sparsity-inducing regularization is applied to produce imagery, which is consistent with the expectation that objects are sparsely populated but extended with respect to the pixel grid. Additionally, a series of pre-processing steps is demonstrated which results in a greatly reduced data size and computational cost. Demonstrations of the approach are provided using experimental data, and results are given in terms of signal-to-background ratio, image resolution, and relative computation time. The second application involves a sparse-aperture telescope array configured as a hypertelescope with applications in long-range imaging. The penalized least squares technique with sparsity-inducing regularization is adapted and applied to this very different imaging modality. A comprehensive study of the algorithm tuning parameters is performed, and performance is characterized using the Structural Similarity Metric (SSIM) to maximize image quality. Simulated measurements are used to show that imaging performance achieved using the proposed algorithm compares favorably with conventional Richardson-Lucy deconvolution.
The third application involves a multi-frame collection from a conventional digital sensor, with the primary objective of characterizing the atmospheric turbulence in the medium of propagation. In this application, a joint estimate of the image is obtained along with the Zernike coefficients associated with the atmospheric PSF at each frame and the Fried parameter r0 of the atmosphere. A pair of constraints is applied to a penalized least squares objective function to enforce the theoretical statistics of the set of PSF estimates as a function of r0. Results of the approach are shown with both simulated and experimental data and demonstrate excellent agreement between the estimated r0 values and the known or measured r0 values, respectively.
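The Richardson-Lucy baseline mentioned above can be sketched compactly. This is the textbook multiplicative update under circular boundary assumptions, with an invented toy scene, not the authors' code:

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    """Richardson-Lucy deconvolution with circular (FFT) boundary handling."""
    H = np.fft.fft2(psf, s=observed.shape)
    est = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        conv = np.real(np.fft.ifft2(H * np.fft.fft2(est)))
        ratio = observed / np.maximum(conv, 1e-12)          # data / re-blur
        corr = np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(ratio)))
        est = est * corr                                    # multiplicative update
    return est

img = np.zeros((64, 64))
img[20:28, 30:38] = 1.0                                     # a bright square
psf = np.full((7, 7), 1.0 / 49.0)                           # box blur
H = np.fft.fft2(psf, s=img.shape)
blurred = np.real(np.fft.ifft2(H * np.fft.fft2(img)))

restored = richardson_lucy(blurred, psf)
mse_blur = np.mean((blurred - img) ** 2)
mse_rest = np.mean((restored - img) ** 2)
print(mse_rest < mse_blur)
```

The multiplicative form keeps the estimate non-negative and conserves total flux, which is why it is the conventional point of comparison for regularized penalized least squares approaches.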

    A Computer Vision Story on Video Sequences: From Face Detection to Face Super-Resolution using Face Quality Assessment
