Wavelet-based image and video super-resolution reconstruction.
Super-resolution (SR) reconstruction offers a way to overcome the high cost and inherent resolution limitations of current imaging systems, and the wavelet transform is a powerful tool for it. This research provides a detailed study of the wavelet-based super-resolution reconstruction process and the closely associated wavelet-based resolution enhancement process. It addresses an explicit need for a robust wavelet-based method that formulates the SR reconstruction problem efficiently in the wavelet domain, leading to a consistent solution of this problem and improved performance.
This research proposes a novel performance assessment approach to improve the performance of existing wavelet-based image resolution enhancement techniques. The approach is based on identifying the factors that most influence the performance of these techniques and designing a novel optimal factor analysis (OFA) algorithm. A new wavelet-based image resolution enhancement method, combining the discrete wavelet transform with new edge-directed interpolation (DWT-NEDI) and an adaptive thresholding process, has been developed; the DWT-NEDI algorithm aims to correct geometric errors and remove noise from degraded satellite images. A robust wavelet-based video super-resolution technique based on global motion is developed by combining the DWT-NEDI method with super-resolution reconstruction methods, in order to increase the spatial resolution and remove noise and aliasing artefacts. Finally, a new video super-resolution framework is designed using adaptive local motion decomposition and wavelet transform reconstruction (ALMD-WTR), to address the super-resolution problem for real-world video sequences containing complex local motions.
The results show that the OFA approach improves the performance of the selected wavelet-based methods. The DWT-NEDI algorithm outperforms state-of-the-art wavelet-based algorithms. The global motion-based algorithm performs best among the tested super-resolution techniques, namely the Keren and structure-adaptive normalised convolution methods. The ALMD-WTR framework surpasses the state-of-the-art wavelet-based algorithm, namely local motion-based video super-resolution.
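The DWT-based enhancement idea can be sketched in a few lines: decompose the low-resolution input, upscale its detail subbands, and inverse-transform with the input itself as the approximation band. This is a simplified stand-in for the DWT-NEDI method above, not the thesis's algorithm: the Haar wavelet and nearest-neighbour upsampling replace the new edge-directed interpolation and adaptive thresholding.

```python
import numpy as np

def haar_dwt2(x):
    """One level of the 2-D Haar DWT: approximation plus three detail subbands."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return (a + b + c + d) / 2, ((a + b - c - d) / 2,
                                 (a - b + c - d) / 2,
                                 (a - b - c + d) / 2)

def haar_idwt2(cA, cH, cV, cD):
    """Inverse of haar_dwt2: synthesise an image twice the subband size."""
    n, m = cA.shape
    x = np.empty((2 * n, 2 * m))
    x[0::2, 0::2] = (cA + cH + cV + cD) / 2
    x[0::2, 1::2] = (cA + cH - cV - cD) / 2
    x[1::2, 0::2] = (cA - cH + cV - cD) / 2
    x[1::2, 1::2] = (cA - cH - cV + cD) / 2
    return x

def dwt_upscale(img):
    """Double resolution: keep the input as the approximation band (scaled to
    match the Haar synthesis gain), nearest-neighbour-upsample its own detail
    subbands, then invert the Haar DWT."""
    _, (cH, cV, cD) = haar_dwt2(img)
    up = lambda b: np.kron(b, np.ones((2, 2)))   # nearest-neighbour 2x upsampling
    return haar_idwt2(2.0 * img, up(cH), up(cV), up(cD))

lr = np.random.rand(64, 64)
hr = dwt_upscale(lr)
print(hr.shape)  # (128, 128)
```

The forward/inverse pair is exactly invertible, so all the enhancement behaviour comes from how the detail subbands are interpolated — which is precisely where NEDI and adaptive thresholding would slot in.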
Super-Resolution of Unmanned Airborne Vehicle Images with Maximum Fidelity Stochastic Restoration
Super-resolution (SR) refers to reconstructing a single high-resolution (HR) image from a set of subsampled, blurred and noisy low-resolution (LR) images. One may then envision a scenario where a set of LR images is acquired with sensors on a moving platform such as an unmanned airborne vehicle (UAV). Due to wind, the UAV may undergo altitude changes or rotational effects that distort the acquired, and hence the processed, images. The visual quality of the SR image is also affected by image acquisition degradations, the available number of LR images and their relative positions. This dissertation develops a novel fast stochastic algorithm to reconstruct a single SR image from UAV-captured images in two steps. First, the UAV LR images are aligned to subpixel accuracy using a new hybrid registration algorithm. Second, the proposed approach develops a new fast stochastic minimum-square constrained Wiener restoration filter for SR reconstruction and restoration using a fully detailed continuous-discrete-continuous (CDC) model. A new parameter that accounts for LR image registration and fusion errors is added to the SR CDC model, in addition to multi-response restoration and reconstruction. Finally, to assess the visual quality of the resultant images, two figures of merit are introduced: information rate and maximum realizable fidelity. Experimental results show that quantitative assessment using the proposed figures coincides with visual qualitative assessment. We evaluated our filter against other SR techniques and found its results competitive in terms of speed and visual quality.
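The restoration step can be illustrated with the classical frequency-domain Wiener filter. This is a much-simplified sketch of the idea only: the dissertation's filter is a stochastic constrained Wiener filter built on the CDC model with registration-error and multi-response terms, all of which are omitted here, and the PSF, noise level, and `nsr` below are invented for the example.

```python
import numpy as np

def wiener_restore(blurred, psf, nsr=1e-3):
    """Classical frequency-domain Wiener filter: W = conj(H) / (|H|^2 + NSR),
    where NSR is an assumed noise-to-signal power ratio."""
    H = np.fft.fft2(psf, s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
psf = np.ones((5, 5)) / 25.0                          # invented 5x5 box blur
# Degrade: circular convolution with the PSF via the FFT, plus Gaussian noise.
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
noisy = blurred + 1e-3 * rng.standard_normal(img.shape)
restored = wiener_restore(noisy, psf)
# The restored image should be closer to the original than the observation is.
```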
Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data
This dissertation addresses the components necessary for simulation of an image-based recovery of the position of a target using orbital image sensors. Each component is considered in detail, focusing on the effect that design choices and system parameters have on the accuracy of the position estimate. Changes in sensor resolution, varying amounts of blur, differences in image noise level, selection of algorithms used for each component, and lag introduced by excessive processing time all contribute to the accuracy of the recovered target coordinates.
Using physical targets and sensors in this scenario would be cost-prohibitive in the exploratory setting posed; therefore, a simulated target path is generated using Bézier curves which approximate representative paths followed by the targets of interest. Orbital trajectories for the sensors are designed on an elliptical model representative of the motion of physical orbital sensors. Images from each sensor are simulated based on the position and orientation of the sensor, the position of the target, and the imaging parameters selected for the experiment (resolution, noise level, blur level, etc.). Post-processing of the simulated imagery seeks to reduce noise and blur and to increase resolution. The only information available to a fully implemented system for calculating the target position is the set of sensor position and orientation vectors and the images from each sensor. From these data we develop a reliable method of recovering the target position and analyze the impact on near-real-time processing. We also discuss the influence of adjustments to system components on overall capabilities, and address the potential system size, weight, and power requirements of realistic implementation approaches.
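A simulated target path of the kind described can be illustrated with a cubic Bézier curve sampled from its Bernstein form; the control points below are invented for the example, and the dissertation's actual paths would be tuned to representative target motion.

```python
import numpy as np

def bezier_path(control_points, n=100):
    """Sample a cubic Bezier curve at n parameter values using the
    Bernstein-polynomial form B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1
    + 3(1-t) t^2 P2 + t^3 P3."""
    p = np.asarray(control_points, dtype=float)   # shape (4, dims)
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p[0] + 3 * (1 - t) ** 2 * t * p[1]
            + 3 * (1 - t) * t ** 2 * p[2] + t ** 3 * p[3])

path = bezier_path([(0, 0), (2, 5), (6, 5), (8, 0)], n=50)
print(path[0], path[-1])  # endpoints equal the first and last control points
```

Chaining several such segments end-to-end gives smooth paths of arbitrary length, which is the usual way representative trajectories are assembled from Bézier pieces.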
Least-Squares Wavelet Analysis and Its Applications in Geodesy and Geophysics
The Least-Squares Spectral Analysis (LSSA) is a robust method of analyzing unequally spaced and non-stationary data/time series. Although this method takes into account the correlation among the sinusoidal basis functions of irregularly spaced series, its spectrum still shows spectral leakage: power/energy leaks from one spectral peak into another. An iterative method called the AntiLeakage Least-Squares Spectral Analysis (ALLSSA) is developed to attenuate this leakage and is consequently used to regularize data series. In this study, the ALLSSA is applied to regularize seismic data and to attenuate their random noise down to a certain desired level. The ALLSSA is subsequently extended to multichannel, heterogeneous and coarsely sampled seismic and related gradient measurements intended for geophysical exploration applications that require regularized (equally spaced) data free from aliasing effects.
A new and robust method of analyzing unequally spaced and non-stationary time/data series is rigorously developed. This method, the Least-Squares Wavelet Analysis (LSWA), is a natural extension of the LSSA that decomposes a time series into the time-frequency domain and obtains its spectrogram. It is shown through many synthetic and experimental time/data series that the LSWA supersedes all state-of-the-art spectral analysis methods currently available, without making any assumptions about the time series, preprocessing (editing) it, or applying any empirical methods that aim to adapt a time series to the analysis method. The LSWA can analyze any non-stationary and unequally spaced time series with components of low or high amplitude and frequency variability over time, including datum shifts (offsets), trends, and constituents of known forms, while taking into account the covariance matrix associated with the time series. The stochastic confidence-level surface for the spectrogram is rigorously derived, identifying statistically significant peaks in the spectrogram at a chosen confidence level; this supersedes the empirical cone of influence used in the most popular continuous wavelet transform.
All current state-of-the-art cross-wavelet transforms and wavelet coherence analysis methods impose many stringent constraints on the properties of the time series under investigation, requiring, more often than not, preprocessing of the raw measurements that may distort their content. These methods cannot generally be used to analyze unequally spaced and non-stationary time series, or even two equally spaced time series of different sampling rates, with trends and/or datum shifts, and with associated covariance matrices. To overcome these stringent requirements, a new method is developed, namely the Least-Squares Cross-Wavelet Analysis (LSCWA), along with its statistical distribution, which requires no assumptions on the series under investigation. Numerous synthetic and geoscience examples establish the LSCWA as the method of choice for rigorous coherence analysis of any experimental series.
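The core LSSA computation is small enough to sketch: at each trial frequency, fit a cosine/sine pair to the unequally spaced samples by least squares and record the fraction of the series' energy the pair explains. This illustrates only the basic LSSA idea, not the ALLSSA iteration or the LSWA/LSCWA extensions; the test series and frequency grid are invented.

```python
import numpy as np

def lssa(t, y, freqs):
    """Least-squares spectral analysis for unequally spaced samples:
    for each trial frequency, fit a cos/sin pair by least squares and
    return the fraction of the (demeaned) series' energy it explains."""
    y = y - y.mean()
    power = []
    for f in freqs:
        A = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        power.append(np.sum((A @ coef) ** 2) / np.sum(y ** 2))
    return np.array(power)

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 10, 200))             # unequally spaced sample times
y = np.sin(2 * np.pi * 1.5 * t) + 0.3 * rng.standard_normal(200)
freqs = np.linspace(0.1, 3.0, 30)
power = lssa(t, y, freqs)
print(freqs[np.argmax(power)])                   # peak near the true 1.5 Hz
```

Because the cos/sin pair is fit jointly at each frequency, the correlation between the basis functions on the irregular grid is accounted for automatically — the property the abstract highlights over a naive periodogram.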
Vision-based techniques for gait recognition
Global security concerns have raised a proliferation of video surveillance devices. Intelligent surveillance systems seek to discover possible threats automatically and to raise alerts. Being able to identify the surveyed object can help determine its threat level. The current generation of devices provides digital video data to be analysed for time-varying features to assist in the identification process. Commonly, people queue up to access a facility and approach a video camera in full frontal view. In this environment, a variety of biometrics are available, for example gait, which includes temporal features such as stride period and can be measured unobtrusively at a distance. The video data will also include face features, which are short-range biometrics. In this way, one can combine biometrics naturally using one set of data. In this paper we survey current techniques of gait recognition and modelling, together with the environments in which the research was conducted. We also discuss in detail the issues arising from deriving gait data, such as perspective and occlusion effects, together with the associated computer vision challenges of reliably tracking human movement. After highlighting these issues and challenges related to gait processing, we discuss frameworks combining gait with other biometrics. We then provide motivations for a novel paradigm in biometrics-based human recognition: the use of the fronto-normal view of gait as a far-range biometric combined with biometrics operating at near distance.
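As a toy illustration of one such temporal gait feature, a stride period can be estimated from a periodic gait signal (for example, a tracked ankle coordinate over time) via autocorrelation. The signal here is synthetic and the method is a generic sketch, not one of the surveyed techniques.

```python
import numpy as np

def stride_period(signal, fps):
    """Estimate the period (in seconds) of a roughly periodic gait signal
    from the dominant non-zero-lag peak of its autocorrelation."""
    x = signal - np.mean(signal)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    start = np.argmax(ac < 0)            # skip past the zero-lag lobe
    lag = start + np.argmax(ac[start:])  # strongest recurrence after it
    return lag / fps

fps = 30
t = np.arange(0, 6, 1 / fps)
sig = np.sin(2 * np.pi * t / 1.2)        # synthetic 1.2 s stride cycle
period = stride_period(sig, fps)
print(round(period, 2))                  # ≈ 1.2
```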
Estimation and Calibration Algorithms for Distributed Sampling Systems
Thesis Supervisor: Gregory W. Wornell
Title: Professor of Electrical Engineering and Computer Science

Traditionally, the sampling of a signal is performed using a single component such as an
analog-to-digital converter. However, many new technologies are motivating the use of
multiple sampling components to capture a signal. In some cases such as sensor networks,
multiple components are naturally found in the physical layout; while in other cases like
time-interleaved analog-to-digital converters, additional components are added to increase
the sampling rate. Although distributing the sampling load across multiple channels can
provide large benefits in terms of speed, power, and resolution, a variety of mismatch errors
arise that require calibration in order to prevent a degradation in system performance.
In this thesis, we develop low-complexity, blind algorithms for the calibration of distributed
sampling systems. In particular, we focus on recovery from timing skews that
cause deviations from uniform timing. Methods for bandlimited input reconstruction from
nonuniform recurrent samples are presented for both the small-mismatch and the low-SNR
domains. Alternate iterative reconstruction methods are developed to give insight into the
geometry of the problem.
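The least-squares view of reconstruction from nonuniform samples can be sketched directly: expand the bandlimited input in a truncated Fourier basis evaluated at the actual (skewed) sample times and solve for the coefficients. This is a naive direct solver, not the small-mismatch or low-SNR iterative methods developed in the thesis, and the signal and sample times below are invented.

```python
import numpy as np

def reconstruct_bandlimited(t, y, T, n_harmonics):
    """Least-squares reconstruction of a T-periodic bandlimited signal from
    samples y taken at nonuniform times t, using a truncated Fourier basis.
    Returns a function evaluating the reconstruction at arbitrary times."""
    k = np.arange(-n_harmonics, n_harmonics + 1)
    A = np.exp(2j * np.pi * np.outer(t, k) / T)   # basis at the sample times
    coef, *_ = np.linalg.lstsq(A, y.astype(complex), rcond=None)
    return lambda tau: np.real(np.exp(2j * np.pi * np.outer(tau, k) / T) @ coef)

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 1, 40))                # nonuniform sample times
y = np.cos(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)
f = reconstruct_bandlimited(t, y, T=1.0, n_harmonics=6)

grid = np.linspace(0, 1, 101)
truth = np.cos(2 * np.pi * 3 * grid) + 0.5 * np.sin(2 * np.pi * 5 * grid)
err = np.max(np.abs(f(grid) - truth))
print("max reconstruction error:", err)
```

With more samples than unknown harmonics and a signal inside the basis span, the least-squares fit recovers the input essentially exactly; the interesting regimes studied in the thesis are when the sample times themselves (the skews) are unknown and must be estimated.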
From these reconstruction methods, we develop time-skew estimation algorithms that
have high performance and low complexity even for large numbers of components. We also
extend these algorithms to compensate for gain mismatch between sampling components.
To understand the feasibility of implementation, analysis is also presented for a sequential
implementation of the estimation algorithm.
In distributed sampling systems, the minimum input reconstruction error is dependent
upon the number of sampling components as well as the sample times of the components. We
develop bounds on the expected reconstruction error when the time-skews are distributed
uniformly. Performance is compared to systems where input measurements are made via
projections onto random bases, an alternative to the sinc basis of time-domain sampling.
From these results, we provide a framework on which to compare the effectiveness of any
calibration algorithm.
Finally, we address the topic of extreme oversampling, which pertains to systems with
large amounts of oversampling due to redundant sampling components. Calibration algorithms
are developed for ordering the components and for estimating the input from ordered
components. The algorithms exploit the extra samples in the system to increase estimation
performance and decrease computational complexity.
Blind image deconvolution: nonstationary Bayesian approaches to restoring blurred photos
High quality digital images have become pervasive in modern scientific and everyday life, in areas from photography to astronomy, CCTV, microscopy, and medical imaging. However, there are always limits to the quality of these images due to uncertainty and imprecision in the measurement systems. Modern signal processing methods offer the promise of overcoming some of these problems by postprocessing these blurred and noisy images. In this thesis, novel methods using nonstationary statistical models are developed for the removal of blurs from out-of-focus and other types of degraded photographic images.
The work tackles the fundamental problem of blind image deconvolution (BID); its goal is to restore a sharp image from a blurred observation when the blur itself is completely unknown. This is a “doubly ill-posed” problem: an extreme lack of information must be countered by strong prior constraints about sensible types of solution. In this work, the hierarchical Bayesian methodology is used as a robust and versatile framework to impart the required prior knowledge.
The thesis is arranged in two parts. In the first part, the BID problem is reviewed, along with techniques and models for its solution. Observation models are developed, with an emphasis on photographic restoration, concluding with a discussion of how these are reduced to the common linear spatially-invariant (LSI) convolutional model. Classical methods for the solution of ill-posed problems are summarised to provide a foundation for the main theoretical ideas used under the Bayesian framework. This is followed by an in-depth review and discussion of the various prior image and blur models appearing in the literature, and then their application to solving the problem with both Bayesian and non-Bayesian techniques.
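The LSI convolutional observation model that Part I arrives at, y = h * x + n, can be written down directly. This sketch uses circular (periodic) boundary conditions via the FFT, which is precisely the simplifying assumption that the boundary-conditions work in Part II revisits; the PSF and noise level here are invented.

```python
import numpy as np

def lsi_observe(image, psf, noise_sigma, rng):
    """LSI observation model y = h * x + n with circular boundaries:
    blur the image with the PSF via the FFT and add Gaussian noise."""
    H = np.fft.fft2(psf, s=image.shape)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * H))
    return blurred + noise_sigma * rng.standard_normal(image.shape)

rng = np.random.default_rng(3)
x = rng.random((32, 32))
psf = np.ones((3, 3)) / 9.0          # toy stand-in for an out-of-focus PSF
y = lsi_observe(x, psf, noise_sigma=0.01, rng=rng)
print(y.shape)  # (32, 32)
```

In the blind setting both x and the PSF on the right-hand side are unknown, which is what makes the inverse problem doubly ill-posed.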
The second part covers novel restoration methods, making use of the theory presented in Part I. Firstly, two new nonstationary image models are presented. The first models local variance in the image, and the second extends this with locally adaptive non-causal autoregressive (AR) texture estimation and local mean components. These models allow for the recovery of image details, including edges and texture, whilst preserving smooth regions. Most existing methods do not model the boundary conditions correctly for deblurring of natural photographs, and a chapter is devoted to exploring Bayesian solutions to this topic.
Due to the complexity of the models used and of the problem itself, there are many challenges that must be overcome for tractable inference. Using the new models, three different inference strategies are investigated: first, the Bayesian maximum marginalised a posteriori (MMAP) method with deterministic optimisation; then the stochastic methods of variational Bayesian (VB) distribution approximation and simulation of the posterior distribution using the Gibbs sampler. Of these, we find the Gibbs sampler to be the most effective way to deal with a variety of different types of unknown blur. Along the way, details are given of the numerical strategies developed to give accurate results and to accelerate performance. Finally, the thesis demonstrates state-of-the-art results in blind restoration of synthetic and real degraded images, such as recovering details in out-of-focus photographs.