Hybrid solutions to instantaneous MIMO blind separation and decoding: narrowband, QAM and square cases
Future wireless communication systems must support high data rates and high-quality transmission to serve growing multimedia applications. The drive to increase channel throughput has led in recent years to multiple-input multiple-output (MIMO) and blind equalization techniques, and blind MIMO equalization has therefore attracted great interest. Both system performance and computational complexity play important roles in real-time communications; reducing the computational load while maintaining accurate performance is a central challenge in present systems.

In this thesis, a hybrid method is first proposed that provides affordable complexity with good performance for blind equalization in large-constellation MIMO systems. Computational cost is saved both in the signal separation part and in the signal detection part. First, based on the characteristics of quadrature amplitude modulation (QAM) signals, an efficient and simple nonlinear function for Independent Component Analysis (ICA) is introduced. Second, using the idea of sphere decoding, we restrict the soft channel information to a sphere, which overcomes the so-called curse of dimensionality of the Expectation-Maximization (EM) algorithm while simultaneously improving the final results. Mathematically, we demonstrate that in digital communication settings the EM algorithm exhibits Newton-like convergence.

Despite the widespread use of forward error correction (FEC), most MIMO blind channel estimation techniques ignore its presence and instead make the simplifying assumption that the transmitted symbols are uncoded. However, FEC induces code structure in the transmitted sequence that can be exploited to improve blind MIMO channel estimates. In the final part of this work, we exploit iterative channel estimation and decoding for blind MIMO equalization.
Experiments show the improvements achievable by exploiting the coding structure, and that the method can approach the performance of a BCJR equalizer with perfect channel information over a reasonable SNR range. All results are confirmed experimentally for the example of blind equalization in block-fading MIMO systems.
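The abstract does not give the proposed QAM-specific nonlinearity, so as a generic illustration of the ICA-based separation step, here is a minimal symmetric FastICA sketch using the classic cubic (kurtosis) nonlinearity on PAM sources (the real rails of a square QAM constellation). The mixing matrix, sample count, and iteration budget are all illustrative, not the thesis's method:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Two independent 4-PAM sources (real-valued stand-in for the I/Q rails of 16-QAM)
s = rng.choice([-3.0, -1.0, 1.0, 3.0], size=(2, n))
A = np.array([[0.9, 0.4], [0.3, 1.1]])   # unknown mixing matrix
x = A @ s

# Whitening: zero-mean the observations and decorrelate to unit variance
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(x @ x.T / n)
V = E @ np.diag(d ** -0.5) @ E.T
z = V @ x

# Symmetric FastICA with g(u) = u**3 (kurtosis contrast), orthogonal init
W, _ = np.linalg.qr(rng.standard_normal((2, 2)))
for _ in range(100):
    U = W @ z
    # Fixed-point step: E[g(u) z^T] - E[g'(u)] W, with g'(u) = 3u^2, E[u^2] = 1
    W_new = (U ** 3) @ z.T / n - 3 * W
    # Symmetric decorrelation: W <- (W W^T)^{-1/2} W
    dw, Ew = np.linalg.eigh(W_new @ W_new.T)
    W = Ew @ np.diag(dw ** -0.5) @ Ew.T @ W_new

y = W @ z  # recovered sources, up to permutation, sign, and scale
```

Each recovered row of `y` should correlate strongly with exactly one of the original PAM sources, which is the standard ICA ambiguity (permutation and sign) the decoding stage must then resolve.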
Bit-Error-Rate-Minimizing Channel Shortening Using Post-FEQ Diversity Combining and a Genetic Algorithm
In advanced wireline and wireless communication systems, e.g., DSL, IEEE 802.11a/g, and HIPERLAN/2, a cyclic prefix (CP) at least as long as the channel impulse response must be appended to each multicarrier modulation (MCM) frame for the MCM to operate accurately. This prefix is used to combat inter-symbol interference (ISI). In some cases, however, the channel impulse response can be longer than the CP. One of the most useful techniques to mitigate this problem is the use of a channel shortening equalizer (CSE) as a linear preprocessor before the MCM receiver in order to shorten the effective channel length. Channel shortening filter design is a widely examined topic in the literature, and most CSE proposals depend on perfect channel state information (CSI). However, this information may not be available in all situations; when CSI is unavailable, blind adaptive equalization techniques are appropriate. In wireline systems (such as DMT), CSE design is based on maximizing the bit rate, but in wireless systems (OFDM) the bit loading is fixed and the performance metric is bit error rate (BER) minimization. In this work, a CSE is developed for multicarrier and single-carrier cyclic-prefixed (SCCP) systems that attempts to minimize the BER. To do so, a genetic algorithm (GA), an optimization method based on the principles of natural selection and genetics, is used. If the shortened channel impulse response fits within the CP, equalization can be completed by a frequency-domain equalizer (FEQ), which is a bank of complex scalars; however, adaptive FEQ design has not been well examined in the literature. The second phase of this thesis therefore focuses on different algorithms for adapting the FEQ and on modifying the FEQ architecture to obtain a lower BER. Simulation results show that the modified architecture yields a 20 dB improvement in BER.
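A BER-driven fitness, as used in the thesis, requires a full transceiver simulation; the sketch below substitutes a common proxy, the ratio of effective-channel energy inside a short target window to the energy leaking outside it. The channel taps, filter length, and GA parameters are all illustrative, not taken from the work:

```python
import numpy as np

rng = np.random.default_rng(1)
h = np.array([1.0, 0.7, -0.4, 0.25, -0.15, 0.1])  # example channel, longer than CP
Lw, win = 8, 3                                    # CSE taps, target window length

def shortening_ratio(w):
    """Energy of the effective channel h*w inside the best `win`-tap window,
    relative to the energy outside it (higher means better shortening)."""
    c = np.convolve(h, w)
    e = c ** 2
    inside = max(e[i:i + win].sum() for i in range(len(c) - win + 1))
    return inside / (e.sum() - inside + 1e-12)

# Minimal real-coded GA: elitism, blend crossover, Gaussian mutation
pop = rng.standard_normal((40, Lw)) * 0.3
pop[0] = np.eye(Lw)[0]               # seed with the pass-through (identity) filter
for gen in range(60):
    fit = np.array([shortening_ratio(w) for w in pop])
    pop = pop[np.argsort(fit)[::-1]]  # sort best-first; top 10 survive unchanged
    children = []
    while len(children) < len(pop) - 10:
        i, j = rng.integers(0, 10, size=2)        # parents drawn from the elite
        alpha = rng.random(Lw)                    # blend (arithmetic) crossover
        child = alpha * pop[i] + (1 - alpha) * pop[j]
        child += rng.standard_normal(Lw) * 0.05   # Gaussian mutation
        children.append(child)
    pop = np.vstack([pop[:10], children])

best = pop[0]  # best shortening filter found
```

Because the identity filter is seeded into the population and elitism preserves the best individual, the final filter can only improve on doing no shortening at all; in a real design the fitness would instead be the simulated BER after the FEQ.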
Array processing based on time-frequency analysis and higher-order statistics
Ph.D. (Doctor of Philosophy)
Blind image deconvolution: nonstationary Bayesian approaches to restoring blurred photos
High-quality digital images have become pervasive in modern scientific and everyday life, in areas from photography to astronomy, CCTV, microscopy, and medical imaging. However, there are always limits to the quality of these images due to uncertainty and imprecision in the measurement systems. Modern signal processing methods offer the promise of overcoming some of these problems by post-processing these blurred and noisy images. In this thesis, novel methods using nonstationary statistical models are developed for the removal of blurs from out-of-focus and other types of degraded photographic images.
The work tackles the fundamental problem of blind image deconvolution (BID): restoring a sharp image from a blurred observation when the blur itself is completely unknown. This is a "doubly ill-posed" problem: an extreme lack of information must be countered by strong prior constraints on sensible types of solution. In this work, the hierarchical Bayesian methodology is used as a robust and versatile framework to impart the required prior knowledge.
The thesis is arranged in two parts. In the first part, the BID problem is reviewed, along with techniques and models for its solution. Observation models are developed, with an emphasis on photographic restoration, concluding with a discussion of how these are reduced to the common linear spatially-invariant (LSI) convolutional model. Classical methods for the solution of ill-posed problems are summarised to provide a foundation for the main theoretical ideas that will be used under the Bayesian framework. This is followed by an in-depth review and discussion of the various prior image and blur models appearing in the literature, and then their applications to solving the problem with both Bayesian and non-Bayesian techniques.
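The LSI convolutional observation model referred to above, y = Hx + n, can be sketched in a few lines, assuming periodic boundary conditions so that the blur matrix H is diagonalised by the 2-D FFT (the boundary-condition assumption that the thesis later examines); the image size, kernel, and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.random((64, 64))      # stand-in for the unknown sharp image
hrow = np.hanning(7)
k = np.outer(hrow, hrow)
k /= k.sum()                  # normalised 7x7 separable blur kernel (illustrative)

# y = H x + n with periodic boundaries: applying H reduces to pointwise
# multiplication in the frequency domain (fft2 zero-pads k to the image size).
K = np.fft.fft2(k, s=x.shape)
y = np.real(np.fft.ifft2(np.fft.fft2(x) * K)) + 0.01 * rng.standard_normal(x.shape)
```

Because the kernel sums to one, blurring preserves the mean intensity while suppressing high-frequency detail; BID is the problem of recovering both `x` and `k` from `y` alone.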
The second part covers novel restoration methods, making use of the theory presented in Part I. Firstly, two new nonstationary image models are presented. The first models local variance in the image, and the second extends this with locally adaptive non-causal autoregressive (AR) texture estimation and local mean components. These models allow for recovery of image details including edges and texture, whilst preserving smooth regions. Most existing methods do not model the boundary conditions correctly for deblurring of natural photographs, and a chapter is devoted to exploring Bayesian solutions to this topic.
Due to the complexity of the models used and of the problem itself, many challenges must be overcome for tractable inference. Using the new models, three different inference strategies are investigated: firstly, the Bayesian maximum marginalised a posteriori (MMAP) method with deterministic optimisation; then variational Bayesian (VB) distribution approximation; and finally stochastic simulation of the posterior distribution using the Gibbs sampler. Of these, we find the Gibbs sampler to be the most effective way to deal with a variety of different types of unknown blurs. Along the way, details are given of the numerical strategies developed to give accurate results and to accelerate performance.
Finally, the thesis demonstrates state-of-the-art results in blind restoration of synthetic and real degraded images, such as recovering details in out-of-focus photographs.
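The Gibbs-sampling strategy the abstract singles out can be illustrated in a heavily simplified form: a 1-D deconvolution with a Gaussian prior on the signal and a Gamma prior on the noise precision, alternately sampling each conditional. Unlike BID, the blur here is assumed known, and all sizes and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 40, 5
h = np.hanning(m)
h /= h.sum()                             # known blur kernel (BID would infer this too)
H = np.array([[h[i - j] if 0 <= i - j < m else 0.0
               for j in range(n)] for i in range(n)])
x_true = np.zeros(n)
x_true[[8, 20, 31]] = [2.0, -1.5, 1.0]   # sparse spike signal
y = H @ x_true + rng.standard_normal(n) / np.sqrt(400.0)   # sigma = 0.05

lam = 1.0                                # Gaussian prior precision on x
a0, b0 = 1e-3, 1e-3                      # Gamma hyperprior on the noise precision
beta, samples = 1.0, []
for it in range(600):
    # x | beta, y ~ N(mu, C),  C = (beta H'H + lam I)^-1,  mu = beta C H'y
    C = np.linalg.inv(beta * H.T @ H + lam * np.eye(n))
    C = (C + C.T) / 2                    # symmetrise against round-off
    mu = beta * C @ H.T @ y
    x = mu + np.linalg.cholesky(C) @ rng.standard_normal(n)
    # beta | x, y ~ Gamma(a0 + n/2, rate = b0 + ||y - Hx||^2 / 2)
    r = y - H @ x
    beta = rng.gamma(a0 + n / 2, 1.0 / (b0 + r @ r / 2))
    if it >= 200:                        # discard burn-in
        samples.append(x)

x_hat = np.mean(samples, axis=0)         # posterior-mean estimate of the signal
```

Averaging the post-burn-in samples approximates the posterior mean; the thesis's samplers extend this idea to 2-D images, nonstationary priors, and an unknown blur, where the conditional structure is far richer but the alternating scheme is the same.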