A Universal Scheme for Wyner–Ziv Coding of Discrete Sources
We consider the Wyner–Ziv (WZ) problem of lossy compression where the decompressor observes a noisy version of the source, whose statistics are unknown. A new family of WZ coding algorithms is proposed and their universal optimality is proven. Compression consists of sliding-window processing followed by Lempel–Ziv (LZ) compression, while the decompressor is based on a modification of the discrete universal denoiser (DUDE) algorithm to take advantage of side information. The new algorithms not only universally attain the fundamental limits, but also suggest a paradigm for practical WZ coding. The effectiveness of our approach is illustrated with experiments on binary images and English text using a low complexity algorithm motivated by our class of universally optimal WZ codes.
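As an illustration of the context-based denoising underlying the decompressor, the following is a minimal sketch of the plain binary DUDE rule for a binary symmetric channel with known crossover probability delta, without side information; the decoder described above would additionally condition the contexts on the side-information sequence. All names are illustrative, not the paper's notation.

```python
from collections import Counter, defaultdict

def dude_binary(noisy, delta, k=1):
    """Two-pass binary DUDE for a binary symmetric channel with known
    crossover probability delta (0 < delta < 0.5) under Hamming loss.
    Pass 1 counts symbol occurrences per two-sided context of radius k;
    pass 2 flips z to 1-z iff m[1-z]/m[z] exceeds the standard threshold
    ((1-delta)^2 + delta^2) / (2*delta*(1-delta))."""
    n = len(noisy)
    ctx = lambda i: (tuple(noisy[i - k:i]), tuple(noisy[i + 1:i + 1 + k]))
    counts = defaultdict(Counter)
    for i in range(k, n - k):
        counts[ctx(i)][noisy[i]] += 1
    num = (1 - delta) ** 2 + delta ** 2
    den = 2 * delta * (1 - delta)
    out = list(noisy)
    for i in range(k, n - k):
        m = counts[ctx(i)]
        z = noisy[i]
        if m[1 - z] * den > m[z] * num:  # cross-multiplied threshold test
            out[i] = 1 - z
    return out
```

For example, an isolated 1 inside a long run of zeros is flipped back, because the context statistics make the flip cheaper in expected loss than keeping it.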
State-of-the-art report on nonlinear representation of sources and channels
This report consists of two complementary parts, related to the modeling of two important sources of nonlinearities in a communications system. In the first part, an overview of important past work related to the estimation, compression and processing of sparse data through the use of nonlinear models is provided. In the second part, the current state of the art on the representation of wireless channels in the presence of nonlinearities is summarized. In addition to the characteristics of the nonlinear wireless fading channel, some information is also provided on recent approaches to the sparse representation of such channels.
The design of finite-state machines for quantization using simulated annealing
Ankara: Department of Electrical and Electronics Engineering and Institute of Engineering and Sciences, Bilkent University, 1993. Thesis (Master's), Bilkent University, 1993. Includes bibliographical references (leaves 121-125). Kuruoğlu, Ercan Engin, M.S.
In this thesis, the combinatorial optimization algorithm known as simulated annealing (SA) is applied to the next-state map design problem of data compression systems based on finite-state machine decoders. These data compression systems, which include finite-state vector quantization (FSVQ), trellis waveform coding (TWC), predictive trellis waveform coding (PTWC), and trellis coded quantization (TCQ), are studied in depth. By incorporating the generalized Lloyd algorithm (GLA) for the optimization of the output map into SA, a finite-state machine decoder design algorithm for the joint optimization of the output map and the next-state map is constructed. Simulation results on several discrete-time sources for FSVQ, TWC and PTWC show that the combined SA+GLA algorithm yields decoders with higher performance than other related work in the literature. In TCQ, simulation results are obtained for sources with memory and new observations are made.
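The annealing loop described above can be sketched generically. The skeleton below is illustrative only, not the thesis's exact algorithm: in the thesis's setting the candidate solution would be a next-state map of the finite-state decoder and the cost would be the training distortion after re-optimizing the output map with the generalized Lloyd algorithm.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, alpha=0.95, steps=2000, seed=0):
    """Generic simulated-annealing skeleton: Metropolis acceptance of
    random neighboring solutions under a geometric cooling schedule.
    Worse moves are accepted with probability exp(-delta/temperature),
    which shrinks as the temperature cools."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy <= c or rng.random() < math.exp(-(cy - c) / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= alpha  # geometric cooling
    return best, best_c
```

As a toy usage, minimizing x^2 over the integers with a +/-1 neighborhood converges to 0.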
Transmission strategies for broadband wireless systems with MMSE turbo equalization
This monograph details efficient transmission strategies for single-carrier wireless broadband communication systems employing iterative (turbo) equalization. In particular, the first part focuses on the design and analysis of low complexity and robust MMSE-based turbo equalizers operating in the frequency domain. Accordingly, several novel receiver schemes are presented which improve the convergence properties and error performance over the existing turbo equalizers. The second part discusses concepts and algorithms that aim to increase the power and spectral efficiency of the communication system by efficiently exploiting the available resources at the transmitter side based upon the channel conditions. The challenging issue encountered in this context is how the transmission rate and power can be optimized, while a specific convergence constraint of the turbo equalizer is guaranteed.
This thesis deals with the design and analysis of efficient transmission concepts for wireless, broadband single-carrier communication systems with iterative (turbo) equalization and channel decoding. This comprises, on the one hand, the development of low-complexity receiver-side frequency-domain equalizers based on the principle of soft interference cancellation minimum mean squared error (SC-MMSE) filtering and, on the other hand, the design of transmitter-side algorithms that exploit channel state information to improve the bandwidth and power efficiency of single-user and multi-user systems with multiple antennas (multiple-input multiple-output, MIMO).
In the first part of this work, a general framework for turbo equalization based on linear MMSE estimation, nonlinear MMSE estimation, and combined MMSE and maximum a posteriori (MAP) estimation is presented. In this context, two new receiver concepts are introduced that achieve higher performance and better convergence than existing SC-MMSE turbo equalizers in various channel environments. The first receiver, PDA SC-MMSE, combines the probabilistic data association (PDA) approach with the well-known SC-MMSE equalizer. In contrast to the SC-MMSE, the PDA SC-MMSE employs internal decision feedback, so that interference suppression exploits not only the a priori information from channel decoding but also soft decisions from previous detection steps. Owing to this additional internal decision feedback, the PDA SC-MMSE achieves a substantial performance gain over the SC-MMSE in spatially uncorrelated MIMO channels without significantly increasing the complexity of the equalizer. The second receiver, hybrid SC-MMSE, combines group-based SC-MMSE frequency-domain filtering with MAP detection. This receiver has scalable computational complexity and exhibits high robustness against spatial correlation in MIMO channels. Numerical results from simulations based on channel-sounder measurements in multi-user channels with strong spatial correlation convincingly demonstrate the superiority of the hybrid SC-MMSE approach over the conventional SC-MMSE-based receiver.
In the second part, the influence of system and channel model parameters on the convergence properties of the presented iterative receivers is investigated with the aid of so-called correlation charts. Through semi-analytic computation of the equalizer and channel-decoder correlation functions, a simple rule is developed for predicting the bit error probability of SC-MMSE and PDA SC-MMSE turbo equalizers in MIMO fading channels. Furthermore, two error bounds on the outage probability of the receivers are presented. The semi-analytic method and the derived error bounds enable low-cost estimation and optimization of the performance of the iterative system.
In the third and final part, rate and power allocation strategies for communication systems with conventional iterative SC-MMSE receivers are studied. First, the problem of maximizing the instantaneous sum rate subject to the convergence of the iterative receiver is considered for a two-user channel with fixed power allocation. Using the area theorem of extrinsic information transfer (EXIT) functions, an upper bound on the achievable rate region is derived. Based on this bound, a simple algorithm is developed that selects, for each user from a set of given channel codes with different code rates, the code that improves the instantaneous throughput of the multi-user system. In addition to instantaneous rate allocation, an outage-based rate allocation approach is also developed, in which the channel codes for the users are selected subject to a prescribed outage probability of the iterative receiver. Furthermore, a new design criterion for irregular convolutional codes is derived that reduces the outage probability of turbo SC-MMSE systems and thus increases the reliability of the data transmission. A series of simulation results on capacity and throughput computations is presented, confirming the effectiveness of the proposed algorithms and optimization methods in multi-user channels. Finally, several measures for minimizing transmit power in single-user systems with transmitter-side singular value decomposition (SVD)-based precoding are investigated. It is shown that a method which optimizes the transmitter power levels with respect to the bit error rate of the iterative receiver outperforms conventional power allocation schemes.
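The soft interference cancellation at the heart of an SC-MMSE turbo equalizer can be shown in miniature. The sketch below is illustrative only: it forms soft BPSK symbol means from decoder LLRs and cancels their contribution before estimating one symbol, but replaces the MMSE filter (built from the residual symbol variances, and applied in the frequency domain in the receivers above) with a simple matched-filter combine. Function names and the FIR channel model are assumptions for the sketch.

```python
import math

def soft_symbols(llrs):
    """A priori means and variances of BPSK symbols given decoder LLRs:
    s_bar = tanh(L/2), var = 1 - s_bar**2 (the standard soft-mapping step)."""
    means = [math.tanh(l / 2.0) for l in llrs]
    return means, [1.0 - m * m for m in means]

def soft_ic_estimate(y, h, llrs, k):
    """Estimate BPSK symbol s[k] from samples y[n] = sum_l h[l]*s[n-l] + noise:
    subtract the soft mean of every interfering symbol from each observation
    of s[k], then combine with a matched filter.  A true SC-MMSE equalizer
    would use the MMSE filter built from the residual variances instead."""
    s_bar, _ = soft_symbols(llrs)
    num = den = 0.0
    for l, hl in enumerate(h):
        n = k + l
        if n >= len(y):
            break
        interference = sum(h[j] * s_bar[n - j]
                           for j in range(len(h))
                           if j != l and 0 <= n - j < len(s_bar))
        num += hl * (y[n] - interference)
        den += hl * hl
    return num / den
```

With perfect priors (large-magnitude LLRs) and a noiseless channel, the estimate reproduces the transmitted symbol exactly.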
Self-concatenated coding for wireless communication systems
In this thesis, we have explored self-concatenated coding schemes designed for transmission over Additive White Gaussian Noise (AWGN) and uncorrelated Rayleigh fading channels. We designed both symbol-based Self-Concatenated Trellis Coded Modulation (SECTCM) and bit-based Self-Concatenated Convolutional Codes (SECCC), using a Recursive Systematic Convolutional (RSC) encoder as the constituent code. The design of these codes was carried out with the aid of Extrinsic Information Transfer (EXIT) charts, which proved to be an efficient tool for finding the decoding convergence threshold of the constituent codes. Additionally, in order to recover the information loss imposed by employing binary rather than non-binary schemes, a soft-decision demapper was introduced to exchange extrinsic information with the SECCC decoder. To analyse this information exchange, 3D EXIT chart analysis was invoked for visualizing the extrinsic information exchange between the proposed iteratively decoded SECCC and the soft-decision demapper (SECCC-ID). Some of the proposed SECTCM, SECCC and SECCC-ID schemes perform within about 1 dB of the AWGN and Rayleigh fading channels' capacity. A union bound analysis of SECCC codes was carried out to find the corresponding Bit Error Ratio (BER) floors. The union bound of SECCCs was derived for communications over both AWGN and uncorrelated Rayleigh fading channels, based on a novel interleaver concept. Application of SECCCs in both Ultra-WideBand (UWB) and state-of-the-art video-telephone schemes demonstrated their practical benefits. In order to further exploit the low-complexity design offered by SECCCs, we explored their application in a distributed coding scheme designed for cooperative communications, where iterative detection is employed by exchanging extrinsic information between the decoders of the SECCC and the RSC code at the destination.
In the first transmission period of cooperation, the relay receives the potentially erroneous data and attempts to recover the information. The recovered information is then re-encoded at the relay using an RSC encoder. In the second transmission period this information is retransmitted to the destination. The resultant symbols transmitted from the source and relay nodes can be viewed as the coded symbols of a three-component parallel-concatenated encoder. At the destination, a Distributed Binary Self-Concatenated Coding scheme using Iterative Decoding (DSECCC-ID) was employed, where the two decoders (SECCC and RSC) exchange their extrinsic information. It was shown that the DSECCC-ID is a low-complexity scheme, yet capable of approaching the Discrete-input Continuous-output Memoryless Channel's (DCMC) capacity. Finally, we considered coding schemes designed for two nodes communicating with each other with the aid of a relay node, where the relay receives information from the two nodes in the first transmission period. At the relay node we combine a powerful Superposition Coding (SPC) scheme with SECCC. It is assumed that decoding errors may be encountered at the relay node. The relay node then broadcasts this information in the second transmission period after re-encoding it, again, using a SECCC encoder. At the destination, the amalgamated block of the Successive Interference Cancellation (SIC) scheme combined with SECCC then detects and decodes the signal either with or without the aid of a priori information. Our simulation results demonstrate that the proposed scheme is capable of reliably operating at a low BER for transmission over both AWGN and uncorrelated Rayleigh fading channels. We compare the proposed scheme's performance to a direct transmission link between the two sources having the same throughput.
Signal processing techniques for mobile multimedia systems
Recent trends in wireless communication systems show a significant demand for the delivery of multimedia services and applications over mobile networks - mobile multimedia - like video telephony, multimedia messaging, mobile gaming, interactive and streaming video, etc. However, despite the ongoing development of key communication technologies that support these applications, the communication resources and bandwidth available to wireless/mobile radio systems are often severely limited. It is well known that these bottlenecks are inherently due to the processing capabilities of mobile transmission systems, and the time-varying nature of wireless channel conditions and propagation environments. Therefore, new ways of processing and transmitting multimedia data over mobile radio channels have become essential, which is the principal focus of this thesis. In this work, the performance and suitability of various signal processing techniques and transmission strategies in the application of multimedia data over wireless/mobile radio links are investigated. The proposed transmission systems for multimedia communication employ different data encoding schemes which include source coding in the wavelet domain, transmit diversity coding (space-time coding), and adaptive antenna beamforming (eigenbeamforming). By integrating these techniques into a robust communication system, the quality (SNR, etc.) of multimedia signals received on mobile devices is maximised while mitigating the fast fading and multi-path effects of mobile channels. To support the transmission of high data-rate multimedia applications, a well known multi-carrier transmission technology known as Orthogonal Frequency Division Multiplexing (OFDM) has been implemented. As shown in this study, this results in significant performance gains when combined with other signal-processing techniques such as space-time block coding (STBC).
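The transmit diversity (space-time block coding) mentioned above is classically realized by the Alamouti scheme for two transmit antennas. The following is a minimal sketch of the standard textbook construction, not code from this thesis: two symbols are sent over two antennas in two symbol periods, and simple linear combining at the receiver recovers decision statistics scaled by the total channel energy.

```python
def alamouti_encode(s1, s2):
    """Alamouti 2x1 space-time block code: rate 1, full transmit diversity.
    Returns the (antenna-1, antenna-2) transmissions for two symbol periods."""
    return ((s1, s2),                           # period 1
            (-s2.conjugate(), s1.conjugate()))  # period 2

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining at one receive antenna, assuming the complex channel
    gains h1, h2 are constant over both periods.  Yields decision statistics
    equal to (|h1|^2 + |h2|^2) * s1 and (|h1|^2 + |h2|^2) * s2 (noise-free)."""
    s1_hat = h1.conjugate() * r1 + h2 * r2.conjugate()
    s2_hat = h2.conjugate() * r1 - h1 * r2.conjugate()
    return s1_hat, s2_hat
```

In a noiseless check, with BPSK symbols +1 and -1, the combiner outputs are exactly the symbols scaled by the channel energy, which is what gives the scheme its diversity gain.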
To optimise signal transmission, a novel unequal adaptive modulation scheme for the communication of multimedia data over MIMO-OFDM systems has been proposed. In this system, discrete wavelet transform/subband coding is used to compress data into their respective low-frequency and high-frequency components. Unlike traditional methods, however, the low-frequency components are processed and modulated separately, as they are more sensitive to the distortion effects of mobile radio channels. To make use of a desirable subchannel state, such that the quality (SNR) of the multimedia data recovered at the receiver is optimized, we employ a lookup matrix-adaptive bit and power allocation (LM-ABPA) algorithm. Apart from improving the spectral efficiency of OFDM, the modified LM-ABPA scheme sorts and allocates the subcarriers with the highest SNR to the low-frequency data and the remaining subcarriers to the least important data. To maintain a target system SNR, the LM-ABPA loading scheme assigns appropriate signal constellation sizes and transmit power levels (modulation type) across all subcarriers and is adapted to the varying channel conditions such that the average system error-rate (SER/BER) is minimised. When configured for a constant data-rate load, simulation results show significant performance gains over non-adaptive systems. In addition to the above studies, the simulation framework developed in this work is applied to investigate the performance of other signal processing techniques for multimedia communication such as blind channel equalization, and to examine the effectiveness of a secure communication system based on a logistic chaotic generator (LCG) for chaos shift-keying (CSK).
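The idea behind adaptive bit and power allocation can be illustrated with the classical greedy (Hughes-Hartogs-style) loading rule, which repeatedly gives one more bit to the subcarrier where that bit costs the least extra power. This is a generic sketch, not the LM-ABPA algorithm itself; gamma is an assumed SNR-gap parameter and the incremental-power formula is the standard one for QAM-style constellations.

```python
def greedy_bit_loading(gains, total_power, max_bits=8, gamma=1.0):
    """Greedy bit loading: the power needed to carry b bits on a subcarrier
    with SNR gain g is gamma * (2**b - 1) / g, so the incremental cost of
    bit b+1 is gamma * 2**b / g.  Each round, the cheapest next bit anywhere
    is bought until the power budget is exhausted."""
    bits = [0] * len(gains)
    used = 0.0
    while True:
        candidates = [(gamma * (2 ** bits[i]) / gains[i], i)
                      for i in range(len(gains)) if bits[i] < max_bits]
        if not candidates:
            break
        cost, i = min(candidates)
        if used + cost > total_power:
            break
        bits[i] += 1
        used += cost
    return bits, used
```

A strong subcarrier naturally ends up carrying more bits than a weak one, which mirrors the scheme above that steers high-SNR subcarriers to the perceptually important low-frequency data.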
Rate Distortion Theory for Causal Video Coding: Characterization, Computation Algorithm, Comparison, and Code Design
Due to the sheer volume of data involved, video coding is an important application of lossy source coding, and has received wide industrial interest and support, as evidenced by the development and success of a series of video coding standards. All MPEG-series and H-series video coding standards proposed so far are based upon a video coding paradigm called predictive video coding, where video source frames X_i, i = 1, 2, ..., N, are encoded frame by frame; the encoder and decoder for each frame X_i enlist help only from all previously encoded frames S_j, j = 1, 2, ..., i-1.
In this thesis, we look further beyond all existing and proposed video coding standards, and introduce a new coding paradigm called causal video coding, in which the encoder for each frame X_i can use all previous original frames X_j, j = 1, 2, ..., i-1, and all previously encoded frames S_j, while the corresponding decoder can use only all previously encoded frames. We consider studies, comparisons, and designs of causal video coding from an information theoretic point of view.
Let R*c(D_1,...,D_N) (respectively, R*p(D_1,...,D_N)) denote the minimum total rate required to achieve a given distortion level D_1,...,D_N > 0 in causal video coding (respectively, predictive video coding). A novel computation approach is proposed to analytically characterize, numerically compute, and compare the minimum total rate of causal video coding R*c(D_1,...,D_N) required to achieve a given distortion (quality) level D_1,...,D_N > 0. Specifically, we first show that for jointly stationary and ergodic sources X_1, ..., X_N, R*c(D_1,...,D_N) is equal to the infimum of the n-th order total rate distortion function R_{c,n}(D_1,...,D_N) over all n, where R_{c,n}(D_1,...,D_N) itself is given by the minimum of an information quantity over a set of auxiliary random variables. We then present an iterative algorithm for computing R_{c,n}(D_1,...,D_N) and demonstrate the convergence of the algorithm to the global minimum. The global convergence of the algorithm further enables us not only to establish, in a novel way, a single-letter characterization of R*c(D_1,...,D_N) when the N sources form an independent and identically distributed (IID) vector source, but also to demonstrate a somewhat surprising result (dubbed the more and less coding theorem): under some conditions on source frames and distortion, the more frames that need to be encoded and transmitted, the less data after encoding has to actually be sent. With the help of the algorithm, it is also shown by example that R*c(D_1,...,D_N) is in general much smaller than the total rate offered by the traditional greedy coding method, by which each frame is encoded in a locally optimal manner based on all information available to the encoder of that frame. As a by-product, an extended Markov lemma is established for correlated ergodic sources.
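Iterative computation of rate-distortion quantities of this kind generalizes the classical Blahut-Arimoto algorithm. As a much-simplified illustrative sketch (not the thesis's algorithm for R_{c,n}), the following computes one point on the ordinary rate-distortion curve of a single memoryless source; the function name and the slope parameterization s <= 0 are conventions assumed for the sketch.

```python
import math

def blahut_arimoto_rd(p, d, s, iters=200):
    """Blahut-Arimoto iteration for the rate-distortion function of a
    memoryless source.  p[x]: source distribution; d[x][y]: distortion;
    s <= 0: slope parameter selecting the point on the curve.
    Alternates (i) the Lagrangian-optimal conditional for the current
    output marginal q with (ii) the marginal induced by that conditional.
    Returns (R, D) in nats."""
    m = len(d[0])
    q = [1.0 / m] * m  # initial output marginal
    for _ in range(iters):
        w = [[q[y] * math.exp(s * d[x][y]) for y in range(m)]
             for x in range(len(p))]
        z = [sum(row) for row in w]
        q = [sum(p[x] * w[x][y] / z[x] for x in range(len(p)))
             for y in range(m)]
    # evaluate R and D at the converged marginal
    w = [[q[y] * math.exp(s * d[x][y]) for y in range(m)]
         for x in range(len(p))]
    z = [sum(row) for row in w]
    cond = [[w[x][y] / z[x] for y in range(m)] for x in range(len(p))]
    D = sum(p[x] * cond[x][y] * d[x][y]
            for x in range(len(p)) for y in range(m))
    R = sum(p[x] * cond[x][y] * math.log(cond[x][y] / q[y])
            for x in range(len(p)) for y in range(m) if cond[x][y] > 0)
    return R, D
```

For a uniform binary source under Hamming distortion, this reproduces the closed form R(D) = ln 2 - H_b(D).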
From an information theoretic point of view, it is interesting to compare causal video coding with predictive video coding, upon which all existing video coding standards proposed so far are based. In this thesis, by fixing N=3, we first derive a single-letter characterization of R*p(D_1,D_2,D_3) for an IID vector source (X_1,X_2,X_3) in which X_1 and X_2 are independent, and then demonstrate the existence of such X_1, X_2, X_3 for which R*p(D_1,D_2,D_3) > R*c(D_1,D_2,D_3) under some conditions on source frames and distortion. This result makes causal video coding an attractive framework for future video coding systems and standards.
The design of causal video coding is also considered in the thesis from an information theoretic perspective by modeling each frame as a stationary information source. We first put forth a concept called causal scalar quantization, and then propose an algorithm for designing optimal fixed-rate causal scalar quantizers for causal video coding to minimize the total distortion among all sources. Simulation results show that, in comparison with fixed-rate predictive scalar quantization, fixed-rate causal scalar quantization offers up to a 16% quality improvement (distortion reduction).
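The classical building block that causal scalar quantization extends is the Lloyd(-Max) design of a fixed-rate scalar quantizer, which alternates nearest-codeword partitioning with centroid updates on a training set. The sketch below is that standard procedure, not the thesis's causal design (which additionally conditions on previously encoded frames); the crude initialization is an assumption for brevity.

```python
def lloyd_max(samples, levels, iters=50):
    """Lloyd(-Max) design of a fixed-rate scalar quantizer on a training set.
    Alternates (i) assigning each sample to its nearest codeword with
    (ii) moving each codeword to the centroid of its cell."""
    codebook = sorted(samples[:levels])  # crude initialization from the data
    for _ in range(iters):
        cells = [[] for _ in range(levels)]
        for s in samples:
            i = min(range(levels), key=lambda j: (s - codebook[j]) ** 2)
            cells[i].append(s)
        # empty cells keep their old codeword
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(cells)]
    return sorted(codebook)
```

On a training set with two well-separated clusters, the two-level design converges to the cluster means, i.e. the minimum-distortion fixed-rate codebook for that data.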