Time and frequency domain algorithms for speech coding
The promise of digital hardware economies arising from recent advances in
VLSI technology has focused much attention on more complex and sophisticated
speech coding algorithms that offer improved quality at relatively
low bit rates.
This thesis describes results, obtained from computer simulations, of
research into various efficient time- and frequency-domain speech
encoders operating at a transmission bit rate of 16 Kbps.
In the time domain, Adaptive Differential Pulse Code Modulation (ADPCM)
systems employing both forward and backward adaptive prediction were
examined. A number of algorithms were proposed and evaluated, including
several variants of the Stochastic Approximation Predictor (SAP). A
Backward Block Adaptive (BBA) predictor was also developed and found to
outperform the conventional stochastic methods, even though its
signal-processing complexity is lower. A simplified Adaptive Predictive
Coder (APC) employing a single-tap pitch predictor was considered next;
it provided a slight improvement in performance over ADPCM, but at
rather greater complexity.
The ultimate test of any speech coding system is the perceptual performance
of the received speech. Recent research has indicated that this
may be enhanced by suitable control of the noise spectrum according to
the theory of auditory masking. Various noise-shaping ADPCM
configurations were examined, and it was demonstrated that a proposed
pre-/post-filtering arrangement, which advantageously exploits the
predictor-quantizer interaction, leads to the best subjective
performance in both forward and backward prediction systems.
Adaptive quantization is instrumental to the performance of ADPCM systems.
Both the forward adaptive quantizer (AQF) and the backward one-word-memory
adaptive quantizer (AQJ) were examined. In addition, a novel method
of decreasing quantization noise in ADPCM-AQJ coders, which involves
applying a correction to the decoded speech samples, provided
reduced output noise across the spectrum, with considerable high-frequency
noise suppression.
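The ADPCM-AQJ combination discussed above can be sketched in a few lines. The following is a minimal illustration rather than the thesis's algorithm: a first-order predictor with a fixed coefficient and a 2-bit quantizer whose step size is adapted with Jayant's one-word-memory rule, i.e. rescaled using only the previously transmitted quantizer level. The coefficient, multiplier values, and step-size limits are illustrative assumptions.

```python
import numpy as np

# Step-size multipliers, one per magnitude level of the 2-bit quantizer
# (shrink on the inner level, grow on the outer); values are illustrative.
MULTIPLIERS = [0.9, 1.6]

def adpcm_aqj(x, a=0.9, delta0=0.02):
    """Encode and decode a signal with first-order ADPCM and a backward
    one-word-memory (Jayant) adaptive quantizer.  Because adaptation uses
    only transmitted levels, the decoder tracks the encoder exactly."""
    delta = delta0                      # current quantizer step size
    pred = 0.0                          # predictor output
    decoded = np.empty(len(x), dtype=float)
    for n, sample in enumerate(x):
        e = sample - pred               # prediction error
        # 2-bit mid-rise quantization: sign plus one of two magnitude levels
        level = min(int(abs(e) / delta), 1)
        eq = np.sign(e) * (level + 0.5) * delta
        decoded[n] = pred + eq          # reconstructed sample
        pred = a * decoded[n]           # backward (decoded-signal) prediction
        # one-word-memory adaptation from the last transmitted level only
        delta = min(max(delta * MULTIPLIERS[level], 1e-4), 1.0)
    return decoded

# A slowly varying input should be reconstructed with a clearly positive SNR.
t = np.arange(800)
x = 0.5 * np.sin(2 * np.pi * t / 64)
y = adpcm_aqj(x)
snr = 10 * np.log10(np.sum(x**2) / np.sum((x - y)**2))
```

The step size grows quickly during slope overload and shrinks during quiet passages, which is what makes the one-word-memory rule effective despite its minimal state.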
More powerful (and inevitably more complex) frequency domain speech
coders such as the Adaptive Transform Coder (ATC) and the Sub-band Coder
(SBC) offer good quality speech at 16 Kbps. To reduce complexity and
coding delay, whilst retaining the advantage of sub-band coding, a novel
transform based split-band coder (TSBC) was developed and found to compare
closely in performance with the SBC.
In split-band coding schemes, a large number of bands entails a heavy
side-information requirement. To prevent this from impairing coding
accuracy, without forgoing the efficiency provided by adaptive bit
allocation, a method was also proposed that employs AQJs to code the
sub-band signals together with vector quantization of the bit-allocation
patterns.
Finally, 'pipeline' methods of bit allocation and step size estimation
(using the Fast Fourier Transform (FFT) on the input signal) were examined.
Such methods, although less accurate, are nevertheless useful in
limiting the coding delay associated with SBC schemes employing Quadrature
Mirror Filters (QMF).
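The sub-band splitting underlying SBC can be illustrated with the shortest possible QMF pair (the 2-tap, Haar-like filters). Practical coders use much longer filters for sharper band separation, but the key property, cancellation of the aliasing introduced by decimation, is the same:

```python
import numpy as np

S = 1 / np.sqrt(2.0)  # normalisation for the 2-tap filter pair

def qmf_analysis(x):
    """Split x into low- and high-band signals, each at half the input
    rate, using the shortest (Haar-like) QMF pair."""
    x = np.asarray(x, dtype=float)
    low = S * (x[0::2] + x[1::2])    # lowpass then decimate by 2
    high = S * (x[0::2] - x[1::2])   # highpass then decimate by 2
    return low, high

def qmf_synthesis(low, high):
    """Recombine the two half-rate bands; with this filter pair the
    decimation aliasing cancels exactly (perfect reconstruction)."""
    x = np.empty(2 * len(low))
    x[0::2] = S * (low + high)
    x[1::2] = S * (low - high)
    return x

x = np.random.default_rng(0).normal(size=256)
lo, hi = qmf_analysis(x)
```

Each band can then be coded independently, with bits allocated according to its energy, which is the efficiency that sub-band coding exploits.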
Quality aspects of Internet telephony
Internet telephony has had a tremendous impact on how people communicate.
Many now maintain contact using some form of Internet telephony.
Therefore the motivation for this work has been to address the quality aspects
of real-world Internet telephony for both fixed and wireless telecommunication.
The focus has been on the quality aspects of voice communication,
since poor quality often leads to user dissatisfaction. The scope of the work
has been broad in order to address the main factors within IP-based voice
communication.
The first four chapters of this dissertation constitute the background
material. The first chapter outlines where Internet telephony is deployed
today. It also motivates the topics and techniques used in this research.
The second chapter provides the background on Internet telephony, including
signalling, speech coding and voice internetworking. The third chapter
focuses solely on quality measures for packetised voice systems and finally
the fourth chapter is devoted to the history of voice research.
The appendix of this dissertation constitutes the research contributions.
It includes an examination of the access network, focusing on how calls are
multiplexed in wired and wireless systems. Subsequently in the wireless
case, we consider how to handover calls from 802.11 networks to the cellular
infrastructure. We then consider the Internet backbone where most of our
work is devoted to measurements specifically for Internet telephony. The
applications of these measurements have been estimating telephony arrival
processes, measuring call quality, and quantifying the trend in Internet telephony
quality over several years. We also consider the end systems, since
they are responsible for reconstructing a voice stream given loss and delay
constraints. Finally we estimate voice quality using the ITU proposal PESQ
and the packet loss process.
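As an illustration of the end-system task of reconstructing a voice stream under delay constraints, the sketch below estimates a playout delay from observed one-way network delays using exponentially weighted statistics, in the spirit of classic adaptive playout algorithms. The weights, the safety factor, and the delay-spike scenario are illustrative assumptions, not figures from this work.

```python
def playout_delays(network_delays, alpha=0.95, beta=4.0):
    """Track the mean and variation of network delay with exponentially
    weighted averages and set the playout point beta variations above
    the mean.  All parameters are illustrative."""
    d_hat, v_hat = network_delays[0], 0.0
    out = []
    for d in network_delays:
        d_hat = alpha * d_hat + (1 - alpha) * d               # smoothed delay
        v_hat = alpha * v_hat + (1 - alpha) * abs(d - d_hat)  # smoothed variation
        out.append(d_hat + beta * v_hat)                      # playout deadline
    return out

# A delay spike (in ms) raises the playout point, which then decays back.
delays = [20.0] * 50 + [80.0] * 5 + [20.0] * 50
p = playout_delays(delays)
```

The trade-off captured here is the one the end system must resolve: a larger playout delay tolerates more jitter at the cost of interactivity, while a smaller one risks late packets being discarded as lost.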
The main contribution of this work is a systematic examination of Internet
telephony. We describe several methods to enable adaptable solutions
for maintaining consistent voice quality. We have also found that relatively
small technical changes can lead to substantial user quality improvements.
A second contribution of this work is a suite of software tools designed to
ascertain voice quality in IP networks. Some of these tools are in use within
commercial systems today.
Perceptual models in speech quality assessment and coding
The ever-increasing demand for good communications/toll-quality
speech has created renewed interest in the perceptual impact of
rate compression. Two general areas are investigated in this work,
namely speech quality assessment and speech coding.
In the field of speech quality assessment, a model is
developed which simulates the processing stages of the
peripheral auditory system. At the output of the model a
"running" auditory spectrum is obtained. This represents
the auditory (spectral) equivalent of any acoustic sound such
as speech. Auditory spectra from coded speech segments serve
as inputs to a second model. This model simulates the
information centre in the brain which performs the speech
quality assessment. [Continues.]
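Peripheral auditory models of this kind are typically built on a critical-band (Bark) frequency scale rather than a linear one. As an illustration, a widely used analytic approximation of the Hz-to-Bark mapping, due to Zwicker and Terhardt, is:

```python
import math

def hz_to_bark(f):
    """Zwicker & Terhardt's analytic approximation of the critical-band
    (Bark) scale: z = 13*atan(0.00076*f) + 3.5*atan((f/7500)^2)."""
    return 13.0 * math.atan(0.00076 * f) + 3.5 * math.atan((f / 7500.0) ** 2)

# The scale compresses high frequencies: equal steps in Hz shrink in Bark.
bands = [hz_to_bark(f) for f in (100, 500, 1000, 4000, 8000)]
```

Representing spectra on this scale means that one unit corresponds roughly to one critical band of the ear, which is why auditory spectra are a more faithful input to a quality-assessment model than raw FFT bins.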
High-fidelity imaging: the computational models of the human visual system in high dynamic range video compression, visible difference prediction and image processing
As new displays and cameras offer enhanced color capabilities, there is a need to extend the precision of digital content. High Dynamic Range (HDR) imaging encodes images and video with higher than normal bit-depth precision, enabling representation of the complete color gamut and the full visible range of luminance. This thesis addresses three problems of HDR imaging: the measurement of visible distortions in HDR images, lossy compression for HDR video, and artifact-free image processing.
To measure distortions in HDR images, we develop a visual difference predictor for HDR images that is based on a computational model of the human visual system. To address the problem of HDR image encoding and compression, we derive a perceptually motivated color space for HDR pixels that can efficiently encode all perceivable colors and distinguishable shades of brightness. We use the derived color space to extend the MPEG-4 video compression standard for encoding HDR movie sequences. We also propose a backward-compatible HDR MPEG compression algorithm that encodes both a low-dynamic-range and an HDR video sequence into a single MPEG stream.
Finally, we propose a framework for image processing in the contrast domain. The framework transforms an image into multi-resolution physical contrast images (maps), which are then rescaled in just-noticeable-difference (JND) units. The application of the framework is demonstrated with a contrast-enhancing tone mapping and a color-to-gray conversion that preserves color saliency.
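The idea of a JND-scaled encoding can be illustrated, in much simplified form, by a purely logarithmic luma: under Weber's law the eye resolves roughly constant relative contrast steps, so equal code steps should correspond to equal log-luminance steps. The luminance range and detection threshold below are illustrative assumptions, and the color space derived in the thesis is more refined than a pure log curve:

```python
import math

L_MIN, L_MAX = 1e-4, 1e8   # assumed visible luminance range, cd/m^2
WEBER = 0.01               # assumed ~1% just-noticeable relative contrast step

def luma(L):
    """Integer code under a purely logarithmic (Weber-law) model:
    consecutive codes differ by one assumed JND of relative contrast."""
    return round(math.log(L / L_MIN) / math.log(1.0 + WEBER))

# Number of codes needed to span the whole range in single-JND steps.
codes_needed = luma(L_MAX)
```

Under these assumptions the full visible range fits in well under 4096 codes, which suggests why a perceptually uniform encoding lets HDR pixels be carried in roughly 11-12 bits per color channel instead of floating point.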