Quality aspects of Internet telephony
Internet telephony has had a tremendous impact on how people communicate.
Many now maintain contact using some form of Internet telephony.
Therefore the motivation for this work has been to address the quality aspects
of real-world Internet telephony for both fixed and wireless telecommunication.
The focus has been on the quality aspects of voice communication,
since poor quality often leads to user dissatisfaction. The scope of the work
has been broad in order to address the main factors within IP-based voice
communication.
The first four chapters of this dissertation constitute the background
material. The first chapter outlines where Internet telephony is deployed
today. It also motivates the topics and techniques used in this research.
The second chapter provides the background on Internet telephony including
signalling, speech coding and voice internetworking. The third chapter
focuses solely on quality measures for packetised voice systems and finally
the fourth chapter is devoted to the history of voice research.
The appendix of this dissertation constitutes the research contributions.
It includes an examination of the access network, focusing on how calls are
multiplexed in wired and wireless systems. Subsequently, in the wireless
case, we consider how to hand over calls from 802.11 networks to the cellular
infrastructure. We then consider the Internet backbone where most of our
work is devoted to measurements specifically for Internet telephony. The
applications of these measurements have been estimating telephony arrival
processes, measuring call quality, and quantifying the trend in Internet telephony
quality over several years. We also consider the end systems, since
they are responsible for reconstructing a voice stream given loss and delay
constraints. Finally we estimate voice quality using the ITU proposal PESQ
and the packet loss process.
The main contribution of this work is a systematic examination of Internet
telephony. We describe several methods to enable adaptable solutions
for maintaining consistent voice quality. We have also found that relatively
small technical changes can lead to substantial user quality improvements.
A second contribution of this work is a suite of software tools designed to
ascertain voice quality in IP networks. Some of these tools are in use within
commercial systems today
A novel multimedia adaptation architecture and congestion control mechanism designed for real-time interactive applications
The increasing use of interactive multimedia applications over the Internet has created a problem of congestion, because a majority of these applications do not respond to congestion indicators. This leads to resource starvation for responsive flows and, ultimately, to excessive delay and losses for all flows, and therefore to loss of quality. The result is unfair sharing of network resources and an increased risk of network "congestion collapse".
Current congestion control mechanisms such as "TCP-Friendly Rate Control" (TFRC) have been able to achieve a "fair share" of network resources when competing with responsive flows such as TCP, but TFRC's method of congestion response (i.e. reducing the Packet Rate) is not well matched to interactive multimedia applications, which maintain a fixed Frame Rate. This mismatch of the two rates (Packet Rate and Frame Rate) leads to buffering of frames at the Sender Buffer, resulting in delay and loss, and an unacceptable reduction of quality or complete loss of service for the end-user.
To address this issue, this thesis proposes a novel Congestion Control Mechanism, referred to as "TCP-Friendly Rate Control - Fine Grain Scalable" (TFGS), for interactive multimedia applications.
This new approach allows multimedia frames (data) to be sent as soon as they are generated, so that they can reach the destination as quickly as possible and provide an isochronous interactive service. This is done by maintaining the Packet Rate of the Congestion Control Mechanism (CCM) at a level equivalent to the Frame Rate of the multimedia encoder. The response to congestion is to truncate the Packet Size, hence reducing the overall bitrate of the multimedia stream. This functionality of the Congestion Control Mechanism is referred to as Packet Size Truncation (PST), and takes advantage of adaptive multimedia encoding, such as Fine Grain Scalable (FGS) coding, where the multimedia frame is encoded in order of significance, from most to least significant bits. The Multimedia Adaptation Manager (MAM) truncates the multimedia frame to the size indicated by the Packet Size Truncation function of the CCM, accurately mapping user demand to the available network resources. Additionally, Fine Grain Scalable encoding can offer scalability at byte-level granularity, providing a true match to the available network resources.
This approach has the benefit of achieving a "fair share" of network resources when competing with responsive flows (similar to the TFRC CCM), but it also provides an isochronous service, which is of crucial benefit to real-time interactive services. Furthermore, results illustrate that an increased number of interactive multimedia flows (such as voice) can be carried over congested networks whilst maintaining a quality level equivalent to that of a standard landline telephone. This is because the loss and delay arising from the buffering of frames at the Sender Buffer are completely removed. Packets sent maintain a fixed inter-packet-gap spacing (IPGS), so the majority of packets arrive at the receiving end at tight time intervals. This avoids the need for large Playout (de-jitter) Buffer sizes and adaptive Playout Buffer configurations, and as a result reduces delay and improves the interactivity and Quality of Experience (QoE) of the multimedia application
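The Packet Size Truncation idea lends itself to a short sketch: pin the packet rate to the encoder's frame rate, and let the congestion controller's allowed bitrate decide how much of each FGS-encoded frame survives. This is an illustrative reconstruction under assumed names and numbers, not the thesis implementation.

```python
# Hypothetical sketch of Packet Size Truncation (PST): the packet rate is
# fixed at the encoder frame rate, so congestion response shrinks the
# packet (frame) size instead of the packet rate. Values are illustrative.

def truncated_frame_size(allowed_bitrate_bps, frame_rate_hz,
                         full_frame_bytes, min_frame_bytes=20):
    """Bytes of an FGS frame to keep, given the CCM's allowed bitrate.

    Because FGS orders the frame from most to least significant bits,
    truncating the tail degrades quality gracefully rather than breaking
    the frame.
    """
    budget = int(allowed_bitrate_bps / 8 / frame_rate_hz)  # bytes per packet
    return max(min_frame_bytes, min(full_frame_bytes, budget))

# 50 frames/s and a 64 kbit/s allowance give a 160-byte budget per packet
size = truncated_frame_size(64_000, 50, full_frame_bytes=400)
```

Here the congestion controller only ever changes `allowed_bitrate_bps`; the sender still emits exactly one packet per frame interval, which is what preserves the isochronous service.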
System-level analysis of the tradeoffs between power saving and capacity/QoS with DRX in LTE
In an LTE cell, Discontinuous Reception (DRX) allows the central base station to configure User Equipment for periodic wake/sleep cycles, so as to save energy. Several parameters are associated with DRX operations, allowing for optimal performance with different traffic profiles (e.g., CBR-like, bursty, periodic arrivals of variable-sized packets, etc.). This work investigates how to configure these parameters and explores the tradeoff between power saving, on one side, and per-user QoS and cell capacity, on the other. Unlike previous work, mostly based on analytical models neglecting key aspects of LTE, our evaluation is carried out using a fully-fledged packet simulator. This allows us to discover previously unknown relationships and to propose configuration guidelines for operators
Secure and robust multi-constrained QoS aware routing algorithm for VANETs
Secure QoS routing algorithms are a fundamental part of wireless networks that aim to provide services with QoS and security guarantees. In Vehicular Ad hoc Networks (VANETs), vehicles perform routing functions and at the same time act as end systems; thus, routing control messages are transmitted unprotected over wireless channels. The QoS of the entire network could be degraded by an attack on the routing process and by manipulation of the routing control messages. In this paper, we propose a novel secure and reliable multi-constrained QoS-aware routing algorithm for VANETs. We employ the Ant Colony Optimisation (ACO) technique to compute feasible routes in VANETs subject to multiple QoS constraints determined by the data traffic type. Moreover, we extend the VANET-oriented Evolving Graph (VoEG) model to perform plausibility checks on the routing control messages exchanged among vehicles. Simulation results show that the QoS can be guaranteed while applying security mechanisms to ensure a reliable and robust routing service
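The ACO route computation can be sketched in miniature: an ant at a node picks its next hop with probability proportional to pheromone^alpha x heuristic^beta, and links on feasible QoS paths are reinforced while all links evaporate. This is a generic ACO sketch with invented names and parameters, not the paper's actual algorithm.

```python
# Generic Ant Colony Optimisation building blocks (illustrative only).
import random

def choose_next_hop(pheromone, heuristic, alpha=1.0, beta=2.0, rng=random):
    """Pick a neighbour with probability ~ pheromone^alpha * heuristic^beta.

    pheromone/heuristic: dicts mapping neighbour -> positive value; the
    heuristic would encode per-link QoS desirability (delay, reliability).
    """
    weights = {j: (pheromone[j] ** alpha) * (heuristic[j] ** beta)
               for j in pheromone}
    total = sum(weights.values())
    r = rng.random() * total
    acc = 0.0
    for j, w in weights.items():
        acc += w
        if r <= acc:
            return j
    return j  # floating-point edge case: return the last neighbour

def deposit(pheromone, path, quality, rho=0.1):
    """Evaporate all links, then reinforce the links of a feasible path
    in proportion to its QoS quality."""
    for j in pheromone:
        pheromone[j] *= (1.0 - rho)
    for j in path:
        pheromone[j] = pheromone.get(j, 0.0) + quality
```

Over repeated iterations, evaporation forgets stale routes (useful under VANET mobility) while deposits concentrate probability on links that keep satisfying the multiple QoS constraints.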
Enhancement of perceived quality of service for voice over internet protocol systems
Voice over Internet Protocol (VoIP) applications are becoming more and more popular in
the telecommunication market. Packet-switched VoIP systems have many technical advantages
over the conventional Public Switched Telephone Network (PSTN), including efficient and flexible
use of bandwidth, lower cost and enhanced security.
However, due to the IP network's "Best Effort" nature, voice quality is not naturally guaranteed
in VoIP services. In fact, most current VoIP services cannot provide as good a voice
quality as the PSTN. IP network impairments such as packet loss, delay and jitter affect perceived
speech quality, as do application-layer impairment factors such as codec rate and audio features.
Current perceived Quality of Service (QoS) methods are mainly designed to be used
in a PSTN/TDM environment, and their performance in a VoIP environment is unknown. It is a
challenge to measure perceived speech quality correctly in a VoIP system and to enhance user-perceived
speech quality for VoIP systems.
The main goal of this project is to evaluate the accuracy of the existing ITU-T speech quality
measurement method (Perceptual Evaluation of Speech Quality - PESQ) in mobile wireless
systems in the context of VoIP, and to develop novel and efficient methods to enhance user-perceived
speech quality for emerging VoIP services, especially in mobile VoIP environments.
The main contributions of the thesis are threefold:
(1) A new discovery of PESQ errors in mobile VoIP environments. A detailed investigation
of PESQ performance in a mobile VoIP environment was undertaken; it included setting up a
PESQ performance evaluation platform and testing over 1800 mobile-to-mobile and mobile-to-PSTN
calls over a period of three months. The accuracy of the PESQ algorithm was
investigated, and the main problems causing inaccurate PESQ scores (improper time alignment in
the PESQ algorithm) were discovered. Calibration issues for safe and proper PESQ testing
in a mobile environment were also discussed in the thesis.
(2) A new, simple-to-use VoIP jitter buffer algorithm. This was developed and implemented
in a commercial mobile handset. The algorithm, called "Play Late Algorithm", adaptively alters
the playout delay inside a speech talkspurt without introducing unnecessary extra end-to-end
delay. It can be used as a front-end to conventional static or adaptive jitter buffer algorithms
to provide improved performance. Results show that the proposed algorithm can increase user
perceived quality without consuming too much processing power when tested in live wireless
VoIP networks.
(3) A new QoS enhancement scheme. The new scheme combines the strengths of adaptive
codec bit rate (i.e. the eight AMR bit-rate modes) and speech priority marking (i.e. giving high priority
to the beginning of a voiced segment). The results gathered on a simulation and emulation test
platform show that the combined method provides better user-perceived speech quality than
separate adaptive sender bit rate or packet priority marking methods
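As a rough illustration of the "play late" idea in contribution (2), the per-packet decision might look like the sketch below: a packet arriving slightly after its scheduled playout instant is played late (stretching the playout point inside the talkspurt) instead of being discarded. The threshold and structure are assumptions for illustration, not the algorithm shipped in the handset.

```python
# Hedged sketch of a "play late" style decision for one received packet.
# max_late_ms bounds how far the playout point may drift inside a
# talkspurt; the 20 ms default is an invented placeholder.

def playout_time(arrival_ms, scheduled_ms, max_late_ms=20):
    """Return (play_at_ms, dropped) for one packet."""
    if arrival_ms <= scheduled_ms:
        return scheduled_ms, False   # on time: play as scheduled
    if arrival_ms - scheduled_ms <= max_late_ms:
        return arrival_ms, False     # slightly late: play late, don't drop
    return scheduled_ms, True        # far too late: drop the packet
```

Because the stretch happens only when a packet would otherwise be lost, no extra end-to-end delay is added on the common on-time path, which matches the property claimed for the algorithm.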
A comprehensive simulation analysis of LTE Discontinuous Reception (DRX)
In an LTE cell, Discontinuous Reception (DRX) allows
the central base station to configure User Equipment for
periodic wake/sleep cycles, so as to save energy. DRX operations
depend on several parameters, which can be tuned to achieve optimal
performance with different traffic profiles (i.e., CBR vs.
bursty, periodic vs. sporadic, etc.). This work investigates how to
configure these parameters and explores the trade-off between
power saving, on one side, and per-user QoS, on the other. Unlike
previous work, chiefly based on analytical models neglecting key
aspects of LTE, our evaluation is carried out via simulation. We
use a fully-fledged packet simulator, which includes models of all
the protocol stack, the applications and the relevant QoS metrics,
and employ factorial analysis to assess the impact of the many
simulation factors in a statistically rigorous way. This allows us
to analyze a wider spectrum of scenarios, assessing the interplay
of the LTE mechanisms and DRX, and to derive configuration
guidelines
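The power-saving/latency tradeoff these simulations explore can be caricatured with a two-state model: the UE draws full power during the On Duration and near-zero power while asleep, while the worst-case wake-up latency grows with the sleep span. The power figures below are invented placeholders; real numbers depend on the UE hardware.

```python
# Toy two-state DRX model (illustrative values, not LTE-accurate).

def drx_tradeoff(on_ms, cycle_ms, p_on_mw=100.0, p_sleep_mw=1.0):
    """Return (average_power_mw, worst_case_latency_ms) for one DRX cycle.

    on_ms: On Duration per cycle; cycle_ms: DRX cycle length.
    Worst-case latency occurs when a downlink packet arrives just after
    the On Duration ends and must wait for the next wake-up.
    """
    duty = on_ms / cycle_ms
    avg_power = duty * p_on_mw + (1.0 - duty) * p_sleep_mw
    worst_latency = cycle_ms - on_ms
    return avg_power, worst_latency

# 10 ms On Duration in a 320 ms cycle
power, latency = drx_tradeoff(10, 320)
```

Lengthening the cycle pushes average power toward the sleep floor but linearly worsens the latency bound, which is exactly the tension the configuration guidelines have to resolve per traffic profile.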
Delay aspects in Internet telephony
In this work, we address the transport of high quality voice over the Internet with a particular concern for delays. Transport of interactive audio over IP networks often suffers from packet loss and variations in the network delay (jitter). Forward Error Correction (FEC) mitigates the impact of packet loss at the expense of an increase of the end-to-end delay and the bit rate requirement of an audio source. Furthermore, adaptive playout buffer algorithms at the receiver compensate for jitter, but again this may come at the expense of additional delay. As a consequence, existing error control and playout adjustment schemes often have end-to-end delays exceeding 150 ms, which significantly impairs the perceived quality, while it would be more important to keep delay low and accept some small loss. We develop a joint playout buffer and FEC adjustment scheme for Internet Telephony that incorporates the impact of end-to-end delay on perceived audio quality. To this end, we take a utility function approach. We represent the perceived audio quality as a function of both the end-to-end delay and the distortion of the voice signal. We develop a joint rate/error/playout delay control algorithm which optimizes this measure of quality and is TCP-Friendly. It uses a channel model for both loss and delay. We validate our approach by simulation and show that (1) our scheme allows a source to increase its utility by avoiding increasing the playout delay when it is not really necessary and (2) it provides better quality than the adjustment schemes for playout and FEC that were previously published. We use this scheme in the framework of non-elevated services which allow applications to select a service class with reduced end-to-end delay at the expense of a higher loss rate. The tradeoff between delay and loss is not straightforward since audio sources may be forced to compensate the additional losses by more FEC and hence more delay. 
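The utility-function approach can be sketched as follows: given a sample of recent network delays, score each candidate playout delay by trading the quality lost to added delay against the quality lost to late packets, and pick the maximiser. The linear penalty forms and constants below are invented stand-ins for the perceptual model used in the work.

```python
# Illustrative utility-driven playout-delay selection (all constants invented).

def late_loss(playout_ms, delays_ms):
    """Fraction of packets whose network delay exceeds the playout delay."""
    return sum(d > playout_ms for d in delays_ms) / len(delays_ms)

def utility(playout_ms, delays_ms, delay_penalty=0.004, loss_penalty=2.0):
    """Perceived quality falls linearly with delay and with late-packet loss."""
    return (1.0 - delay_penalty * playout_ms
            - loss_penalty * late_loss(playout_ms, delays_ms))

def best_playout(delays_ms, candidates):
    """Choose the candidate playout delay maximising the utility."""
    return max(candidates, key=lambda p: utility(p, delays_ms))
```

With one large outlier delay in the sample, the maximiser settles on a moderate playout delay rather than chasing the outlier, mirroring the observation that increasing playout delay is often not worth the quality cost.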
We show that the use of non-elevated services can lead to quality improvements, but that the choice of service depends on network conditions and on the importance that users attach to delay. Based on this observation, we propose an adaptive service choosing algorithm that allows audio sources to choose in real-time the service providing the highest audio quality. In addition, when used over the standard IP best effort service, an audio source should also control its rate in order to react to network congestion and to share the bandwidth in a fair way. Current congestion control mechanisms are based on packets (i.e., they aim to reduce or increase the number of packets sent per time interval to adjust to the current level of congestion in the network). However, voice is an inelastic traffic where packets are generated at regular intervals but packet size varies with the codec that is used. Therefore, standard congestion control is not directly applicable to this type of traffic. We present three alternative modifications to equation based congestion control protocols and evaluate them through mathematical analysis and network simulation
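One flavour of the congestion-control modification described above, adjusting packet size rather than packet rate, can be sketched with the simplified TFRC formula X = s / (R * sqrt(2p/3)): compute the TCP-friendly share assuming the codec's maximum packet size, then shrink the actual packet to fit the fixed packet rate. The formula is the standard simplified one (omitting the timeout term); the constants and function names are illustrative, not the thesis's exact scheme.

```python
# Sketch: keep the voice packet rate fixed and let the TCP-friendly
# equation set the packet *size*. Illustrative only.
import math

def tfrc_rate_bps(packet_bytes, rtt_s, loss_rate):
    """Simplified TCP-friendly throughput X = s / (R * sqrt(2p/3)), in bits/s."""
    return 8 * packet_bytes / (rtt_s * math.sqrt(2 * loss_rate / 3))

def packet_size_for_fixed_rate(packet_rate_hz, rtt_s, loss_rate,
                               max_bytes=160, min_bytes=10):
    """Packet size (bytes) a fixed-rate source may use and stay TCP-friendly.

    The fair share is computed with the codec's maximum packet size, then
    divided across the fixed number of packets per second.
    """
    allowed_bps = tfrc_rate_bps(max_bytes, rtt_s, loss_rate)
    s = int(allowed_bps / (8 * packet_rate_hz))
    return max(min_bytes, min(max_bytes, s))
```

Under light loss the source keeps its full codec packet size; as loss grows, packets shrink (e.g. by choosing a lower-rate codec mode) while the packet rate, and hence the speech frame cadence, never changes.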
Multimedia delivery in the future internet
The term "Networked Media" implies that all kinds of media, including text, images, 3D graphics, audio
and video, are produced, distributed, shared, managed and consumed on-line through various networks,
like the Internet, fibre, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white
paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the
challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted
with a bewildering range of media, services and applications and of technological innovations concerning
media formats, wireless networks, terminal types and capabilities. And there is little evidence that the pace
of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more
than 100 million users have downloaded at least one (multi)media file and over 47 million of them do so
regularly, searching in more than 160 exabytes of content. In the near future these numbers are expected
to rise exponentially. Internet content is expected to increase by at least a factor of 6, rising
to more than 990 exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged
that in a near- to mid-term future, the Internet will provide the means to share and distribute (new)
multimedia content and services with superior quality and striking flexibility, in a trusted and personalized
way, improving citizens' quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer
in-the-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content as well as
community networks and the use of peer-to-peer (P2P) overlays are expected to generate new models of
interaction and cooperation, and be able to support enhanced perceived quality-of-experience (PQoE) and
innovative applications "on the move", like virtual collaboration environments, personalised services/
media, virtual sport groups, on-line gaming, edutainment. In this context, the interaction with content
combined with interactive/multimedia search capabilities across distributed repositories, opportunistic P2P
networks and the dynamic adaptation to the characteristics of diverse mobile terminals are expected to
contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects, in Framework Program 6 (FP6)
and Framework Program 7 (FP7), a group of experts and technology visionaries have voluntarily
contributed in this white paper aiming to describe the status, the state-of-the art, the challenges and the way
ahead in the area of Content Aware media delivery platforms