
    Quality aspects of Internet telephony

    Internet telephony has had a tremendous impact on how people communicate. Many now maintain contact using some form of Internet telephony. Therefore, the motivation for this work has been to address the quality aspects of real-world Internet telephony for both fixed and wireless telecommunication. The focus has been on the quality aspects of voice communication, since poor quality often leads to user dissatisfaction. The scope of the work has been broad in order to address the main factors within IP-based voice communication. The first four chapters of this dissertation constitute the background material. The first chapter outlines where Internet telephony is deployed today. It also motivates the topics and techniques used in this research. The second chapter provides the background on Internet telephony, including signalling, speech coding and voice internetworking. The third chapter focuses solely on quality measures for packetised voice systems and, finally, the fourth chapter is devoted to the history of voice research. The appendix of this dissertation constitutes the research contributions. It includes an examination of the access network, focusing on how calls are multiplexed in wired and wireless systems. Subsequently, in the wireless case, we consider how to hand over calls from 802.11 networks to the cellular infrastructure. We then consider the Internet backbone, where most of our work is devoted to measurements specifically for Internet telephony. The applications of these measurements have been estimating telephony arrival processes, measuring call quality, and quantifying the trend in Internet telephony quality over several years. We also consider the end systems, since they are responsible for reconstructing a voice stream given loss and delay constraints. Finally, we estimate voice quality using the ITU proposal PESQ and the packet loss process. The main contribution of this work is a systematic examination of Internet telephony. We describe several methods to enable adaptable solutions for maintaining consistent voice quality. We have also found that relatively small technical changes can lead to substantial user quality improvements. A second contribution of this work is a suite of software tools designed to ascertain voice quality in IP networks. Some of these tools are in use within commercial systems today.
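
    The dissertation estimates voice quality from the packet loss process using PESQ. As a rough illustration of how a loss level can be mapped to a quality score, the sketch below instead uses the simplified ITU-T G.107 E-model; it is not the method used in this work, the delay impairment is ignored, and the codec constants are illustrative values roughly corresponding to G.711 with random loss.

        # Minimal sketch: mapping packet loss to an estimated listening-quality MOS
        # via the simplified ITU-T G.107 E-model. This is NOT the PESQ-based method
        # used in the dissertation; the constants below are illustrative
        # (Ie = 0, Bpl = 4.3, roughly G.711 with random loss) and the delay
        # impairment Id is assumed to be zero.

        def r_factor(packet_loss_pct, ie=0.0, bpl=4.3):
            ie_eff = ie + (95.0 - ie) * packet_loss_pct / (packet_loss_pct + bpl)
            return 93.2 - ie_eff

        def mos_from_r(r):
            if r <= 0:
                return 1.0
            if r >= 100:
                return 4.5
            return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

        for loss in (0.0, 1.0, 5.0, 10.0):
            print(f"{loss:4.1f}% loss -> MOS ~ {mos_from_r(r_factor(loss)):.2f}")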

    A MODEL FOR PREDICTING THE PERFORMANCE OF IP VIDEOCONFERENCING

    With the incorporation of free desktop videoconferencing (DVC) software on the majority of the world's PCs in recent years, there has inevitably been considerable interest in using DVC over the Internet. The growing popularity of DVC increases the need for multimedia quality assessment. However, the task of predicting perceived multimedia quality over Internet Protocol (IP) networks is complicated by the fact that the audio and video streams are susceptible to unique impairments due to the unpredictable nature of IP networks, different types of task scenario, different levels of complexity, and other related factors. To date, a standard consensus on defining IP media Quality of Service (QoS) has yet to be reached. The thesis addresses this problem by investigating a new approach to assess the quality of audio, video, and the overall audiovisual experience as perceived in low-cost DVC systems. The main aim of the thesis is to investigate current methods used to assess perceived IP media quality, and then to propose a model which will predict the quality of the audiovisual experience from prevailing network parameters. This thesis investigates the effects of various traffic conditions, such as packet loss, jitter, and delay, and other factors that may influence end-user acceptance when low-cost DVC is used over the Internet. It also investigates the interaction effects between the audio and video media, and the issues involving lip synchronisation error. The thesis provides empirical evidence that the subjective mean opinion score (MOS) of the perceived multimedia quality is unaffected by lip synchronisation error in low-cost DVC systems. The data-gathering approach advocated in this thesis involves both field and laboratory trials, enabling comparisons between classroom-based experiments and real-world environments, and providing real-world confirmation of the bench tests. The subjective test method was employed since it has proven to be more robust and suitable for these studies than objective testing techniques. The MOS results, and the number of observations obtained, have enabled a set of criteria to be established that can be used to determine the acceptable QoS for given network conditions and task scenarios. Based upon these comprehensive findings, the final contribution of the thesis is the proposal of a new adaptive architecture intended to enable the performance of a particular IP-based DVC session to be predicted for a given network condition.
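
    As a simple illustration of predicting perceived quality from prevailing network parameters, the sketch below fits a linear model of MOS against packet loss, jitter and delay. It is not the model proposed in the thesis; the network conditions, subjective scores and resulting coefficients are purely hypothetical placeholders.

        # Illustrative sketch: fitting a linear MOS predictor to network parameters
        # (packet loss %, jitter ms, one-way delay ms). The thesis derives its own
        # model from subjective trials; the data below is purely hypothetical.
        import numpy as np

        conditions = np.array([
            [0.0,  5.0,  50.0],   # loss %, jitter ms, delay ms
            [1.0, 20.0, 100.0],
            [3.0, 40.0, 200.0],
            [5.0, 60.0, 300.0],
        ])
        observed_mos = np.array([4.3, 3.8, 3.1, 2.4])  # hypothetical subjective MOS

        # least-squares fit of MOS ~ b0 + b1*loss + b2*jitter + b3*delay
        X = np.hstack([np.ones((len(conditions), 1)), conditions])
        coeffs, *_ = np.linalg.lstsq(X, observed_mos, rcond=None)

        def predict_mos(loss, jitter, delay):
            return float(coeffs @ np.array([1.0, loss, jitter, delay]))

        print(predict_mos(2.0, 30.0, 150.0))  # predicted MOS for an unseen condition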

    Analysis of generic discrete-time buffer models with irregular packet arrival patterns

    The quality of the multimedia services offered over today's broadband communication networks is determined to a large extent by the performance of the buffers located in the various network elements (such as switching nodes, routers, modems, access multiplexers, network interfaces, ...). In this dissertation we study the performance of such a buffer by means of a suitable stochastic discrete-time queueing model, in which we consider the case of multiple output channels and (not necessarily identical) packet sources, and in which the packet transmission times initially equal one slot. The irregular, or correlated, nature of the packet stream generated by a source is characterised by means of a general D-BMAP (discrete-batch Markovian arrival process), which creates a generic framework for describing a superposition of such information streams. At a later stage we extend our study to the case of transmission times with a general distribution, restricting ourselves to a buffer with a single output channel. The analysis of these queueing models is carried out mainly by means of a particular mathematical-analytical approach that makes extensive use of probability generating functions, and which allows the various performance measures to be expressed (more or less explicitly) as functions of the system parameters. This in turn results in efficient and accurate calculation algorithms for these quantities, which can be implemented in a relatively straightforward manner.
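
    As a minimal illustration of the kind of model analysed, restricted to the classical special case of a single output channel, single-slot transmission times and uncorrelated (i.i.d.) arrivals per slot rather than the general multi-channel D-BMAP setting treated in the dissertation, the buffer content u_k at the start of slot k satisfies a simple system equation, and its steady-state probability generating function and mean follow directly:

        \[
          u_{k+1} = (u_k - 1)^{+} + a_{k+1}, \qquad
          U(z) = (1-\rho)\,\frac{(z-1)\,A(z)}{z - A(z)}, \qquad \rho = A'(1) < 1,
        \]
        \[
          \operatorname{E}[u] = U'(1) = \rho + \frac{A''(1)}{2\,(1-\rho)},
        \]

    where a_k denotes the number of packet arrivals during slot k and A(z) is its probability generating function; the performance measures in the correlated, multi-channel setting follow from similar, though considerably more involved, manipulations of the generating functions.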

    Application of learning algorithms to traffic management in integrated services networks.

    SIGLE. Available from British Library Document Supply Centre (DSC:DXN027131) / BLDSC - British Library Document Supply Centre, GB, United Kingdom.

    VIRTUAL MEMORY ON A MANY-CORE NOC

    Many-core devices are likely to become increasingly common in real-time and embedded systems as computational demands grow and as expectations for higher performance can generally only be met by increasing core numbers rather than relying on higher clock speeds. Network-on-chip (NoC) devices, where multiple cores share a single slice of silicon and employ packetised communications, are a widely-deployed many-core option for system designers. As NoCs are expected to run larger and more complex programs, the small amount of fast, on-chip memory available to each core is unlikely to be sufficient for all but the simplest of tasks, and it is necessary to find an efficient, effective, and time-bounded means of accessing resources stored in off-chip memory, such as DRAM or Flash storage. The abstraction of paged virtual memory is a familiar technique for managing similar tasks in general computing, but it has often been shunned by real-time developers because of concerns about time predictability. We show it can be a poor choice for a many-core NoC system as, unmodified, it typically uses page sizes optimised for interaction with spinning disks rather than solid-state media, and transports significant volumes of subsequently unused data across already congested links. In this work we outline and simulate an efficient partial paging algorithm in which only those memory resources that are locally accessed are transported between global and local storage. We further show that smaller page sizes add to efficiency. We examine the factors that lead to timing delays in such systems, and show that we can predict worst-case execution times at even safety-critical thresholds by using statistical methods from extreme value theory. We also show that these results are applicable to systems with a variety of connections to memory.
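
    The worst-case timing prediction mentioned above can be pictured with a generic extreme value theory recipe: collect execution-time or memory-access latency measurements, take block maxima, fit a generalised extreme value distribution, and read off a quantile at a safety-critical exceedance probability. The sketch below follows that generic recipe with synthetic data; it is not the exact procedure, threshold, or data used in this work.

        # Illustrative sketch of measurement-based probabilistic WCET estimation
        # using block maxima and a GEV fit (extreme value theory). Generic recipe
        # only; the measurement data here is synthetic.
        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(0)
        # synthetic per-access latencies (cycles), stand-in for real measurements
        samples = rng.gamma(shape=9.0, scale=120.0, size=100_000)

        # block maxima: split the trace into blocks and keep the worst case of each
        block = 1_000
        maxima = samples[: len(samples) // block * block].reshape(-1, block).max(axis=1)

        # fit a generalised extreme value distribution to the block maxima
        c, loc, scale = genextreme.fit(maxima)

        # pWCET estimate: latency exceeded with probability 1e-9 per block
        pwcet = genextreme.isf(1e-9, c, loc=loc, scale=scale)
        print(f"pWCET estimate (exceedance 1e-9 per block): {pwcet:.0f} cycles")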

    Deployable transport services for low-latency multimedia applications

    Low-latency multimedia applications generate a significant and growing majority of all Internet traffic. These applications are characterised by tight bounds on end-to-end latency that typically range from tens to a few hundred milliseconds. Operating within these bounds is challenging, with the best-effort delivery service of the Internet giving rise to unreliable delivery with unpredictable latency. The way in which the upper layers of the protocol stack manage this unreliability and unpredictability can greatly impact the quality-of-experience that applications can provide. In this thesis, I focus on the services and abstractions that the transport layer provides to applications. The delivery model provided by the transport layer can have a significant impact on the quality-of-experience that can be provided by the application. Reliability and order, for example, introduce delay while packet loss is detected and the lost data retransmitted. This enforces a particular trade-off between latency, loss, and application quality-of-experience, with reliability taking priority. This trade-off is not suitable for low-latency multimedia applications, which prefer predictable and bounded latency to strict reliability and order. No widely-deployed transport protocol provides a delivery model that fully supports low-latency applications: UDP provides no reliability guarantees, while TCP enforces reliability. Implementing a protocol that does support these applications is difficult: ossification restricts protocols to appearing as UDP or TCP on the wire. To meet both challenges -- of better supporting low-latency multimedia applications, and of deploying a new protocol within an ossified transport layer -- I propose TCP Hollywood, a protocol that maintains wire compatibility with TCP, while exposing the trade-off between reliability and delay such that applications can improve their quality-of-experience. I show that TCP Hollywood is deployable on the public Internet, and that it achieves its goal of improving support for low-latency multimedia applications. I conclude by evaluating the API changes that are required to support TCP Hollywood, distilling the protocol into the set of transport services that it provides.
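
    The trade-off that such a transport exposes can be pictured with a small, purely illustrative helper: a lost media message is only worth retransmitting if it can still arrive before its playout deadline. This is a generic sketch of the reliability-versus-delay decision, not the actual TCP Hollywood API.

        # Illustrative only: the latency/reliability trade-off that a transport like
        # TCP Hollywood can expose to the application. A lost media message is only
        # worth (re)sending if it can still arrive before its playout deadline.
        # Generic helper, not the actual TCP Hollywood API.

        def worth_retransmitting(loss_detected_ms, playout_deadline_ms, one_way_delay_ms):
            """Return True if a retransmission could still arrive in time to be played."""
            expected_arrival = loss_detected_ms + one_way_delay_ms
            return expected_arrival < playout_deadline_ms

        # message due for playout at t=150 ms, 40 ms path delay:
        print(worth_retransmitting(120, 150, 40))   # False: retransmission would arrive too late
        print(worth_retransmitting(100, 150, 40))   # True: it can still make the deadline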

    Improved learning automata applied to routing in multi-service networks

    Multi-service communications networks are generally designed, provisioned and configured based on the source-destination user demands expected to occur over a recurring time period. However, because network users' actions are non-deterministic, actual user demands will vary from those expected, potentially causing some network resources to be under-provisioned and others possibly over-provisioned. As actual user demands vary over the recurring time period from those expected, so the status of the various shared network resources may also vary. This high degree of uncertainty necessitates using adaptive resource allocation mechanisms to share the finite network resources more efficiently, so that more of the actual user demands may be accommodated on the network. The overhead of these adaptive resource allocation mechanisms must be low in order to scale for use in large networks carrying many source-destination user demands. This thesis examines the use of stochastic learning automata for the adaptive routing problem (these being adaptive, distributed, and simple in implementation and operation) and seeks to improve their weakness of slow convergence whilst maintaining their strength of subsequent near-optimal performance. Firstly, current reinforcement algorithms (the part causing the automaton to learn) are examined for applicability, and, contrary to the literature, the discretised schemes are found in general to be unsuitable. Two algorithms are chosen (one with fast convergence, the other with good subsequent performance) and are improved by automatically adapting the learning rates and automatically switching between the two algorithms. Both novel methods use the local entropy of the action probabilities to determine the convergence state. However, when the convergence speed and blocking probability are compared to a bandwidth-based dynamic link-state shortest-path algorithm, the latter is found to be superior. A novel re-application of learning automata to the routing problem is therefore proposed: using link utilisation levels instead of call acceptance or packet delay. Learning automata now return a lower blocking probability than the dynamic shortest-path based scheme under realistic loading levels, but still suffer from a significant number of convergence iterations. Therefore, the final improvement is to combine both learning automata and shortest-path concepts to form a hybrid algorithm. The resulting blocking probability of this novel routing algorithm is superior to either algorithm alone, even when using trend user demands.
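
    As a rough illustration of the underlying machinery, the sketch below implements a textbook linear reward-inaction (L_RI) automaton for route selection, together with the entropy of its action probabilities of the kind used here as a convergence indicator. It is not the improved or hybrid algorithms proposed in the thesis, and the per-route success rates are hypothetical.

        # Minimal sketch of a linear reward-inaction (L_RI) learning automaton for
        # route selection, plus the entropy of the action probabilities as a
        # convergence indicator. Textbook scheme, not the thesis's improved
        # algorithms; the reward rule and success rates below are illustrative.
        import math
        import random

        class RouteAutomaton:
            def __init__(self, n_routes, learning_rate=0.05):
                self.p = [1.0 / n_routes] * n_routes  # action probabilities
                self.lam = learning_rate

            def choose(self):
                return random.choices(range(len(self.p)), weights=self.p)[0]

            def update(self, chosen, rewarded):
                # L_RI: shift probability mass toward the chosen route on reward,
                # leave probabilities unchanged on penalty ("inaction").
                if not rewarded:
                    return
                for j in range(len(self.p)):
                    if j == chosen:
                        self.p[j] += self.lam * (1.0 - self.p[j])
                    else:
                        self.p[j] *= (1.0 - self.lam)

            def entropy(self):
                # low entropy indicates the automaton has (nearly) converged
                return -sum(q * math.log(q) for q in self.p if q > 0.0)

        automaton = RouteAutomaton(n_routes=3)
        success_rate = [0.6, 0.9, 0.5]  # hypothetical per-route acceptance rates
        for _ in range(2000):
            r = automaton.choose()
            automaton.update(r, rewarded=random.random() < success_rate[r])
        print(automaton.p, automaton.entropy())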

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention to overcome the remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary. This book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and also to approach the challenges behind ensuring the best mediated experiences, by providing adequate synchronization between the media elements that constitute these experiences.

    Large scale collaborative virtual environments

    [N.B. Pagination of eThesis differs from printed thesis. The content is identical.] This thesis is concerned with the theory, design, realisation and evaluation of large-scale collaborative virtual environments. These are 3D audio-graphical computer generated environments which actively support collaboration between potentially large numbers of distributed users. The approach taken in this thesis reflects both the sociology of interpersonal communication and the management of communication in distributed systems. The first part of this thesis presents and evaluates MASSIVE-1, a virtual reality tele-conferencing system which implements the spatial model of interaction of Benford and Fahlén. The evaluation of MASSIVE-1 has two components: a user-oriented evaluation of the system’s facilities and the underlying awareness model; and a network-oriented evaluation and modelling of the communication requirements of the system with varying numbers of users. This thesis proposes the “third party object” concept as an extension to the spatial model of interaction. Third party objects can be used to represent the influence of context or environment on interaction and awareness, for example, the effects of boundaries, rooms and crowds. Third party objects can also be used to introduce and manage dynamic aggregates or abstractions within the environments (for example abstract overviews of distant crowds of participants). The third party object concept is prototyped in a second system, MASSIVE-2. MASSIVE-2 is also evaluated in two stages. The first is a user-oriented reflection on the capabilities and effectiveness of the third party concept as realised in the system. The second stage of the evaluation develops a predictive model of total and per-participant network bandwidth requirements for systems of this kind. This is used to analyse a number of design decisions relating to this type of system, including the use of multicasting and the form of communication management adopted.
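
    The spatial model of interaction that MASSIVE-1 implements can be sketched roughly as follows: one participant's awareness of another is derived from the observer's focus and the observed participant's nimbus. The distance-based falloff and the min() combination used below are illustrative choices for a toy example, not the exact rules used in MASSIVE.

        # Minimal sketch of the spatial model of interaction underlying MASSIVE:
        # one participant's awareness of another combines the observer's focus
        # with the observed party's nimbus. The linear falloff and min() rule
        # are illustrative choices, not the exact MASSIVE implementation.
        from dataclasses import dataclass
        import math

        @dataclass
        class Participant:
            x: float
            y: float
            focus_radius: float   # how far this participant is "looking"
            nimbus_radius: float  # how far this participant projects its presence

        def _level(distance, radius):
            """Linear falloff from 1.0 at the participant to 0.0 at the radius."""
            return max(0.0, 1.0 - distance / radius)

        def awareness(observer: Participant, observed: Participant) -> float:
            d = math.hypot(observer.x - observed.x, observer.y - observed.y)
            return min(_level(d, observer.focus_radius), _level(d, observed.nimbus_radius))

        a = Participant(0, 0, focus_radius=10, nimbus_radius=8)
        b = Participant(6, 0, focus_radius=5, nimbus_radius=12)
        print(awareness(a, b))  # a's awareness of b
        print(awareness(b, a))  # generally asymmetric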