
    Application-Level QoS: Improving video conferencing quality through sending the best packet next

    In a traditional network stack, data from an application is transmitted in the order in which it is received. An algorithm is proposed in which the transport layer uses per-packet priority and expiry-time information to reorder or discard packets at transmission time, optimising the use of available bandwidth. In video conferencing this can be used to prioritise the most important data. The algorithm is implemented as an interface to the Datagram Congestion Control Protocol (DCCP), tested with traffic modelled on video conferencing software, and compared against unmodified DCCP. The results show that video conferencing can be improved during periods of congestion: substantially more audio packets arrive on time with the algorithm, which leads to higher-quality video conferencing. In many cases the video packet arrival rate also increases, and adopting the algorithm yields improvements over the unmodified DCCP queuing. The proposed algorithm is implemented on the server only, so benefits are obtained without requiring any changes to the client.
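    As an illustration of the transmit-time decision described above, a minimal sketch of a priority/expiry packet queue is given below; the field names and data structure are assumptions for illustration, not the authors' DCCP implementation.

```python
import heapq
import time

class SBPNQueue:
    """Sketch: packets carry a priority and an expiry time; at transmission
    time the highest-priority unexpired packet is sent, and expired packets
    are discarded instead of wasting bandwidth on late data."""

    def __init__(self):
        self._heap = []   # (priority, enqueue_order, expiry, payload); lower priority value = more important
        self._order = 0   # tie-breaker so equal-priority packets stay FIFO

    def enqueue(self, payload, priority, expiry_time):
        heapq.heappush(self._heap, (priority, self._order, expiry_time, payload))
        self._order += 1

    def next_packet(self, now=None):
        """Return the best packet to send now, dropping any that have expired."""
        now = time.monotonic() if now is None else now
        while self._heap:
            priority, _, expiry, payload = heapq.heappop(self._heap)
            if expiry > now:
                return payload   # still useful to the receiver
            # expired: discard and try the next candidate
        return None
```

    In a video-conferencing scenario, audio packets would be enqueued with a more important priority value than video packets, so they are favoured when bandwidth is scarce.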

    On the quality of VoIP with DCCP for satellite communications

    We present experimental results for the performance of selected voice codecs using DCCP with CCID4 congestion control over a satellite link. We evaluate the performance of both constant and variable data rate speech codecs for a number of simultaneous calls using the ITU E-model. We analyse the sources of packet losses and additionally analyse the effect of jitter, which is one of the crucial parameters contributing to VoIP quality and has, to the best of our knowledge, not been considered in previously published DCCP performance results. We propose modifications to the CCID4 algorithm and demonstrate how these improve VoIP performance without requiring any link information beyond what CCID4 already monitors. We also demonstrate the fairness of the proposed modifications to other flows. Although the recently adopted changes to the TFRC specification alleviate some of the performance issues for VoIP on satellite links, we argue that the characteristics of commercial satellite links necessitate consideration of further improvements. We identify the additional benefit of DCCP when used in VoIP admission control mechanisms and draw conclusions about the advantages and disadvantages of the proposed DCCP/CCID4 congestion control mechanism for use with VoIP applications.
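    Since jitter analysis is highlighted above, a minimal sketch of the standard RTP interarrival-jitter estimator (RFC 3550) is shown below; it is offered only as an illustration of how jitter is commonly computed, not as the exact estimator used in the paper.

```python
def update_jitter(jitter, prev_transit, send_time, recv_time):
    """One step of the RFC 3550 interarrival jitter estimator.

    send_time is the sender timestamp and recv_time the arrival time of the
    current packet (same clock units); prev_transit is the transit time of
    the previous packet. Returns (new_jitter, transit) for the next step."""
    transit = recv_time - send_time
    d = abs(transit - prev_transit)
    jitter += (d - jitter) / 16.0   # exponentially weighted moving average, gain 1/16
    return jitter, transit
```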

    Improving the Quality of Real Time Media Applications through Sending the Best Packet Next

    Real-time media applications such as video conferencing are increasing in usage. These bandwidth-intensive applications place high demands on a network, and the quality experienced by the user is often sub-optimal. In a traditional network stack, data from an application is transmitted in the order in which it is received. This thesis proposes a scheme called "Send the Best Packet Next (SBPN)" in which the most important data is transmitted first and data that will not reach the receiver before an expiry time is not transmitted at all. In SBPN a priority and an expiry time are attached to each packet and used in conjunction with the Round Trip Time (RTT) to determine whether packets are sent, and in which order. For example, it has been shown that audio is more important to users than video in video conferencing. SBPN can be considered Quality of Service (QoS) within an application data stream, in contrast to network routers that provide QoS to whole streams such as Voice over IP (VoIP) but neither differentiate between data items within a stream nor influence which data the end nodes transmit. SBPN can be implemented on the server only, so that much of the benefit for one-way transmission (e.g. live television) can be gained without requiring existing clients to be changed. SBPN was implemented in the Linux kernel on top of the Datagram Congestion Control Protocol (DCCP) and compared to existing solutions. This showed real improvement in the measured quality of audio, with a maximum improvement of 15% in selected test scenarios.
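    As a complement to the queue sketch given earlier, the per-packet transmit check described above (priority, expiry time and RTT) might look like the following; the RTT/2 one-way-delay estimate and the field names are illustrative assumptions, not the thesis implementation.

```python
def should_send(expiry_time, rtt, now):
    """Sketch of the SBPN check: estimate the one-way delay as RTT/2 and
    skip packets that would arrive after their expiry time."""
    estimated_arrival = now + rtt / 2.0
    return estimated_arrival <= expiry_time

def pick_next(candidates, rtt, now):
    """Among queued (priority, expiry_time, payload) tuples, choose the most
    important packet that can still arrive in time (lower priority value =
    more important, e.g. audio before video)."""
    sendable = [c for c in candidates if should_send(c[1], rtt, now)]
    return min(sendable, key=lambda c: (c[0], c[1]), default=None)
```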

    Collecting and Analyzing Failure Data of Bluetooth Personal Area Networks

    This work presents a failure data analysis campaign on Bluetooth Personal Area Networks (PANs) conducted on two kinds of heterogeneous testbeds (operating for more than one year). The results reveal how the failure distribution is characterized and suggest how to improve the dependability of Bluetooth PANs. Specifically, we define the failure model and then identify the most effective recovery actions and masking strategies that can be adopted for each failure. We then integrate the discovered recovery actions and masking strategies into our testbeds, improving availability by 3.64% (up to 36.6%) and reliability (in terms of Mean Time To Failure) by 202%, respectively.
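    The availability and MTTF figures quoted above follow from the collected failure logs in the usual way; a minimal sketch of the two metrics is given below (function and variable names are illustrative, not the authors' analysis tooling).

```python
def mean_time_to_failure(uptimes):
    """MTTF: mean length of the observed failure-free intervals (e.g. in hours)."""
    return sum(uptimes) / len(uptimes)

def availability(uptimes, downtimes):
    """Fraction of the total observation time during which the network was up."""
    up = sum(uptimes)
    return up / (up + sum(downtimes))
```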

    Resource Management in Container-based Mobile Edge Computing

    Mobile edge computing is a promising technology that supports time-sensitive applications by pushing centralized cloud processing capabilities to distributed fog nodes. These fog nodes are deployed one hop from the end user and provide real-time data processing at the edge of the network. Because services are provisioned at the edge, no congestion occurs at the core of the network, quality of service (QoS) is improved, and the overall network operational cost is significantly reduced. However, these nodes have limited processing, storage and coverage capabilities, so they face the challenge of supporting user mobility when continued service (i.e. zero downtime) is required during handovers between edge nodes. They also need an effective task allocation and resource management strategy to ensure smooth operation of edge services. Unlike the traditional VM-based environment in fog computing, this work explores lightweight Docker containers to deploy and migrate services. An interactive event-driven dashboard is developed for real-time edge node registration, system monitoring, service initiation and migration. Then, motivated by Fog Following Me, two resource allocation schemes (algorithms I and II) are introduced to dynamically manage the compute resources among fog nodes. For smooth service operation and stable migration, an application profiling feature is introduced that assigns the quota an application requires in terms of CPU, GPU and RAM. The developed system's performance is evaluated through various experiments. The experimental results demonstrate and verify the feasibility of the whole system's operation in the context of edge computing. However, the processing delays observed during service migration mark a limitation of Docker and suggest the need for up-to-date optimization tools to reduce network delays and ensure zero-downtime service migration.
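    The application-profiling feature described above assigns per-application CPU, GPU and RAM quotas to containers; a minimal sketch of how such a quota could be applied with the Docker SDK for Python is shown below (the function, image name and quota values are illustrative assumptions, not the system's actual code).

```python
import docker  # Docker SDK for Python

def start_profiled_service(image, cpus, mem_limit):
    """Sketch: launch a service container with the CPU/RAM quota taken from
    its application profile."""
    client = docker.from_env()
    return client.containers.run(
        image,
        detach=True,
        nano_cpus=int(cpus * 1e9),   # e.g. cpus=0.5 reserves half a CPU core
        mem_limit=mem_limit,         # e.g. "256m"
        # a GPU quota would additionally be requested via docker.types.DeviceRequest
    )

# Hypothetical usage: start_profiled_service("edge-app:latest", cpus=0.5, mem_limit="256m")
```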

    High availability using virtualization

    High availability has always been one of the main problems for a data center. Until now, high availability has been achieved through host-by-host redundancy, a highly expensive method in terms of hardware and human costs. A new approach to the problem is offered by virtualization. Using virtualization, it is possible to achieve a redundancy system for all the services running in a data center. This new approach to high availability allows the running virtual machines to be distributed across the physical servers that are up and running, by exploiting the features of the virtualization layer: starting, stopping and moving virtual machines between physical hosts. The system (3RC) is based on a finite state machine with hysteresis, providing the possibility to restart each virtual machine on any physical host, or to reinstall it from scratch. A complete infrastructure has been developed to install the operating system and middleware in a few minutes. To virtualize the main servers of a data center, a new procedure has been developed to migrate physical hosts to virtual ones. The whole Grid data center SNS-PISA is currently running in a virtual environment under the high availability system. As an extension of the 3RC architecture, several storage solutions, from NAS to SAN, have been tested to store and centralize all the virtual disks, to guarantee data safety and access from anywhere. By exploiting virtualization and the ability to automatically reinstall a host, we provide a sort of host on-demand, where action on a virtual machine is performed only when a disaster occurs. Comment: PhD Thesis in Information Technology Engineering: Electronics, Computer Science, Telecommunications, pp. 94, University of Pisa [Italy]
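    The 3RC recovery logic described above is driven by a finite state machine with hysteresis; a minimal sketch of such a machine is given below (states, thresholds and method names are illustrative assumptions, not the 3RC implementation).

```python
from enum import Enum

class State(Enum):
    RUNNING = "running"           # VM healthy on its current host
    RESTARTED = "restarted"       # VM restarted on another physical host
    REINSTALLED = "reinstalled"   # VM reinstalled from scratch

class RecoveryFSM:
    """Sketch of a recovery state machine with hysteresis: repeated failures
    escalate the recovery action, and several healthy checks are required
    before returning to RUNNING, so the machine does not oscillate."""

    def __init__(self, restart_threshold=1, reinstall_threshold=3):
        self.state = State.RUNNING
        self.failures = 0
        self.restart_threshold = restart_threshold
        self.reinstall_threshold = reinstall_threshold

    def on_failure(self):
        self.failures += 1
        if self.failures >= self.reinstall_threshold:
            self.state = State.REINSTALLED
        elif self.failures >= self.restart_threshold:
            self.state = State.RESTARTED
        return self.state

    def on_healthy_check(self):
        self.failures = max(0, self.failures - 1)   # hysteresis: decay, don't reset
        if self.failures == 0:
            self.state = State.RUNNING
        return self.state
```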
