
    Application-Level QoS: Improving video conferencing quality through sending the best packet next

    In a traditional network stack, data from an application is transmitted in the order in which it is received. An algorithm is proposed in which the transport layer uses per-packet priority and expiry information to reorder or discard packets at transmission time, so that the available bandwidth is used where it matters most. In video conferencing this allows the most important data to be prioritised. The algorithm is implemented as an interface to the Datagram Congestion Control Protocol (DCCP), tested using traffic modelled on video conferencing software, and compared to unmodified DCCP. The results show that video conferencing can be improved during periods of congestion: substantially more audio packets arrive on time, which leads to higher quality video conferencing. In many cases the video packet arrival rate also increases, and the algorithm outperforms unmodified DCCP queuing. Because the algorithm is implemented on the server only, the benefits are obtained without requiring any changes to the client.
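
    As an illustration only (not code from the paper), the sketch below shows one way such a send queue could work: packets carry a priority and an expiry time, the most important unexpired packet is sent first, and expired packets are discarded at transmission time. The class and field names are invented for this example.

```python
import heapq
import time

class SbpnQueue:
    """Toy send queue: most important packet first, expired packets dropped
    at transmission time instead of being sent."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker that preserves arrival order within a priority

    def enqueue(self, payload, priority, expiry):
        # Lower priority value = more important (e.g. 0 = audio, 1 = video).
        heapq.heappush(self._heap, (priority, self._seq, expiry, payload))
        self._seq += 1

    def next_packet(self, now=None):
        """Return the best unexpired packet, discarding any expired ones found on the way."""
        now = time.monotonic() if now is None else now
        while self._heap:
            _priority, _, expiry, payload = heapq.heappop(self._heap)
            if expiry > now:
                return payload        # still useful to the receiver, send it
        return None                   # nothing worth sending right now

q = SbpnQueue()
q.enqueue(b"video-frame-1", priority=1, expiry=0.5)
q.enqueue(b"audio-frame-1", priority=0, expiry=0.5)
print(q.next_packet(now=0.0))         # audio is chosen ahead of video
```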

    Improving the Quality of Real Time Media Applications through Sending the Best Packet Next

    Real time media applications such as video conferencing are increasing in usage. These bandwidth intensive applications put high demands on a network, and the quality experienced by the user is often sub-optimal. In a traditional network stack, data from an application is transmitted in the order in which it is received. This thesis proposes a scheme called "Send the Best Packet Next (SBPN)" in which the most important data is transmitted first and data that cannot reach the receiver before its expiry time is not transmitted at all. In SBPN a priority and an expiry time are attached to each packet and used together with the Round Trip Time (RTT) to determine whether packets are sent and in which order. For example, it has been shown that audio is more important to users than video in video conferencing. SBPN can be seen as Quality of Service (QoS) applied within an application's data stream, in contrast to network routers that provide QoS to whole streams such as Voice over IP (VoIP) but neither differentiate between data items within a stream nor influence which data the end nodes transmit. SBPN can be implemented on the server only, so much of the benefit for one-way transmission (e.g. live television) can be gained without requiring existing clients to be changed. SBPN was implemented in the Linux kernel on top of the Datagram Congestion Control Protocol (DCCP) and compared to existing solutions, showing real improvement in the measured quality of audio, with a maximum improvement of 15% in selected test scenarios.
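
    The thesis-specific part is the RTT-based deadline check; purely as an illustration (not the kernel implementation), the sketch below sends only packets whose expected arrival time, roughly half an RTT after transmission, precedes their expiry, and picks the highest-priority packet among those. Names and structures are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    payload: bytes
    priority: int    # 0 = audio (most important), 1 = video, ...
    expiry: float    # absolute deadline at the receiver, in seconds

def choose_next(queue, now, rtt):
    """Pick the most important packet that can still arrive before its deadline.

    A packet sent now is expected to arrive roughly half an RTT later; anything
    that would miss its expiry is removed from the queue rather than sent."""
    expected_arrival = now + rtt / 2
    queue[:] = [p for p in queue if p.expiry > expected_arrival]  # drop late packets
    if not queue:
        return None
    best = min(queue, key=lambda p: (p.priority, p.expiry))
    queue.remove(best)
    return best

queue = [Packet(b"video", 1, expiry=0.200), Packet(b"audio", 0, expiry=0.150)]
print(choose_next(queue, now=0.0, rtt=0.080))   # audio wins: expected arrival 0.040 < 0.150
```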

    GTFRC, a TCP friendly QoS-aware rate control for diffserv assured service

    This study addresses end-to-end congestion control support over the DiffServ Assured Forwarding (AF) class. The resulting Assured Service (AS) provides a minimum level of throughput guarantee. In this context, this article describes a new end-to-end mechanism for continuous transfer based on TCP-Friendly Rate Control (TFRC). The proposed approach modifies TFRC to take the negotiated QoS into account. This mechanism, named gTFRC, is able to reach the minimum throughput guarantee regardless of the flow's RTT and target rate. Simulation measurements and an implementation over a real QoS testbed demonstrate the efficiency of the mechanism in both over-provisioned and exactly-provisioned networks. In addition, we show that gTFRC can be used in the same DiffServ/AF class alongside TCP or TFRC flows.
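
    The abstract does not give the exact gTFRC formulation; a minimal reading of the guarantee, sketched below purely for illustration, is that the sending rate is never allowed to fall below the negotiated target rate, using the standard TFRC throughput equation from RFC 5348. The parameter values in the example are arbitrary.

```python
from math import sqrt

def tfrc_rate(s, rtt, p, b=1):
    """TFRC throughput equation (RFC 5348): sending rate in bytes/second for
    packet size s (bytes), round-trip time rtt (s) and loss event rate p."""
    if p <= 0:
        return float("inf")
    t_rto = 4 * rtt
    return s / (rtt * sqrt(2 * b * p / 3)
                + t_rto * 3 * sqrt(3 * b * p / 8) * p * (1 + 32 * p * p))

def gtfrc_rate(s, rtt, p, target_rate):
    """Guarantee idea described in the abstract: never let the computed sending
    rate fall below the negotiated DiffServ/AF target rate."""
    return max(tfrc_rate(s, rtt, p), target_rate)

# 1000-byte packets, 200 ms RTT, 2% loss, 1 Mbit/s (125000 bytes/s) guarantee.
print(tfrc_rate(1000, 0.2, 0.02))            # plain TFRC falls below the guarantee here
print(gtfrc_rate(1000, 0.2, 0.02, 125000))   # gTFRC-style clamp keeps the target rate
```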

    Performance evaluation of TCP, UDP and DCCP for video traffics over 4G network

    The Fourth Generation (4G) system is more widely used than the older 3G and 2G generations. Among the reasons are its higher transfer rate, its support for all multimedia functions, and its wide geographical coverage, which makes wireless technology more advanced. An essential goal of 4G is to enable voice-based communication to be carried continuously. To that end, this study addresses the following research questions: (1) are the older protocols suited to this new technology; (2) which one performs best; and (3) which one has the greatest effect on throughput, delay, packet delivery ratio and packet loss. These questions are central to evaluating the performance of the most widely used transport protocols, namely the User Datagram Protocol (UDP), the Transmission Control Protocol (TCP) and the Datagram Congestion Control Protocol (DCCP), within the 4G environment. Using Network Simulator-3 (NS-3), the performance of transporting an MPEG-4 video stream, measured as throughput, delay, packet loss and packet delivery ratio, is analysed at the base station for UDP, TCP and DCCP over 4G's Long Term Evolution (LTE) technology. The results show that DCCP achieves better throughput and lower delay, but at the same time more packet loss, than UDP and TCP. Based on the results, DCCP is recommended as a transport protocol for real time video.
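
    The four metrics reported in the study are standard; the toy function below (not NS-3 code) shows how they are typically derived from per-packet send and receive logs. The data structures are invented for illustration.

```python
def evaluate(sent, received, duration):
    """Compute the four metrics from per-packet logs.

    sent:     {seq: (send_time_s, size_bytes)} for every transmitted packet
    received: {seq: recv_time_s} for every packet that arrived
    duration: length of the measurement interval in seconds
    """
    delivered = [seq for seq in sent if seq in received]
    delays = [received[seq] - sent[seq][0] for seq in delivered]
    bytes_delivered = sum(sent[seq][1] for seq in delivered)
    return {
        "throughput_bps": 8 * bytes_delivered / duration,
        "avg_delay_s": sum(delays) / len(delays) if delays else None,
        "packet_delivery_ratio": len(delivered) / len(sent),
        "packet_loss_ratio": 1 - len(delivered) / len(sent),
    }

sent = {1: (0.00, 1000), 2: (0.02, 1000), 3: (0.04, 1000)}
received = {1: 0.05, 3: 0.11}
print(evaluate(sent, received, duration=1.0))
```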

    Analysis of Two-Layer Protocols: DCCP Simultaneous-Open and Hole Punching Procedures

    The simultaneous-open procedure of the Datagram Congestion Control Protocol (DCCP), RFC 5596, was published in September 2009. Its design aims to overcome a weakness of DCCP when the server is behind a middlebox such as a Network Address Translator (NAT) or firewall: the original DCCP specification, RFC 4340, only allows the client to initiate the call, so the call request cannot reach the server behind the middlebox. A widely used solution to this problem is the hole punching technique, which requires the server to initiate sending packets. Using Coloured Petri Nets (CPNs), this paper models and analyses the DCCP procedure specified in RFC 5596. The difficulty is that detailed modelling of the address translation is also required, which causes state space explosion. We alleviate the state explosion using prioritised transitions and the sweep-line technique. The modelling and analysis approaches are discussed in the hope that they will be helpful to others who wish to analyse similar protocols. Analysis results are also obtained for the simultaneous-open procedure specified in RFC 5596.
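
    The hole punching technique mentioned above is a general NAT-traversal idea; the fragment below is a minimal UDP illustration of it (not the DCCP simultaneous-open procedure and not the CPN model): the host behind the NAT sends an outbound datagram first, so that subsequent inbound packets can be delivered. The function name and addresses are placeholders.

```python
import socket

def punch_hole(local_port, peer_addr, payload=b"punch"):
    """Minimal UDP illustration of hole punching: the host behind the NAT sends
    an outbound datagram first, so the NAT creates a mapping that later lets
    inbound packets from peer_addr through. The DCCP simultaneous-open procedure
    in RFC 5596 addresses the same middlebox problem with protocol-level changes."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", local_port))
    sock.sendto(payload, peer_addr)   # outbound packet opens the NAT binding
    return sock                       # caller can now recvfrom() on the punched hole

# Example usage (addresses are placeholders):
# sock = punch_hole(5004, ("198.51.100.7", 5004))
# data, addr = sock.recvfrom(1500)
```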

    Reflections on security options for the real-time transport protocol framework

    The Real-time Transport Protocol (RTP) supports a range of video conferencing, telephony, and streaming video applications, but offers few native security features. We discuss the problem of securing RTP across this range of applications, outline why this diversity makes RTP a difficult protocol to secure, and describe the approach we have recently proposed in the IETF to provide security for RTP applications. This approach treats RTP as a framework with a set of extensible security building blocks, and prescribes mandatory-to-implement security at the level of different application classes, rather than at the level of the media transport protocol.

    De-ossifying the Internet Transport Layer: A Survey and Future Perspectives

    Acknowledgment: the authors would like to thank the anonymous reviewers for their useful suggestions and comments.

    Synchronization of streamed audio between multiple playback devices over an unmanaged IP network

    When designing and implementing a prototype supporting inter-destination media synchronization, that is, synchronized playback between multiple devices receiving the same stream, there are many aspects to consider, especially when working with unmanaged networks. Not only is a proper streaming protocol essential, but also a way to obtain and maintain synchronization of the devices' clocks. The thesis had a few constraints, namely that the server producing the stream should be written for the .NET platform and that the clients receiving it should use the media framework GStreamer. This framework provides methods both for achieving synchronization and for resynchronization. As the provided resynchronization methods introduced distortions in the audio, an alternative method was implemented. This method focused on minimizing the distortions, thus maintaining smooth playback. After the prototype had been implemented, it was tested to see how well it performed under the influence of packet loss and delay. The accuracy of the synchronization was also tested under optimal conditions using two different time synchronization protocols. The conclusion was that good synchronization could be maintained on unloaded networks using the proposed method, but the prototype struggled more when delay was introduced. This was mainly due to the use of the Network Time Protocol (NTP), which is known to perform badly on networks with asymmetric paths.

    When working with synchronized playback it is not enough to obtain it; it also needs to be maintained. Implementing a prototype therefore involves many parts, ranging from choosing a proper streaming protocol to handling glitch-free resynchronization of audio. Synchronization between multiple speakers has a wide area of application, ranging from home entertainment solutions to large malls where announcements should be synchronized over the entire premises. Two main parts are involved in achieving this: the streaming of the audio and the actual synchronization. The streaming poses problems mostly because the prototype should not only work on dedicated networks but on all kinds, such as the Internet. As the information over these networks is transmitted in packets, and the path from source to destination crosses many sub-networks, packets may be delayed or even lost, which can create audible distortion in the playback. The next part is the synchronization. This is most easily achieved by putting a time on each packet stating when in the future it should be played out. If all receivers play it back at the specified time, synchronization is achieved. This, however, requires that all receivers share the notion of when a specific time is: the clocks at all receivers must be synchronized. Existing software and hardware solutions, such as the Network Time Protocol (NTP) or the Precision Time Protocol (PTP), can accomplish this, so the accuracy of the synchronization partly depends on how well these solutions work. Another relevant aspect is how accurate the synchronization must be for the sound to be perceived as synchronized by humans; this is usually in the range of a few tens of milliseconds down to five milliseconds, depending on the sound.

    Once a global time has been distributed to all receivers, matters get more complicated, as there is more than one clock to consider at each receiver. Apart from the previously mentioned clock, now called the system clock, there is also an audio clock, a hardware clock on the sound card that decides the rate at which media is played out. Altering the system clock to synchronize it to a common time is one thing, but altering the audio clock while media is being played will inevitably mean a jump in the playback, and thus a distortion. Although an initial synchronization can be achieved, the two clocks will over time tick at slightly different paces and drift away from each other. This creates a need for the audio clock to continuously correct itself to follow the system clock. In the media framework GStreamer, used for handling the media at the receivers, two alternatives were available for solving the correction problem. Quick evaluations of these two methods showed that either audible glitches or oscillations occurred in the sound when the clocks were corrected. A new method, which essentially combines the two existing ones, was therefore implemented. With this method the audio clock is continuously corrected, but in smaller and less aggressive steps. Listening tests revealed much smaller, often inaudible, distortions, while the synchronization performance was on par with the existing methods. More thorough testing showed that synchronization over networks with light traffic was in the microsecond range, far below the threshold of what is perceived as synchronized. Under worse conditions, in simulated hostile environments, the synchronization quickly reached unacceptable levels; this was due to the previously mentioned NTP rather than the implemented method.
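
    As a rough illustration of the gradual-correction idea described above (my own sketch, not the thesis implementation or a GStreamer API), the playback rate can be nudged by a small, capped amount proportional to the offset between the two clocks, so the error is worked off without an audible jump. The gain, cap and clock-skew values are arbitrary.

```python
def corrected_rate(offset, gain=1.0, max_adjust=0.005):
    """Map the offset between the system clock and the audio clock to a
    playback-rate multiplier. The correction is proportional but capped at
    max_adjust (0.5%), so a large offset is worked off gradually instead of
    being removed in one audible jump."""
    adjust = max(-max_adjust, min(max_adjust, gain * offset))
    return 1.0 + adjust

# Simulate an audio clock that runs 0.1% slow and starts 20 ms behind the system clock.
dt, skew = 0.01, 0.999            # 10 ms control period, hardware clock skew
system_time, audio_time = 0.0, -0.020
for _ in range(2000):             # 20 seconds of playback
    rate = corrected_rate(system_time - audio_time)
    system_time += dt
    audio_time += dt * skew * rate
# The 20 ms error is removed smoothly and the residual drift stays around a millisecond.
print(round((system_time - audio_time) * 1000, 2), "ms residual offset")
```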

    Quantitative analysis of the effects queuing has on a CCID3 controlled DCCP flow

    While the Datagram Congestion Control Protocol (DCCP) shows much promise at becoming a protocol of choice for real-time applications, there are, relatively speaking, only a small number of academic papers examining its performance and its various nuances. This paper describes the effects that queuing, and in particular queue size, has on DCCP when CCID3 is selected as the congestion control mechanism. The results obtained from the experimentation described in this paper show a clear trade-off between packet loss rates and packet latency when different queue sizes are employed on the experimental network. Employing small fixed-size queues on the network led to lower packet latencies but higher volumes of packet loss, as the queue reached its maximum threshold more frequently. Conversely, when large queue sizes were used, the number of packet loss events reduced significantly, but packet latency increased. In addition to showing this trade-off empirically, the paper describes ways in which it could be exploited to allow DCCP to offer applications a transport protocol more closely tailored to their particular needs.
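
    The loss-versus-latency trade-off can be reproduced with a toy drop-tail queue fed by bursty traffic; this is an illustration only, not the CCID3 experiments from the paper, and all parameter values are invented. Small queues drop the tails of bursts but keep waiting times short, while large queues absorb bursts at the cost of higher delay.

```python
import random

def drop_tail(queue_limit, burst_prob=0.1, mean_burst=8, ticks=200000, seed=1):
    """Toy drop-tail queue with bursty arrivals and average load below capacity:
    at most one packet is served per tick; a burst either fits in the queue or
    its tail is dropped. Returns (loss ratio, mean queueing delay in ticks)."""
    rng = random.Random(seed)
    queue, arrived, dropped, delays = [], 0, 0, []
    for now in range(ticks):
        if rng.random() < burst_prob:
            for _ in range(1 + int(rng.expovariate(1.0 / mean_burst))):
                arrived += 1
                if len(queue) < queue_limit:
                    queue.append(now)
                else:
                    dropped += 1
        if queue:
            delays.append(now - queue.pop(0))  # serve one packet per tick
    return dropped / arrived, sum(delays) / len(delays)

for limit in (4, 16, 64):
    loss, delay = drop_tail(limit)
    print(f"queue limit {limit:3d}: loss ratio {loss:.3f}, mean delay {delay:.1f} ticks")
```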

    Greediness control algorithm for multimedia streaming in wireless local area networks

    This work investigates the interaction between the application and transport layers while streaming multimedia in a residential Wireless Local Area Network (WLAN). Inconsistencies have been identified that can have a severe impact on the Quality of Experience (QoE) of end users. The problem arises because the streaming process relies on rate adaptation engines based on congestion avoidance mechanisms that try to obtain as much bandwidth as possible from the limited network resources. These upper transport layer mechanisms have no knowledge of the media they are carrying and therefore treat all traffic equally. This lack of knowledge of the media carried and of the characteristics of the target devices results in fair bandwidth distribution at the transport layer but creates unfairness at the application layer, which mostly affects user perceived quality when streaming high quality multimedia. In essence, bandwidth that is distributed fairly between competing video streams at the transport layer results in unfair distribution of video quality at the application layer. There is therefore a need to allow application layer streaming solutions to tune the aggressiveness of transport layer congestion control mechanisms, taking device characteristics into account, in order to create application layer QoE fairness between competing media streams. This thesis proposes the Greediness Control Algorithm (GCA), an upper transport layer mechanism that eliminates quality inconsistencies caused by rate and congestion control mechanisms while streaming multimedia in wireless networks. GCA extends an existing solution, TCP-Friendly Rate Control (TFRC), by introducing two parameters that allow the streaming application to tune the aggressiveness of the rate estimation and, as a result, introduce fair distribution of quality at the application layer. The thesis shows that this rate adaptation technique, combined with a scalable video format, increases overall system QoE. Extensive simulation analysis demonstrates that this form of rate adaptation increases the overall user QoE achieved across a number of devices operating within the same home WLAN.
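
    The abstract does not give GCA's exact formulation; purely as an illustration of the underlying idea, if each stream's congestion control is tuned to be some factor more or less aggressive than the baseline, its long-run share of a shared bottleneck scales roughly with that factor, which is what lets the application trade transport-layer fairness for application-layer quality fairness. The weights and capacities below are invented.

```python
def shares(capacity_bps, weights):
    """Toy weighted bandwidth sharing: if each stream's congestion control is
    tuned to be w times as aggressive as the baseline, its long-run share of a
    shared bottleneck is roughly proportional to w."""
    total = sum(weights.values())
    return {name: capacity_bps * w / total for name, w in weights.items()}

# Equal transport-layer shares versus shares tuned so the HD stream feeding a TV
# gets more bandwidth than the stream feeding a phone screen.
print(shares(6_000_000, {"tv_hd": 1.0, "phone": 1.0}))
print(shares(6_000_000, {"tv_hd": 2.0, "phone": 0.5}))
```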