    Scalable reliable on-demand media streaming protocols

    This thesis considers the problem of delivering streaming media, on demand, to potentially large numbers of concurrent clients. The problem has motivated the development, in prior work, of scalable protocols based on multicast or broadcast. However, previous protocols do not allow clients to efficiently: 1) recover from packet loss; 2) share bandwidth fairly with competing flows; or 3) maximize playback quality at the client for given client reception rate characteristics. In this work, new protocols, namely Reliable Periodic Broadcast (RPB) and Reliable Bandwidth Skimming (RBS), are developed that efficiently recover from packet loss and achieve close to the best possible server bandwidth scalability for a given set of client characteristics. To share bandwidth fairly with competing traffic such as TCP, these protocols can employ the Vegas Multicast Rate Control (VMRC) protocol proposed in this work. The VMRC protocol exhibits TCP Vegas-like behavior. In comparison to prior rate control protocols, VMRC provides less oscillatory reception rates to clients and operates without inducing packet loss when the bottleneck link is lightly loaded. The VMRC protocol incorporates a new technique for dynamically adjusting the TCP Vegas threshold parameters based on measured characteristics of the network. This technique enables fair sharing of network resources with other types of competing flows, including widely deployed versions of TCP such as TCP Reno; such fair sharing is not possible with the previously defined static Vegas threshold parameters. The RPB protocol is extended to efficiently support quality adaptation. The optimized Heterogeneous Periodic Broadcast (HPB) protocol is designed to support a range of client reception rates and to efficiently support static quality adaptation by allowing clients to work ahead before beginning playback to receive a media file of the desired quality. Finally, a dynamic quality adaptation technique is developed and evaluated which allows clients to achieve more uniform playback quality given time-varying client reception rates.
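    As background for how periodic broadcast achieves its scalability, the sketch below shows the classic segmentation idea that protocols such as RPB build on: segment sizes grow geometrically and each segment is broadcast cyclically on its own channel, so server bandwidth grows only logarithmically with the media duration. The growth factor and channel count here are illustrative assumptions, not the exact RPB segment progression from the thesis.

        # Sketch of the periodic-broadcast segmentation idea (illustrative,
        # not the thesis's exact RPB construction).

        def segment_lengths(total_len: float, num_channels: int, growth: float = 2.0):
            """Split a media file of duration total_len into num_channels
            segments whose lengths grow geometrically by `growth`,
            normalised so they sum to total_len."""
            raw = [growth ** k for k in range(num_channels)]
            scale = total_len / sum(raw)
            return [r * scale for r in raw]

        if __name__ == "__main__":
            segs = segment_lengths(total_len=120.0, num_channels=6)
            print([round(s, 2) for s in segs])
            # A client can begin playback after receiving only the first
            # (shortest) segment, so startup delay shrinks as channels grow.
            print("startup delay ~", round(segs[0], 2), "minutes")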

    Improving TCP behaviour to non-invasively share spectrum with safety messages in VANET

    There is a broad range of technologies available for wireless communications for moving vehicles, such as Worldwide Interoperability for Microwave Access (WiMAX), 3G, Dedicated Short Range Communication (DSRC)/Wireless Access in Vehicular Environments (WAVE) and Mobile Broadband Wireless Access (MBWA). These technologies are needed to support delay-sensitive safety-related applications such as collision avoidance and emergency braking. Among them, the IEEE 802.11p standard (aka DSRC/WAVE), a Wi-Fi-based, medium-range RF technology, is considered one of the draft architectures best suited to time-sensitive safety applications. In addition to safety applications, however, non-safety services such as electronic toll collection, infotainment and traffic control are also becoming important. To support delay-insensitive infotainment applications, the DSRC protocol suite also provides facilities to use Internet protocols. The DSRC architecture consists of the WAVE Short Message Protocol (WSMP), formulated specifically for real-time safety applications, as well as the conventional transport layer protocols TCP/UDP for non-safety purposes. But the layer-four protocol TCP was originally designed for reliable data delivery over wired networks only, so its performance is not guaranteed over the wireless medium, especially in the highly unstable network topology engendered by fast-moving vehicles. The vehicular wireless medium is inherently unreliable because of intermittent disconnections caused by moving vehicles, and it additionally suffers from multi-path and fading phenomena (among others) that greatly degrade network performance. One TCP problem in the context of vehicular wireless networks is that it interprets transmission errors as symptoms of incipient congestion and, as a result, deliberately reduces throughput by frequently invoking its slow-start congestion control algorithm. Despite the availability of many congestion control mechanisms to address this problem, conventional TCP continues to suffer from poor performance when deployed in the Vehicular Ad-hoc Network (VANET) environment. Moreover, how non-safety applications, when pressed into service, will treat the existing delay-sensitive safety messaging applications, and how these two types of applications interact with each other, are not well understood; for them to coexist, the implications and repercussions need to be examined closely. This is especially important as the IEEE 802.11p standard was not designed with the issues TCP raises in relation to safety messages in view. This dissertation addresses the issues arising out of this situation and, in particular, confronts the congestion challenges thrown up by heterogeneous communication in the VANET environment by proposing an innovative solution with two optimized congestion control algorithms. Extensive simulation studies conducted by the author show that both algorithms improve TCP performance in terms of metrics such as Packet Delivery Fraction (PDF), packet loss and End-to-End Delay (E2ED), and at the same time they encourage the non-safety TCP application to behave unobtrusively and cooperatively, to a large extent, with DSRC's safety applications. The first algorithm, called vScalable-TCP, a modification of the existing TCP-Scalable variant, introduces a reliable transport protocol suitable for DSRC.
    In the proposed approach, whenever packets are discarded excessively due to congestion, the slow-start mechanism is deliberately suppressed for a time to avoid further congestion and packet loss. The crucial idea is how to adjust and regulate the behaviour of vScalable-TCP so that the existing safety message flows are disturbed as little as possible. The simulation results confirm that, in terms of the standard performance metrics, the new vScalable-TCP provides better performance for real-time safety applications than TCP-Reno and the other TCP variants considered in this thesis. The second algorithm, named vLP-TCP, a modification of the existing TCP-LP variant, is designed to test and demonstrate that the strategy developed for vScalable-TCP is also compatible with another congestion control mechanism and achieves the same purpose. This expectation is borne out well by the simulation results. The same slow-start congestion management strategy is employed, with only a few amendments, and this modified algorithm also substantially improves the performance of basic safety management applications. The present work thus clearly confirms that both the vScalable-TCP and vLP-TCP algorithms (the prefix 'v' standing for 'vehicular') outperform the existing unmodified TCP-Scalable and TCP-LP algorithms in terms of standard performance metrics, while at the same time behaving in a friendly manner by sharing bandwidth non-intrusively with DSRC safety applications. This paves the way for the smooth and harmonious coexistence of these two broad, seemingly incompatible yet complementary categories of applications, viz. time-sensitive safety applications and delay-tolerant infotainment applications, by narrowing the apparent behavioural mismatch between them when they are made to go hand in hand in a DSRC environment.
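    To make the suppression idea concrete, here is a schematic sketch of a sender that temporarily disables slow-start after congestive losses and otherwise follows Scalable-TCP-style window rules. The class name, the cool-down length, and the congestion test are illustrative assumptions, not the thesis's actual vScalable-TCP implementation.

        # Schematic sketch of the described idea: suppress slow-start for a
        # cool-down period after congestive losses, falling back to gentle
        # Scalable-TCP-style adjustments. Constants are illustrative.

        class VScalableLikeSender:
            def __init__(self):
                self.cwnd = 1.0            # congestion window (segments)
                self.ssthresh = 64.0
                self.suppress_until = 0.0  # time before which slow-start is off

            def on_ack(self, now: float):
                if self.cwnd < self.ssthresh and now >= self.suppress_until:
                    self.cwnd += 1.0       # slow start: exponential growth
                else:
                    self.cwnd += 0.01      # Scalable TCP: +0.01 per ACK

            def on_loss(self, now: float, congestion_suspected: bool):
                if congestion_suspected:
                    # Excessive congestive loss: shrink mildly (Scalable TCP's
                    # 0.875 factor) and suppress slow-start temporarily so we
                    # do not burst into an already full queue.
                    self.cwnd = max(1.0, self.cwnd * 0.875)
                    self.suppress_until = now + 2.0  # assumed cool-down (s)
                else:
                    self.cwnd = max(1.0, self.cwnd / 2.0)  # classic halving
                self.ssthresh = max(2.0, self.cwnd)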

    Real world evaluation of techniques for mitigating the impact of packet losses on TCP performance

    The real-world impact of network losses on the performance of Transmission Control Protocol (TCP), the dominant transport protocol used for Internet data transfer, is not well understood. A detailed understanding of this impact and the efficiency of TCP in dealing with losses would prove useful for optimizing TCP design. Past work in this area is limited in its accuracy, depth of analysis, and scale. In this dissertation, we make three main contributions to address these issues: (i) design a methodology for in-depth and accurate passive analysis of TCP traces, (ii) systematically evaluate the impact of design parameters associated with TCP loss detection/recovery mechanisms on its performance, and (iii) systematically evaluate the ability of Delay Based Congestion Estimators (DBCEs) to predict losses and help avoid them. We develop a passive analysis tool, TCPdebug, which accurately tracks TCP sender state for many prominent OSes (Windows, Linux, Solaris, and FreeBSD/MacOS) and accurately classifies segments that appear out-of-sequence in a TCP trace. This tool has been extensively validated using controlled lab experiments as well as against real Internet connections. Its accuracy exceeds 99%, which is double the accuracy of current loss classification tools. Using TCPdebug, we analyze traces of more than 2.8 million Internet connections to study the efficiency of current TCP loss detection/recovery mechanisms. Using models to capture the impact of the configuration of these mechanisms on the durations of TCP connections, we find that the recommended as well as the widely implemented configurations for these mechanisms are fairly suboptimal. Our analysis suggests that the durations of up to 40% of Internet connections can be reduced by more than 10% by reconfiguring prominent TCP stacks. Finally, we investigate the ability of several popular Delay Based Congestion Estimators (DBCEs) to predict (and help avoid) losses using estimates of network queuing delay. We find that aggressive predictors work much better than conservative predictors. We also study the impact of connection characteristics--such as packet loss rate, flight size, and throughput--on the performance of a DBCE. We find that high-throughput connections benefit the most from any DBCE. This indicates that DBCEs hold significant promise for future high-speed networks.
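    As an illustration of what a DBCE does, the sketch below implements a simple delay-based predictor: it estimates queuing delay as the excess of a smoothed RTT over the minimum observed RTT and flags incipient congestion when that excess crosses a threshold. The smoothing gain and both thresholds are illustrative assumptions; "aggressive" here simply means a lower trigger threshold than "conservative".

        # Minimal sketch of a delay-based congestion estimator (DBCE).
        # Queuing delay is inferred as smoothed RTT minus the minimum RTT
        # (a propagation-delay estimate); a threshold decides when to warn.

        class DelayBasedEstimator:
            def __init__(self, threshold_s: float, alpha: float = 0.125):
                self.threshold_s = threshold_s
                self.alpha = alpha     # EWMA gain, as in standard RTT smoothing
                self.base_rtt = None   # minimum RTT seen so far
                self.srtt = None       # smoothed RTT

            def on_rtt_sample(self, rtt: float) -> bool:
                """Feed one RTT sample; return True if congestion is predicted."""
                self.base_rtt = rtt if self.base_rtt is None else min(self.base_rtt, rtt)
                self.srtt = rtt if self.srtt is None else \
                    (1 - self.alpha) * self.srtt + self.alpha * rtt
                queuing_delay = self.srtt - self.base_rtt
                return queuing_delay > self.threshold_s

        aggressive = DelayBasedEstimator(threshold_s=0.005)    # triggers early
        conservative = DelayBasedEstimator(threshold_s=0.050)  # triggers late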

    Design and Analysis of a Novel Split and Aggregated Transmission Control Protocol for Smart Metering Infrastructure

    Utility companies (electricity, gas, and water suppliers), governments, and researchers recognize an urgent need to deploy communication-based systems to automate data collection from smart meters and sensors, known as Smart Metering Infrastructure (SMI) or Automatic Meter Reading (AMR). A smart metering system is envisaged to bring tremendous benefits to customers, utilities, and governments. The advantages include reducing peak demand for energy, supporting the time-of-use concept for billing, enabling customers to make informed decisions, and performing effective load management, to name a few. A key element in an SMI is communications between meters and utility servers. However, the mass deployment of metering devices in the grid calls for studying the scalability of communication protocols. SMI is characterized by the deployment of a large number of small Internet Protocol (IP) devices sending small packets at a low rate to a central server. Although the individual devices generate data at a low rate, the collective traffic produced is significant and is disruptive to network communication functionality. This research work focuses on the scalability of the transport layer functionalities. The TCP congestion control mechanism, in particular, would be ineffective for the traffic of smart meters because a large volume of data comes from a large number of individual sources. This situation renders the TCP congestion control mechanism unable to lower the aggregate transmission rate even when congestion occurs. The consequences are a high loss rate for metered data and degraded throughput for competing traffic in the smart metering network. To enhance the performance of TCP in an SMI, we introduce a novel TCP-based scheme, called Split- and Aggregated-TCP (SA-TCP). This scheme is based on the idea of upgrading intermediate devices in the SMI (known in the industry as regional collectors, or RCs) to offer the service of aggregating TCP connections. An SA-TCP aggregator collects data packets from the smart meters of its region over separate TCP connections; it then reliably forwards the data over another TCP connection to the utility server. The proposed split-and-aggregate scheme provides a better response to traffic conditions and, most importantly, makes the TCP congestion control and flow control mechanisms effective. Supported by extensive ns-2 simulations, we show the effectiveness of the SA-TCP approach in mitigating these problems in terms of the throughput and packet loss rate performance metrics. A full mathematical model of SA-TCP is provided. The model is highly accurate and flexible in predicting the behaviour of the two stages of the SA-TCP scheme, separately and combined, in terms of throughput, packet loss rate and end-to-end delay. Considering the two stages of the scheme, the modelling approach uses Markovian models to represent smart meters in the first stage and SA-TCP aggregators in the second. The approach then studies the interaction of smart meters and SA-TCP aggregators with the network by means of standard queuing models. The ns-2 simulations validate the mathematical model's results. A comprehensive performance analysis of the SA-TCP scheme is performed. It studies the impact of varying several parameters, including the network link capacity, the buffering capacity of the RCs that act as SA-TCP aggregators, the propagation delay between the meters and the utility server, and the number of SA-TCP aggregators.
    The performance results show that adjusting those parameters makes it possible to further enhance congestion control in the SMI. Therefore, this thesis also formulates an optimization model to achieve better TCP performance and to ensure satisfactory results, such as a minimal loss rate and acceptable end-to-end delay. The optimization model also minimizes the SA-TCP scheme's deployment cost by balancing the number of SA-TCP aggregators against the link bandwidth, while still satisfying the performance requirements.
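    To illustrate the split-and-aggregate idea, here is a minimal sketch of a regional collector that terminates each meter's TCP connection locally and relays the payloads over a single long-lived TCP connection to the utility server. The host names, ports, and byte-stream framing are assumptions for illustration only; the thesis evaluates the scheme in ns-2 rather than over real sockets.

        # Minimal sketch of an SA-TCP-style aggregator (illustrative only).
        import socket
        import threading

        UPSTREAM = ("utility.example.net", 9000)  # hypothetical utility server
        LISTEN = ("0.0.0.0", 8000)                # meters connect here

        def run_aggregator():
            up = socket.create_connection(UPSTREAM)  # one upstream connection
            up_lock = threading.Lock()

            def handle_meter(conn: socket.socket):
                with conn:
                    while data := conn.recv(4096):   # short, low-rate readings
                        with up_lock:                # serialise onto the pipe
                            up.sendall(data)         # upstream TCP now handles
                                                     # loss, flow and congestion
                                                     # control effectively

            srv = socket.create_server(LISTEN)
            while True:
                conn, _addr = srv.accept()           # one connection per meter
                threading.Thread(target=handle_meter, args=(conn,),
                                 daemon=True).start()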

    Improved congestion control for packet switched data networks and the Internet

    Congestion control is one of the fundamental issues in computer networks. Without proper congestion control mechanisms there is the possibility of inefficient utilization of resources, ultimately leading to network collapse. Hence congestion control is an effort to adapt the performance of a network to changes in the traffic load without adversely affecting users' perceived utilities. This thesis is a step in the direction of improved network congestion control. Traditionally the Internet has adopted a best-effort policy while relying on an end-to-end mechanism: complex functions are implemented by end users, keeping the core routers of the network simple and scalable. This policy also makes it easy to update the software at the users' end. Thus, most of the functionality of the current Internet lies within the end users' protocols, particularly within the Transmission Control Protocol (TCP). This strategy has worked well to date, but networks have evolved and traffic volume has increased manyfold; hence routers need to be involved in controlling traffic, particularly during periods of congestion. Other benefits of using routers to control the flow of traffic would be facilitating the introduction of differentiated services, or offering different qualities of service to different users. Any real congestion episode, whether due to demand exceeding the available bandwidth or to congestion created on a particular target host by computer viruses, will hamper the smooth execution of the offered network services. Thus, the role of congestion control mechanisms in modern computer networks is crucial. In order to find effective solutions for congestion control, in this thesis we use feedback control system models of computer networks. The closed loop formed by TCP/IP between the end hosts, through intermediate routers, relies on implicit feedback of congestion information through returning acknowledgements. This feedback information about the congestion state of the network can take the form of lost packets, changes in round trip time, and the rate of arrival of acknowledgements. Thus, end hosts can execute either reactive or proactive congestion control mechanisms. The former approach uses duplicate acknowledgements and timeouts as congestion signals, as done in TCP Reno, whereas the latter depends on changes in the round trip time, as in TCP Vegas. Protocols employing the second approach are still in their infancy, as they cannot safely co-exist with protocols employing the first approach, whereas TCP Reno and its mutations, such as TCP SACK, are presently widely used in computer networks, including the current Internet. These protocols require packet losses to happen before they can detect congestion, thus inherently wasting time and network bandwidth. Active Queue Management (AQM) is an alternative approach which provides congestion feedback from routers to end users. It makes a network behave as a sensitive closed-loop feedback control system with a response time of one round trip time, congestion information being delivered to the end hosts so that they can reduce their sending rates before actual packet losses happen. From this congestion information, end hosts can reduce their congestion window size, thus pumping fewer packets into a congested network until the congestion period is over and routers stop sending congestion signals. Keeping both approaches in view, we have adopted a two-pronged strategy to address the problem of congestion control.
    This strategy adapts the network at its edges as well as at its core routers. We begin by introducing TCP/IP-based computer networks and defining the congestion control problem. Next we look at different proactive end-to-end protocols, including TCP Vegas, chosen for its better fairness properties. We address the incompatibility problem between TCP Vegas and TCP Reno by using ECN, based on the Random Early Detection (RED) algorithm, to adjust the parameters of TCP Vegas. Further, we develop two alternative algorithms, namely optimal minimum variance and generalized optimal minimum variance, for fair end-to-end protocols. The relationship between the (p, 1)-proportionally fair algorithm and the generalized algorithm is investigated, along with conditions for its stable operation. Noteworthy is a novel treatment of the issue of transient fairness. This represents the work done on congestion control at the edges of the network. Next, we focus on router-based congestion control algorithms and start with a survey of previous work in that direction. We select the RED algorithm for further work because it is recommended for the implementation of AQM. First we devise a new Hybrid RED algorithm which employs the instantaneous queue size along with an exponentially weighted moving average of the queue size when making decisions about packet marking/dropping, and which adjusts the average value during periods of low traffic. This algorithm improves link utilization and the packet loss rate compared to basic RED. We further propose a control-theory-based Auto-tuning RED algorithm that adapts to changing traffic load. This algorithm can clamp the average queue size to a desired reference value, which can be used to estimate queuing delays for Quality of Service purposes. As an alternative approach to router-based congestion control, we investigate control algorithms for AQM based on Proportional, Proportional-Integral (PI) and Proportional-Integral-Derivative (PID) principles. New control-theoretic RED and frequency-response-based PI and PID control algorithms are developed, and their performance is compared with that of existing algorithms. Later we transform the RED and PI based algorithms into their adaptive versions using the well-known square-root-of-p formula. The performance of these load-adaptive algorithms is compared with that of the previously developed fixed-parameter algorithms. Apart from some recent research, most previous efforts on the design of congestion control algorithms have been heuristic. This thesis provides an effective use of control theory principles in the design of congestion control algorithms. We develop fixed-parameter feedback congestion control algorithms as well as their adaptive versions. All of the newly proposed algorithms are evaluated using ns-based simulations. The thesis concludes with a number of research proposals emanating from the work reported.
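    For reference, the sketch below shows the basic RED marking rule that the Hybrid and Auto-tuning variants described above build on: an exponentially weighted moving average of the queue size drives a linear marking probability between two thresholds. All constants are illustrative, and the Hybrid variant's additional use of the instantaneous queue size is only noted in a comment, not reproduced.

        # Minimal sketch of the basic RED marking rule (illustrative constants;
        # omits RED's packet-count correction and the Hybrid variant's extra
        # check of the instantaneous queue size).
        import random

        class RedQueue:
            def __init__(self, min_th=5.0, max_th=15.0, max_p=0.1, weight=0.002):
                self.min_th, self.max_th = min_th, max_th
                self.max_p, self.weight = max_p, weight
                self.avg = 0.0

            def on_arrival(self, queue_len: int) -> bool:
                """Return True if the arriving packet should be marked/dropped."""
                # Exponentially weighted moving average of the queue size.
                self.avg = (1 - self.weight) * self.avg + self.weight * queue_len
                if self.avg < self.min_th:
                    return False
                if self.avg >= self.max_th:
                    return True
                # Linear ramp of marking probability between the thresholds.
                p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
                return random.random() < p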

    Second Workshop on Practical Use of Coloured Petri Nets and Design/CPN.

    This report contains the proceedings of the Second Workshop on Practical Use of Coloured Petri Nets and Design/CPN, October 13-15, 1999. The workshop was organised by the CPN group at the Department of Computer Science at the University of Aarhus, Denmark. The individual papers are available in electronic form via the web pages: http://www.daimi.au.dk/CPnets/workshop99.

    Non-convex resource allocation in communication networks

    The continuously growing number of applications competing for resources in current communication networks highlights the necessity for efficient resource allocation mechanisms that maximize user satisfaction. Optimization Theory can provide the necessary tools to develop such mechanisms, allocating network resources optimally and fairly among users. However, the resource allocation problem in current networks has characteristics that turn the respective optimization problem into a non-convex one. First, current networks very often include wireless links, whose capacity is not constant but follows the Shannon capacity formula, which is a non-convex function. Second, the majority of the traffic in current networks is generated by multimedia applications, whose utilities are non-concave functions of rate. Third, current resource allocation methods follow the (bandwidth) proportional fairness policy, which, when applied to networks shared by both concave and non-concave utilities, leads to unfair resource allocations. These characteristics make current convex optimization frameworks inefficient in several respects. This work aims to develop a non-convex optimization framework able to allocate resources efficiently for non-convex resource allocation formulations. Towards this goal, a necessary and sufficient condition for the convergence of any primal-dual optimization algorithm to the optimal solution is proven. The wide applicability of this condition makes it a fundamental contribution to Optimization Theory in general. A number of optimization formulations are proposed, cases where this condition is not met are analysed, and efficient alternative heuristics are provided to handle these cases. Furthermore, a novel multi-sigmoidal utility shape is proposed to model user satisfaction for multi-tiered multimedia applications more accurately. The advantages of such non-convex utilities and their effect on the optimization process are thoroughly examined. Alternative allocation policies are also investigated with respect to their ability to allocate resources fairly and to deal with the non-convexity of the resource allocation problem. Specifically, the advantages of using Utility Proportional Fairness as an allocation policy are examined with respect to the development of distributed algorithms, their convergence to the optimal solution, and their ability to adapt to the Quality of Service requirements of each application.
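    To illustrate the kind of dynamics involved, here is a small numerical sketch of primal-dual rate allocation on a single shared link where each user has a sigmoidal (non-concave) utility of the sort used to model multimedia traffic. The utility parameters, step size, and link capacity are illustrative assumptions; with non-concave utilities such gradient dynamics need not converge in general, which is precisely the situation the convergence condition above characterises.

        # Numerical sketch of primal-dual rate allocation with a sigmoidal
        # utility on one link (illustrative parameters; chosen here so the
        # equilibrium lies in the concave region and the iteration settles).
        import math

        def sigmoid_utility_grad(x: float, midpoint=5.0, steep=1.5) -> float:
            """Derivative of U(x) = 1 / (1 + exp(-steep * (x - midpoint)))."""
            s = 1.0 / (1.0 + math.exp(-steep * (x - midpoint)))
            return steep * s * (1.0 - s)

        def primal_dual(num_users=3, capacity=18.0, steps=20000, gamma=0.01):
            rates = [4.0] * num_users
            price = 0.0                       # dual variable: the link price
            for _ in range(steps):
                # Primal ascent: each user climbs U'(x) minus the price.
                rates = [max(0.0, x + gamma * (sigmoid_utility_grad(x) - price))
                         for x in rates]
                # Dual ascent: the price rises while the link is over-subscribed.
                price = max(0.0, price + gamma * (sum(rates) - capacity))
            return rates, price

        rates, price = primal_dual()
        print([round(r, 3) for r in rates], round(price, 3))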