    A Packet Dropping Mechanism for Efficient Operation of M/M/1 Queues with Selfish Users

    We consider a fundamental game theoretic problem concerning selfish users contributing packets to an M/M/1 queue. In this game, each user controls its own input rate so as to optimize a desired tradeoff between throughput and delay. We first show that the original game has an inefficient Nash Equilibrium (NE), with a Price of Anarchy (PoA) that scales linearly or worse in the number of users. In order to improve the outcome efficiency, we propose an easily implementable mechanism whereby the server randomly drops packets with a probability that is a function of the total arrival rate. We show that this results in a modified M/M/1 queueing game that is an ordinal potential game with at least one NE. In particular, for a linear packet dropping function, which is similar to the Random Early Detection (RED) algorithm used in Internet congestion control, we prove that there is a unique NE. We also show that the simple best response dynamic converges to this unique equilibrium. Finally, for this scheme, we prove that the social welfare (expressed either as the summation of the utilities of all players, or as the summation of the logarithm of the utilities of all players) at the equilibrium point can be arbitrarily close to the social welfare at the global optimum, i.e., the PoA can be made arbitrarily close to 1. We also study the impact of arrival rate estimation error on the PoA through simulations. Comment: This work is an extended version of the conference paper: Y. Gai, H. Liu and B. Krishnamachari, "A packet dropping-based incentive mechanism for M/M/1 queues with selfish users," the 30th IEEE International Conference on Computer Communications (IEEE INFOCOM 2011), China, April 2011.
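
    The abstract does not spell out the users' utility function, so the following is only a rough sketch of the kind of best-response dynamic it describes: each of N users picks an arrival rate into a shared M/M/1 queue with service rate MU, the server applies a RED-like linear dropping probability to the total arrival rate, and each user's payoff is an assumed goodput-minus-delay tradeoff weighted by BETA. The utility form and every constant are illustrative assumptions, not the paper's model.

```python
# A rough sketch of best-response dynamics in an M/M/1 queueing game with a
# RED-like linear packet-dropping mechanism. The utility form (goodput minus a
# delay penalty) and all constants are assumptions for illustration only.
import numpy as np

N, MU, BETA = 4, 10.0, 1.0                 # users, service rate, delay weight (assumed)
LO, HI, P_MAX = 0.5 * MU, 0.9 * MU, 1.0    # linear dropping thresholds and cap (assumed)

def drop_prob(total_rate):
    """Dropping probability rises linearly with the total arrival rate."""
    return np.clip((total_rate - LO) / (HI - LO), 0.0, 1.0) * P_MAX

def utility(my_rate, others_rate):
    """Assumed throughput/delay tradeoff for one user."""
    total = my_rate + others_rate
    admitted = total * (1.0 - drop_prob(total))      # rate actually entering the queue
    if admitted >= MU:                               # unstable queue: worst possible payoff
        return -np.inf
    delay = 1.0 / (MU - admitted)                    # M/M/1 mean sojourn time
    goodput = my_rate * (1.0 - drop_prob(total))
    return goodput - BETA * delay

def best_response(others_rate, grid=np.linspace(0.0, MU, 2001)):
    """Numerically maximize this user's utility over its own input rate."""
    return grid[int(np.argmax([utility(r, others_rate) for r in grid]))]

rates = np.full(N, 0.1)                              # arbitrary starting rates
for _ in range(100):                                 # round-robin best responses
    for i in range(N):
        rates[i] = best_response(rates.sum() - rates[i])
print("rates after best-response dynamics:", rates)
```

    Under these assumed parameters the round-robin updates settle quickly at a symmetric rate profile; the paper's analysis establishes convergence and a PoA arbitrarily close to 1 for its actual model.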

    Energy-efficient wireless communication

    In this chapter we present an energy-efficient, highly adaptive network interface architecture and a novel data link layer protocol for wireless networks that provides Quality of Service (QoS) support for diverse traffic types. Due to the dynamic nature of wireless networks, adaptations in bandwidth scheduling and error control are necessary to achieve energy efficiency and an acceptable quality of service. In our approach we apply adaptability through all layers of the protocol stack and provide feedback to the applications. In this way the applications can adapt their data streams, and the network protocols can adapt the communication parameters.
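
    As a concrete illustration of this cross-layer adaptation, the sketch below lets the link layer pick an error-control setting from the observed packet error rate and the remaining energy budget, and exposes a small feedback record the application could use to adapt its data stream. The thresholds, settings and feedback fields are assumptions for illustration, not the protocol described in the chapter.

```python
# A minimal sketch, under assumed parameters, of adaptive error control with
# feedback to the application. Thresholds, settings and the feedback fields are
# illustrative assumptions.

def choose_error_control(packet_error_rate, energy_budget_fraction):
    """Pick (FEC strength, max transmissions per packet) for the current channel."""
    if packet_error_rate < 0.01:
        setting = ("no FEC", 2)        # clean channel: save energy, rely on retransmission
    elif packet_error_rate < 0.10:
        setting = ("light FEC", 3)
    else:
        setting = ("strong FEC", 4)    # bad channel: spend energy on redundancy
    if energy_budget_fraction < 0.2:   # battery nearly empty: trade quality for lifetime
        setting = (setting[0], max(1, setting[1] - 1))
    return setting

def link_feedback(packet_error_rate, setting):
    """Feedback the application can use to adapt its stream (e.g. lower its bitrate).
    Residual loss ignores FEC gain and simply assumes independent transmission attempts."""
    fec, max_tx = setting
    return {"fec": fec, "expected_residual_loss": packet_error_rate ** max_tx}

setting = choose_error_control(packet_error_rate=0.05, energy_budget_fraction=0.5)
print(setting, link_feedback(0.05, setting))
```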

    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources necessary for today's cloud computing needs. A typical datacenter is made up of thousands of servers connected with a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements. This includes user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, various topologies proposed for them, their traffic properties, general traffic control challenges in datacenters, and general traffic control objectives. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider when evaluating a variety of traffic control mechanisms. We discuss various characteristics of datacenter traffic control including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems. Comment: Accepted for publication in IEEE Communications Surveys and Tutorials.
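
    To make one of the surveyed knobs concrete, here is a small, illustrative strict-priority scheduler over the three traffic classes the paper mentions (interactive, deadline-bound, long-running). The class names and ordering are assumptions for illustration; the paper surveys many prioritization schemes rather than prescribing this one.

```python
# An illustrative strict-priority dequeue over assumed datacenter traffic classes.
from collections import deque

CLASSES = {"interactive": 0, "deadline": 1, "long-running": 2}   # lower = higher priority
queues = {c: deque() for c in CLASSES}

def enqueue(packet, traffic_class):
    queues[traffic_class].append(packet)

def dequeue():
    """Serve the highest-priority non-empty class; bulk traffic gets the leftovers."""
    for traffic_class in sorted(CLASSES, key=CLASSES.get):
        if queues[traffic_class]:
            return queues[traffic_class].popleft()
    return None

enqueue("backup-chunk", "long-running")
enqueue("search-query", "interactive")
print(dequeue())   # the interactive request is served before the bulk transfer
```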

    Load shedding in network monitoring applications

    Monitoring and mining real-time network data streams are crucial operations for managing and operating data networks. The information that network operators desire to extract from the network traffic differs in size, granularity and accuracy depending on the measurement task (e.g., the relevant data for capacity planning and for intrusion detection are very different). To satisfy these different demands, a new class of monitoring systems is emerging to handle multiple and arbitrary monitoring applications. Such systems must inevitably cope with the effects of continuous overload situations due to the large volumes, high data rates and bursty nature of network traffic. These overload situations can severely compromise the accuracy and effectiveness of monitoring systems, precisely when their results are most valuable to network operators. In this thesis, we propose a technique called load shedding as an effective and low-cost alternative to over-provisioning in network monitoring systems. It allows these systems to efficiently handle overload situations in the presence of multiple, arbitrary and competing monitoring applications. We present the design and evaluation of a predictive load shedding scheme that can shed excess load in the face of extreme traffic conditions and maintain the accuracy of the monitoring applications within bounds defined by end users, while assuring a fair allocation of computing resources to non-cooperative applications. The main novelty of our scheme is that it treats monitoring applications as black boxes, with arbitrary (and highly variable) input traffic and processing cost. Without any explicit knowledge of the application internals, the proposed scheme extracts a set of features from the traffic streams to build an on-line prediction model of the resource requirements of each monitoring application, which is used to anticipate overload situations and control the overall resource usage by sampling the input packet streams. This way, the monitoring system preserves a high degree of flexibility, increasing the range of applications and network scenarios where it can be used. Since not all monitoring applications are robust against sampling, we then extend our load shedding scheme to support custom load shedding methods defined by end users, in order to provide a generic solution for arbitrary monitoring applications. Our scheme allows the monitoring system to safely delegate the task of shedding excess load to the applications and still guarantee fairness of service with non-cooperative users. We implemented our load shedding scheme in an existing network monitoring system and deployed it in a research ISP network. We present experimental evidence of the performance and robustness of our system with several concurrent monitoring applications during long-lived executions and using real-world traffic traces.
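
    A minimal sketch of the predictive, black-box idea described above, with assumed details (the feature set, a plain least-squares model over a sliding window, and a single global sampling rate); the thesis's actual feature extraction, prediction method and per-application fairness machinery are more involved.

```python
# A minimal sketch of black-box predictive load shedding: fit an on-line linear
# model from simple traffic features to each application's measured CPU cost and,
# when the predicted aggregate cost exceeds the budget, sample the packet stream.
# Features, model and budget handling are illustrative assumptions.
import numpy as np

class CostPredictor:
    """Per-application model: cost ~ features . weights, refit by least squares
    over a sliding window of past (features, measured cost) observations."""
    def __init__(self, n_features, window=64):
        self.n_features, self.window = n_features, window
        self.X, self.y = [], []
        self.w = np.zeros(n_features)

    def update(self, features, measured_cost):
        self.X.append(features)
        self.y.append(measured_cost)
        self.X, self.y = self.X[-self.window:], self.y[-self.window:]
        if len(self.X) >= self.n_features:
            self.w, *_ = np.linalg.lstsq(np.array(self.X), np.array(self.y), rcond=None)

    def predict(self, features):
        return float(np.dot(self.w, features))

def sampling_rate(predictors, feature_vectors, cpu_budget):
    """Pick one global packet-sampling rate so the predicted cost fits the budget."""
    predicted = sum(p.predict(f) for p, f in zip(predictors, feature_vectors))
    if predicted <= cpu_budget or predicted <= 0.0:
        return 1.0                       # no overload anticipated: keep every packet
    return cpu_budget / predicted        # shed just enough load to stay within budget
```

    Each measurement interval the system would extract the features (e.g. packet, byte and flow counts), call update() with the cost actually observed, and apply the returned sampling rate to the next interval's input.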

    Systems-compatible Incentives

    Originally, the Internet was a technological playground, a collaborative endeavor among researchers who shared the common goal of achieving communication. Self-interest was once not a concern, but the motivations of the Internet's participants have since broadened. Today, the Internet consists of millions of commercial entities and nearly 2 billion users, who often have conflicting goals. For example, while Facebook gives users the illusion of access control, users do not have the ability to control how the personal data they upload is shared or sold by Facebook. Even in BitTorrent, where all users seemingly have the same motivation of downloading a file as quickly as possible, users can subvert the protocol to download more quickly without contributing their fair share. These examples demonstrate that protocols that are merely technologically proficient are not enough. Successful networked systems must account for potentially competing interests. In this dissertation, I demonstrate how to build systems that give users incentives to follow the systems' protocols. To achieve incentive-compatible systems, I apply mechanisms from game theory and auction theory to protocol design. This approach has been considered in prior literature, but unfortunately has resulted in few real, deployed systems with incentives to cooperate. I identify the primary challenge in applying mechanism design and game theory to large-scale systems: the goals and assumptions of economic mechanisms often do not match those of networked systems. For example, while auction theory may assume a centralized clearing house, there is no analog in a decentralized system seeking to avoid single points of failure or centralized policies. Similarly, game theory often assumes that each player is able to observe everyone else's actions, or at the very least know how many other players there are, but maintaining perfect system-wide information is impossible in most systems. In other words, not all incentive mechanisms are systems-compatible. The main contribution of this dissertation is the design, implementation, and evaluation of various systems-compatible incentive mechanisms and their application to a wide range of deployable systems. These systems include BitTorrent, which is used to distribute a large file to a large number of downloaders; PeerWise, which leverages user cooperation to achieve lower latencies in Internet routing; and Hoodnets, a new system I present that allows users to share their cellular data access to obtain greater bandwidth on their mobile devices. Each of these systems represents a different point in the design space of systems-compatible incentives. Taken together, along with their implementations and evaluations, these systems demonstrate that systems-compatibility is crucial in achieving practical incentives in real systems. I present design principles outlining how to achieve systems-compatible incentives, which may serve an even broader range of systems than considered herein. I conclude this dissertation with what I consider to be the most important open problems in aligning the competing interests of the Internet's participants.

    Heterogeneous Congestion Control: Efficiency, Fairness and Design

    When heterogeneous congestion control protocols that react to different pricing signals (e.g., packet loss, queueing delay, ECN marking, etc.) share the same network, the current theory based on utility maximization fails to predict the network behavior. Unlike in a homogeneous network, the bandwidth allocation now depends on router parameters and flow arrival patterns; it can be non-unique, inefficient and unfair. This paper has two objectives. First, we demonstrate the intricate behaviors of a heterogeneous network through simulations and present a rigorous framework to help understand its equilibrium efficiency and fairness properties. By identifying an optimization problem associated with every equilibrium, we show that every equilibrium is Pareto efficient and provide an upper bound on the efficiency loss due to pricing heterogeneity. On fairness, we show that intra-protocol fairness is still decided by a utility maximization problem, while inter-protocol fairness is the part over which we do not have control. However, it is shown that we can achieve any desirable inter-protocol fairness by properly choosing protocol parameters. Second, we propose a simple slow-timescale source-based algorithm to decouple bandwidth allocation from router parameters and flow arrival patterns, and prove its feasibility. The scheme needs only local information.
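
    The single-bottleneck example below illustrates the core phenomenon (it is not the paper's algorithm): a delay-priced flow and a loss-priced flow with assumed log utilities share one link, and the equilibrium bandwidth split changes when only the AQM (router) parameters change. All weights and the RED-style loss curve are assumptions.

```python
# An illustrative single-link equilibrium with two heterogeneous protocols: one
# prices congestion by queueing delay, the other by a RED-style loss probability.
# Utilities U(x) = w*log(x) and every constant below are assumptions.
import numpy as np

C = 100.0                       # link capacity (assumed)
W_DELAY, W_LOSS = 10.0, 1.0     # log-utility weights of the two flows (assumed)

def loss_prob(q, q_lo, q_hi, p_max):
    """RED-like loss/marking probability as a function of queue length q."""
    return np.clip((q - q_lo) / (q_hi - q_lo), 0.0, 1.0) * p_max

def equilibrium(q_lo=20.0, q_hi=80.0, p_max=0.1):
    """Bisect on the queue length q at which the two sources exactly fill the link.
    Delay-priced source:  U'(x) = W_DELAY / x = q / C    ->  x = W_DELAY * C / q
    Loss-priced source:   U'(x) = W_LOSS / x  = loss(q)  ->  x = W_LOSS / loss(q)"""
    lo, hi = q_lo + 1e-9, 1e6
    for _ in range(200):
        q = 0.5 * (lo + hi)
        x_delay = W_DELAY * C / q
        x_loss = W_LOSS / loss_prob(q, q_lo, q_hi, p_max)
        if x_delay + x_loss > C:
            lo = q                  # link over-subscribed: the price must rise
        else:
            hi = q
    return round(x_delay, 1), round(x_loss, 1)

# Same two flows, same link capacity -- only the AQM parameter p_max changes,
# yet the bandwidth split between the two protocols shifts noticeably.
print(equilibrium(p_max=0.10))
print(equilibrium(p_max=0.02))
```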

    Dual-Mode Congestion Control Mechanism for Video Services

    Recent studies have shown that video services represent over half of Internet traffic, with a growing trend. Video traffic therefore plays a major role in network congestion. Currently on the Internet, congestion control is mainly implemented through overprovisioning and TCP congestion control. Although some video services successfully build their transport on TCP, TCP is not an ideal protocol for all video applications. For example, UDP is often considered more suitable for real-time video applications. Unfortunately, UDP does not implement congestion control, so these UDP-based video services operate without any congestion control support unless it is implemented at the application layer. There are also arguments against massive overprovisioning. Due to these factors, there is still a need to equip video services with proper congestion control. Most of the congestion control mechanisms developed for video services offer either only low-priority or only TCP-friendly real-time service. There is currently no single congestion control mechanism that is suitable for, and can be widely used by, all kinds of video services. This thesis proposes a new dual-mode congestion control mechanism that can offer congestion control for both service types. The mechanism includes two modes, a backward-loading mode and a real-time mode. The backward-loading mode works like a low-priority service: the bandwidth is given away to other connections once the load level of the network is high enough. In contrast, the real-time mode always demands its fair share of the bandwidth. The behavior of the new mechanism and its friendliness toward itself and toward the TCP protocol have been investigated by means of simulations and real network tests. It was found that this kind of congestion control approach could be suitable for video services. The new mechanism worked acceptably. In particular, the mechanism behaved toward itself in a very friendly way in most cases, and the average TCP fairness was at a good level. In the worst cases, the faster connections received about 1.6 times as much bandwidth as the slower connections.
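
    A toy sketch of the two service types (not the thesis's actual algorithm): a backward-loading sender that halves its rate as soon as delay or loss signals competing traffic, and a real-time sender that steers toward an externally estimated fair share. The delay target, gains and fair-share estimate are illustrative assumptions.

```python
# A toy contrast between the two modes; all constants are illustrative assumptions.

LOW_PRIORITY_DELAY_TARGET_MS = 25.0     # back off before a standing queue builds up

def backward_loading_rate(rate_kbps, queuing_delay_ms, loss):
    """Backward-loading mode: give the bandwidth away once the network is loaded."""
    if loss or queuing_delay_ms > LOW_PRIORITY_DELAY_TARGET_MS:
        return rate_kbps * 0.5                    # yield quickly to other connections
    return rate_kbps + 10.0                       # gentle probing when the path is idle

def real_time_rate(rate_kbps, fair_share_kbps):
    """Real-time mode: always move toward the estimated fair share (e.g. TFRC-style)."""
    return rate_kbps + 0.1 * (fair_share_kbps - rate_kbps)

rate = 500.0
rate = backward_loading_rate(rate, queuing_delay_ms=40.0, loss=False)   # drops to 250.0
rate = real_time_rate(rate, fair_share_kbps=1000.0)                     # climbs toward 1000
```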

    Intervention in Power Control Games With Selfish Users

    We study the power control problem in wireless ad hoc networks with selfish users. Without incentive schemes, selfish users tend to transmit at their maximum power levels, causing significant interference to each other. In this paper, we study a class of incentive schemes based on intervention to induce selfish users to transmit at desired power levels. An intervention scheme can be implemented by introducing an intervention device that can monitor the power levels of users and then transmit to cause interference to them. We mainly consider first-order intervention rules based on individual transmit powers. We derive conditions on the design parameters and the intervention capability to achieve a desired outcome as a (unique) Nash equilibrium, and we propose a dynamic adjustment process that the designer can use to guide users and the intervention device to the desired outcome. The effect of using intervention rules based on aggregate receive power is also analyzed. Our results show that, with perfect monitoring, intervention schemes can be designed to achieve any positive power profile while using interference from the intervention device only as a threat. We also analyze the case of imperfect monitoring and show that a performance loss can occur. Lastly, simulation results are presented to illustrate the performance improvement from using intervention rules and to compare the performances of different intervention rules. Comment: 33 pages, 6 figures.
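
    The sketch below shows the intervention idea with assumed specifics (channel gains, targets, a linear "first-order" rule with a power cap, and log-rate utilities); it is not the paper's exact model. At the resulting operating point the intervention device transmits nothing, so interference is used only as a threat, consistent with the result stated above.

```python
# A minimal sketch of a first-order intervention rule under assumed specifics:
# the device monitors each user's transmit power and, whenever anyone exceeds its
# target, transmits at a power that grows linearly in the deviations, capped by
# its capability. Gains, targets, coefficients and utilities are assumptions.
import numpy as np

N = 3
TARGETS = np.full(N, 0.3)               # desired power profile (assumed)
COEFF = np.full(N, 50.0)                # first-order intervention coefficients (assumed)
P0_MAX = 5.0                            # intervention device's power capability (assumed)
GAIN = np.array([[1.0, 0.2, 0.2],       # gain from transmitter j to receiver i (assumed)
                 [0.2, 1.0, 0.2],
                 [0.2, 0.2, 1.0]])
G0 = np.full(N, 0.5)                    # gain from the intervention device (assumed)
NOISE = 0.1

def intervention_power(p):
    """First-order rule: react linearly to deviations above the target profile."""
    return min(P0_MAX, float(COEFF @ np.maximum(p - TARGETS, 0.0)))

def utility(i, p):
    """User i's rate with the intervention device's interference included."""
    interference = GAIN[i] @ p - GAIN[i, i] * p[i] + G0[i] * intervention_power(p) + NOISE
    return np.log2(1.0 + GAIN[i, i] * p[i] / interference)

def best_response(i, p, grid=np.linspace(0.0, 1.0, 501)):  # max user power 1.0 (assumed)
    trials = []
    for candidate in grid:
        q = p.copy(); q[i] = candidate
        trials.append(utility(i, q))
    return grid[int(np.argmax(trials))]

# Start from a low power profile; guiding users down from the all-maximum point is
# what the paper's dynamic adjustment process is for, and is not modeled here.
p = np.zeros(N)
for _ in range(30):
    for i in range(N):
        p[i] = best_response(i, p)
print("powers under the intervention rule:", p)   # settles at the target profile
```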

    FAST Copper for Broadband Access

    FAST Copper is a multi-year, U.S. NSF-funded project that started in 2004 and is jointly pursued by the research groups of Mung Chiang at Princeton University, John Cioffi at Stanford University, and Alexander Fraser at Fraser Research Lab, in collaboration with several industrial partners including AT&T. The goal of the FAST Copper Project is to provide ubiquitous, 100 Mbps, fiber/DSL broadband access to everyone in the US with a phone line. This goal will be achieved through two threads of research: dynamic and joint optimization of resources in Frequency, Amplitude, Space, and Time (thus the name 'FAST') to overcome the attenuation and crosstalk bottlenecks, and the integration of communication, networking, computation, modeling, and distributed information management and control for the multi-user twisted-pair network.