
    Controlling the cost of reliability in peer-to-peer overlays

    Structured peer-to-peer overlay networks provide a useful substrate for building distributed applications, but there are general concerns over the cost of maintaining these overlays. The current approach is to configure the overlays statically and conservatively to achieve the desired reliability even under uncommon adverse conditions. This results in high cost in the common case, or poor reliability in worse-than-expected conditions. We analyze the cost of overlay maintenance in realistic dynamic environments and design novel techniques to reduce this cost by adapting to the operating conditions. With our techniques, the concerns over overlay maintenance cost are no longer warranted. Simulations using real traces show that they enable high reliability and performance even in very adverse conditions, with low maintenance cost.
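The adaptation described here can be sketched as self-tuning the overlay's probing period against an estimated failure rate; the Poisson failure model, function names, and thresholds below are illustrative assumptions, not the paper's exact mechanism:

```python
# Sketch: self-tuning of an overlay's neighbor-probing period.
# Assumption (not from the paper): node failures are roughly Poisson, with a
# rate estimated from recent observations; we pick the longest probing period
# that keeps the probability of routing to an already-dead neighbor below a
# target, so maintenance cost falls when the environment is calm.

def estimate_failure_rate(observed_failures: int, node_hours: float) -> float:
    """Failures per node-hour, estimated from a recent observation window."""
    return observed_failures / max(node_hours, 1e-9)

def tune_probe_period(failure_rate: float, target_loss: float,
                      min_period: float = 1.0,
                      max_period: float = 3600.0) -> float:
    """Longest probe period (seconds) such that the chance a neighbor failed
    since the last probe stays below target_loss.
    For small x, P(failure within t) is approximately rate_per_sec * t."""
    rate_per_sec = failure_rate / 3600.0
    if rate_per_sec == 0:
        return max_period  # no observed churn: probe as rarely as allowed
    period = target_loss / rate_per_sec
    return min(max(period, min_period), max_period)
```

Under this toy model, an observed rate of 0.36 failures per node-hour and a 1% loss target yields a 100-second probing period, while zero observed churn backs off to the maximum period.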

    Controlling High Bandwidth Aggregates in the Network

    The current Internet infrastructure has very few built-in protection mechanisms, and is therefore vulnerable to attacks and failures. In particular, recent events have illustrated the Internet's vulnerability to both denial of service (DoS) attacks and flash crowds, in which one or more links in the network (or servers at the edge of the network) become severely congested. In both DoS attacks and flash crowds the congestion is due neither to a single flow, nor to a general increase in traffic, but to a well-defined subset of the traffic --- an aggregate. This paper proposes mechanisms for detecting and controlling such high bandwidth aggregates. Our design involves both a local mechanism for detecting and controlling an aggregate at a single router, and a cooperative pushback mechanism in which a router can ask upstream routers to control an aggregate. While certainly not a panacea, these mechanisms could provide some needed relief from flash crowds and flooding-style DoS attacks. The presentation in this paper is a first step towards a more rigorous evaluation of these mechanisms.
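The local detection-and-control step can be sketched as follows; the choice of destination /24 prefixes as the aggregate key, the share threshold, and the token-bucket limiter are illustrative simplifications of the paper's mechanisms:

```python
# Sketch of aggregate detection at a single router (parameters hypothetical).
# When drop rates are high, find a traffic aggregate -- here keyed by
# destination /24 prefix -- responsible for a disproportionate share of the
# dropped packets, then rate-limit that aggregate.

from collections import Counter

def detect_aggregates(dropped_dests, share_threshold=0.1):
    """Return destination /24 prefixes accounting for more than
    share_threshold of recent drops: candidates for rate limiting."""
    prefixes = Counter(".".join(d.split(".")[:3]) + ".0/24"
                       for d in dropped_dests)
    total = sum(prefixes.values())
    return [p for p, n in prefixes.items() if n / total > share_threshold]

class TokenBucket:
    """Simple rate limiter applied to a detected aggregate."""
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, packet_size, now):
        # Refill tokens for elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True
        return False  # drop: aggregate exceeds its allotted rate
```

A pushback step would additionally send the detected prefix and rate limit to upstream routers, so drops happen before the congested link.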

    A network-state management service

    We present Statesman, a network-state management service that allows multiple network management applications to operate independently, while maintaining network-wide safety and performance invariants. Network state captures various aspects of the network, such as which links are alive and how switches are forwarding traffic. Statesman uses three views of the network state. In observed state, it maintains an up-to-date view of the actual network state. Applications read this state and propose state changes based on their individual goals. Using a model of dependencies among state variables, Statesman merges these proposed states into a target state that is guaranteed to maintain the safety and performance invariants. It then updates the network to the target state. Statesman has been deployed in ten Microsoft Azure datacenters for several months, and three distinct applications have been built on it. We use the experience from this deployment to demonstrate how Statesman enables each application to meet its goals, while maintaining network-wide invariants.
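The observed/proposed/target flow can be sketched in a few lines; the state representation and the single invariant below are hypothetical stand-ins for Statesman's richer dependency model:

```python
# Sketch of Statesman-style state merging (names and invariant hypothetical;
# the real system models dependencies among many kinds of state variables).
# Observed state maps link -> "up"/"down"; each application proposes changes;
# a proposal is folded into the target state only if the resulting state
# still satisfies the safety invariant (here: at least half the links up).

def merge(observed: dict, proposals: list,
          min_up_fraction: float = 0.5) -> dict:
    target = dict(observed)
    for proposal in proposals:  # each proposal: {link: desired_state}
        candidate = {**target, **proposal}
        up = sum(1 for s in candidate.values() if s == "up")
        if up / len(candidate) >= min_up_fraction:
            target = candidate  # accept: invariant still holds
        # else: reject this proposal; keep the previous target state
    return target
```

With four links up and three applications each proposing to take one link down, the first two proposals are accepted and the third is rejected, since it would leave only a quarter of the links up.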

    Negotiation-based Routing

    This paper argues for an interdomain routing architecture based on dynamic negotiation between the source, intermediate, and destination ISPs.

    Motivation. Interdomain route selection is a complex process driven by constraints arising from topology, policy (e.g., commercial relationships), traffic engineering (e.g., load balancing), and performance. BGP was designed to find a single policy-conformant route through the network for each (source, destination) pair. There are two fundamental problems with BGP's route selection process. First, edge ISPs have too few options for selecting routes. Their choices are limited to the paths selected by their provider(s). As a result, there are many valid paths in the topology that cannot be used. Customers suffer when their providers select poor (from the customer's perspective) routes, for instance when the performance metrics of the provider and customer differ. This shortcoming is evident in trends such as increased multi-homing and the use of "intelligent routing" solutions such as those provided by RouteScience. The second problem, at the other extreme, is that senders unilaterally select routes from those available to them. This ignores the traffic engineering needs of the destination and intermediate ISPs. We use the term traffic engineering to refer loosely to the process of controlling the paths through the network (driven by a high-level goal such as efficient use of the network). As a result of this route selection methodology, the destination ISP has no control over which of its upstreams gets used. Similarly, intermediate ISPs have no control over whether the upstream ISP will use the exported route, and if so, for how much traffic. Over time, hooks such as MEDs, communities, and extended communities have been added to the protocol to address this shortcoming. However, these hooks are both insufficient (e.g., ISPs have limited control over incoming paths) and can have an unpredictable impact (e.g., MEDs can lead to persistent oscillations).

    The solution to the first problem is to give more routing choices to the edge ISPs. The solution to the second is to get the intermediate and destination ISPs involved in the route selection process, such that the selected route suits all the ISPs. This route negotiation should be explicit and transparent, and its outcome predictable. Otherwise, the result may be a situation much like the current world, in which ISPs try to manipulate the outcome to their benefit by second-guessing the actions of others. It is important to address both problems together. Solving only the first exacerbates the second: it would be even harder for downstream ISPs to manage and provision their networks, and may also affect the overall stability of the Internet. Similarly, solving only the second exacerbates the first by further limiting the choices at the source ISP (downstream ISPs can reject certain routes).

    Architecture. We argue that a better routing architecture can be designed by proposing the following strawman.
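The negotiation the paper argues for can be sketched as a simple accept/veto loop; the function names, path representation, and first-acceptable-path policy are illustrative assumptions, not the paper's strawman:

```python
# Hypothetical sketch of a negotiation loop for interdomain route selection:
# the source proposes candidate paths, and every ISP on a path may veto it
# (e.g., for traffic-engineering reasons). The first path acceptable to all
# ISPs on it is selected. All names and the veto policy are illustrative.

def negotiate(candidate_paths, veto):
    """candidate_paths: lists of ISP names, source first, destination last.
    veto(isp, path): True if that ISP rejects carrying traffic on path."""
    for path in candidate_paths:
        if not any(veto(isp, path) for isp in path):
            return path  # every ISP on the path accepts it
    return None  # no mutually acceptable path; fall back to default routing
```

This captures both halves of the argument: the source gains choices beyond a single provider-selected route, while intermediate and destination ISPs gain explicit control through the veto.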

    How to build a research system in your spare time


    Inferring Link Weights using End-to-End Measurements

    We describe a novel constraint-based approach to approximate ISP link weights using only end-to-end measurements. Common routing protocols such as OSPF and IS-IS choose least-cost paths using link weights, so inferred weights provide a simple, concise, and useful model of intradomain routing. Our approach extends router-level ISP maps, which include only connectivity, with link weights that are consistent with routing. Our inferred weights agree well with observed routing: while our inferred weights fully characterize the set of shortest paths between 84-99% of the router pairs, alternative models based on hop count and latency do so for only 47-81% of the pairs.
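The core constraint behind this approach can be sketched as a shortest-path consistency check; the graph encoding and example weights below are illustrative, not the paper's inference algorithm:

```python
# Sketch of the constraint underlying weight inference (simplified): an
# observed routing path is consistent with candidate link weights only if
# no alternative path between the same endpoints is cheaper under those
# weights. The graph representation and weights here are illustrative.

import heapq

def shortest_cost(weights, src, dst):
    """Dijkstra over weights: dict (u, v) -> w, treated as directed edges."""
    adj = {}
    for (u, v), w in weights.items():
        adj.setdefault(u, []).append((v, w))
    dist, pq = {src: 0}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return float("inf")

def consistent(weights, observed_path):
    """True if the observed path is a least-cost path under the weights."""
    cost = sum(weights[(u, v)]
               for u, v in zip(observed_path, observed_path[1:]))
    return cost == shortest_cost(weights, observed_path[0], observed_path[-1])
```

Inference then amounts to searching for a weight assignment under which every measured path satisfies this constraint simultaneously.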