
    Using Internet Geometry to Improve End-to-End Communication Performance

    The Internet has been designed as a best-effort communication medium between its users, providing connectivity but optimizing little else. It does not guarantee good paths between two users: packets may take longer or more congested routes than necessary, they may be delayed by slow reaction to failures, and there may even be no path between users. To obtain better paths, users can form routing overlay networks, which improve the performance of packet delivery by forwarding packets along links in self-constructed graphs. Routing overlays delegate the task of selecting paths to users, who can choose among a diversity of routes that are more reliable, less loaded, shorter, or have higher bandwidth than those chosen by the underlying infrastructure. Although they offer improved communication performance, existing routing overlay networks are neither scalable nor fair: the cost of measuring and computing path performance metrics between participants is high (which limits the number of participants), and they lack robustness to misbehavior and selfishness (which could discourage the participation of nodes that are more likely to offer than to receive service). In this dissertation, I focus on finding low-latency paths using routing overlay networks. I support the following thesis: it is possible to make end-to-end communication between Internet users simultaneously faster, scalable, and fair by relying solely on inherent properties of the Internet latency space. To prove this thesis, I take two complementary approaches. First, I perform an extensive measurement study in which I analyze, using real latency data sets, properties of the Internet latency space: the existence of triangle inequality violations (TIVs), which expose detour paths ("indirect" one-hop paths that have lower round-trip latency than the "direct" default paths); the interaction between TIVs and network coordinate systems, which leads to scalable detour discovery; and the presence of mutual advantage, which makes fairness possible. Then, using the results of the measurement study, I design and build PeerWise, the first routing overlay network that reduces end-to-end latency between its participants and is both scalable and fair. I evaluate PeerWise using simulation and through a wide-area deployment on the PlanetLab testbed.
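
    The detour paths described above follow directly from triangle inequality violations in measured latencies. As a rough illustration of the idea only (this is not code from the dissertation; the latency matrix, node names, and find_detours function are invented), the sketch below scans a round-trip-time matrix for node pairs whose best one-hop relay beats the direct path:

    def find_detours(latency):
        """latency[a][b] is the measured round-trip time between nodes a and b, in ms."""
        nodes = list(latency)
        detours = {}
        for a in nodes:
            for b in nodes:
                if a == b:
                    continue
                direct = latency[a][b]
                # Best one-hop relay, if any, that beats the direct path.
                best_rtt, relay = min(
                    ((latency[a][c] + latency[c][b], c) for c in nodes if c not in (a, b)),
                    default=(direct, None),
                )
                if best_rtt < direct:
                    detours[(a, b)] = {"relay": relay, "detour": best_rtt, "direct": direct}
        return detours

    rtt = {
        "A": {"A": 0, "B": 120, "C": 30},
        "B": {"A": 120, "B": 0, "C": 40},
        "C": {"A": 30, "B": 40, "C": 0},
    }
    # A -> C -> B takes 70 ms versus 120 ms direct: a TIV that exposes a detour.
    print(find_detours(rtt))

    PeerWise's contribution is discovering such relays scalably, via network coordinates, rather than by the exhaustive all-pairs scan shown here.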

    Modeling Tiered Pricing in the Internet Transit Market

    ISPs are increasingly selling "tiered" contracts, which offer Internet connectivity to wholesale customers in bundles, at rates based on the cost of the links that the traffic in the bundle traverses. Although providers have already begun to implement and deploy tiered pricing contracts, little is known about how such pricing affects ISPs and their customers. While contracts that sell connectivity at finer granularities improve market efficiency, they are also more costly for ISPs to implement and more difficult for customers to understand. In this work we present two contributions: (1) we develop a novel way of mapping traffic and topology data to a demand and cost model; and (2) we fit this model to three large real-world networks: a European transit ISP, a content distribution network, and an academic research network, and run counterfactuals to evaluate the effects of different pricing strategies on both ISP profit and consumer surplus. We highlight three core findings. First, ISPs gain most of the profit with only three or four pricing tiers and likely have little incentive to increase the granularity of pricing any further. Second, we show that consumer surplus follows closely, if not precisely, the increases in ISP profit as pricing tiers are added. Finally, the common ISP practice of structuring tiered contracts according to the cost of carrying the traffic flows (e.g., offering a discount for traffic that is local) can be suboptimal; dividing contracts into only three or four tiers based on both traffic demand and the cost of carrying it yields near-optimal profit for the ISP.
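
    To make the granularity trade-off concrete, the toy sketch below (which is not the paper's demand and cost model; all volumes, costs, and valuations are invented) contrasts a single blended price with cost-based tier prices for a handful of flows:

    flows = [
        # (volume in Mbps, ISP cost per Mbps, customer valuation per Mbps)
        (100, 1.0, 5.0),   # cheap local traffic
        (50,  4.0, 6.0),   # mid-distance traffic
        (20,  9.0, 10.0),  # expensive long-haul traffic
    ]

    def profit(flows, price_of):
        """Sum ISP profit over the flows customers still buy at the quoted price."""
        total = 0.0
        for volume, cost, valuation in flows:
            price = price_of(cost)
            if price <= valuation:            # a flow is purchased only if price <= valuation
                total += (price - cost) * volume
        return total

    blended = profit(flows, lambda cost: 6.0)          # one price for all traffic
    tiered = profit(flows, lambda cost: cost + 1.0)    # each cost tier priced at cost plus a margin
    print(f"blended: {blended:.0f}, tiered: {tiered:.0f}")

    In this toy setting the blended price loses money on expensive long-haul traffic and prices cheap local traffic out of the market, while even a coarse cost-based split avoids both problems; the intuition matches the finding that a few tiers capture most of the available profit.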

    The Case for Microcontracts for Internet Connectivity

    This paper introduces microcontracts, which are contracts for "slices" of Internet connectivity along dimensions such as time, destination, volume, and application type. Microcontracts are motivated by the observation that Internet service providers carry traffic for different classes of customers that use the ISP's resources in a variety of different ways and, hence, impose different costs on the ISP. For example, customers have little incentive to move less important traffic out of a peak time interval unless their contract reflects the ISP's costs in that interval. To address this inefficiency, microcontracts divide connectivity into fine-grained units so that prices more directly reflect the costs the ISP bears for delivering connectivity at that time. We explore the feasibility of applying microcontracts in realistic Internet service provider settings by characterizing the traffic patterns of a transit network along two specific dimensions: time of day and distance travelled. We argue that microcontracts are feasible and advantageous to both buyers and sellers of Internet connectivity. We develop a model to help ISPs derive customer demand functions from observed traffic patterns; using this model, we show that making contracts for Internet connectivity more fine-grained can improve the aggregate gain of an ISP and its customers.
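
    As a hypothetical sketch of what a microcontract might look like operationally (the slice dimensions, prices, and bill function below are invented for illustration, not taken from the paper), connectivity can be priced per (time-of-day, distance) slice and a customer's bill computed from observed traffic:

    from collections import defaultdict

    # Price per GB for each (time period, distance class) slice.
    prices = {
        ("peak", "local"): 0.50, ("peak", "long-haul"): 2.00,
        ("off-peak", "local"): 0.20, ("off-peak", "long-haul"): 0.80,
    }

    def bill(traffic, prices):
        """traffic: iterable of (period, distance, gigabytes) observations."""
        per_slice = defaultdict(float)
        for period, distance, gigabytes in traffic:
            per_slice[(period, distance)] += gigabytes * prices[(period, distance)]
        return dict(per_slice), sum(per_slice.values())

    observed = [("peak", "long-haul", 10), ("off-peak", "long-haul", 40), ("peak", "local", 100)]
    print(bill(observed, prices))

    Because each slice carries its own price, a customer sees a direct reward for shifting deferrable long-haul traffic off-peak, which is exactly the incentive the paper argues coarse, flat contracts fail to provide.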

    Utility optimization for event-driven distributed infrastructures

    Event-driven distributed infrastructures are becoming increasingly important for information dissemination and application integration. We examine the problem of optimal resource allocation for such an infrastructure composed of an overlay of nodes. Resources, like CPU and network bandwidth, are consumed by both message flows and message consumers; therefore, we consider both rate control for flows and admission control for consumers. This makes the optimization problem difficult because the objective function is nonconcave and the constraint set is nonconvex. We present LRGP (Lagrangian Rates, Greedy Populations), a scalable and efficient distributed algorithm to maximize the total system utility. The key insight of our solution involves partitioning the optimization problem into two types of subproblems: a greedy allocation for consumer admission control and a Lagrangian allocation to compute the flow rates, and linking the subproblems in a manner that allows tradeoffs between consumer admission and flow rates while satisfying the nonconvex constraints. LRGP allows an autonomic approach to system management where nodes collaboratively optimize aggregate system performance. We evaluate the quality of results and convergence characteristics under various workloads.
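
    The sketch below illustrates the general shape of such a two-level decomposition, not the LRGP algorithm itself: an inner dual (Lagrangian) loop prices a shared link and lets each flow pick its own rate, while a separate greedy pass admits consumers by utility per unit of resource demanded. The utilities, capacities, and step sizes are invented.

    CAPACITY = 10.0                                  # bandwidth of one shared link (arbitrary units)
    weights = {"f1": 4.0, "f2": 2.0, "f3": 1.0}      # per-flow utility weights w, with U(x) = w * log(x)

    def lagrangian_rates(weights, capacity, steps=500, step_size=0.05):
        """Dual ascent: at price p, each flow maximizes w*log(x) - p*x, so it picks x = w / p."""
        price = 1.0
        rates = {}
        for _ in range(steps):
            rates = {f: w / price for f, w in weights.items()}
            excess = sum(rates.values()) - capacity
            price = max(1e-6, price + step_size * excess)   # raise the price when over capacity
        return rates

    def greedy_admission(consumers, budget):
        """Admit consumers in decreasing order of utility per unit of resource demanded."""
        admitted = []
        for name, utility, demand in sorted(consumers, key=lambda c: c[1] / c[2], reverse=True):
            if demand <= budget:
                admitted.append(name)
                budget -= demand
        return admitted

    print(lagrangian_rates(weights, CAPACITY))
    print(greedy_admission([("c1", 8.0, 3.0), ("c2", 5.0, 4.0), ("c3", 2.0, 5.0)], budget=7.0))

    LRGP's actual contribution lies in linking the two subproblems across an overlay of nodes so that admission and rate decisions trade off against each other; the sketch only shows the two building blocks in isolation.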

    Measurement manipulation and space selection in network coordinates

    Internet coordinate systems have emerged as an efficient method to estimate the latency between pairs of nodes without any communication between them. However, most coordinate systems have been evaluated solely on data sets built by their authors from measurements gathered over large periods of time. Although they show good prediction results, it is unclear whether the accuracy is the result of the system design properties or is more connected to the characteristics of the data sets. In this paper, we revisit a simple question: how do the features of the embedding space and the inherent attributes of the data sets interact in producing good embeddings? We adapt the Vivaldi algorithm to use Hyperbolic space for embedding and evaluate both Euclidean and Hyperbolic Vivaldi on seven sets of real-world latencies. Our results show that node filtering and latency distributions can significantly influence the accuracy of the predictions. For example, although Euclidean Vivaldi performs well on data sets that were chosen, constructed, and filtered by the designers of the algorithm, its performance and robustness decrease considerably when run on third-party data sets that were not filtered a priori. Our results offer important insight into designing and building coordinate systems that are both robust and accurate in Internet-like environments.
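
    For reference, accuracy in such studies is commonly summarized by relative estimation error; the sketch below is illustrative only (the paper's exact metric may differ, and the sample values are invented) and computes the median relative error of an embedding's estimates against measured latencies, the quantity that data-set filtering ends up shifting:

    def median_relative_error(measured, estimated):
        """measured/estimated: dicts mapping node pairs to round-trip times in ms."""
        errors = sorted(
            abs(estimated[pair] - rtt) / rtt
            for pair, rtt in measured.items()
            if rtt > 0 and pair in estimated
        )
        return errors[len(errors) // 2] if errors else float("nan")

    measured = {("a", "b"): 80.0, ("a", "c"): 35.0, ("b", "c"): 50.0}
    estimated = {("a", "b"): 95.0, ("a", "c"): 30.0, ("b", "c"): 48.0}
    print(median_relative_error(measured, estimated))   # ~0.14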

    Playing Vivaldi in Hyperbolic Space

    Internet coordinate systems have emerged as an efficient method to estimate the latency between pairs of nodes without any communication between them. They avoid the cost of explicit measurements by placing each node in a finite coordinate space and estimating the latency between two nodes as the distance between their positions in the space. In this paper, we adapt the Vivaldi algorithm to use Hyperbolic space for embedding. Researchers have found promise in Hyperbolic space due to its mathematical elegance and its ability to model the structure of the Internet. We attempt to combine the elegance of Hyperbolic space with the practical, decentralized Vivaldi algorithm. We evaluate both Euclidean and Hyperbolic Vivaldi on three sets of real-world latencies. Contrary to what we expected, we find that the performance of the two versions of Vivaldi varies with each data set. Furthermore, we show that Hyperbolic coordinates tend to underestimate large latencies (> 100 ms) but behave better when estimating short distances. Finally, we propose two distributed heuristics that help nodes decide whether to choose Euclidean or Hyperbolic coordinates when estimating distances to their peers. This is the first comparison of Euclidean and Hyperbolic embeddings using the same distributed solver and a data set with more than 200 nodes.
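
    For concreteness, the sketch below shows the two distance functions being compared, assuming the hyperboloid model of hyperbolic space with unit curvature; the exact model, curvature, and scaling used in the paper may differ, and in practice coordinates are scaled so that distances map to milliseconds.

    import math

    def euclidean_distance(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    def hyperbolic_distance(u, v):
        """Distance in the hyperboloid model, with points given by their space coordinates."""
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(1.0 + sum(a * a for a in u))
        nv = math.sqrt(1.0 + sum(b * b for b in v))
        return math.acosh(max(1.0, nu * nv - dot))   # the max() guards against rounding error

    u, v = (0.5, 1.2), (2.0, -0.3)
    print(euclidean_distance(u, v), hyperbolic_distance(u, v))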