
    The State of Network Neutrality Regulation

    The Network Neutrality (NN) debate refers to the battle over the design of a regulatory framework for preserving the Internet as a public network and open innovation platform. Fueled by concerns that broadband access service providers might abuse network management to discriminate against third-party providers (e.g., content or application providers), policymakers have struggled to design rules that would protect the Internet from unreasonable network management practices. In this article, we provide an overview of the history of the debate in the U.S. and the EU and highlight the challenges that will confront network engineers designing and operating networks as the debate continues to evolve.
    Funding: BMBF, 16DII111, joint project: Weizenbaum-Institut für die vernetzte Gesellschaft - Das Deutsche Internet-Institut, subproject: Wissenschaftszentrum Berlin für Sozialforschung (WZB); EC/H2020/679158/EU/Resolving the Tussle in the Internet: Mapping, Architecture, and Policy Making/ResolutioNe

    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to scalable compute and storage resources necessary for today's cloud computing needs. A typical datacenter is made up of thousands of servers connected with a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements. This includes user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, various topologies proposed for them, their traffic properties, general traffic control challenges in datacenters, and general traffic control objectives. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider when evaluating a variety of traffic control mechanisms. We discuss various characteristics of datacenter traffic control including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks that connect geographically dispersed datacenters, which have been receiving increasing attention recently and pose interesting and novel research problems.
    Comment: Accepted for Publication in IEEE Communications Surveys and Tutorial
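    As a rough, hypothetical illustration of one prioritization idea such a tutorial covers, the Python sketch below serves datacenter traffic classes in strict priority order: interactive traffic first, deadline-bound traffic next, long-running bulk traffic last. The class names and packets are assumptions made for the example, not a mechanism taken from the paper.

        import heapq
        from itertools import count

        # Lower number = higher priority: interactive first, deadline-bound next, bulk last.
        PRIORITY = {"interactive": 0, "deadline": 1, "bulk": 2}
        _arrival = count()  # FIFO tie-break within a class

        def enqueue(queue, traffic_class, packet):
            heapq.heappush(queue, (PRIORITY[traffic_class], next(_arrival), traffic_class, packet))

        def dequeue(queue):
            _, _, traffic_class, packet = heapq.heappop(queue)
            return traffic_class, packet

        queue = []
        enqueue(queue, "bulk", "backup chunk")
        enqueue(queue, "interactive", "RPC response")
        enqueue(queue, "deadline", "query shard")

        while queue:
            print(dequeue(queue))  # interactive, then deadline, then bulk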

    Towards a Rigorous Methodology for Measuring Adoption of RPKI Route Validation and Filtering

    A proposal to improve routing security, Route Origin Authorization (ROA), has been standardized. A ROA specifies which network is allowed to announce a set of Internet destinations. While some networks now specify ROAs, little is known about whether other networks check routes they receive against these ROAs, a process known as Route Origin Validation (ROV). Which networks blindly accept invalid routes? Which reject them outright? Which de-preference them if alternatives exist? Recent analysis attempts to use uncontrolled experiments to characterize ROV adoption by comparing valid routes and invalid routes. However, we argue that gaining a solid understanding of ROV adoption is impossible using currently available data sets and techniques. Our measurements suggest that, although some ISPs are not observed using invalid routes in uncontrolled experiments, they are actually using different routes for (non-security) traffic engineering purposes, without performing ROV. We conclude with a description of a controlled, verifiable methodology for measuring ROV and present three ASes that do implement ROV, confirmed by operators.
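    Below is a minimal Python sketch of the ROV decision the abstract describes (RFC 6811 semantics): an announced route is valid if some covering ROA authorizes its origin AS up to the announced prefix length, invalid if it is covered but not authorized, and not-found if no ROA covers it. The ROA table, prefixes, and AS numbers are illustrative assumptions, not data from the paper.

        import ipaddress

        # Hypothetical local ROA table: (authorized prefix, max length, authorized origin AS).
        ROAS = [
            ("192.0.2.0/24", 24, 64496),
            ("198.51.100.0/22", 24, 64497),
        ]

        def validate(prefix: str, origin_asn: int) -> str:
            """Classify an announcement as 'valid', 'invalid', or 'not-found'."""
            announced = ipaddress.ip_network(prefix)
            covered = False
            for roa_prefix, max_len, roa_asn in ROAS:
                roa_net = ipaddress.ip_network(roa_prefix)
                if announced.version == roa_net.version and announced.subnet_of(roa_net):
                    covered = True  # at least one ROA covers this prefix
                    if origin_asn == roa_asn and announced.prefixlen <= max_len:
                        return "valid"
            return "invalid" if covered else "not-found"

        print(validate("192.0.2.0/25", 64511))    # invalid: covered, but wrong origin and too specific
        print(validate("203.0.113.0/24", 64496))  # not-found: no covering ROA

    A router actually performing ROV would drop or de-preference routes classified as invalid, which is the behavior the proposed controlled methodology aims to detect.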

    An Internet Heartbeat

    Obtaining sound inferences over remote networks via active or passive measurements is difficult. Active measurement campaigns face challenges of load, coverage, and visibility. Passive measurements require a privileged vantage point. Even networks under our own control too often remain poorly understood and hard to diagnose. As a step toward the democratization of Internet measurement, we consider the inferential power possible were the network to include a constant and predictable stream of dedicated lightweight measurement traffic. We posit an Internet "heartbeat," which nodes periodically send to random destinations, and show how aggregating heartbeats facilitates introspection into parts of the network that are today generally opaque. We explore the design space of an Internet heartbeat, potential use cases, incentives, and paths to deployment.
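    As a rough illustration of the mechanism the abstract posits, the Python sketch below emits such a heartbeat: a small UDP datagram sent periodically to a randomly chosen destination. The port, interval, payload format, and destination list are assumptions made for the example, not parameters from the paper.

        import random
        import socket
        import time

        HEARTBEAT_PORT = 33433   # assumed dedicated port for heartbeat traffic
        INTERVAL_SECONDS = 60    # assumed probing period
        DESTINATIONS = [         # stand-in for randomly chosen Internet destinations
            "192.0.2.10", "198.51.100.20", "203.0.113.30",
        ]

        def send_heartbeats():
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            seq = 0
            while True:
                dst = random.choice(DESTINATIONS)
                payload = f"HB seq={seq} t={time.time():.3f}".encode()
                sock.sendto(payload, (dst, HEARTBEAT_PORT))  # fire-and-forget probe
                seq += 1
                time.sleep(INTERVAL_SECONDS)

        if __name__ == "__main__":
            send_heartbeats()

    Aggregating such probes at shared vantage points is what would enable the introspection the abstract describes.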