
    FLAIM: A Multi-level Anonymization Framework for Computer and Network Logs

    FLAIM (Framework for Log Anonymization and Information Management) addresses two important needs not well addressed by current log anonymizers. First, it is extremely modular and not tied to the specific log being anonymized. Second, it supports multi-level anonymization, allowing system administrators to make fine-grained trade-offs between information loss and privacy/security concerns. In this paper, we examine anonymization solutions to date and note the above limitations in each. We further describe how FLAIM addresses these problems, and we detail FLAIM's architecture and features.
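    As a rough illustration of the multi-level idea, the sketch below anonymizes a single log field (an IPv4 address) at increasing levels, trading information loss against privacy. The level definitions, helper name, and salt are hypothetical assumptions for illustration, not FLAIM's actual API or policy format.

        # Minimal sketch of multi-level field anonymization (hypothetical, not FLAIM's API).
        # Higher levels keep less of the original value.
        import hashlib
        import ipaddress

        def anonymize_ip(addr: str, level: int, salt: bytes = b"demo-salt") -> str:
            """Return an anonymized form of an IPv4 address at the requested level."""
            ip = ipaddress.IPv4Address(addr)
            if level == 0:                       # no anonymization
                return str(ip)
            if level == 1:                       # keep the /24 prefix, hide the host part
                return str(ipaddress.IPv4Address(int(ip) & 0xFFFFFF00)) + "/24"
            if level == 2:                       # keyed hash: consistent but hides the prefix
                digest = hashlib.sha256(salt + ip.packed).hexdigest()[:8]
                return f"ip-{digest}"
            return "0.0.0.0"                     # level 3+: full suppression

        # Example: the same log field anonymized at increasing levels.
        for lvl in range(4):
            print(lvl, anonymize_ip("192.0.2.57", lvl))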

    Congestion Control using FEC for Conversational Multimedia Communication

    In this paper, we propose a new rate control algorithm for conversational multimedia flows. In our approach, the sender transmits redundant packets alongside the Real-time Transport Protocol (RTP) media packets to probe for available bandwidth. These redundant packets are Forward Error Correction (FEC) encoded RTP packets. The intuition is straightforward: if no losses occur, the sender can increase its sending rate to include the FEC bit rate, and if losses occur due to congestion, the redundant packets help recover the lost packets. We also show that, by varying the FEC bit rate, the sender can probe for available bandwidth either conservatively or aggressively. We evaluate our FEC-based Rate Adaptation (FBRA) algorithm in a network simulator and in the real world, and compare it to other congestion control algorithms.
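    The core rate-update idea described above can be sketched as follows. This is an illustrative simplification, not the published FBRA algorithm; the constants, function signature, and probing fractions are assumptions.

        # Sketch of the FEC-as-probe rate update (illustrative, not the published FBRA algorithm).

        def update_rate(media_rate: float, fec_rate: float, loss_fraction: float,
                        aggressive: bool) -> tuple[float, float]:
            """Return the new (media_rate, fec_rate) in kbps after one probing interval."""
            if loss_fraction == 0.0:
                # No losses: the FEC bit rate was spare capacity, so fold it into the media rate.
                media_rate += fec_rate
            else:
                # Losses suggest congestion: back off; the redundant packets repaired some losses.
                media_rate *= (1.0 - loss_fraction)
            # The FEC rate doubles as the probe size: a larger FEC rate probes more aggressively.
            fec_rate = media_rate * (0.25 if aggressive else 0.10)
            return media_rate, fec_rate

        rate, fec = 500.0, 50.0
        for loss in [0.0, 0.0, 0.05, 0.0]:
            rate, fec = update_rate(rate, fec, loss, aggressive=False)
            print(f"media={rate:.0f} kbps, fec={fec:.0f} kbps")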

    Implementation and Deployment of a Distributed Network Topology Discovery Algorithm

    In the past few years, the network measurement community has been interested in the problem of Internet topology discovery using a large number (hundreds or thousands) of measurement monitors. The standard way to obtain information about the Internet topology is to use the traceroute tool from a small number of monitors. Recent papers have made the case that increasing the number of monitors will give a more accurate view of the topology. However, scaling up the number of monitors is not a trivial process. Duplication of effort close to the monitors wastes time by re-exploring well-known parts of the network, while duplication close to the destinations might appear to be a distributed denial-of-service (DDoS) attack as probes converge from a set of sources towards a given destination. In prior work, the authors of this report proposed Doubletree, an algorithm for cooperative topology discovery that reduces the load on the network (i.e., on router IP interfaces and end-hosts) while discovering almost as many nodes and links as standard traceroute-based approaches. This report presents our open-source and freely downloadable implementation of Doubletree in a tool we call traceroute@home. We describe the deployment and validation of traceroute@home on the PlanetLab testbed and report on the lessons learned from this experience. Finally, we discuss how traceroute@home can be developed further and outline ideas for future improvements.
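    As a rough sketch of how Doubletree's cooperative stopping rules cut redundant probing, the simplified code below models the forward and backward probing phases with a shared global stop set and a per-monitor local stop set. The probe() helper, hop limits, and data structures are assumptions for illustration, not the traceroute@home implementation.

        # Sketch of Doubletree-style probing with local and global stop sets
        # (illustrative simplification; probe() and the hop limits are assumptions).

        def doubletree_probe(dest, h, probe, local_stop, global_stop, max_ttl=30):
            """Probe towards `dest` starting at mid-path hop `h`, then backwards to the monitor.

            probe(dest, ttl) is assumed to return the router interface answering at that TTL.
            global_stop holds (interface, destination) pairs shared among monitors and limits
            forward probing (redundancy is highest near destinations); local_stop holds
            interfaces this monitor has already seen and limits backward probing
            (redundancy is highest near the monitor).
            """
            discovered = []
            # Forward phase: from hop h towards the destination.
            for ttl in range(h, max_ttl + 1):
                iface = probe(dest, ttl)
                if iface is None or (iface, dest) in global_stop:
                    break
                global_stop.add((iface, dest))
                discovered.append(iface)
                if iface == dest:
                    break
            # Backward phase: from hop h - 1 back towards the monitor.
            for ttl in range(h - 1, 0, -1):
                iface = probe(dest, ttl)
                if iface is None or iface in local_stop:
                    break
                local_stop.add(iface)
                discovered.append(iface)
            return discovered

        # Toy path for demonstration: hops 1..5 towards destination "D".
        fake_path = {1: "r1", 2: "r2", 3: "r3", 4: "r4", 5: "D"}
        probe = lambda dest, ttl: fake_path.get(ttl)
        print(doubletree_probe("D", 3, probe, local_stop=set(), global_stop=set()))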

    Traffic Management Applications for Stateful SDN Data Plane

    The successful OpenFlow approach to Software Defined Networking (SDN) allows network programmability through a central controller able to orchestrate a set of dumb switches. However, the simple match/action abstraction of OpenFlow switches requires that any evolution of the forwarding rules be fully managed by the controller. This can be particularly limiting for applications that are sensitive to the delay of the slow control path, such as traffic management applications. Some recent proposals push toward an extension of the OpenFlow abstraction that lets forwarding policies evolve directly in the data plane, based on state machines and local events. In this paper, we present two traffic management applications that exploit a stateful data plane, together with their prototype implementation based on OpenState, an OpenFlow evolution that we recently proposed.
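    A minimal sketch of the stateful match/action idea is shown below: per-flow state kept in the switch selects the action and the next state on local events, without returning to the controller. The table layout, event names, and states are illustrative assumptions, not the OpenState specification.

        # Sketch of a stateful match/action table kept in the data plane
        # (illustrative only; table layout and state names are assumptions).

        # XFSM-like table: (current_state, event) -> (action, next_state)
        TRANSITIONS = {
            ("DEFAULT", "pkt_in"):  ("forward_slow_path", "MONITOR"),
            ("MONITOR", "pkt_in"):  ("forward_port_1",    "MONITOR"),
            ("MONITOR", "timeout"): ("drop",              "DEFAULT"),
        }

        state_table = {}   # flow key -> current state, held entirely in the switch

        def handle(flow_key: str, event: str) -> str:
            """Look up the flow's state, apply the matching action, and update the state."""
            state = state_table.get(flow_key, "DEFAULT")
            action, next_state = TRANSITIONS.get((state, event), ("drop", state))
            state_table[flow_key] = next_state
            return action

        # A new flow is handled once via the slow path, then forwarded locally
        # without involving the controller again.
        print(handle("10.0.0.1->10.0.0.2", "pkt_in"))   # forward_slow_path
        print(handle("10.0.0.1->10.0.0.2", "pkt_in"))   # forward_port_1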

    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources necessary for today's cloud computing needs. A typical datacenter is made up of thousands of servers connected with a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, various topologies proposed for them, their traffic properties, general traffic control challenges in datacenters, and general traffic control objectives. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters rather than to survey all existing solutions, which is virtually impossible given the massive body of existing research. We hope to provide readers with a wide range of options and factors to consider when evaluating traffic control mechanisms. We discuss various aspects of datacenter traffic control, including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems.
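    As one deliberately simple example of the prioritization mentioned above, the sketch below serves the three traffic classes named in the abstract with strict priority. It is an illustration only, not any specific scheme from the survey; the class names and ordering are assumptions.

        # Strict-priority scheduling across illustrative datacenter traffic classes.
        import heapq

        PRIORITY = {"interactive": 0, "deadline": 1, "long-running": 2}  # lower = served first

        queue = []  # (priority, arrival order, flow id)

        def enqueue(flow_id: str, traffic_class: str, order: int) -> None:
            heapq.heappush(queue, (PRIORITY[traffic_class], order, flow_id))

        def dequeue() -> str:
            # Interactive traffic is always served before deadline and long-running flows.
            return heapq.heappop(queue)[2]

        for i, (fid, cls) in enumerate([("f1", "long-running"), ("f2", "interactive"),
                                        ("f3", "deadline")]):
            enqueue(fid, cls, i)

        print([dequeue() for _ in range(3)])   # ['f2', 'f3', 'f1']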

    T3P: Demystifying Low-Earth Orbit Satellite Broadband

    The Internet is going through a massive infrastructural revolution with the advent of low-flying satellite networks, 5/6G, WiFi7, and hollow-core fiber deployments. While these networks could unleash enhanced connectivity and new capabilities, it is critical to understand their performance characteristics in order to drive applications over them efficiently. Low-Earth orbit (LEO) satellite mega-constellations like SpaceX Starlink aim to offer broad coverage and low latencies at the expense of high orbital dynamics, which lead to continuous latency changes and frequent satellite hand-offs. This paper aims to quantify Starlink's latency, its variations, and its components using a real testbed spanning multiple latitudes from the North to the South of Europe. We identify tail latencies as a problem. We develop predictors for latency and throughput and show their utility in improving application performance by up to 25%. We also explore how transport protocols can be optimized for LEO networks and show that this can improve throughput by up to 115% (with only a 5% increase in latency). Finally, our measurement testbed, with a footprint across multiple locations, offers unique trigger-based scheduling capabilities that are necessary to quantify the impact of LEO dynamics.
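    As a rough illustration of how a latency predictor could feed application adaptation, the sketch below smooths RTT samples with an exponentially weighted moving average. This is a baseline assumed for illustration, not the predictor developed in the paper, and the RTT values are made up.

        # Simple EWMA latency predictor (illustrative baseline, not the paper's predictor).

        class EwmaLatencyPredictor:
            """Exponentially weighted moving average over recent RTT samples."""
            def __init__(self, alpha: float = 0.2):
                self.alpha = alpha
                self.estimate = None

            def update(self, rtt_ms: float) -> float:
                if self.estimate is None:
                    self.estimate = rtt_ms
                else:
                    self.estimate = self.alpha * rtt_ms + (1 - self.alpha) * self.estimate
                return self.estimate

        predictor = EwmaLatencyPredictor()
        for sample in [42.0, 55.0, 39.0, 120.0, 41.0]:   # hypothetical RTT samples (ms)
            print(f"sample={sample:.0f} ms -> predicted={predictor.update(sample):.1f} ms")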