    ATP: a Datacenter Approximate Transmission Protocol

    Many datacenter applications, such as machine learning and streaming systems, do not need the complete set of data to perform their computation. Current approximate applications in datacenters run on a reliable network layer such as TCP. To improve performance, they either let the sender select a subset of the data to transmit to the receiver, or transmit all the data and let the receiver drop some of it. These approaches are network-oblivious and transmit more data than necessary, affecting both application runtime and network bandwidth usage. On the other hand, running approximate applications on a lossy network with UDP cannot guarantee the accuracy of the application's computation. We propose to run approximate applications on a lossy network and to allow packet loss in a controlled manner. Specifically, we designed a new network protocol for datacenter approximate applications called the Approximate Transmission Protocol, or ATP. ATP opportunistically exploits as much available network bandwidth as possible, while performing a loss-based rate control algorithm to avoid bandwidth waste and retransmission. It also ensures fair bandwidth sharing across flows and improves accurate applications' performance by leaving more switch buffer space to accurate flows. We evaluated ATP with both simulation and a real implementation, using two macro-benchmarks and two real applications, Apache Kafka and Flink. Our evaluation results show that ATP reduces application runtime by 13.9% to 74.6% compared to a TCP-based solution that drops packets at the sender, and improves accuracy by up to 94.0% compared to UDP.
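
    As a concrete illustration of the loss-based rate control idea described above, the following minimal Python sketch shows a sender-side controller that tolerates loss up to an application-defined budget instead of retransmitting. All names and constants (the AIMD-style step sizes, the loss budget) are assumptions for illustration, not ATP's actual algorithm.

        # Hypothetical sketch of ATP-style loss-based rate control. The sender
        # opportunistically probes for spare bandwidth and backs off only when
        # the measured loss rate exceeds the application's accuracy budget, so
        # lost packets are tolerated rather than retransmitted.

        class LossBasedRateController:
            def __init__(self, init_rate_mbps=100.0, loss_budget=0.05):
                self.rate = init_rate_mbps      # current sending rate (Mbps)
                self.loss_budget = loss_budget  # loss the application can absorb

            def on_interval(self, sent, lost):
                """Adjust the rate once per control interval from packet counters."""
                loss = lost / sent if sent else 0.0
                if loss > self.loss_budget:
                    self.rate *= 0.5   # loss above budget wastes bandwidth: back off
                else:
                    self.rate += 10.0  # spare capacity available: probe upward
                return self.rate

    Backing off early in this way also serves the goal stated above of leaving switch buffer space to accurate flows.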

    A one-pass clustering based sketch method for network monitoring

    Network monitoring solutions need to cope with increasing network traffic volumes; as a result, sketch-based monitoring methods have been extensively studied to trade accuracy for memory scalability and storage reduction. However, sketches are sensitive to skew in network flow distributions due to hash collisions, and they need complicated performance optimization to adapt to line-rate packet streams. We present Jellyfish, an efficient sketch method that performs one-pass clustering over the network stream. One-pass clustering is realized by adapting the monitoring granularity from the whole network flow to fragments called subflows, which not only reduces the ingestion rate but also provides an efficient intermediate representation of the input to the sketch. Jellyfish provides a network-flow-level query interface by reconstructing network-flow-level counters, merging the subflow records of each network flow. We provide a probabilistic analysis of the expected accuracy of both existing sketch methods and Jellyfish. Real-world trace-driven experiments show that Jellyfish reduces average estimation errors by up to six orders of magnitude for per-flow queries, by six orders of magnitude for entropy queries, and by up to ten times for heavy-hitter queries.

    This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant 61972409; in part by the Hong Kong Research Grants Council (RGC) under Grant TRS T41-603/20-R, Grant GRF-16213621, and Grant ITF ACCESS; in part by the Spanish I+D+i project TRAINER-A, funded by MCIN/AEI/10.13039/501100011033, under Grant PID2020-118011GB-C21; and in part by the Catalan Institution for Research and Advanced Studies (ICREA Academia).
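
    To make the subflow idea concrete, here is a hedged Python sketch, not the paper's algorithm: the clustering step is simplified to fixed time windows, and all names are illustrative. It inserts per-subflow records into a Count-Min sketch and answers a per-flow query by merging the subflow counters.

        import hashlib

        class SubflowCountMin:
            """Count-Min sketch keyed by (flow, window) subflow records."""

            def __init__(self, depth=4, width=2048, windows=16):
                self.depth, self.width, self.windows = depth, width, windows
                self.table = [[0] * width for _ in range(depth)]

            def _index(self, row, key):
                h = hashlib.blake2b(f"{row}:{key}".encode(), digest_size=8)
                return int.from_bytes(h.digest(), "big") % self.width

            def add_subflow(self, flow_id, window, nbytes):
                # One aggregated record per (flow, time window) lowers the
                # ingestion rate compared to per-packet sketch updates.
                key = (flow_id, window % self.windows)
                for row in range(self.depth):
                    self.table[row][self._index(row, key)] += nbytes

            def estimate_flow(self, flow_id):
                # Reconstruct the network-flow counter by merging the
                # estimates of all subflows of that flow.
                return sum(min(self.table[r][self._index(r, (flow_id, w))]
                               for r in range(self.depth))
                           for w in range(self.windows))

    Merging per-subflow minima makes queries slightly more expensive but keeps each update cheap, mirroring the accuracy/ingestion trade-off described above.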

    cISP: A Speed-of-Light Internet Service Provider

    Low latency is a requirement for a variety of interactive network applications. The Internet, however, is not optimized for latency. We thus explore the design of cost-effective wide-area networks that move data over paths very close to great-circle paths, at speeds very close to the speed of light in vacuum. Our cISP design augments the Internet's fiber with free-space wireless connectivity. cISP addresses the fundamental challenge of simultaneously providing low latency and scalable bandwidth, while accounting for numerous practical factors ranging from transmission tower availability to packet queuing. We show that instantiations of cISP across the contiguous United States and Europe would achieve mean latencies within 5% of those achievable using great-circle paths at the speed of light, over medium and long distances. Further, we estimate that the economic value from such networks would substantially exceed their expense.
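
    The speed-of-light baseline the paper measures against is straightforward to compute. The Python sketch below (coordinates and constants are standard values, not taken from the paper) derives the one-way c-latency between two sites from the great-circle distance; fiber paths are roughly 1.5x slower because light in glass travels at about two-thirds of c.

        from math import radians, sin, cos, asin, sqrt

        C_VACUUM_KM_S = 299_792.458  # speed of light in vacuum (km/s)

        def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
            """Haversine distance between two points on Earth's surface."""
            p1, p2 = radians(lat1), radians(lat2)
            dphi, dlam = radians(lat2 - lat1), radians(lon2 - lon1)
            a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlam / 2) ** 2
            return 2 * radius_km * asin(sqrt(a))

        def c_latency_ms(lat1, lon1, lat2, lon2):
            """One-way latency along the great-circle path at vacuum light speed."""
            return great_circle_km(lat1, lon1, lat2, lon2) / C_VACUUM_KM_S * 1e3

        # New York (40.71, -74.01) to Los Angeles (34.05, -118.24):
        # ~3936 km, i.e. a one-way c-latency floor of ~13.1 ms.
        print(round(c_latency_ms(40.71, -74.01, 34.05, -118.24), 1))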

    Internames: a name-to-name principle for the future Internet

    We propose Internames, an architectural framework in which names are used to identify all entities involved in communication: contents, users, devices, logical as well as physical points involved in the communication, and services. By not having a static binding between the name of a communication entity and its current location, we allow entities to be mobile, enable them to be reached by any of a number of basic communication primitives, enable communication to span networks with different technologies, and allow for disconnected operation. Furthermore, with the ability to communicate between names, the communication path can be dynamically bound to any of a number of end-points, and the end-points themselves can change as needed. A key benefit of our architecture is its ability to accommodate gradual migration from the current IP infrastructure to a future that may be a ubiquitous Information Centric Network. The basic building blocks of Internames are: i) a name-based Application Programming Interface; ii) a separation of identifiers (names) and locators; iii) a powerful Name Resolution Service (NRS) that dynamically maps names to locators as a function of time/location/context/service; iv) a built-in capacity for evolution, allowing a transparent migration from current networks and the ability to include current specific architectures as particular cases. To achieve this vision, shared by many other researchers, we exploit and expand on Information Centric Networking principles, extending ICN functionality beyond content retrieval, easing send-to-name and push services, and allowing names to be used to route data on the return path as well. A key role in this architecture is played by the NRS, which allows for the co-existence of multiple network "realms", including current IP and non-IP networks, glued together by a name-to-name overarching communication primitive.
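
    As a small illustration of the NRS building block, the Python sketch below (the interface, realm tags and example names are assumptions, not from the paper) binds one name to locators in multiple network realms and resolves it late, as a function of the caller's context.

        class NameResolutionService:
            """Toy NRS: names map to (realm, locator) pairs, chosen by context."""

            def __init__(self):
                self._bindings = {}  # name -> list of (realm, locator, predicate)

            def register(self, name, realm, locator, predicate=lambda ctx: True):
                """Bind a name to a locator in some network realm (IP, ICN, ...)."""
                self._bindings.setdefault(name, []).append((realm, locator, predicate))

            def resolve(self, name, context=None):
                """Return the locators whose predicate matches the caller's
                context (e.g. time, location, requested service)."""
                ctx = context or {}
                return [(realm, loc) for realm, loc, pred
                        in self._bindings.get(name, []) if pred(ctx)]

        nrs = NameResolutionService()
        nrs.register("news/video/today", "ip", "203.0.113.7:8080")
        nrs.register("news/video/today", "icn", "ndn:/cache/eu-west",
                     predicate=lambda ctx: ctx.get("region") == "eu")
        print(nrs.resolve("news/video/today", {"region": "eu"}))

    Because the binding is evaluated at resolution time, an entity can move, or a better replica can appear, without the communicating parties changing the name they use.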

    ProXcache: A new cache deployment strategy in information-centric network for mitigating path and content redundancy

    One of the promising paradigms for resource sharing while maintaining the basic Internet semantics is Information-Centric Networking (ICN). ICN's distinction from the current Internet is its ability to refer to contents by name, partly dissociating from the host-to-host practice of Internet Protocol addresses. Moreover, content caching in ICN is the main mechanism for achieving content networking and reducing the amount of server access. The current caching practice in ICN, Leave Copy Everywhere (LCE), generates problems of over-deposition of contents known as content redundancy, as well as path redundancy, lower cache-hit rates in heterogeneous networks, and lower content diversity. This study proposes a new cache deployment strategy, referred to as ProXcache, which acquires node relationships using the hyperedge concept of hypergraphs for cache positioning. The study formulates these relationships through path and distance approximation to mitigate content and path redundancy, and adopts the Design Research Methodology approach to achieve the stated research objectives. ProXcache was investigated via simulation on the Abilene, GEANT and DTelekom network topologies against the LCE and ProbCache caching strategies, with a Zipf distribution to vary content categorization. The results show that overall content and path redundancy are minimized, with fewer caching operations: six depositions per request, compared to nine and nineteen for ProbCache and LCE respectively. ProXcache yields a better content diversity ratio of 80%, against 20% and 49% for LCE and ProbCache respectively, as cache sizes vary. ProXcache also improves the cache-hit ratio through its proxy positions. These results have significant implications for the development of ICN toward better content management in the Future Internet.
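
    The following Python sketch illustrates the flavor of path-overlap-driven cache positioning. The selection rule here, the most-shared on-path node closest to the requesters, is an illustrative approximation; ProXcache's actual hypergraph formulation is more elaborate.

        from collections import Counter

        def pick_proxy(paths):
            """paths: one node list per request, server first, requester last.
            Cache at the node shared by the most paths; break ties by the
            smallest average hop distance to the requesters."""
            membership = Counter(node for path in paths for node in set(path))
            best_share = max(membership.values())
            shared = [n for n, c in membership.items() if c == best_share]

            def avg_hops_to_requester(node):
                dists = [len(p) - 1 - p.index(node) for p in paths if node in p]
                return sum(dists) / len(dists)

            return min(shared, key=avg_hops_to_requester)

        # Three requests whose paths overlap at B: one deposition at B serves
        # all of them, instead of LCE's copy on every hop of every path.
        print(pick_proxy([["S", "B", "C", "u1"],
                          ["S", "B", "C", "u2"],
                          ["S", "B", "D", "u3"]]))  # -> "B"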

    PABO: Mitigating Congestion via Packet Bounce in Data Center Networks

    In today's data centers, a diverse mix of throughput-sensitive long flows and delay-sensitive short flows is commonly present in shallow-buffered switches. Long flows can block the transmission of delay-sensitive short flows, leading to degraded performance. Congestion can also be caused by the synchronization of multiple TCP connections for short flows, as typically seen in the partition/aggregate traffic pattern. While multiple end-to-end transport-layer solutions have been proposed, none of them has tackled the real challenge: reliable transmission within the network. In this paper, we fill this gap by presenting PABO, a novel link-layer design that mitigates congestion by temporarily bouncing packets to upstream switches. PABO's design fulfills the following goals: i) providing per-flow flow control on the link layer, ii) handling transient congestion without the intervention of end devices, and iii) gradually propagating the congestion signal back to the source when the network is not capable of handling the congestion. Experimental results show that PABO provides a prominent advantage in mitigating transient congestion and achieves significant gains in end-to-end delay.
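
    A single-hop illustration of the bounce mechanism is sketched below in Python (queue sizes, packet fields and the overall structure are assumptions; PABO itself back-propagates congestion gradually across multiple hops).

        from collections import deque

        class BounceSwitch:
            """Toy switch that bounces packets upstream instead of dropping."""

            def __init__(self, name, queue_limit=8):
                self.name = name
                self.queue = deque()
                self.queue_limit = queue_limit

            def receive(self, packet, upstream):
                """packet: dict with a 'bounces' counter; upstream: previous hop."""
                if len(self.queue) < self.queue_limit:
                    self.queue.append(packet)       # normal forwarding path
                elif upstream is not None:
                    packet["bounces"] += 1          # growing congestion signal
                    upstream.receive(packet, None)  # bounce to the previous hop
                else:
                    # Nowhere left to bounce: congestion has propagated back to
                    # the edge, and the source must slow down.
                    pass

    Keeping the bounced packet inside the network, rather than dropping it, is what lets transient congestion be absorbed without involving the end hosts.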

    Enforcing network policy in heterogeneous network function box environment

    Data center operators deploy a variety of both physical and virtual network function boxes (NFBs) to combine the inherent efficiency of physical NFBs with the agility and flexibility of virtual ones. However, such heterogeneity poses great challenges for correct, efficient and dynamic network policy implementation because, firstly, existing schemes are limited to exclusively physical or virtual NFBs rather than a mix, and secondly, NFBs can co-exist at various locations in the network as a result of emerging technologies such as Software Defined Networking (SDN) and Network Function Virtualization (NFV). In this paper, we propose a Heterogeneous netwOrk pOlicy enforCement scheme (HOOC) to overcome these challenges. We first formulate and model HOOC, which we show to be NP-hard by reduction from the Multiple Knapsack Problem (MKP). We then propose an efficient online algorithm that achieves latency-optimal NF service chaining among heterogeneous NFBs. In addition, we provide a greedy algorithm for when operators prefer shorter runtime over optimality. Our simulation results show that HOOC is efficient and scalable, while our testbed implementation demonstrates that HOOC can be easily deployed in data center environments.
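
    For the greedy variant mentioned above, a plausible shape is sketched below in Python (the cost model, capacities and box names are invented for illustration; the paper's formulation is richer): walk the service chain in order and place each network function on the feasible box, physical or virtual, that adds the least latency.

        def greedy_chain_placement(chain, nfbs, latency):
            """chain: ordered NF types; nfbs: {box: {'supports': set, 'cap': int}};
            latency: {(a, b): ms}, with 'src' as the traffic entry point.
            Returns [(nf, box), ...] or None if some NF cannot be placed."""
            placement, prev = [], "src"
            for nf in chain:
                candidates = [b for b, spec in nfbs.items()
                              if nf in spec["supports"] and spec["cap"] > 0]
                if not candidates:
                    return None  # infeasible under the remaining capacities
                best = min(candidates, key=lambda b: latency[(prev, b)])
                nfbs[best]["cap"] -= 1
                placement.append((nf, best))
                prev = best
            return placement

        nfbs = {"fw-hw": {"supports": {"firewall"}, "cap": 1},
                "vm-1":  {"supports": {"firewall", "nat", "ids"}, "cap": 2}}
        latency = {("src", "fw-hw"): 1, ("src", "vm-1"): 3,
                   ("fw-hw", "vm-1"): 2, ("vm-1", "fw-hw"): 2}
        print(greedy_chain_placement(["firewall", "nat"], nfbs, latency))
        # -> [('firewall', 'fw-hw'), ('nat', 'vm-1')]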