
    POEM: Pricing Longer for Edge Computing in the Device Cloud

    Multiple access mobile edge computing has been proposed as a promising technology to bring computation services close to end users by making good use of edge cloud servers. In mobile device clouds (MDC), idle end devices may act as edge servers and offer computation services to busy end devices. Most existing auction-based incentive mechanisms in MDC focus on a single auction round and do not consider the time correlation across rounds. Although a single-round auction can be run repeatedly, users must then bid higher to obtain more resources in the cascading rounds, so their budgets run out too early to participate in later auctions, leading to auction failures and reduced overall benefit. In this paper, we formulate the computation offloading problem as a social welfare optimization problem under given budgets of mobile devices, and consider pricing mobile devices over a longer horizon ("pricing longer"). This problem is a multiple-choice multi-dimensional 0-1 knapsack problem, which is NP-hard. We propose an auction framework named MAFL for long-term benefits that runs a single-round resource auction in each round. Extensive simulation results show that the proposed mechanism outperforms the single-round auction by about 55.6% in revenue on average, and that MAFL outperforms an existing double auction by about 68.6% in revenue.
    Comment: 8 pages, 1 figure. Accepted by the 18th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP)
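
    The multiple-choice knapsack formulation above can be made concrete with a tiny brute-force example: each busy device picks at most one bid option, subject to a shared resource capacity and its own budget. The sketch below only illustrates the problem structure, not the authors' MAFL mechanism; all device names and numbers are assumed.

        # Minimal sketch (not the authors' MAFL mechanism): the social-welfare
        # objective as a multiple-choice knapsack, solved by brute force on a
        # toy instance. All values below are illustrative assumptions.
        from itertools import product

        # Each device may pick one (value, resource_demand, payment) option, or None.
        options = {
            "dev_a": [None, (8, 3, 4), (12, 5, 7)],
            "dev_b": [None, (6, 2, 3), (9, 4, 6)],
            "dev_c": [None, (5, 2, 2)],
        }
        budgets = {"dev_a": 6, "dev_b": 6, "dev_c": 3}   # per-device budgets
        capacity = 8                                      # edge server resource capacity

        best_welfare, best_choice = 0, None
        for combo in product(*options.values()):
            demand = sum(o[1] for o in combo if o)
            if demand > capacity:
                continue                                  # exceeds server capacity
            if any(o and o[2] > budgets[d] for d, o in zip(options, combo)):
                continue                                  # some device cannot afford its bid
            welfare = sum(o[0] for o in combo if o)
            if welfare > best_welfare:
                best_welfare, best_choice = welfare, combo

        print(best_welfare, best_choice)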

    ATP: a Datacenter Approximate Transmission Protocol

    Many datacenter applications, such as machine learning and streaming systems, do not need the complete set of data to perform their computation. Current approximate applications in datacenters run on a reliable network layer such as TCP. To improve performance, they either let the sender select a subset of the data and transmit it to the receiver, or transmit all the data and let the receiver drop some of it. These approaches are network oblivious and transmit more data than necessary, affecting both application runtime and network bandwidth usage. On the other hand, running approximate applications over a lossy network with UDP cannot guarantee the accuracy of the application's computation. We propose to run approximate applications on a lossy network and to allow packet loss in a controlled manner. Specifically, we design a new network protocol, the Approximate Transmission Protocol (ATP), for datacenter approximate applications. ATP opportunistically exploits as much available network bandwidth as possible, while performing a loss-based rate control algorithm to avoid bandwidth waste and retransmission. It also ensures fair bandwidth sharing across flows and improves the performance of accurate applications by leaving more switch buffer space to accurate flows. We evaluated ATP with both simulation and a real implementation, using two macro-benchmarks and two real applications, Apache Kafka and Flink. Our evaluation results show that ATP reduces application runtime by 13.9% to 74.6% compared to a TCP-based solution that drops packets at the sender, and improves accuracy by up to 94.0% compared to UDP.
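
    To illustrate the loss-based rate control idea described above, the following sketch shows a sender that probes for more bandwidth while observed loss stays within an accuracy budget and backs off otherwise. It is a hedged simplification, not ATP's actual algorithm; the loss tolerance and gain constants are assumptions.

        # Illustrative sketch only (not ATP's algorithm): a loss-driven rate
        # controller that probes when loss is tolerable and backs off otherwise.
        def adjust_rate(rate_mbps, observed_loss, loss_tolerance=0.05,
                        additive_step=5.0, backoff=0.8):
            """Return the next sending rate given the loss fraction of the last interval."""
            if observed_loss <= loss_tolerance:
                # Loss within the application's accuracy budget: probe for more bandwidth.
                return rate_mbps + additive_step
            # Loss beyond what the approximate application tolerates: back off
            # multiplicatively to avoid wasting bandwidth on doomed packets.
            return rate_mbps * backoff

        rate = 100.0
        for loss in [0.01, 0.02, 0.12, 0.04, 0.20]:   # sampled loss fractions per interval
            rate = adjust_rate(rate, loss)
            print(f"loss={loss:.2f} -> next rate {rate:.1f} Mbps")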

    Enabling heterogeneous network function chaining

    Today's data center operators deploy network policies in both physical (e.g., middleboxes, switches) and virtualized (e.g., virtual machines on general-purpose servers) network function boxes (NFBs), which reside at different points of the network, to exploit their efficiency and agility, respectively. Nevertheless, such heterogeneity has resulted in a great number of independent network nodes that can dynamically generate and implement inconsistent and conflicting network policies, making correct policy implementation a difficult problem. Since these nodes have varying capabilities, services running atop them also face profound performance unpredictability. In this paper, we propose a Heterogeneous netwOrk Policy Enforcement (HOPE) scheme to overcome these challenges. HOPE guarantees that the network functions (NFs) implementing a policy chain are optimally placed onto heterogeneous NFBs such that the network cost of the policy is minimized. We first experimentally demonstrate that the processing capacity of NFBs is the dominant performance factor. This observation is then used to formulate the Heterogeneous Network Policy Placement problem, which is shown to be NP-hard. To solve the problem efficiently, an online algorithm is proposed. Our experimental results demonstrate that HOPE achieves the same optimality as branch-and-bound optimization while being three orders of magnitude more efficient.
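
    The placement problem stated above can be pictured on a toy instance: assign each NF of a chain to an NFB so that total cost is minimized without exceeding NFB capacities. The brute-force sketch below only shows the formulation; it is not HOPE's online algorithm, and all costs and capacities are made up.

        # Toy sketch of the NF placement formulation (not HOPE's online algorithm):
        # minimize total cost of assigning a policy chain to NFBs under capacity limits.
        from itertools import product

        chain = ["firewall", "nat", "lb"]            # NFs of one policy chain
        nfb_capacity = {"middlebox": 2, "vm": 3}     # max NFs each NFB can host
        # cost[nf][nfb]: assumed processing + traversal cost of running nf on nfb
        cost = {
            "firewall": {"middlebox": 1, "vm": 4},
            "nat":      {"middlebox": 2, "vm": 3},
            "lb":       {"middlebox": 5, "vm": 2},
        }

        best = None
        for assignment in product(nfb_capacity, repeat=len(chain)):
            load = {b: assignment.count(b) for b in nfb_capacity}
            if any(load[b] > nfb_capacity[b] for b in nfb_capacity):
                continue                             # an NFB would be overloaded
            total = sum(cost[nf][b] for nf, b in zip(chain, assignment))
            if best is None or total < best[0]:
                best = (total, dict(zip(chain, assignment)))

        print(best)   # minimum cost and the corresponding NF-to-NFB assignment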

    A one-pass clustering based sketch method for network monitoring

    Network monitoring solutions need to cope with increasing network traffic volumes; as a result, sketch-based monitoring methods have been extensively studied to trade accuracy for memory scalability and storage reduction. However, sketches are sensitive to skewness in network flow distributions due to hash collisions, and they need complicated performance optimization to adapt to line-rate packet streams. We present Jellyfish, an efficient sketch method that performs one-pass clustering over the network stream. One-pass clustering is realized by adapting the monitoring granularity from the whole network flow to fragments called subflows, which not only reduces the ingestion rate but also provides an efficient intermediate representation as input to the sketch. Jellyfish provides a network-flow-level query interface by reconstructing network-flow-level counters from merged subflow records of the same network flow. We provide a probabilistic analysis of the expected accuracy of both existing sketch methods and Jellyfish. Real-world trace-driven experiments show that Jellyfish reduces the average estimation errors by up to six orders of magnitude for per-flow queries, by six orders of magnitude for entropy queries, and by up to ten times for heavy-hitter queries.
    This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant 61972409; in part by the Hong Kong Research Grants Council (RGC) under Grant TRS T41-603/20-R, Grant GRF-16213621, and Grant ITF ACCESS; in part by the Spanish I+D+i project TRAINER-A, funded by MCIN/AEI/10.13039/501100011033, under Grant PID2020-118011GB-C21; and in part by the Catalan Institution for Research and Advanced Studies (ICREA Academia).
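
    A minimal way to picture the subflow idea is a plain count-min sketch that counts subflow fragments separately and merges them back per flow at query time. This is an assumed simplification for illustration, not Jellyfish's clustering-based design.

        # Minimal count-min sketch with subflow aggregation (an assumed
        # simplification, not Jellyfish's data structure).
        import hashlib

        class CountMinSketch:
            def __init__(self, depth=4, width=1024):
                self.depth, self.width = depth, width
                self.table = [[0] * width for _ in range(depth)]

            def _index(self, key, row):
                digest = hashlib.blake2b(f"{row}:{key}".encode()).hexdigest()
                return int(digest, 16) % self.width

            def add(self, key, count=1):
                for row in range(self.depth):
                    self.table[row][self._index(key, row)] += count

            def query(self, key):
                return min(self.table[row][self._index(key, row)]
                           for row in range(self.depth))

        sketch = CountMinSketch()
        # Packets of one flow arrive as two subflows (fragments in time); each
        # subflow is counted separately and merged back at query time.
        subflows = [("10.0.0.1->10.0.0.2#0", 3), ("10.0.0.1->10.0.0.2#1", 5)]
        for sub_key, pkts in subflows:
            sketch.add(sub_key, pkts)

        flow_estimate = sum(sketch.query(k) for k, _ in subflows)
        print(flow_estimate)   # ~8 packets for the whole flow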

    Disaster-Resilient Control Plane Design and Mapping in Software-Defined Networks

    Communication networks, such as core optical networks, heavily depend on their physical infrastructure, and hence they are vulnerable to man-made disasters, such as Electromagnetic Pulse (EMP) or Weapons of Mass Destruction (WMD) attacks, as well as to natural disasters. Large-scale disasters may cause huge data loss and connectivity disruption in these networks. As our dependence on network services increases, the need for novel survivability methods to mitigate the effects of disasters on communication networks becomes a major concern. Software-Defined Networking (SDN), by centralizing control logic and separating it from physical equipment, facilitates network programmability and opens up new ways to design disaster-resilient networks. On the other hand, to fully exploit the potential of SDN, along with data-plane survivability we also need a control plane that is resilient enough to survive network failures caused by disasters. Several distributed SDN controller architectures have been proposed to mitigate the risks of overload and failure, but they are optimized for limited faults and do not address large-scale disaster failures. For disaster resiliency of the control plane, we propose to design it as a virtual network, so that the problem can be solved using virtual network mapping techniques. We select an appropriate mapping of the controllers over the physical network such that the connectivity among the controllers (controller-to-controller) and between the switches and the controllers (switch-to-controller) is not compromised by physical infrastructure failures caused by disasters. We formally model this disaster-aware control-plane design and mapping problem, and demonstrate a significant reduction in the disruption of controller-to-controller and switch-to-controller communication channels using our approach.
    Comment: 6 pages
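
    The mapping objective can be sanity-checked with a small reachability test: given a physical topology, controller locations, and a set of nodes destroyed by a disaster, verify that surviving controllers remain mutually connected and every surviving switch still reaches some controller. The sketch below is an illustrative check under assumed inputs, not the paper's formal model or mapping algorithm.

        # Hedged sketch (not the paper's model): check controller-to-controller and
        # switch-to-controller connectivity after an assumed disaster failure set.
        from collections import deque

        edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("b", "d")]
        controllers = {"a", "c"}
        disaster_zone = {"b"}                      # nodes destroyed by the disaster

        alive = {n for e in edges for n in e} - disaster_zone
        adj = {n: set() for n in alive}
        for u, v in edges:
            if u in alive and v in alive:
                adj[u].add(v)
                adj[v].add(u)

        def reachable(src):
            """BFS over the surviving topology, returning all nodes reachable from src."""
            seen, queue = {src}, deque([src])
            while queue:
                for nxt in adj[queue.popleft()] - seen:
                    seen.add(nxt)
                    queue.append(nxt)
            return seen

        live_ctrl = controllers & alive
        ctrl_connected = bool(live_ctrl) and live_ctrl <= reachable(next(iter(live_ctrl)))
        switch_covered = all(live_ctrl & reachable(s) for s in alive - live_ctrl)
        print("controller-to-controller OK:", ctrl_connected)
        print("switch-to-controller OK:", switch_covered)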

    cISP: A Speed-of-Light Internet Service Provider

    Low latency is a requirement for a variety of interactive network applications. The Internet, however, is not optimized for latency. We therefore explore the design of cost-effective wide-area networks that move data over paths very close to great-circle paths, at speeds very close to the speed of light in vacuum. Our cISP design augments the Internet's fiber with free-space wireless connectivity. cISP addresses the fundamental challenge of simultaneously providing low latency and scalable bandwidth, while accounting for numerous practical factors ranging from transmission tower availability to packet queuing. We show that instantiations of cISP across the contiguous United States and Europe would achieve mean latencies within 5% of what is achievable using great-circle paths at the speed of light, over medium and long distances. Further, we estimate that the economic value from such networks would substantially exceed their expense.
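
    The speed-of-light baseline used above is easy to reproduce: compute the great-circle distance between two endpoints and divide by the propagation speed. The example below, using approximate and purely illustrative city coordinates, contrasts the c-latency with latency over fiber at roughly 2/3 c.

        # Worked example of the latency baseline: one-way delay along the great-circle
        # path at c, versus the same path in fiber (~2/3 c). Coordinates are approximate.
        from math import radians, sin, cos, asin, sqrt

        C_VACUUM_KM_S = 299_792.458
        C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3
        EARTH_RADIUS_KM = 6371.0

        def great_circle_km(lat1, lon1, lat2, lon2):
            """Haversine distance between two points on the Earth's surface."""
            p1, p2 = radians(lat1), radians(lat2)
            dlat, dlon = p2 - p1, radians(lon2 - lon1)
            a = sin(dlat / 2) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2) ** 2
            return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

        # New York (40.71, -74.01) to Los Angeles (34.05, -118.24), roughly 3,940 km.
        d = great_circle_km(40.71, -74.01, 34.05, -118.24)
        print(f"distance: {d:.0f} km")
        print(f"c-latency (one way):     {d / C_VACUUM_KM_S * 1000:.1f} ms")
        print(f"fiber latency (one way): {d / C_FIBER_KM_S * 1000:.1f} ms")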