
    DINC: toward distributed in-network computing

    In-network computing provides significant performance benefits, load reduction, and power savings. Still, an in-network service's functionality is strictly limited to a single hardware device. Research has focused on enabling on-device functionality, with limited consideration of distributed in-network computing. This paper explores the applicability of distributed computing to in-network computing. We present DINC, a framework enabling distributed in-network computing that generates deployment strategies, overcomes resource constraints, and provides functionality guarantees across a network. It uses multi-objective optimization to produce a deployment strategy and slices P4 programs accordingly. DINC was evaluated using seven different workloads on both data center and wide-area network topologies, demonstrating feasibility and scalability and providing efficient distribution plans within seconds.
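
    The deployment-planning problem DINC targets can be pictured with a much simpler stand-in than its multi-objective optimizer. The sketch below is not DINC's actual algorithm; the segment names, stage counts, and switch budgets are invented for illustration. It greedily assigns ordered program segments to switches along a forwarding path while respecting each device's pipeline-stage budget.

```python
# Illustrative sketch only (not DINC's optimizer): greedily place ordered
# program segments onto switches along a forwarding path, respecting each
# device's pipeline-stage budget. All names and numbers are invented.

from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    stages: int          # pipeline stages the segment needs

@dataclass
class Switch:
    name: str
    free_stages: int     # stages still available on this device

def place_segments(segments, path):
    """Assign segments in program order to switches in path order."""
    plan, i = {}, 0
    for seg in segments:
        # advance along the path until a switch can host this segment
        while i < len(path) and path[i].free_stages < seg.stages:
            i += 1
        if i == len(path):
            raise RuntimeError(f"no switch can host {seg.name}")
        path[i].free_stages -= seg.stages
        plan[seg.name] = path[i].name
    return plan

if __name__ == "__main__":
    program = [Segment("parse", 2), Segment("classify", 4), Segment("cache", 6)]
    path = [Switch("tor", 4), Switch("agg", 6), Switch("core", 8)]
    print(place_segments(program, path))
    # -> {'parse': 'tor', 'classify': 'agg', 'cache': 'core'}
```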

    Safe Concurrency Introduction through Slicing

    Traditional refactoring is about modifying the structure of existing code without changing its behaviour, with the aim of making code easier to understand, modify, or reuse. In this paper, we introduce three novel refactorings for retrofitting concurrency to Erlang applications, and demonstrate how the use of program slicing makes the automation of these refactorings possible.
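
    The underlying idea, independent of Erlang, is that program slicing can prove two computations disjoint and therefore safe to run concurrently. The Python sketch below is only an illustration of that idea, not one of the paper's refactorings; the statement table and variable names are hypothetical.

```python
# Toy illustration: if the backward slices of two results share no statements,
# they touch disjoint state and can be evaluated concurrently.

from concurrent.futures import ThreadPoolExecutor

# program modelled as: target variable -> variables its defining statement reads
STMTS = {
    "a": set(),          # a = read_input()
    "b": {"a"},          # b = f(a)
    "c": set(),          # c = read_config()
    "d": {"c"},          # d = g(c)
}

def backward_slice(var, stmts):
    """The target plus every variable its defining statement transitively reads."""
    seen, work = set(), [var]
    while work:
        v = work.pop()
        if v not in seen:
            seen.add(v)
            work.extend(stmts[v])
    return seen

def independent(x, y, stmts):
    return backward_slice(x, stmts).isdisjoint(backward_slice(y, stmts))

if __name__ == "__main__":
    assert independent("b", "d", STMTS)    # slices {a, b} and {c, d} are disjoint
    # safe to compute the two results in parallel
    with ThreadPoolExecutor() as pool:
        fb = pool.submit(lambda: sum(range(10)))   # stands in for computing b
        fd = pool.submit(lambda: max(range(10)))   # stands in for computing d
        print(fb.result(), fd.result())
```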

    Enhancing the performance of Decoupled Software Pipeline through Backward Slicing

    The rapidly increasing number of cores available in multicore processors does not necessarily lead directly to a commensurate increase in performance: programs written in conventional languages, such as C, need careful restructuring, preferably automatically, before the benefits can be observed in improved run-times. Even then, much depends upon the intrinsic capacity of the original program for concurrent execution. The subject of this paper is the performance gains from the combined effect of the complementary techniques of the Decoupled Software Pipeline (DSWP) and (backward) slicing. DSWP extracts thread-level parallelism from the body of a loop by breaking it into stages which are then executed pipeline-style, in effect cutting across the control chain. Slicing, on the other hand, cuts the program along the control chain, teasing out finer threads that depend on different variables (or locations). The main contribution of this paper is to demonstrate that the application of DSWP followed by slicing offers notable improvements over DSWP alone, especially when there is a loop-carried dependence that prevents the application of the simpler DOALL optimization. Experimental results show an improvement of a factor of approximately 1.6 for DSWP + slicing over DSWP alone and a factor of approximately 2.4 for DSWP + slicing over the original sequential code.
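
    A minimal sketch of the DSWP half of this combination is shown below, assuming a toy loop whose body is split into two stages connected by a queue: the loop-carried accumulation stays in the first stage, while the sliced-off independent work runs in the second. It illustrates the pipelining idea only and is not the paper's compiler transformation.

```python
# Minimal DSWP-style sketch: the loop body is split into two stages connected
# by a queue and run pipeline-style. The work functions are stand-ins.

import threading
from queue import Queue

DONE = object()

def stage1(items, out_q):
    """Producer stage: the part of the loop body with the loop-carried dependence."""
    acc = 0
    for x in items:
        acc += x                 # loop-carried accumulation stays in one stage
        out_q.put(acc)
    out_q.put(DONE)

def stage2(in_q, results):
    """Consumer stage: independent work on values produced by stage 1."""
    while (v := in_q.get()) is not DONE:
        results.append(v * v)    # stand-in for the sliced-off computation

if __name__ == "__main__":
    q, results = Queue(), []
    t1 = threading.Thread(target=stage1, args=(range(1, 6), q))
    t2 = threading.Thread(target=stage2, args=(q, results))
    t1.start(); t2.start(); t1.join(); t2.join()
    print(results)   # [1, 9, 36, 100, 225]
```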

    Slicing based code parallelization for minimizing inter-processor communication

    One of the critical problems in distributed memory multi-core architectures is scalable parallelization that minimizes inter-processor communication. Using the concept of iteration space slicing, this paper presents a new code parallelization scheme for data-intensive applications. The scheme targets distributed memory multi-core architectures and formulates the problem of data-computation distribution (partitioning) across parallel processors using slicing: starting with the partitioning of the output arrays, it iteratively determines the partitions of the other arrays as well as the iteration spaces of the loop nests in the application code. The goal is to minimize inter-processor data communication. Based on this iteration-space-slicing formulation of the problem, we also propose a solution scheme. The proposed data-computation distribution scheme is evaluated using six data-intensive benchmark programs and compared against three alternate data-computation distribution schemes. The results are very encouraging, indicating around 10% better speedup with 16 processors over the next-best scheme, averaged over all benchmark codes we tested.
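
    The "start from the output arrays and work backwards" idea can be pictured with a toy sketch, which is not the paper's compiler algorithm: block-partition the output array across processors, then derive for each processor the loop iterations it must run and the input elements it needs. The stencil out[i] = in[i-1] + in[i+1] used here is a hypothetical stand-in for a real loop nest.

```python
# Toy sketch: block-partition the output array across processors, then derive
# each processor's iteration range and required input elements for the
# hypothetical stencil  out[i] = in[i-1] + in[i+1].

def partition_output(n, nprocs):
    """Contiguous block partition of output indices 1..n-2 (stencil interior)."""
    idxs = list(range(1, n - 1))
    size = -(-len(idxs) // nprocs)          # ceiling division
    return [idxs[p * size:(p + 1) * size] for p in range(nprocs)]

def required_inputs(out_block):
    """Input elements a processor must own or receive for its output block."""
    need = set()
    for i in out_block:
        need.update({i - 1, i + 1})         # footprint of out[i] = in[i-1] + in[i+1]
    return sorted(need)

if __name__ == "__main__":
    n, nprocs = 10, 2
    for p, block in enumerate(partition_output(n, nprocs)):
        print(f"proc {p}: iterations {block}, needs in{required_inputs(block)}")
    # proc 0: iterations [1, 2, 3, 4], needs in[0, 1, 2, 3, 4, 5]
    # proc 1: iterations [5, 6, 7, 8], needs in[4, 5, 6, 7, 8, 9]
```

    Elements shared between adjacent blocks (in[4] and in[5] above) are exactly the data that would have to be communicated between processors, which is what the partitioning scheme tries to minimize.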

    Enabling multicast slices in edge networks

    Telecommunication networks are undergoing a disruptive transition towards distributed mobile edge networks with virtualized network functions (VNFs) (e.g., firewalls, Intrusion Detection Systems (IDSs), and transcoders) placed within the proximity of users. This transition will enable network services, especially IoT applications, to be provisioned as network slices with sequences of VNFs, in order to guarantee the performance and security of their continuous data and control flows. In this paper we study the problems of delay-aware network slicing for multicasting traffic of IoT applications in edge networks. We first propose exact solutions by formulating the problems as Integer Linear Programs (ILPs). We further devise an approximation algorithm with a provable approximation ratio for the problem of delay-aware network slicing for a single multicast slice, with the objective of minimizing the implementation cost of the network slice subject to its delay requirement. Given multiple multicast slicing requests, we also propose an efficient heuristic that admits as many user requests as possible by exploring the non-trivial interplay between total computing resource demand and delay requirements. We then investigate the problem of delay-oriented network slicing with given levels of delay guarantees, considering that different types of IoT applications have different levels of delay requirements, and propose an efficient heuristic based on Reinforcement Learning (RL). We finally evaluate the performance of the proposed algorithms through both simulations and implementations in a real test-bed. Experimental results demonstrate that the proposed algorithms are promising.
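
    A drastically simplified single-slice placement can be sketched as an ILP, assuming the PuLP package is installed; it only chooses a hosting node per VNF subject to node capacities and a total-processing-delay budget while minimizing placement cost. All demands, capacities, delays, and costs below are invented, and the paper's formulations, approximation algorithm, and RL heuristic go well beyond this sketch.

```python
# Drastically simplified ILP sketch (not the paper's formulation): pick a
# hosting node for each VNF of one slice so that node capacities and a
# total-processing-delay budget hold, minimizing cost. Assumes PuLP is
# installed; every number below is invented.

from pulp import PULP_CBC_CMD, LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

vnfs = {"firewall": 2, "ids": 6, "transcoder": 8}               # CPU demand
nodes = {"edge1": (4, 1.0, 1), "edge2": (8, 2.0, 2), "core": (16, 5.0, 4)}
#         name: (CPU capacity, processing delay, unit cost)
DELAY_BUDGET = 9.0

prob = LpProblem("slice_placement", LpMinimize)
x = {(v, n): LpVariable(f"x_{v}_{n}", cat=LpBinary) for v in vnfs for n in nodes}

# objective: CPU demand weighted by the hosting node's unit cost
prob += lpSum(vnfs[v] * nodes[n][2] * x[v, n] for v in vnfs for n in nodes)

for v in vnfs:                                    # each VNF placed exactly once
    prob += lpSum(x[v, n] for n in nodes) == 1
for n in nodes:                                   # node capacity
    prob += lpSum(vnfs[v] * x[v, n] for v in vnfs) <= nodes[n][0]
# total processing delay of the chain must fit the slice's budget
prob += lpSum(nodes[n][1] * x[v, n] for v in vnfs for n in nodes) <= DELAY_BUDGET

prob.solve(PULP_CBC_CMD(msg=False))
print({v: n for (v, n), var in x.items() if var.value() > 0.5})
# expected optimum here: {'firewall': 'edge1', 'ids': 'core', 'transcoder': 'edge2'}
```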