
    Don't Repeat Yourself: Seamless Execution and Analysis of Extensive Network Experiments

    This paper presents MACI, the first bespoke framework for the management, scalable execution, and interactive analysis of a large number of network experiments. Driven by the desire to avoid repeatedly implementing just a few scripts for the execution and analysis of experiments, MACI emerged as a generic framework for network experiments that significantly increases efficiency and ensures reproducibility. To this end, MACI incorporates and integrates established simulators and analysis tools to foster rapid but systematic network experiments. We found MACI indispensable in all phases of the research and development process of various communication systems, such as i) an extensive DASH video streaming study, ii) the systematic development and improvement of Multipath TCP schedulers, and iii) research on a distributed topology graph pattern matching algorithm. With this work, we make MACI publicly available to the research community to advance efficient and reproducible network experiments.
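    The repetition such a framework removes can be illustrated with a minimal, hypothetical parameter-study loop in Python; the experiment function, parameter names, and metric below are invented for illustration and are not MACI's actual API.

        # Hypothetical sketch of the boilerplate a framework like MACI automates:
        # sweep a parameter grid, repeat each configuration with several seeds,
        # and aggregate the collected metrics for analysis.
        import itertools
        import random
        import statistics

        def run_experiment(scheduler: str, loss_rate: float, seed: int) -> float:
            """Placeholder for a simulator or testbed run; returns one metric."""
            random.seed(seed)
            return random.gauss(10.0 * loss_rate, 1.0)  # e.g., mean completion time

        schedulers = ["lowest-rtt", "round-robin"]
        loss_rates = [0.0, 0.01, 0.05]
        results = [
            {"scheduler": s, "loss_rate": l, "seed": seed,
             "metric": run_experiment(s, l, seed)}
            for s, l, seed in itertools.product(schedulers, loss_rates, range(5))
        ]

        # Aggregate per configuration, as an analysis notebook would.
        for s, l in itertools.product(schedulers, loss_rates):
            runs = [r["metric"] for r in results
                    if r["scheduler"] == s and r["loss_rate"] == l]
            print(s, l, round(statistics.mean(runs), 2))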

    Collaborative Uploading in Heterogeneous Networks: Optimal and Adaptive Strategies

    Collaborative uploading describes a type of crowdsourcing scenario in networked environments where a device utilizes multiple paths over neighboring devices to upload content to a centralized processing entity such as a cloud service. Intermediate devices may aggregate and preprocess this data stream. Such scenarios arise in the composition and aggregation of information, e.g., from smartphones or sensors. We use a queueing-theoretic description of the collaborative uploading scenario, capturing the ability to split data into chunks that are then transmitted over multiple paths and finally merged at the destination. We analyze replication and allocation strategies that control the mapping of data to paths and provide closed-form expressions that pinpoint the optimal strategy given a description of the paths' service distributions. Finally, we provide an online path-aware adaptation of the allocation strategy that uses statistical inference to sequentially minimize the expected waiting time for the uploaded data. Numerical results show the effectiveness of the adaptive approach compared to proportional allocation and a variant of join-the-shortest-queue allocation, especially for bursty path conditions.
    Comment: 15 pages, 11 figures; extended version of a conference paper accepted for publication in the Proceedings of the IEEE International Conference on Computer Communications (INFOCOM), 201
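    To give a rough feel for why the allocation matters, the following Monte Carlo sketch compares an equal split with a proportional split of one job over two paths, where the upload finishes only when the slower path has delivered its share. The path rates, the exponential transfer-time model, and the proportional rule are illustrative assumptions, not the paper's model or its optimal strategy.

        # Monte Carlo sketch: the upload completes when the slowest path finishes.
        # Proportional allocation pushes more data over the faster path.
        import random

        def upload_time(shares, rates, rng):
            # Per-path transfer time: exponential with mean share/rate (assumed).
            return max(rng.expovariate(rate / share) for share, rate in zip(shares, rates))

        rng = random.Random(1)
        rates = [2.0, 1.0]              # path 1 is assumed twice as fast as path 2
        equal = [0.5, 0.5]              # split the job evenly
        proportional = [2 / 3, 1 / 3]   # split in proportion to the service rates

        n = 100_000
        mean_equal = sum(upload_time(equal, rates, rng) for _ in range(n)) / n
        mean_prop = sum(upload_time(proportional, rates, rng) for _ in range(n)) / n
        print(f"equal split:        {mean_equal:.3f}")
        print(f"proportional split: {mean_prop:.3f}")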

    Stochastic bounds in fork-join queueing systems under full and partial mapping

    In a Fork-Join (FJ) queueing system, an upstream fork station splits incoming jobs into N tasks to be further processed by N parallel servers, each with its own queue; the response time of one job is determined, at a downstream join station, by the maximum of the corresponding tasks' response times. This queueing system is useful for modelling multi-service systems subject to synchronization constraints, such as MapReduce clusters or multipath routing. Despite their apparent simplicity, FJ systems are hard to analyze. This paper provides the first computable stochastic bounds on the waiting and response time distributions in FJ systems under full (bijective) and partial (injective) mapping of tasks to servers. We consider four practical scenarios by combining (1a) renewal and (1b) non-renewal arrivals with (2a) non-blocking and (2b) blocking servers. In the case of non-blocking servers we prove that delays scale as O(log N), a law which is known for first moments under renewal input only. In the case of blocking servers, we prove that the same factor of log N dictates the stability region of the system. Simulation results indicate that our bounds are tight, especially at high utilizations, in all four scenarios. A remarkable insight gained from our results is that, at moderate to high utilizations, multipath routing "makes sense" from a queueing perspective for two paths only, i.e., response times drop the most when N = 2; the technical explanation is that the resequencing (delay) price starts to quickly dominate the tempting gain due to multipath transmissions.
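    The log N behaviour is already visible in the zero-load case: for a single job whose N tasks have independent exponential service times with rate mu, the response time is the maximum of N exponentials, whose mean is given by the textbook identity below (an illustration only; the paper's bounds additionally cover waiting times under load and non-renewal input).

        \mathbb{E}\Big[\max_{1 \le i \le N} X_i\Big]
            \;=\; \frac{1}{\mu} \sum_{k=1}^{N} \frac{1}{k}
            \;=\; \frac{H_N}{\mu}
            \;\approx\; \frac{\ln N + \gamma}{\mu}

    The synchronization penalty thus grows only logarithmically in N, matching the O(log N) scaling law stated above.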

    On the Fidelity Distribution of Link-level Entanglements under Purification

    Quantum entanglement is the key to quantum communications over considerable distances. The first step for entanglement distribution among quantum communication nodes is to generate link-level Einstein-Podolsky-Rosen (EPR) pairs between adjacent communication nodes. EPR pairs may be continuously generated and stored in a few quantum memories to be ready for utilization by quantum applications. A major challenge is that qubits suffer from unavoidable noise due to their interaction with the environment, which is called decoherence. This decoherence results in the well-known exponential decay of the fidelity of the qubits with time, thus limiting the lifetime of a qubit in a quantum memory and the performance of quantum applications. In this paper, we evaluate the fidelity of the stored EPR pairs under two opposing dynamic and probabilistic phenomena: first, the aforementioned decoherence, and second, purification, i.e., an operation that improves the fidelity of an EPR pair at the expense of sacrificing another EPR pair. Instead of applying purification as soon as two EPR pairs are generated, we introduce a Purification scheme Beyond the Generation time (PBG) of two EPR pairs. We analytically derive the probability distribution of the fidelity of stored link-level EPR pairs in a system with two quantum memories at each node, allowing a maximum of two stored EPR pairs. In addition, we apply a PBG scheme that purifies the two stored EPR pairs upon the generation of an additional one. We finally provide numerical evaluations of the analytical approach and show the fidelity-rate trade-off of the considered purification scheme.
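    For concreteness, a common way to instantiate these two competing effects, assumed here purely for illustration and not necessarily the exact model used in the paper, is a depolarizing memory with time constant T together with recurrence-style (BBPSSW) purification of two stored Werner pairs:

        F(t) \;=\; \frac{1}{4} + \Big(F_0 - \frac{1}{4}\Big)\, e^{-t/T}

        F' \;=\; \frac{F_1 F_2 + \frac{1}{9}(1 - F_1)(1 - F_2)}
                      {F_1 F_2 + \frac{1}{3} F_1 (1 - F_2) + \frac{1}{3} (1 - F_1) F_2 + \frac{5}{9}(1 - F_1)(1 - F_2)}

    Here F_0 is the fidelity at generation time, the denominator of F' equals the success probability of the purification attempt, and one of the two pairs is consumed regardless of the outcome, which is the source of the fidelity-rate trade-off mentioned above.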

    Little Boxes: A Dynamic Optimization Approach for Enhanced Cloud Infrastructures

    The increasing demand for diverse, mobile applications with various degrees of Quality of Service (QoS) requirements meets the increasing elasticity of on-demand resource provisioning in virtualized cloud computing infrastructures. This paper provides a dynamic optimization approach for enhanced cloud infrastructures, based on the concept of cloudlets, which are located at hotspot areas throughout a metropolitan area. In conjunction, we consider classical remote data centers that are rigid with respect to QoS but provide nearly abundant computation resources. Given fluctuating user demands, we optimize the cloudlet placement over a finite time horizon from a cloud infrastructure provider's perspective. By means of a custom-tailored heuristic approach, we are able to reduce the computational effort compared to the exact approach by at least three orders of magnitude, while maintaining a high solution quality with a moderate cost increase of 5.8% or less.
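    The flavour of such a placement decision can be sketched with a deliberately simplified greedy per-time-slot rule; the costs, capacities, and demand figures below are invented for illustration, and the paper's exact formulation and heuristic differ.

        # Greedy sketch: per time slot, open a cloudlet at a hotspot only if the
        # remote-data-center cost it avoids exceeds the cost of running it.
        OPEN_COST = 10.0       # cost of running one cloudlet for one slot (assumed)
        CLOUDLET_CAP = 50.0    # capacity of one cloudlet (assumed)
        REMOTE_COST = 0.4      # cost per unit of demand served remotely (assumed)

        # demand[t][h] = demand at hotspot h during time slot t (toy numbers)
        demand = [
            {"station": 60.0, "stadium": 5.0, "mall": 30.0},
            {"station": 20.0, "stadium": 80.0, "mall": 25.0},
        ]

        total_cost = 0.0
        for t, slot in enumerate(demand):
            placement = []
            for hotspot, load in slot.items():
                local = min(load, CLOUDLET_CAP)
                if local * REMOTE_COST > OPEN_COST:
                    placement.append(hotspot)
                    total_cost += OPEN_COST + (load - local) * REMOTE_COST
                else:
                    total_cost += load * REMOTE_COST
            print(f"slot {t}: cloudlets at {placement}")
        print(f"total cost: {total_cost:.1f}")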

    Multi-Provider Service Chain Embedding With Nestor

    Network function (NF) virtualization decouples NFs from the underlying middlebox hardware and promotes their deployment on virtualized network infrastructures. This essentially paves the way for the migration of NFs into clouds (i.e., NF-as-a-Service), achieving a drastic reduction of middlebox investment and operational costs for enterprises. In this context, service chains (expressing middlebox policies in the enterprise network) should be mapped onto datacenter networks, ensuring correctness, resource efficiency, as well as compliance with the provider's policy. The network service embedding (NSE) problem is further exacerbated by two challenging aspects: 1) traffic scaling caused by certain NFs (e.g., caches and WAN optimizers) and 2) NF location dependencies. Traffic scaling requires resource reservations different from the ones specified in the service chain, whereas NF location dependencies, in conjunction with the limited geographic footprint of NF providers (NFPs), raise the need for NSE across multiple NFPs. In this paper, we present a holistic solution to the multi-provider NSE problem. We decompose NSE into: 1) NF-graph partitioning performed by a centralized coordinator and 2) NF-subgraph mapping onto datacenter networks. We present linear programming formulations to derive near-optimal solutions for both problems. We address the challenging aspect of traffic scaling by introducing a new service model that supports demand transformations. We also define topology abstractions for NF-graph partitioning. Furthermore, we discuss the steps required to embed service chains across multiple NFPs, using our NSE orchestrator (Nestor). We perform an evaluation study of multi-provider NSE with emphasis on NF-graph partitioning optimizations tailored to the client and NFPs. Our evaluation results further uncover significant savings in terms of service cost and resource consumption due to the demand transformations.
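    The effect of such demand transformations can be illustrated with a small sketch: each NF scales the traffic it forwards, so the bandwidth to reserve on each hop of the chain differs from the demand declared at the ingress. The chain, the scaling factors, and the units below are invented for illustration and are not the paper's service model.

        # Propagate an ingress demand along a service chain whose NFs scale traffic,
        # e.g., a cache that absorbs most requests or a WAN optimizer that compresses.
        chain = [
            ("firewall", 1.0),
            ("cache", 0.4),
            ("wan-optimizer", 0.6),
        ]

        def hop_demands(ingress_demand, chain):
            """Return the bandwidth to reserve on each hop of the chain."""
            demands = []
            current = ingress_demand
            for nf, factor in chain:
                demands.append((f"-> {nf}", current))
                current *= factor
            demands.append(("-> egress", current))
            return demands

        for hop, bw in hop_demands(100.0, chain):
            print(f"{hop:18s} {bw:6.1f} Mbit/s")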

    Blockchain and Smart Contracts: Disruptive Technologies for the Insurance Market

    Blockchain technologies paired with smart contracts exhibit the potential to transform the global insurance industry. The recent evolution of smart contracts and their fast adoption make it possible to rethink processes and to challenge traditional structures. Therefore, a special focus is placed on the analysis of the underlying technology and its recent improvements. Further, we provide an overview of how the insurance sector may be affected by blockchain technology. We emphasize current challenges and limitations by analyzing two promising use cases in this area. We find that realizing the full potential of blockchain technology requires overcoming several challenges, including scalability, the incorporation of external information, flexibility, and permissioning schemes.