
    Network-Aware Stream Query Processing in Mobile Ad-Hoc Networks


    Live Service Migration in Mobile Edge Clouds

    Mobile edge clouds (MECs) bring the benefits of the cloud closer to the user by installing small cloud infrastructures at the network edge. This enables a new breed of real-time applications, such as instantaneous object recognition and safety assistance in intelligent transportation systems, that require very low latency. One key issue that comes with proximity is how to ensure that users always receive good performance as they move across different locations. Migrating services between MECs is seen as the means to achieve this. This article presents a layered framework for migrating active service applications that are encapsulated either in virtual machines (VMs) or containers. This layered approach allows a substantial reduction in service downtime. The framework is easy to implement using readily available technologies, and one of its key advantages is that it supports containers, a promising emerging technology that offers tangible benefits over VMs. The migration performance of various real applications is evaluated by experiments under the presented framework, and insights drawn from the experimental results are discussed.
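    The downtime benefit of layering can be illustrated with a toy calculation (a hypothetical sketch; the image sizes, layer split, and bandwidth below are assumptions, not the paper's measurements). In a naive migration the whole image is transferred while the service is stopped; in a layered migration the bulky base and application layers are pre-copied while the service keeps running, and only the small instance layer is transferred during the stop-and-copy phase.

```python
def naive_downtime(total_mb, bandwidth_mb_s):
    """Whole image transferred while the service is stopped."""
    return total_mb / bandwidth_mb_s

def layered_downtime(layers_mb, bandwidth_mb_s):
    """All layers except the last are pre-copied while the service is
    still running; only the final (instance) layer is transferred
    during the stop-and-copy phase, so only it contributes downtime."""
    return layers_mb[-1] / bandwidth_mb_s

# Assumed numbers: a 2000 MB image split into base/application/instance
# layers of 1500/450/50 MB, migrated over a 100 MB/s link.
print(naive_downtime(2000, 100))               # downtime in seconds
print(layered_downtime([1500, 450, 50], 100))  # downtime in seconds
```

With these assumed numbers the stop-and-copy phase shrinks from 20 s to 0.5 s, which is the qualitative effect the layered framework targets.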

    Distributed on-line schedule adaptation for balanced slot allocation in wireless ad hoc networks

    We propose an algorithm for the design and on-the-fly modification of the schedule of a wireless ad hoc network, providing fair service guarantees under topological changes. The primary objective is to derive a distributed coordination method for schedule construction and modification in any wireless ad hoc network operating under a schedule where the transmissions in each slot are explicitly specified over a time period of length T. We first introduce a fluid model of the system in which the conflict-avoidance requirements of neighboring links are relaxed while the aspect of local channel sharing is captured. In this model we propose an algorithm where the nodes asynchronously re-adjust the rates allocated to their adjacent links using only local information. We prove that, from any initial condition, the algorithm finds the max-min fair rate allocation in the fluid model. Hence, if the iteration is performed constantly, the rate allocation tracks the optimum even in regimes of constant topology change. We then consider the slotted system and propose a modification method that operates directly on the slotted schedule, emulating the effect of the rate re-adjustment iteration of the fluid model. Through extensive experiments in networks with both fixed and time-varying topologies we show that the latter algorithm achieves balanced rate allocations in the actual slotted system that are very close to the max-min fair rates. The experiments also show that the algorithm is robust to topology variations, with very good tracking of the max-min fair rate allocation.
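    The max-min fair fixed point that the asynchronous iteration converges to can be computed centrally by standard progressive filling (a textbook method shown here only to make the target allocation concrete; the flow names, link names, and capacities are made up for the example):

```python
def max_min_fair(flow_links, capacity):
    """Progressive filling: all unfrozen flows grow at the same rate;
    when a link saturates, the flows crossing it freeze at the current
    rate. This is the standard centralized computation of the max-min
    fair allocation that the distributed fluid-model iteration targets."""
    rate = {f: 0.0 for f in flow_links}
    frozen, cap = set(), dict(capacity)
    while True:
        # unfrozen flows crossing each remaining (unsaturated) link
        active = {l: [f for f in flow_links
                      if f not in frozen and l in flow_links[f]]
                  for l in cap}
        active = {l: fs for l, fs in active.items() if fs}
        if not active:
            break
        # largest equal increment before some link saturates
        inc = min(cap[l] / len(fs) for l, fs in active.items())
        for f in rate:
            if f not in frozen:
                rate[f] += inc
        for l, fs in active.items():
            cap[l] -= inc * len(fs)
            if cap[l] <= 1e-9:       # link saturated: freeze its flows
                frozen.update(fs)
                del cap[l]
    return rate

# f1 uses link L1, f2 uses L2, f3 uses both; L1 has capacity 1, L2 has 2.
rates = max_min_fair({'f1': {'L1'}, 'f2': {'L2'}, 'f3': {'L1', 'L2'}},
                     {'L1': 1.0, 'L2': 2.0})
print(rates)
```

Here L1 saturates first and freezes f1 and f3 at 0.5 each, after which f2 alone absorbs the remaining capacity of L2 and reaches 1.5.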

    Distributed dynamic scheduling for end-to-end rate guarantees in wireless ad hoc networks

    We present a framework for the provision of deterministic end-to-end bandwidth guarantees in wireless ad hoc networks. Guided by a set of local feasibility conditions, multi-hop sessions are dynamically offered allocations, which are further translated to link demands. Using a distributed Time Division Multiple Access (TDMA) protocol, nodes adapt to the demand changes on their adjacent links by local, conflict-free slot reassignments. As soon as the demand changes stabilize, the nodes incrementally converge to a TDMA schedule that realizes the global link (and session) demand allocation. We first derive sufficient local feasibility conditions for certain topology classes and show that trees can be maximally utilized. We then introduce a converging distributed link scheduling algorithm that exploits the logical tree structure that arises in several ad hoc network applications. Decoupling bandwidth allocation to multi-hop sessions from link scheduling allows support of various end-to-end Quality of Service (QoS) objectives. We focus on the max-min fairness (MMF) objective and design an asynchronous distributed end-to-end algorithm for the computation of the session MMF rates. Once the end-to-end algorithm converges, the link scheduling algorithm converges to a TDMA schedule that realizes these rates. We demonstrate the applicability of this framework through an implementation over an existing wireless technology. This implementation is free of the restrictive assumptions of previous TDMA approaches: it requires neither a priori knowledge of the number of nodes in the network nor network-wide slot synchronization. Copyright 2005 ACM.
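    The core TDMA constraint, that a node can take part in at most one transmission per slot, can be sketched with a greedy, centralized slot assigner (a stand-in for illustration only, not the paper's distributed algorithm; the link names, demands, and frame length are assumptions):

```python
def tdma_schedule(link_demand, frame_len):
    """Greedy conflict-free slot assignment: two links conflict when
    they share an endpoint, since a node can take part in at most one
    transmission per slot. Links with larger demand are placed first."""
    schedule, busy = {}, {}   # busy: node -> set of occupied slots
    for link, demand in sorted(link_demand.items(), key=lambda kv: -kv[1]):
        u, v = link
        free = [s for s in range(frame_len)
                if s not in busy.get(u, set()) and s not in busy.get(v, set())]
        if len(free) < demand:
            raise ValueError(f"frame too short for link {link}")
        schedule[link] = free[:demand]
        for s in free[:demand]:
            busy.setdefault(u, set()).add(s)
            busy.setdefault(v, set()).add(s)
    return schedule

# A two-link tree a-b-c: link (a,b) needs 2 slots, (b,c) needs 1,
# in a frame of 3 slots; node b must never transmit/receive twice per slot.
print(tdma_schedule({('a', 'b'): 2, ('b', 'c'): 1}, 3))
```

On a tree such a frame can be filled completely, which is consistent with the abstract's observation that trees can be maximally utilized.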

    Asynchronous TDMA ad hoc networks: scheduling and performance

    A common assumption of time division multiple access (TDMA)-based wireless ad hoc networks is the existence of network-wide slot synchronization. Such a mechanism is difficult to support in practice. In asynchronous TDMA systems, each link uses a local time slot reference provided by the hardware clock tick of one of its node endpoints. Inevitably, slots are wasted when nodes switch time slot references. This restricts the rate allocations that can be supported compared to a perfectly synchronized system. To address this practical performance issue, we first introduce a general framework for conflict-free scheduling in asynchronous TDMA networks. We then propose scheduling algorithms that target overhead minimization while ensuring upper bounds on the generated overhead. Through extensive simulations, the algorithms' performance is evaluated in the context of Bluetooth, a wireless technology that operates according to the asynchronous TDMA communication paradigm. Copyright (C) 2004 AEI.
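    The switching overhead can be captured by a deliberately simple model (an assumption of this sketch, not the paper's exact accounting): each contiguous run of slots a node spends on one time slot reference costs one wasted guard slot when switching to it, so a schedule that groups a link's slots into fewer, longer runs loses less capacity.

```python
def effective_share(slot_runs):
    """Fraction of slots that carry data, under the assumption that one
    slot is wasted each time the node switches to a new time slot
    reference. slot_runs: lengths of the contiguous runs on one reference."""
    useful = sum(slot_runs)
    return useful / (useful + len(slot_runs))

# Eight useful slots scheduled as two runs of four vs. one run of eight:
# fewer reference switches mean less wasted capacity.
print(effective_share([4, 4]))
print(effective_share([8]))
```

This is the intuition behind scheduling for overhead minimization: the allocation is fixed, but its arrangement in the frame determines how much of it survives the reference switches.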

    On optimal cooperative route caching in large, memory-limited wireless ad hoc networks

    Caching is a popular mechanism for enhancing performance in various layers and applications of computer networking. We introduce a model and algorithms for caching routing information in large, memory-limited wireless ad hoc networks. Each host can cache routes to only a small fraction of the network and must rely on flooding to acquire information that has not been locally cached. To constrain flooding, the network uses a cooperative caching model in which every node makes its route cache contents available to others when they flood. Given the hosts' memory capacity limitations, we face the problem of allocating destinations to caches in an efficient manner. We propose the class of Best State/Best Cost (BSBC) cooperative caching algorithms, which aim to minimize the overall network search effort.
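    The cooperative aspect can be sketched as follows (an illustrative cost model with made-up node names and hop distances, not the paper's BSBC algorithms): a query floods outward from its source in growing rings, and it stops as soon as it reaches any node whose cache holds a route to the destination, so a route cached at a nearby neighbor cuts the flood short.

```python
def lookup_cost(src, dest, caches, dist):
    """Radius a flood from `src` must reach before hitting a node that
    caches a route to `dest` (or `dest` itself). caches: node -> set of
    destinations whose routes it caches; dist[src]: node -> hop distance."""
    if dest in caches.get(src, set()):
        return 0                       # local cache hit, no flood needed
    return min(d for n, d in dist[src].items()
               if n == dest or dest in caches.get(n, set()))

# Line topology a - b - c: b caches a route to c.
dist = {'a': {'a': 0, 'b': 1, 'c': 2}}
print(lookup_cost('a', 'c', {'b': {'c'}}, dist))  # cooperative: stop at b
print(lookup_cost('a', 'c', {}, dist))            # no caching: flood to c
```

Summing this cost over the expected query workload gives one plausible notion of the "overall network search effort" that the destination-to-cache allocation should minimize.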

    Machine learning for dynamic resource allocation at network edge

    With the proliferation of smart devices, it is increasingly important to exploit their computing, networking, and storage resources for executing various computing tasks at scale at mobile network edges, bringing benefits such as better response time, network bandwidth savings, and improved data privacy and security. A key component in enabling such distributed edge computing is a mechanism that can flexibly and dynamically manage edge resources for running various military and commercial applications, adapting to fluctuating demands and resource availability. We present methods and an architecture for edge resource management based on machine learning techniques. A collaborative filtering approach combined with deep learning is proposed as a means to build a predictive model of applications' performance on resources from previous observations, and an online resource allocation architecture utilizing the predictive model is presented. We also identify relevant research topics for further investigation.
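    The collaborative-filtering core of such a predictive model can be sketched as plain matrix factorization (the deep-learning component is omitted here, and the data, rank, and learning rate are assumptions of this sketch): observed application-on-resource performance numbers fill some entries of a matrix, low-rank factors are fit to those entries, and their product predicts the unobserved pairings.

```python
import numpy as np

def factorize(R, mask, k=2, lr=0.02, steps=5000, seed=0):
    """Fit R ~ P @ Q.T by gradient descent on the observed entries only
    (mask == 1). R[i, j]: performance of application i on resource j;
    the product P @ Q.T then predicts the unobserved pairs."""
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=0.5, size=(R.shape[0], k))
    Q = rng.normal(scale=0.5, size=(R.shape[1], k))
    for _ in range(steps):
        E = mask * (R - P @ Q.T)                  # error on observed entries
        P, Q = P + lr * E @ Q, Q + lr * E.T @ P   # simultaneous gradient step
    return P @ Q.T

# Three applications, two resource types; one pairing (app 0 on resource 1)
# was never observed, and the model fills it in from the rest.
R = np.array([[1., 2.], [2., 4.], [3., 6.]])
mask = np.ones_like(R); mask[0, 1] = 0.
print(factorize(R, mask))
```

The fitted factors reproduce the observed entries closely; how well the held-out entry is predicted depends, as usual for collaborative filtering, on the unobserved data actually sharing the low-rank structure.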

    When edge meets learning: adaptive control for resource-constrained distributed machine learning

    Emerging technologies and applications, including the Internet of Things (IoT), social networking, and crowd-sourcing, generate large amounts of data at the network edge. Machine learning models are often built from the collected data to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence rate of distributed gradient descent from a theoretical point of view, and based on this analysis we propose a control algorithm that determines the best trade-off between local updates and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The results show that our approach performs close to the optimum with various machine learning models and different data distributions.
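    The local-update/global-aggregation pattern being tuned can be sketched as follows (a minimal local-SGD loop on a toy quadratic loss; the fixed aggregation period tau, the data, and the learning rate are assumptions of this sketch, whereas the paper's control algorithm adapts the trade-off to the resource budget):

```python
import numpy as np

def local_sgd(data_parts, tau, budget, lr=0.1):
    """Each edge node runs `tau` local gradient steps on its own data
    partition, then all models are averaged (one global aggregation).
    `budget` caps the total number of local steps per node, standing in
    for the resource budget. Loss: least squares, grad = X^T(Xw - y)/n."""
    w = np.zeros(data_parts[0][0].shape[1])
    workers = [w.copy() for _ in data_parts]
    done = 0
    while done < budget:
        t = min(tau, budget - done)
        for i, (X, y) in enumerate(data_parts):
            for _ in range(t):                       # local updates
                grad = X.T @ (X @ workers[i] - y) / len(y)
                workers[i] = workers[i] - lr * grad
        done += t
        w = sum(workers) / len(workers)              # global aggregation
        workers = [w.copy() for _ in workers]
    return w

# Two edge nodes hold different halves of a consistent linear system
# whose solution is w = [1, 2]; raw data never leaves a node.
parts = [(np.array([[1., 0.], [0., 1.]]), np.array([1., 2.])),
         (np.array([[1., 1.], [1., -1.]]), np.array([3., -1.]))]
print(local_sgd(parts, tau=10, budget=2000))
```

A larger tau spends the budget on cheap local steps and a smaller tau on costly aggregations; choosing that split to minimize the final loss is exactly the control problem the paper analyzes.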