
    Resolution strategies for serverless computing in information centric networking

    Named Function Networking (NFN) computes and delivers the results of computations in the context of Information Centric Networking (ICN). While ICN offers data delivery without specifying where the data are stored, NFN offers the production of results without specifying where the computation is executed. In NFN, computation workflows are encoded in (ICN-style) Interest messages using the lambda calculus; based on these workflows, the network distributes computations and finds execution locations. Depending on the network's use case, the decision where to execute a computation can differ: a resolution strategy running on each node decides whether a computation should be forwarded, split into subcomputations, or executed locally. This work focuses on the design of resolution strategies for selected scenarios and on the online derivation of "execution plans" based on network status and history. Starting with a simple resolution strategy suitable for data centers, we focus on improving load distribution within a data center or even across multiple data centers. We have designed resolution strategies that consider the size of input data and the load on nodes, leading to priced execution plans from which one can select those with the lowest cost. Moreover, we use these plans to create execution templates: templates can be used to create a resolution strategy, tailored to the specific use case at hand, by simulating the execution with the planning system. Finally, we designed a resolution strategy for edge computing that can handle the mobile scenarios typical of vehicular networking. This "mobile edge computing resolution strategy" handles the problem of frequent handovers across a sequence of road-side units without creating additional overhead for the non-mobile use case.
All these resolution strategies were evaluated using a simulation system and compared to the state-of-the-art behavior of data-center execution environments and/or cloud configurations. For the vehicular networking strategy, we enhanced existing road-side units and implemented our NFN-based system and plan derivation, so that we were able to run and validate our solution in real-world tests of mobile edge computing.
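The per-node decision described above (forward, split, or execute locally) can be sketched as a small decision function. This is a minimal illustration, not the dissertation's actual strategies: the threshold, the load metric, and the `Computation` fields are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Computation:
    expression: str            # lambda-calculus workflow encoded in the Interest name
    input_size: int            # bytes of input data the expression references
    subexpressions: list = field(default_factory=list)  # decomposable parts, if any

def resolve(comp, local_load, input_is_local, load_threshold=0.8):
    """Toy resolution strategy: each node decides what to do with a computation.

    local_load is a normalized utilization in [0, 1]; load_threshold is an
    assumed cutoff above which the node avoids executing locally.
    """
    if comp.subexpressions and local_load >= load_threshold:
        return "split"        # delegate subcomputations to other nodes
    if input_is_local and local_load < load_threshold:
        return "execute"      # data is here and there is capacity: run locally
    return "forward"          # push the Interest toward the data / a less loaded node
```

A real strategy would, as the abstract notes, also price candidate execution plans by input-data size and node load before choosing one.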

    Foggy clouds and cloudy fogs: a real need for coordinated management of fog-to-cloud computing systems

    The recent advances in cloud services technology are fueling a plethora of information technology innovation, including networking, storage, and computing. Today, various flavors of IoT, cloud computing, and so-called fog computing have evolved; fog computing refers to the capabilities of edge devices and users' clients to compute, store, and exchange data among each other and with the cloud. Although the rapid pace of this evolution was not easily foreseeable, today each piece of it facilitates and enables the deployment of what we commonly refer to as a smart scenario, including smart cities, smart transportation, and smart homes. As most current cloud, fog, and network services run simultaneously in each scenario, we observe that we are at the dawn of what may be the next big step in the cloud computing and networking evolution, whereby services might be executed at the network edge, both in parallel and in a coordinated fashion, supported by the unstoppable technology evolution. As edge devices become richer in functionality and smarter, embedding capacities such as storage or processing, as well as new functionalities, such as decision making, data collection, forwarding, and sharing, a real need is emerging for coordinated management of fog-to-cloud (F2C) computing systems. This article introduces a layered F2C architecture, its benefits and strengths, as well as the arising open research challenges, making the case for the real need for their coordinated management. Our architecture, the illustrative use case presented, and a comparative performance analysis, albeit conceptual, all clearly show the way forward toward a new IoT scenario with a set of existing and unforeseen services provided on highly distributed and dynamic compute, storage, and networking resources, bringing together heterogeneous and commodity edge devices, emerging fogs, as well as conventional clouds.

    JALAD: Joint Accuracy- and Latency-Aware Deep Structure Decoupling for Edge-Cloud Execution

    Recent years have witnessed a rapid growth of deep-network based services and applications. A practical and critical problem has thus emerged: how to effectively deploy deep neural network models so that they can be executed efficiently. Conventional cloud-based approaches usually run the deep models in data center servers, causing large latency because a significant amount of data has to be transferred from the network edge to the data center. In this paper, we propose JALAD, a joint accuracy- and latency-aware execution framework, which decouples a deep neural network so that one part runs on edge devices and the other part inside the conventional cloud, while only a minimal amount of data has to be transferred between them. Though the idea seems straightforward, we face several challenges: i) how to find the best partition of a deep structure; ii) how to deploy the component at an edge device that has only limited computation power; and iii) how to minimize the overall execution latency. Our answers to these questions are a set of strategies in JALAD, including 1) a normalization-based in-layer data compression strategy that jointly considers compression rate and model accuracy; 2) a latency-aware deep decoupling strategy to minimize the overall execution latency; and 3) an edge-cloud structure adaptation strategy that dynamically changes the decoupling for different network conditions. Experiments demonstrate that our solution can significantly reduce the execution latency: it speeds up overall inference execution with a guaranteed bound on model accuracy loss.
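For a sequential network, the partition search that challenge i) refers to can be illustrated by enumerating every cut point and minimizing edge time plus transfer time plus cloud time. This is a simplified sketch, not JALAD's actual algorithm: the per-layer latency and activation-size inputs are assumed to be profiled offline, and compression is folded into `out_bytes`.

```python
def best_partition(edge_ms, cloud_ms, out_bytes, input_bytes, bw_bytes_per_ms):
    """Find the cut minimizing total latency of a sequential DNN.

    edge_ms[i] / cloud_ms[i]: latency of layer i on the edge / in the cloud;
    out_bytes[i]: (compressed) activation size leaving layer i;
    cut == 0 means the whole model runs in the cloud (raw input is uploaded).
    """
    n = len(edge_ms)
    best_cut, best_lat = 0, float("inf")
    for cut in range(n + 1):
        edge = sum(edge_ms[:cut])                  # layers executed on the edge
        cloud = sum(cloud_ms[cut:])                # remaining layers in the cloud
        sent = input_bytes if cut == 0 else out_bytes[cut - 1]
        lat = edge + sent / bw_bytes_per_ms + cloud
        if lat < best_lat:
            best_cut, best_lat = cut, lat
    return best_cut, best_lat
```

Re-running this search whenever bandwidth changes mirrors, in spirit, the paper's dynamic structure-adaptation strategy.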

    Towards Dynamic Vehicular Clouds

    Motivated by the success of conventional cloud computing, Vehicular Clouds were introduced as groups of vehicles whose corporate computing, sensing, communication, and physical resources can be coordinated and dynamically allocated to authorized users. One of the attributes that sets Vehicular Clouds apart from conventional clouds is resource volatility. As vehicles enter and leave the cloud, new computing resources become available while others depart, creating a volatile environment in which reasoning about fundamental performance metrics becomes very challenging. The goal of this thesis is to design an architecture and model for a dynamic Vehicular Cloud built on top of moving vehicles on highways. We present our envisioned architecture for a dynamic Vehicular Cloud, consisting of vehicles moving on the highways and multiple communication stations installed along the highway, and investigate the feasibility of such systems. The dynamic Vehicular Cloud is based on two-way communication between vehicles and the stations. We provide a communication protocol for vehicle-to-infrastructure communications enabling a dynamic Vehicular Cloud. We explain the structure of the proposed protocol in detail and then provide analytical predictions and simulation results to investigate the accuracy of our design and predictions. Just as in conventional clouds, job completion time ranks high among the fundamental quantitative performance figures of merit. In general, predicting job completion time requires full knowledge of the probability distributions of the intervening random variables. More often than not, however, the data center manager does not know these distribution functions. Instead, using accumulated empirical data, she may be able to estimate the first moments of these random variables. Yet, getting a handle on the expected job completion time is a very important problem that must be addressed.
With this in mind, another contribution of this thesis is to offer easy-to-compute approximations of job completion time in a dynamic Vehicular Cloud involving vehicles on a highway. We assume estimates of the first moment of the time it takes the job to execute without any overhead attributable to the workings of the Vehicular Cloud. A comprehensive set of simulations has shown that our approximations are very accurate. As mentioned, a major difference between the conventional cloud and the Vehicular Cloud is the availability of the computational nodes. The vehicles, which are the Vehicular Cloud's computational resources, arrive and depart at random times; as a result, this characteristic may cause failures in executing jobs and interruptions to ongoing services. To handle these interruptions, when a vehicle that is running a job is about to leave the Vehicular Cloud, the job and all intermediate data stored by the departing vehicle must be migrated to an available vehicle in the Vehicular Cloud.
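The flavor of a first-moment approximation can be conveyed with a toy formula. To be clear, this is NOT the thesis's approximation (the abstract does not reproduce it); it is a hypothetical stand-in that charges one migration overhead each time the job outlives a vehicle's mean residence time in the cloud.

```python
def approx_completion_time(base_exec_time, mean_residence, migration_overhead):
    """Illustrative (hypothetical) first-moment estimate of job completion time.

    base_exec_time: job length with no Vehicular Cloud overhead (first moment);
    mean_residence: mean time a vehicle stays in the Vehicular Cloud;
    migration_overhead: cost of moving the job and its intermediate data
    to another vehicle when the current host departs.
    """
    expected_migrations = base_exec_time / mean_residence
    return base_exec_time + expected_migrations * migration_overhead
```

The point of such closed forms is that they need only estimated first moments, not the full distributions of residence and execution times.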

    Cloud-Assisted Safety Message Dissemination in VANET-Cellular Heterogeneous Wireless Network

    In vehicular ad hoc networks (VANETs), efficient message dissemination is critical to road safety and traffic efficiency. Since many VANET-based schemes suffer from high transmission delay and data redundancy, the integrated VANET–cellular heterogeneous network has been proposed recently and has attracted significant attention. However, most existing studies focus on selecting suitable gateways to deliver safety messages from the source vehicle to a remote server, whereas rapid safety message dissemination from the remote server to a targeted area has not been well studied. In this paper, we propose a framework for rapid message dissemination that combines the advantages of diverse communication and cloud computing technologies. Specifically, we propose a novel Cloud-assisted Message Downlink dissemination Scheme (CMDS), in which safety messages in the cloud server are first delivered to suitable mobile gateways on relevant roads with the help of cloud computing (the gateways are buses with both cellular and VANET interfaces), and then disseminated among neighboring vehicles via vehicle-to-vehicle (V2V) communication. To evaluate the proposed scheme, we mathematically analyze its performance and conduct extensive simulation experiments. Numerical results confirm the efficiency of CMDS in various urban scenarios.
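The downlink step of a CMDS-like scheme can be sketched as the cloud choosing, among dual-interface buses, the gateways inside the target area with the best V2V reach. This is a simplified illustration under assumed inputs (bus positions and neighbor counts already reported to the cloud), not the paper's actual selection algorithm.

```python
def select_gateways(buses, area, max_gateways=3):
    """Pick bus gateways to seed V2V dissemination in a target area.

    buses: dicts with 'id', 'x', 'y', and 'neighbors' (current V2V neighbor
    count, assumed to be reported over the cellular uplink);
    area: (xmin, ymin, xmax, ymax) bounding box of the affected road segment.
    """
    xmin, ymin, xmax, ymax = area
    inside = [b for b in buses
              if xmin <= b["x"] <= xmax and ymin <= b["y"] <= ymax]
    # Prefer gateways that can reach the most vehicles in one V2V hop.
    inside.sort(key=lambda b: b["neighbors"], reverse=True)
    return [b["id"] for b in inside[:max_gateways]]
```

The cloud would then push the safety message to the selected buses over cellular, and the buses would rebroadcast it to neighboring vehicles via V2V.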

    Let Opportunistic Crowdsensors Work Together for Resource-efficient, Quality-aware Observations

    Opportunistic crowdsensing empowers citizens carrying hand-held devices to sense physical phenomena of common interest at a large and fine-grained scale without requiring the citizens' active involvement. However, the resulting uncontrolled collection and upload of a massive amount of contributed raw data incur significant resource consumption, from the end device to the server, and challenge the quality of the collected observations. This paper tackles both challenges raised by opportunistic crowdsensing, that is, enabling the resource-efficient gathering of relevant observations. To this end, we introduce the BeTogether middleware, which fosters context-aware, collaborative crowdsensing at the edge so that co-located crowdsensors operating in the same context group together to share the workload in a cost- and quality-effective way. We evaluate the proposed solution using an implementation-driven evaluation that leverages a dataset embedding nearly 1 million entries contributed by 550 crowdsensors over a year. Results show that BeTogether increases the quality of the collected data while reducing the overall resource cost compared to the cloud-centric approach.
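The grouping of co-located, same-context crowdsensors can be sketched as bucketing devices by spatial cell and sensing context, then electing one member per group to sense and upload on behalf of the others. This is a minimal sketch under assumed rules (grid-cell co-location, highest-battery election), not BeTogether's actual middleware logic.

```python
from collections import defaultdict

def form_groups(sensors, cell_size=100):
    """Group crowdsensors and elect one uploader per group.

    sensors: dicts with 'id', 'x', 'y' (position in meters), 'context'
    (e.g. the phenomenon being sensed), and 'battery' (percent);
    cell_size: side of the grid cell used as the co-location criterion.
    Returns {group_key: elected sensor id}.
    """
    groups = defaultdict(list)
    for s in sensors:
        key = (int(s["x"] // cell_size), int(s["y"] // cell_size), s["context"])
        groups[key].append(s)
    # Elect the member with the most battery to do the sensing/uploading.
    return {key: max(members, key=lambda s: s["battery"])["id"]
            for key, members in groups.items()}
```

Rotating the elected member over time would spread the residual cost across the group, which is the kind of cost/quality trade-off the evaluation measures.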