
    Getting the Most Out of Your VNFs: Flexible Assignment of Service Priorities in 5G

    Through their computational and forwarding capabilities, 5G networks can support multiple vertical services. Such services may include several common virtual (network) functions (VNFs), which could be shared to increase resource efficiency. In this paper, we focus on the seldom-studied VNF-sharing problem, and decide (i) whether sharing a VNF instance is possible and beneficial, (ii) how to scale the virtual machines hosting the VNFs to share, and (iii) the priorities of the different services sharing the same VNF. These decisions are made with the aim of minimizing the mobile operator's costs while meeting the verticals' performance requirements. Importantly, we show that the aforementioned priorities should not be determined a priori on a per-service basis; rather, they should vary across VNFs, since such additional flexibility allows for more efficient solutions. We then present an effective methodology called FlexShare, enabling near-optimal VNF-sharing decisions in polynomial time. Our performance evaluation, using real-world VNF graphs, confirms the effectiveness of our approach, which consistently outperforms baseline solutions using per-service priorities.
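
    The abstract's key claim is that letting priorities vary per VNF, rather than fixing them per service, unlocks feasible assignments that per-service priorities miss. Below is a minimal toy illustration of that argument, not FlexShare's actual algorithm (whose details are not given here): two services traverse two shared VNFs, each VNF serves its high-priority service in 1 ms and its low-priority one in 3 ms, and both services have a 4 ms end-to-end delay budget. All figures and names are invented for illustration.

        from itertools import product

        # Toy setup (invented numbers): services A and B both traverse the
        # shared VNFs v1 and v2. At each VNF, one service gets high priority
        # (1 ms delay) and the other low priority (3 ms).
        SERVICES, VNFS = ("A", "B"), ("v1", "v2")
        DELAY = {"hi": 1.0, "lo": 3.0}
        BUDGET = {"A": 4.0, "B": 4.0}

        def end_to_end(assign):
            # assign maps each VNF to the service that gets high priority there
            return {s: sum(DELAY["hi" if assign[v] == s else "lo"] for v in VNFS)
                    for s in SERVICES}

        # per-service priorities: the same service is high priority at every VNF;
        # whichever service wins, the other blows its 4 ms budget
        for winner in SERVICES:
            d = end_to_end({v: winner for v in VNFS})
            print(f"per-service ({winner} high everywhere): {d}")

        # per-VNF priorities: any combination is allowed, and the mixed
        # assignments satisfy both budgets
        for assign in product(SERVICES, repeat=len(VNFS)):
            d = end_to_end(dict(zip(VNFS, assign)))
            if all(d[s] <= BUDGET[s] for s in SERVICES):
                print(f"feasible per-VNF assignment {dict(zip(VNFS, assign))}: {d}")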

    The Price of Fog: a Data-Driven Study on Caching Architectures in Vehicular Networks

    Vehicular users are expected to consume large amounts of data, for both entertainment and navigation purposes. This will put a strain on cellular networks, which will be able to cope with such a load only if proper caching is in place; this in turn raises the question of which caching architecture is best suited to deal with vehicular content consumption. In this paper, we leverage a large-scale, crowd-collected trace to (i) characterize the vehicular traffic demand, in terms of overall magnitude and content breakup, (ii) assess how different caching approaches perform against such a real-world load, and (iii) study the effect of recommendation systems and local contents. We define a price-of-fog metric, expressing the additional caching capacity to deploy when moving from traditional, centralized caching architectures to a "fog computing" approach, where caches are closer to the network edge. We find that for location-specific contents, such as the ones that vehicular users are most likely to request, this price almost disappears. Vehicular networks thus make a strong case for the adoption of mobile-edge caching, as we are able to reap its benefits -- including a reduction in the distance traveled by data within the core network -- with few or none of the associated disadvantages. (ACM IoV-VoI Workshop at MobiHoc 2016, the 17th ACM International Symposium on Mobile Ad Hoc Networking and Computing, Paderborn, Germany)
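
    The abstract defines the price of fog only qualitatively, as the extra caching capacity needed when moving from one central cache to many edge caches. A plausible back-of-the-envelope formalization (an assumption, not necessarily the paper's exact definition) is total edge capacity divided by central capacity, minus one, for the same target hit ratio. The sketch below, with an invented Zipf catalogue, shows why that price collapses for location-specific contents, which are never replicated across edges.

        import numpy as np

        def capacity_for_hit_ratio(popularity, target):
            # smallest number of cached items whose cumulative request
            # probability reaches the target hit ratio
            p = np.sort(popularity)[::-1]
            return int(np.searchsorted(np.cumsum(p), target)) + 1

        # Zipf-like catalogue (exponent and sizes invented for illustration)
        ranks = np.arange(1, 10_001)
        pop = ranks ** -0.8
        pop /= pop.sum()

        target, n_edges = 0.5, 20
        central = capacity_for_hit_ratio(pop, target)

        # globally popular contents: every edge faces the same demand, so each
        # edge must replicate the same hot items
        fog_global = n_edges * central

        # location-specific contents: each item is requested near one edge only,
        # so the hot set is simply partitioned across edges, with no replication
        fog_local = central

        print(f"price of fog, global contents:            {fog_global / central - 1:.1f}")
        print(f"price of fog, location-specific contents: {fog_local / central - 1:.1f}")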

    Traffic Offloading/Onloading in Multi-RAT Cellular Networks

    We analyze next-generation cellular networks offering connectivity to mobile users through multiple radio access technologies (RATs), namely LTE and WiFi. We develop a framework based on the Markovian agent formalism, which can model several aspects of the system, including user traffic dynamics and radio resource allocation. In particular, through a mean-field solution, we show the ability of our framework to capture the system behavior in flash-crowd scenarios, i.e., when a burst of traffic requests takes place in some parts of the network service area. We consider a distributed strategy for user RAT selection, which aims at ensuring high user throughput, and investigate its performance under different resource allocation schemes.
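
    The Markovian-agent model itself is not reproduced in the abstract; the sketch below is a much simpler mean-field caricature (all capacities and rates invented) of the flash-crowd behavior it describes. The per-RAT user populations evolve as ODEs, and users arriving during the burst greedily join the RAT currently offering the higher per-user throughput.

        # a minimal mean-field sketch, not the paper's Markovian-agent model:
        # x[r] = population of active users attached to RAT r
        cap = {"LTE": 100.0, "WiFi": 60.0}   # aggregate capacity, arbitrary units
        x = {"LTE": 1e-3, "WiFi": 1e-3}      # initial load
        mu = 0.5                             # per-user departure rate

        def arrival_rate(t):
            return 8.0 if 10 <= t < 20 else 1.0   # flash crowd between t=10 and t=20

        dt = 0.01
        for step in range(int(40 / dt)):
            t = step * dt
            # distributed selection rule: join the RAT with the best
            # per-user throughput right now
            thr = {r: cap[r] / max(x[r], 1e-9) for r in cap}
            best = max(thr, key=thr.get)
            for r in cap:
                lam = arrival_rate(t) if r == best else 0.0
                x[r] += dt * (lam - mu * x[r])   # Euler step of the mean-field ODE
            if step % int(5 / dt) == 0:
                print(f"t={t:5.1f}  LTE={x['LTE']:6.2f}  WiFi={x['WiFi']:6.2f}")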

    5G Traffic Forecasting: If Verticals and Mobile Operators Cooperate

    Presented at the 15th Annual Conference on Wireless On-demand Network Systems and Services (WONS). In 5G research, it is traditionally assumed that vertical industries (a.k.a. verticals) set the performance requirements for the services they want to offer to mobile users, and that the mobile operators alone are in charge of orchestrating their resources so as to meet such requirements. Motivated by the observation that successful orchestration requires reliable traffic predictions, in this paper we investigate the effects of having the verticals, instead of the mobile operators, perform such predictions. Leveraging a real-world, large-scale, crowd-sourced trace, we find that involving the verticals in the prediction process reduces the prediction errors and improves the quality of the resulting orchestration decisions. This work is supported by the European Commission through the H2020 projects 5G-TRANSFORMER (Project ID 761536) and 5G-EVE (Project ID 815074).
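
    One way to see why vertical-side predictions can help (a toy numerical argument, not the paper's experiment): each vertical knows the seasonality of its own service, while an operator forecasting only the aggregate must fit a single pattern to the mixture. The signals, periods, and the seasonal-naive forecaster below are all invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        T = 2000
        t = np.arange(T)

        # two verticals with different (invented) periodicities plus noise
        s1 = 50 + 30 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, T)
        s2 = 40 + 25 * np.sin(2 * np.pi * t / 16) + rng.normal(0, 3, T)
        total = s1 + s2

        # seasonal-naive forecasts: predict each sample with the value seen one
        # period earlier; each vertical knows its own period, while the operator
        # sees only the aggregate and must commit to a single period
        idx = np.arange(48, T)
        pred_operator  = total[idx - 24]
        pred_verticals = s1[idx - 24] + s2[idx - 16]

        mae = lambda p: np.abs(p - total[idx]).mean()
        print(f"operator-only forecast MAE: {mae(pred_operator):.1f}")
        print(f"per-vertical forecast MAE:  {mae(pred_verticals):.1f}")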

    Towards Node Liability in Federated Learning: Computational Cost and Network Overhead

    Many machine learning (ML) techniques suffer from the drawback that their output (e.g., a classification decision) is not clearly and intuitively connected to their input (e.g., an image). To cope with this issue, several explainable ML techniques have been proposed to, e.g., identify which pixels of an input image had the strongest influence on its classification. However, in distributed scenarios, it is often more important to connect decisions with the information used for the model training and the nodes supplying such information. To this end, in this paper we focus on federated learning and present a new methodology, named node liability in federated learning (NL-FL), which makes it possible to identify the source of the training information that most contributed to a given decision. After discussing NL-FL's cost in terms of extra computation, storage, and network latency, we demonstrate its usefulness in an edge-based scenario. We find that NL-FL is able to swiftly identify misbehaving nodes and to exclude them from the training process, thereby improving learning accuracy.
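
    The abstract does not detail how NL-FL attributes decisions to nodes, so the sketch below illustrates only the goal: tracing a bad outcome back to the node that caused it, here via a brute-force leave-one-out comparison on a toy federated logistic regression (all data, node counts, and rates invented; NL-FL itself works from information recorded during training rather than by retraining).

        import numpy as np

        rng = np.random.default_rng(1)

        def make_data(n, flip=False):
            X = rng.normal(size=(n, 2))
            y = (X @ np.array([2.0, -1.0]) > 0).astype(float)
            return X, (1 - y if flip else y)   # flip=True: poisoned labels

        nodes = [make_data(200), make_data(200), make_data(200, flip=True)]
        X_val, y_val = make_data(500)

        def local_step(w, X, y, lr=0.1):
            p = 1.0 / (1.0 + np.exp(-(X @ w)))        # logistic regression
            return w - lr * X.T @ (p - y) / len(y)    # one local gradient step

        def fedavg(active, rounds=50):
            w = np.zeros(2)
            for _ in range(rounds):
                # each active node trains locally; the server averages the results
                w = np.mean([local_step(w, *nodes[i]) for i in active], axis=0)
            return w

        def accuracy(w):
            return (((X_val @ w) > 0).astype(float) == y_val).mean()

        print("all nodes:", round(accuracy(fedavg([0, 1, 2])), 3))
        for i in range(3):   # leave-one-out: whose exclusion helps the most?
            others = [j for j in range(3) if j != i]
            print(f"without node {i}:", round(accuracy(fedavg(others)), 3))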

    Flexible Parallel Learning in Edge Scenarios: Communication, Computational and Energy Cost

    Traditionally, distributed machine learning takes the guise of (i) different nodes training the same model (as in federated learning), or (ii) one model being split among multiple nodes (as in distributed stochastic gradient descent). In this work, we highlight how fog- and IoT-based scenarios often require combining both approaches, and we present a framework for flexible parallel learning (FPL), achieving both data and model parallelism. Further, we investigate how different ways of distributing and parallelizing learning tasks across the participating nodes result in different computation, communication, and energy costs. Our experiments, carried out using state-of-the-art deep-network architectures and large-scale datasets, confirm that FPL allows for an excellent trade-off among computational (hence energy) cost, communication overhead, and learning performance.
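
    As a concrete picture of what combining the two parallelism styles means, the toy sketch below evaluates one dense layer on a 2 x 2 grid of "nodes": the batch is split across rows for data parallelism and the layer's neurons across columns for model parallelism. It is plain numpy with invented shapes; FPL's actual mechanics are not given in the abstract.

        import numpy as np

        rng = np.random.default_rng(0)

        # a toy dense layer y = X @ W, evaluated piecewise on a 2x2 node grid
        X = rng.normal(size=(8, 16))     # batch of 8 samples
        W = rng.normal(size=(16, 10))    # layer weights

        X_shards = np.split(X, 2, axis=0)    # data parallelism: split the batch
        W_shards = np.split(W, 2, axis=1)    # model parallelism: split the neurons

        # each of the four "nodes" computes one tile of the output
        tiles = [[xs @ ws for ws in W_shards] for xs in X_shards]

        # each data-parallel group concatenates its model-parallel partial
        # outputs; stacking the groups reproduces the monolithic computation
        Y_parallel = np.vstack([np.hstack(row) for row in tiles])
        assert np.allclose(Y_parallel, X @ W)
        print("2x2 parallel evaluation matches the monolithic layer")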