6 research outputs found

    Profile-based Resource Allocation for Virtualized Network Functions

    Get PDF
    Accepted in IEEE TNSM Journal. https://ieeexplore.ieee.org/document/8848599
    The virtualization of compute and network resources enables unprecedented flexibility for deploying network services. A wide spectrum of emerging technologies allows an ever-growing range of orchestration possibilities in cloud-based environments. In this context, however, it remains challenging to reconcile dynamic cloud configurations with deterministic performance. The service operator must somehow map the performance specification in the Service Level Agreement (SLA) to an adequate resource allocation in the virtualized infrastructure. We propose the use of a VNF profile to alleviate this process. This is illustrated by profiling the performance of four example network functions (a virtual router, switch, firewall and cache server) under varying workloads and resource configurations. We then compare several methods for deriving a model from the profiled datasets and select the most accurate one to train a model which predicts the service's performance as a function of incoming workload and allocated resources. Our method offers the service operator a recommended resource allocation for the targeted service, given the target performance and maximum workload specified in the SLA. This helps to deploy the softwarized service with an optimal amount of resources to meet the SLA requirements, thereby avoiding unnecessary scaling steps.
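    The core idea — fit a model to profiled measurements, then invert it into a resource recommendation for an SLA target — can be sketched as follows. The linear model, the sample numbers, and the function names are illustrative assumptions, not the paper's actual method or data.

```python
# Hypothetical sketch of deriving a resource recommendation from a VNF
# profile; the linear performance model and all numbers are made up.

def fit_linear(xs, ys):
    """Least-squares fit y = a*x + b over the profiled samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def recommend_cpus(profile, target_rate, candidates):
    """Smallest vCPU count whose predicted rate meets the SLA target."""
    a, b = fit_linear([p[0] for p in profile], [p[1] for p in profile])
    for c in sorted(candidates):
        if a * c + b >= target_rate:
            return c
    return None  # no candidate allocation can meet the SLA

# Profiled samples: (allocated vCPUs, measured packet rate in kpps)
profile = [(1, 90), (2, 185), (4, 370)]
print(recommend_cpus(profile, target_rate=250, candidates=[1, 2, 3, 4]))  # 3
```

    In practice the profiled relation is rarely linear, so the paper compares several modelling methods and keeps the most accurate one; the inversion step (searching candidate allocations against the model) stays the same.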

    Adaptive & learning-aware orchestration of content delivery services

    Get PDF
    Many media services undergo varying workloads, showing periodic usage patterns or unexpected traffic surges. As cloud and NFV services are increasingly softwarized, they enable fully dynamic deployment and scaling behaviour. At the same time, there is an increasing need for fast and efficient mechanisms that allocate sufficient resources with the same elasticity, only when they are needed. This requires adequate performance models of the involved services, as well as awareness of those models in the orchestration machinery. In this paper, we present how a scalable content delivery service can be deployed in a resource- and time-efficient manner, using adaptive machine learning models for performance profiling. We include orchestration mechanisms which are able to act upon the profiled knowledge in a dynamic manner. Using an offline profiled performance model of the service, we are able to optimize the online service orchestration, requiring fewer scaling iterations.
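    A minimal sketch of the "offline profile, online adaptation" pattern: an offline-profiled per-instance capacity seeds the scaler, and monitored data refines it at runtime. The class, the exponential-moving-average update, and the numbers are assumptions for illustration, not the paper's mechanism.

```python
# Illustrative learning-aware scaler: an offline-profiled capacity estimate
# is blended with online monitoring samples (EMA update rule is assumed).
import math

class AdaptiveScaler:
    def __init__(self, offline_capacity, alpha=0.3):
        self.capacity = offline_capacity   # requests/s one instance sustains
        self.alpha = alpha                 # weight given to fresh samples

    def observe(self, measured_capacity):
        # Pull the offline estimate toward online monitored reality.
        self.capacity = (1 - self.alpha) * self.capacity \
                        + self.alpha * measured_capacity

    def instances_for(self, workload):
        return math.ceil(workload / self.capacity)

scaler = AdaptiveScaler(offline_capacity=100.0)
print(scaler.instances_for(450))   # offline model alone: 5 instances
scaler.observe(60.0)               # monitoring shows lower real capacity
print(scaler.instances_for(450))   # adapted model: 6 instances
```

    Starting from a profiled model rather than from scratch is what lets the orchestrator converge in fewer scaling iterations.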

    Dynamic Firewall Rule Building Engine for Hybrid Cloud

    Get PDF
    Growth in cloud computing resource management has also increased the risks associated with services hosted in the cloud. The foundational challenges are, first, to accommodate ease of access without violating security requirements, and second, to meet the time constraints of responding to service requests on time while securing the active services. A good number of studies have pursued the best firewall security for hosted services. Nonetheless, existing methods are often criticised for their high complexity and their poor performance against newer attack types. This work therefore aims to resolve these research bottlenecks. It first addresses the high complexity of the deployed strategy by reducing the attribute set while balancing accuracy, time complexity and information loss. It then proposes a dynamic firewall rule engine design strategy that detects attacks using a thresholding method. The proposed algorithms are tested on the benchmark KDD dataset; as an outcome, nearly 99.7% accuracy is observed and the time complexity is reduced by nearly 40%. This work hence demonstrates a state-of-the-art firewall design for hybrid cloud and may be considered a new benchmark in this research domain.
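    The thresholding idea — score a connection record against a baseline over a reduced attribute set and flag it when the deviation exceeds a threshold — can be sketched as below. The attribute names, baseline values, scoring function, and threshold are all hypothetical, not taken from the paper.

```python
# Minimal sketch of a threshold-based rule over a reduced attribute set;
# attributes, weights, and the threshold are illustrative assumptions.

def anomaly_score(record, baseline):
    """Sum of relative deviations over the retained attributes."""
    return sum(abs(record[k] - baseline[k]) / baseline[k] for k in baseline)

def build_rule(baseline, threshold):
    """Return a dynamic rule: a predicate that flags suspicious records."""
    return lambda record: anomaly_score(record, baseline) > threshold

# Baseline built from the reduced attribute set of normal traffic.
baseline = {"duration": 2.0, "src_bytes": 500.0, "count": 10.0}
is_attack = build_rule(baseline, threshold=3.0)
print(is_attack({"duration": 2.1, "src_bytes": 520.0, "count": 11.0}))    # False
print(is_attack({"duration": 0.1, "src_bytes": 9000.0, "count": 300.0}))  # True
```

    Reducing the attribute set first (the paper's complexity-reduction step) keeps this per-record scoring cheap, which is where the claimed time-complexity savings would come from.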

    iOn-Profiler: intelligent Online multi-objective VNF Profiling with Reinforcement Learning

    Full text link
    Leveraging the potential of Virtualised Network Functions (VNFs) requires a clear understanding of the link between resource consumption and performance. The current state of the art tries to achieve this by utilising Machine Learning (ML), and specifically Supervised Learning (SL) models, for given network environments and VNF types, assuming single-objective optimisation targets. Taking a different approach, we propose a novel VNF profiler optimising multi-resource-type allocation and performance objectives using adapted Reinforcement Learning (RL). Our approach can meet Key Performance Indicator (KPI) targets while minimising multi-resource-type consumption and optimising the VNF output rate, compared to existing single-objective solutions. Our experimental evaluation with three real-world VNF types over a total of 39 study scenarios (13 per VNF), for three resource types (virtual CPU, memory, and network link capacity), verifies the accuracy of resource allocation predictions and corresponding successful profiling decisions via a benchmark comparison between our RL model and SL models. We also conduct a complementary exhaustive search-space study revealing that different resources impact performance in varying ways per VNF type, implying the necessity of multi-objective optimisation, individualised examination per VNF type, and adaptable online profile learning, such as with the autonomous online learning approach of iOn-Profiler.
    Comment: 22 pages, 12 figures, 8 tables, journal article pre-print version
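    The online-RL framing can be illustrated with a toy epsilon-greedy bandit that searches resource allocations for one meeting a KPI target while penalising resource use. The simulated environment, reward shape, and hyperparameters are assumptions for the sketch; iOn-Profiler's actual RL design is more elaborate.

```python
# Toy RL-style profiling: epsilon-greedy search over (vCPU, memory)
# allocations with a multi-objective reward (KPI met vs. resources spent).
import random

def measure(cpu, mem):
    """Stand-in for a real VNF measurement: rate capped by scarcer resource."""
    return min(100 * cpu, 60 * mem)  # output rate in kpps (assumed model)

def profile(actions, kpi_target, episodes=200, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in actions}          # running value estimate per action
    for _ in range(episodes):
        a = rng.choice(actions) if rng.random() < eps \
            else max(q, key=q.get)         # explore vs. exploit
        rate = measure(*a)
        # Reward meeting the KPI; penalise total resource consumption.
        reward = (10.0 if rate >= kpi_target else -10.0) - 0.5 * sum(a)
        q[a] += 0.1 * (reward - q[a])
    return max(q, key=q.get)

actions = [(c, m) for c in (1, 2, 4) for m in (1, 2, 4)]
print(profile(actions, kpi_target=150))
```

    The learned choice meets the KPI with the cheapest feasible allocation, which is the multi-objective trade-off the paper targets: performance targets and resource minimisation at once.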

    Performance Characterization and Profiling of Chained CPU-bound Virtual Network Functions

    Get PDF
    The increased demand for high-quality Internet connectivity resulting from the growing number of connected devices and advanced services has put significant strain on telecommunication networks. In response, cutting-edge technologies such as Network Function Virtualization (NFV) and Software Defined Networking (SDN) have been introduced to transform network infrastructure. These innovative solutions offer dynamic, efficient, and easily manageable networks that surpass traditional approaches. To fully realize the benefits of NFV and maintain the performance level of specialized equipment, it is critical to assess the behavior of Virtual Network Functions (VNFs) and the impact of virtualization overhead. This paper delves into understanding how factors such as resource allocation, consumption, and traffic load impact the performance of VNFs. We aim to provide a detailed analysis of these factors and develop analytical functions to accurately describe their impact. By testing VNFs on different testbeds, we identify the key parameters and trends, and develop models to generalize VNF behavior. Our results highlight the negative impact of resource saturation on performance and identify the CPU as the main bottleneck. We also propose a VNF profiling procedure as a solution to model the observed trends, and test more complex VNF deployment scenarios to evaluate the impact of interconnection, co-location, and NFV infrastructure on performance.
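    The kind of analytical function the paper describes for a CPU-bound VNF can be sketched as a saturating model: below saturation the function forwards everything it receives; above it, the CPU-limited processing capacity caps the output rate. The coefficient and numbers below are made up for illustration.

```python
# Hedged sketch of a saturating performance model for a CPU-bound VNF;
# the rate-per-CPU coefficient is an assumption, not a measured value.

def vnf_output_rate(offered_load, cpu_share, rate_per_cpu=120.0):
    """Output rate in kpps: input-limited below saturation, CPU-limited above."""
    capacity = rate_per_cpu * cpu_share
    return min(offered_load, capacity)

# Below saturation the VNF is transparent; above it the CPU is the bottleneck.
print(vnf_output_rate(offered_load=100.0, cpu_share=1.0))  # 100.0 (input-limited)
print(vnf_output_rate(offered_load=300.0, cpu_share=1.0))  # 120.0 (CPU-limited)
```

    Profiling amounts to estimating the capacity term per VNF and testbed; the paper's finding that saturation degrades performance corresponds to operating past that knee point.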

    VNF performance modelling : from stand-alone to chained topologies

    Get PDF
    One of the main incentives for deploying network functions on a virtualized or cloud-based infrastructure is the ability for on-demand orchestration and elastic resource scaling following the workload demand. This can also be combined with a multi-party service creation cycle: the service provider sources various network functions from different vendors or developers, and combines them into a modular network service. This way, multiple virtual network functions (VNFs) are connected into more complex topologies called service chains. Deployment speed is important here, and it is therefore beneficial if the service provider can limit extra validation testing of the combined service chain and rely on the provided profiling results of the supplied individual VNFs. Our research shows, however, that it is not always evident to accurately predict the performance of a total service chain from the isolated benchmark or profiling tests of its discrete network functions. To mitigate this, we propose a two-step deployment workflow: first, a general trend estimation for the chain performance is derived from the stand-alone VNF profiling results, together with an initial resource allocation. This information then optimizes the second phase, where online monitored data of the service chain is used to quickly adjust the estimated performance model where needed. Our tests show that this can lead to a more efficient VNF chain deployment, needing fewer scaling iterations to meet the chain performance specification, while avoiding the need for a complete proactive and time-consuming VNF chain validation.
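    The two-step workflow can be sketched under a deliberately simplified (and assumed) model: the offline trend estimate takes the chain's capacity as that of its slowest member, and online monitoring then corrects that estimate for chaining effects such as co-location overhead. Function names, the correction rule, and all numbers are illustrative.

```python
# Sketch of the two-step chain deployment workflow; the weakest-link
# estimate and the online correction rule are simplifying assumptions.

def chain_estimate(standalone_capacities):
    """Step 1: offline trend -- the chain cannot exceed its weakest link."""
    return min(standalone_capacities)

def online_adjust(estimate, monitored_samples, alpha=0.5):
    """Step 2: pull the estimate toward monitored chain capacity."""
    for sample in monitored_samples:
        estimate += alpha * (sample - estimate)
    return estimate

est = chain_estimate([400.0, 250.0, 310.0])   # kpps per VNF, profiled alone
print(est)                                    # 250.0
# Co-location overhead makes the real chain slower than the offline estimate:
print(online_adjust(est, [210.0, 200.0, 205.0]))
```

    Because the offline estimate is already close, only a few online corrections are needed, which is why the workflow needs fewer scaling iterations than starting from an unprofiled chain.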