2 research outputs found

    Dynamic Network Function Provisioning to Enable Network in Box for Industrial Applications

    Network function virtualization (NFV) in 6G can use standard virtualization techniques to enable network functions in software. Resource scheduling is one of the key research areas of NFV in 6G and is mainly used to deploy service function chains (SFCs) in substrate networks. However, determining how to utilize network resources efficiently has always been a difficult problem in SFC deployment. This article focuses on how to efficiently provision online SFC requests in NFV with 6G. We first establish a mathematical model of the online SFC provisioning problem. Then, we propose an efficient online service function chain deployment (OSFCD) algorithm that selects a deployment path whose length is close to that of the SFC. Finally, we compare our proposed algorithm with three existing algorithms in simulation experiments. The experimental results show that the OSFCD algorithm improves multiple performance indicators of online SFC deployment.
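The core idea of selecting a path whose hop count is close to the SFC length can be sketched as follows. This is a minimal illustration, not the authors' OSFCD algorithm: the graph representation, hop bound, and tie-breaking rule are all assumptions for the example.

```python
from collections import deque

def candidate_paths(graph, src, dst, max_hops):
    # Enumerate simple paths from src to dst with at most max_hops hops (BFS).
    paths, queue = [], deque([[src]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            paths.append(path)
            continue
        if len(path) - 1 >= max_hops:
            continue
        for nbr in graph[node]:
            if nbr not in path:  # keep paths simple (no revisits)
                queue.append(path + [nbr])
    return paths

def select_path(graph, src, dst, sfc_len):
    # Pick the candidate path whose hop count is closest to the SFC length,
    # so that each virtual network function can map onto a distinct hop.
    paths = candidate_paths(graph, src, dst, sfc_len + 2)  # +2 slack is an assumption
    if not paths:
        return None
    return min(paths, key=lambda p: abs((len(p) - 1) - sfc_len))

# Example substrate: an SFC of length 2 maps onto the 2-hop path a-b-d.
substrate = {"a": ["b", "c"], "b": ["d"], "c": ["b", "d"], "d": []}
print(select_path(substrate, "a", "d", 2))  # → ['a', 'b', 'd']
```

A real deployment algorithm would additionally check node CPU and link bandwidth capacities along the path before committing the mapping.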

    Automatic Selection of Security Service Function Chaining Using Reinforcement Learning

    © 2018 IEEE. When selecting security Service Function Chaining (SFC) for network defense, operators usually take security performance, service quality, deployment cost, and network function diversity into consideration, formulating the selection as a multi-objective optimization problem. However, as applications, users, and data volumes grow massively in networks, traditional mathematical approaches cannot be applied to online security SFC selection due to high execution time and uncertainty of network conditions. Thus, in this paper, we utilize reinforcement learning, specifically the Q-learning algorithm, to automatically choose a proper security SFC for various requirements. In particular, we design a reward function to make a tradeoff among different objectives and modify the standard ε-greedy exploration to pick out multiple ranked actions for diversified network defense. We compare Q-learning with mathematical optimization-based approaches, which are assumed to know network state changes in advance. The training and testing results show that the Q-learning based approach can capture changes in network conditions and make a tradeoff among different objectives.
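The three ingredients the abstract names (a scalarized multi-objective reward, a top-k variant of ε-greedy, and the tabular Q-learning update) can be sketched as below. The weights, the swap rule for exploration, and the hyperparameters are illustrative assumptions, not the paper's actual design.

```python
import random

def scalar_reward(security, quality, cost, diversity, w=(0.4, 0.3, 0.2, 0.1)):
    # Hypothetical weighted tradeoff: reward security, quality, and diversity,
    # penalize deployment cost. The weights are purely illustrative.
    return w[0] * security + w[1] * quality - w[2] * cost + w[3] * diversity

def top_k_epsilon_greedy(q_values, k=2, eps=0.1, rng=random):
    # Modified epsilon-greedy: return the k highest-valued actions so that
    # multiple ranked SFCs are selected; with probability eps, swap the
    # lowest-ranked pick for a random unranked action (exploration).
    ranked = sorted(range(len(q_values)), key=lambda a: q_values[a], reverse=True)
    chosen = ranked[:k]
    if len(ranked) > k and rng.random() < eps:
        chosen[-1] = rng.choice(ranked[k:])
    return chosen

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    # Standard tabular Q-learning update on a dict of per-state value lists.
    best_next = max(q[next_state])
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# With eps=0 the selection is purely greedy over the ranked actions.
print(top_k_epsilon_greedy([1.0, 3.0, 2.0], k=2, eps=0.0))  # → [1, 2]
```

In the paper's setting, each action would correspond to one candidate security SFC, and the Q-table would be trained against observed network conditions rather than a fixed model.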