
    An adaptive scaling mechanism for managing performance variations in network functions virtualization: A case study in an NFV-based EPC

    Scaling is a fundamental task for addressing performance variations in Network Functions Virtualization (NFV). Several approaches in the literature propose scaling mechanisms that differ in the technique utilized (e.g., reactive, predictive, and machine learning-based). Scaling in NFV must be accurate in both the timing and the number of instances to be scaled, so as to avoid unnecessary provisioning and releasing of resources; however, achieving high accuracy is a non-trivial task. In this paper, we propose an adaptive scaling mechanism for NFV based on Q-Learning and Gaussian Processes, which an agent utilizes to carry out an improvement strategy over a scaling policy and, therefore, to make better decisions for managing performance variations. We evaluate our mechanism by simulation in a case study of a virtualized Evolved Packet Core, corroborating that it is more accurate than approaches based on static threshold rules and on Q-Learning without a policy improvement strategy.
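    The abstract's core idea of an agent learning a scaling policy can be illustrated with a minimal tabular Q-Learning sketch. This is not the paper's mechanism (which also uses Gaussian Processes); the discretized load states, the three scaling actions, and the reward shape are all assumptions for illustration.

```python
import random

# Illustrative tabular Q-Learning agent for VNF scaling decisions.
# States: discretized load levels; actions: scale in, hold, scale out.
STATES = ["low", "medium", "high"]
ACTIONS = [-1, 0, +1]  # change in number of VNF instances

def reward(state, action):
    # Hypothetical reward: penalize over-provisioning on low load
    # and under-provisioning on high load.
    if state == "low":
        return 1.0 if action == -1 else -0.5
    if state == "high":
        return 1.0 if action == +1 else -0.5
    return 1.0 if action == 0 else -0.5

def train(episodes=2000, alpha=0.1, gamma=0.5, epsilon=0.1, seed=42):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        # Epsilon-greedy action selection.
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        r = reward(s, a)
        s2 = rng.choice(STATES)  # simplified: load evolves independently
        best_next = max(q[(s2, x)] for x in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
```

    After training, the greedy policy scales out on high load, scales in on low load, and holds otherwise, which is the kind of accurate timing-and-count decision the abstract argues for.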

    An SDN-based solution for horizontal auto-scaling and load balancing of transparent VNF clusters

    © 2021 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/). This paper studies the problem of the dynamic scaling and load balancing of transparent virtualized network functions (VNFs). It analyzes different particularities of this problem, such as loop avoidance when performing scaling-out actions, and bidirectional flow affinity. To address this problem, a software-defined networking (SDN)-based solution is implemented consisting of two SDN controllers and two OpenFlow switches (OFSs). In this approach, the SDN controllers run the solution logic (i.e., the monitoring, scaling, and load-balancing modules). Following the SDN controllers' instructions, the OFSs redirect traffic to and from the VNF clusters (i.e., the load-balancing strategy). Several experiments were conducted to validate the feasibility of this proposed solution on a real testbed. Through connectivity tests, not only could end-to-end (E2E) traffic be successfully carried through the VNF cluster, but the bidirectional flow affinity strategy was also found to perform well because it could simultaneously create flow rules in both switches. Moreover, the selected CPU-based load-balancing method guaranteed an average imbalance below 10% while ensuring that new incoming traffic was redirected to the least-loaded instance without requiring packet modification. Additionally, the designed monitoring function was able to detect failures in the set of active members in near real time and activate new instances in less than a minute. Likewise, the proposed auto-scaling module responded quickly to traffic changes. Our solution showed that the use of SDN controllers along with OFSs provides great flexibility to implement different load-balancing, scaling, and monitoring strategies.
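    The two behaviors the experiments validate, least-loaded instance selection based on CPU readings and bidirectional flow affinity, can be sketched together in a few lines. This is an illustrative model only; the instance names and CPU figures are invented, and the real system installs the equivalent logic as OpenFlow rules rather than Python state.

```python
# Minimal sketch of CPU-based load balancing with flow affinity.
class LoadBalancer:
    def __init__(self, cpu_loads):
        self.cpu_loads = dict(cpu_loads)  # instance -> CPU utilization %
        self.affinity = {}                # flow id -> pinned instance

    def pick(self, flow_id):
        # Existing flows keep their instance (bidirectional affinity);
        # new flows go to the least-loaded instance.
        if flow_id in self.affinity:
            return self.affinity[flow_id]
        inst = min(self.cpu_loads, key=self.cpu_loads.get)
        self.affinity[flow_id] = inst
        return inst

lb = LoadBalancer({"vnf-a": 70.0, "vnf-b": 30.0})
first = lb.pick("flow-1")  # new flow: least-loaded instance, vnf-b
again = lb.pick("flow-1")  # same flow: affinity keeps it on vnf-b
```

    Pinning the flow to one instance is what lets both directions of the connection traverse the same transparent VNF, which is why the paper installs matching rules in both switches.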

    MPTCP Robustness Against Large-Scale Man-in-the-Middle Attacks

    Multipath communications at the Internet scale have long been a myth, with no actual protocol deployed at large scale. Recently, the Multipath Transmission Control Protocol (MPTCP) extension was standardized and is undergoing rapid adoption in many different use cases, from mobile to fixed access networks, and from data centers to core networks. Among its major benefits (i.e., reliability thanks to backup-path rerouting, throughput increase thanks to link aggregation, and confidentiality, since intercepting a full connection becomes more difficult), the latter has attracted the least attention. How effective it would be to use MPTCP, or an equivalent multipath transport-layer protocol, to exploit multiple Internet-scale paths and decrease the probability of Man-in-the-Middle (MITM) attacks is the question we try to answer. By analyzing the Autonomous System (AS)-level graph, we identify which countries and regions show a higher level of robustness against AS-level MITM attacks, carried out for example through core cable tapping or route hijacking.
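    The confidentiality argument can be made concrete on a toy AS-level graph: an on-path attacker at a single AS can reconstruct a full MPTCP connection only if that AS sits on every subflow path. The AS names and paths below are invented for illustration; the paper's analysis operates on the real AS-level graph.

```python
# Toy AS-level paths for two MPTCP subflows between the same endpoints.
paths = [
    ["AS1", "AS2", "AS5"],  # subflow 1
    ["AS1", "AS3", "AS5"],  # subflow 2
]

def intercepting_ases(subflow_paths):
    # Transit ASes (endpoints excluded) that appear on ALL subflow
    # paths: only these positions can intercept the full connection.
    transit = [set(p[1:-1]) for p in subflow_paths]
    return set.intersection(*transit)

single = intercepting_ases([paths[0]])  # single-path TCP: AS2 sees all
multi = intercepting_ases(paths)        # disjoint subflows: no single
                                        # transit AS sees everything
```

    With a single path, the transit AS trivially observes the whole connection; with AS-disjoint subflows the intersection is empty, which is exactly the robustness property the paper quantifies per country and region.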

    System of Systems Lifecycle Management: A New Concept Based on Process Engineering Methodologies

    In order to tackle interoperability issues of large-scale automation systems, SOA (Service-Oriented Architecture) principles, where information exchange is manifested by systems providing and consuming services, have already been introduced. However, the deployment, operation, and maintenance of an extensive SoS (System of Systems) pose enormous challenges for system integrators as well as network and service operators. The existing lifecycle management approaches do not cover all aspects of SoS management; therefore, an integrated solution is required. The purpose of this paper is to introduce a new lifecycle approach, namely SoSLM (System of Systems Lifecycle Management). This paper first provides an in-depth description and comparison of the most relevant process engineering methodologies and ITSM (Information Technology Service Management) frameworks, and how they affect various lifecycle management strategies. The paper's novelty lies in introducing an Industry 4.0-compatible PLM (Product Lifecycle Management) model and extending it to cover SoS management-related issues, building on well-known process engineering methodologies. The presented methodologies are adapted to the PLM model, thus creating the recommended SoSLM model. This is supported by demonstrations of how IIoT (Industrial Internet of Things) applications and services can be developed and handled. Accordingly, complete implementation and integration are presented based on the proposed SoSLM model, using the Arrowhead framework that is available for IIoT SoS.

    Economic Analysis of a Multi-Sided Platform for Sensor-Based Services in the Internet of Things

    A business model for sensor-based services is proposed where a platform creates a multi-sided market. The business model comprises a platform that serves as an intermediary between human users, app developers, and sensor networks, so that the users use the apps and the apps process the data supplied by the sensor networks. The platform, acting as a monopolist, posts a fee for each of the three sides so as to maximize its profit. This business model intends to mimic the market-creating innovation that the main mobile app platforms have generated in the smartphone sector. We conduct an analysis of the profit maximization problem faced by the platform, show that optimum prices exist for any parameter value, and show that these prices always induce an equilibrium in the number of agents from each side that join the platform. We show that the relative strength of the value that advertisers attach to the users determines the platform price structure. Depending on the value of this relative strength, two alternative subsidizing strategies are feasible: to subsidize either the users' subscription or the developers' registration. Finally, all agents benefit from an increase in the population at any of the three sides. This result provides a rationale for incentivizing not only user participation, but also the entry of developer undertakings and the deployment of wireless sensor network infrastructure. This work has been supported by the Spanish Ministry of Economy and Competitiveness through Project TIN2013-47272-C2-1-R (co-supported by the European Social Fund) and by Institute ITACA-UPV through "Convocatorias Ayudas 2019-5". Guijarro, L.; Vidal Catalá, J.R.; Pla, V.; Naldi, M. (2019). Economic Analysis of a Multi-Sided Platform for Sensor-Based Services in the Internet of Things. Sensors, 19(2), 1-23. https://doi.org/10.3390/s19020373
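    The structure of the platform's pricing problem can be sketched numerically: choose a fee per side to maximize profit when each side's participation depends on its own fee and on the size of the other sides. The linear participation functions, base demands, and cross-side coefficients below are all assumptions for illustration, not the paper's model.

```python
# Toy sketch of a monopolist platform pricing three market sides.
def participation(fee, base, cross):
    # Hypothetical linear demand: falls with the side's own fee,
    # rises with cross-side attractiveness.
    return max(0.0, base - fee + cross)

def profit(fee_users, fee_devs, fee_nets):
    users = participation(fee_users, base=10.0, cross=0.0)
    devs = participation(fee_devs, base=5.0, cross=0.3 * users)
    nets = participation(fee_nets, base=5.0, cross=0.2 * users)
    return fee_users * users + fee_devs * devs + fee_nets * nets

# Brute-force grid search over candidate fees for each side.
fees = [i * 0.5 for i in range(21)]  # 0.0 .. 10.0
best = max(((fu, fd, fs) for fu in fees for fd in fees for fs in fees),
           key=lambda t: profit(*t))
```

    Even this crude sketch shows the cross-side effect the paper analyzes: lowering the user fee enlarges the user base, which raises developers' and sensor networks' willingness to join and can make subsidizing one side profitable.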

    Elastic provisioning of network and computing resources at the edge for IoT services

    The fast growth of Internet-connected embedded devices demands new system capabilities at the network edge, such as provisioning local data services on both limited network and computational resources. This contribution addresses that problem by enhancing the usage of scarce edge resources. It designs, deploys, and tests a new solution that incorporates the positive functional advantages offered by software-defined networking (SDN), network function virtualization (NFV), and fog computing (FC). Our proposal autonomously activates or deactivates embedded virtualized resources in response to clients' requests for edge services. Complementing the existing literature, the results obtained from extensive tests on our programmable proposal show the superior performance of the proposed elastic edge resource provisioning algorithm, which also assumes an SDN controller with proactive OpenFlow behavior. According to our results, the maximum flow rate for the proactive controller is 15% higher, the maximum delay is 83% smaller, and the loss is 20% smaller compared to when the non-proactive controller is in operation. This improvement in flow quality is complemented by a reduction in control channel workload. The controller also records the time duration of each edge service session, which can enable the accounting of used resources per session.
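    The elastic activate/deactivate behavior described above can be sketched as a small provisioning loop that keeps just enough virtualized instances alive for the current session count. The per-instance capacity and instance cap are invented values; the paper's algorithm additionally involves the SDN controller and OpenFlow rules.

```python
# Sketch of elastic activation/deactivation of edge service instances.
class EdgeProvisioner:
    def __init__(self, capacity_per_instance=10, max_instances=4):
        self.capacity = capacity_per_instance
        self.max_instances = max_instances
        self.active = 0

    def adjust(self, active_sessions):
        # Ceiling division: smallest instance count covering the load,
        # capped by the available embedded resources.
        needed = -(-active_sessions // self.capacity)
        self.active = min(max(needed, 0), self.max_instances)
        return self.active

p = EdgeProvisioner()
a1 = p.adjust(25)  # 25 sessions -> 3 instances activated
a2 = p.adjust(5)   # load drops -> scale back to 1 instance
a3 = p.adjust(0)   # no sessions -> all instances released
```

    Releasing instances as soon as demand drops is what "enhancing the usage of scarce edge resources" amounts to in practice: idle virtualized functions do not keep consuming the constrained edge node.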

    Engineering Self-Adaptive Applications on Software Defined Infrastructure

    Cloud computing is a flexible platform that offers faster innovation, elastic resources, and economies of scale. However, it is challenging to ensure non-functional properties such as performance, cost, and security of applications hosted in the cloud. Applications should adapt to fluctuating workloads to meet the desired performance goals on the one hand and, on the other, operate economically to reduce operational cost. Moreover, cloud applications are attractive targets of security threats such as distributed denial-of-service attacks, which target the availability of applications and increase cost. Given such circumstances, it is vital to engineer applications that are able to self-adapt to such volatile conditions. In this thesis, we investigate techniques and mechanisms to engineer model-based application autonomic management systems that strive to meet the performance, cost, and security objectives of software systems running in the cloud. In addition to using the elasticity feature of the cloud, our proposed autonomic management systems employ run-time network adaptations using the emerging software-defined networking and network function virtualization. We propose a novel approach to self-protecting applications where the application traffic is dynamically managed between public and private clouds depending on the condition of the traffic. Our management approach is able to adapt the bandwidth rates of application traffic to meet performance and cost objectives. Through run-time performance models as well as optimization, the management system maximizes the profit each time the application needs to adapt. Our autonomous management solutions are implemented and evaluated analytically as well as on multiple public and community clouds to demonstrate their applicability and effectiveness in real-world environments.

    A prediction-based model for consistent adaptive routing in backbone networks at extreme situations

    To reduce congestion, numerous routing solutions have been proposed for backbone networks, but how to select paths that stay consistently optimal for a long time in extremely congested situations, avoiding unnecessary path reroutings, has not yet been investigated much. Solving that issue requires a model that can measure the consistency of path latency differences. In this paper, we take a step towards a consistent differential path latency model and, by making predictions based on that model, propose a metric called the Path Swap Indicator (PSI). By learning the latency history of all optional paths, PSI is able to predict the onset of an obvious and steady channel deterioration and make the decision to switch paths. The effect of PSI is evaluated from the following aspects: (1) the consistency of the path selected, by measuring the time interval between PSI changes; (2) the accuracy of the channel congestion prediction; and (3) the improvement of the congestion situation. Experiments were carried out on a testbed using real-life Abilene traffic datasets collected at different times and locations. Results show that the proposed PSI can stay consistent for over 1000 s on average, and more than 3000 s at the longest in our experiments, while at the same time achieving a congestion improvement of more than 300% on average, and more than 200% at the least. It is evident that the proposed PSI metric is able to provide a consistent channel congestion prediction with satisfactory channel improvement at the same time. The results also demonstrate how different parameter values impact the result, both in terms of prediction consistency and congestion improvement.
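    The stay-consistent-then-swap behavior the abstract describes can be sketched as a moving-average indicator: switch paths only when the latency difference in favor of the alternative is sustained, not on a transient spike. The window length, threshold, and two-path setup are assumptions for illustration, not the paper's actual PSI definition.

```python
from collections import deque

# Illustrative path-swap indicator over two candidate paths.
class PathSwapIndicator:
    def __init__(self, window=5, threshold=10.0):
        self.hist = {"A": deque(maxlen=window), "B": deque(maxlen=window)}
        self.current = "A"
        self.threshold = threshold
        self.window = window

    def observe(self, lat_a, lat_b):
        self.hist["A"].append(lat_a)
        self.hist["B"].append(lat_b)
        if len(self.hist["A"]) < self.window:
            return self.current  # not enough history yet
        avg = {p: sum(h) / len(h) for p, h in self.hist.items()}
        other = "B" if self.current == "A" else "A"
        # Swap only when the moving-average difference shows a
        # sustained, significant advantage for the other path.
        if avg[self.current] - avg[other] > self.threshold:
            self.current = other
        return self.current

psi = PathSwapIndicator()
for _ in range(5):
    psi.observe(50.0, 45.0)  # small, steady difference: stay on A
stayed = psi.current
for _ in range(5):
    psi.observe(80.0, 40.0)  # sustained deterioration: swap to B
swapped = psi.current
```

    Averaging over a window is what keeps the selection consistent: a single bad sample cannot trigger a rerouting, mirroring the long inter-swap intervals reported in the experiments.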