23 research outputs found

    Scalability evaluation of VPN technologies for secure container networking

    For years, containers have been a popular choice for lightweight virtualization in the cloud. With the rise of more powerful and flexible edge devices, container deployment strategies have emerged that leverage the computational power of edge devices for optimal workload distribution. This move from a secure data center network to heterogeneous public and private networks introduces issues in terms of security and network topology that can be partially solved by using a Virtual Private Network (VPN) to connect edge nodes to the cloud. In this paper, the scalability of VPN software is evaluated to determine if and how it can be used in large-scale clusters containing edge nodes. Benchmarks are performed to determine the maximum number of VPN-connected nodes and the influence of network degradation on VPN performance, primarily using traffic typical of edge devices generating IoT data. High-level conclusions are drawn from the results, indicating that WireGuard is an excellent choice of VPN software for connecting edge nodes in a cluster. Analysis of the results also shows the strengths and weaknesses of other VPN software.
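
    The benchmarking approach described here can be approximated with a small traffic generator. Below is a minimal sketch, not the authors' benchmark harness: it sends small UDP datagrams typical of IoT telemetry towards an echo server reachable through the VPN tunnel and records round-trip latency. The peer address 10.0.0.1, port 5005, payload size, and message rate are assumed placeholder values.

        import socket
        import statistics
        import time

        PEER = ("10.0.0.1", 5005)     # assumed echo server behind the VPN tunnel
        PAYLOAD = b"x" * 64           # small datagram, typical of IoT telemetry
        SAMPLES = 100

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(2.0)

        rtts = []
        for _ in range(SAMPLES):
            start = time.perf_counter()
            sock.sendto(PAYLOAD, PEER)
            try:
                sock.recvfrom(1024)                          # wait for the echoed reply
                rtts.append(time.perf_counter() - start)
            except socket.timeout:
                pass                                         # count as a lost probe
            time.sleep(0.1)                                  # roughly 10 messages per second

        if rtts:
            print(f"sent={SAMPLES} received={len(rtts)} "
                  f"median_rtt_ms={statistics.median(rtts) * 1000:.2f}")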

    Data-Driven Emulation of Mobile Access Networks

    Network monitoring is fundamental to understanding network evolution and behavior. However, monitoring studies have the main limitation that new experiments cannot be run once the phenomenon under analysis, e.g., congestion, is over. To overcome this limitation, network emulation is of vital importance for network testing and research experiments in both wired and mobile networks. When it comes to mobile networks, the variety of technical characteristics, coupled with opaque network configurations, makes realistic network emulation a challenging task. In this paper, we address this issue by leveraging a large-scale dataset composed of 500M network latency measurements in Mobile BroadBand networks. Using this dataset, we create 51 different network latency profiles based on the Mobile BroadBand operator, the radio access technology, and the signal strength. These profiles are then processed to make them compatible with the tc-netem emulation tool. Finally, we show that, despite the limitations of the current tc-netem emulation tool, Generative Adversarial Networks are a promising solution for creating realistic temporal emulation. We believe that this work could be the first step toward comprehensive data-driven network emulation. To this end, we make our profiles and code available to foster further studies in these directions.
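
    Applying one of the derived latency profiles to a network interface is the step where tc-netem comes in. The snippet below is a minimal sketch of that step; the interface name and the delay, jitter, and loss values are illustrative placeholders, not one of the paper's 51 published profiles.

        import subprocess

        # Hypothetical profile for one (operator, radio technology, signal strength) combination.
        profile = {"iface": "eth0", "delay_ms": 60, "jitter_ms": 15, "loss_pct": 0.5}

        cmd = [
            "tc", "qdisc", "replace", "dev", profile["iface"], "root", "netem",
            "delay", f"{profile['delay_ms']}ms", f"{profile['jitter_ms']}ms",
            "distribution", "normal",          # netem ships normal/pareto/paretonormal delay tables
            "loss", f"{profile['loss_pct']}%",
        ]

        # Requires root privileges and the sch_netem kernel module.
        subprocess.run(cmd, check=True)
        print("applied:", " ".join(cmd))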

    AI-Enabled Traffic Control Prioritization in Software-Defined IoT Networks for Smart Agriculture

    Smart agricultural systems have received a great deal of interest in recent years because of their potential for improving the efficiency and productivity of farming practices. These systems gather and analyze environmental data such as temperature, soil moisture, and humidity using sensor networks and Internet of Things (IoT) devices. This information can then be utilized to improve crop growth, identify plant illnesses, and minimize water usage. However, dealing with data complexity and dynamism can be difficult when using traditional processing methods. As a solution, we offer a novel framework that combines Machine Learning (ML) with a Reinforcement Learning (RL) algorithm to optimize traffic routing inside Software-Defined Networks (SDN) through traffic classification. ML models such as Logistic Regression (LR), Random Forest (RF), k-Nearest Neighbours (KNN), Support Vector Machines (SVM), Naive Bayes (NB), and Decision Trees (DT) are used to categorize data traffic into emergency, normal, and on-demand classes. The basic version of RL, i.e., the Q-learning (QL) algorithm, is utilized alongside the SDN paradigm to optimize routing based on traffic classes; RF and DT outperform the other ML models in terms of accuracy. Our results illustrate the value of the suggested technique in optimizing traffic routing in SDN environments. Integrating ML-based data classification with the QL method improves resource allocation, reduces latency, and improves the delivery of emergency traffic. The versatility of SDN facilitates the adaptation of routing algorithms to real-time changes in network conditions and traffic characteristics.
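
    A minimal sketch of the two stages, offered as an illustration rather than the paper's implementation, is shown below: a Random Forest classifier labels flows as emergency, normal, or on-demand, and a tabular Q-learning loop then learns a preferred path per traffic class. The flow features, reward function, and number of candidate paths are assumptions made only for the example.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Stage 1: traffic classification on synthetic flow features (packet size, rate, duration).
        rng = np.random.default_rng(0)
        X = rng.random((300, 3))
        y = rng.integers(0, 3, 300)            # 0=emergency, 1=normal, 2=on-demand (toy labels)
        clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

        # Stage 2: Q-learning over candidate paths, one Q-table row per traffic class.
        n_classes, n_paths = 3, 4
        Q = np.zeros((n_classes, n_paths))
        alpha, gamma, epsilon = 0.1, 0.9, 0.2

        def reward(traffic_class, path):
            # Toy reward: emergency traffic is penalised more heavily for high-latency paths.
            latency = rng.random() * (path + 1)
            weight = 3.0 if traffic_class == 0 else 1.0
            return -weight * latency

        for _ in range(2000):
            flow = rng.random((1, 3))
            c = int(clf.predict(flow)[0])
            a = int(rng.integers(n_paths)) if rng.random() < epsilon else int(Q[c].argmax())
            r = reward(c, a)
            # One-step episodes: bootstrap on the best action for the same traffic class.
            Q[c, a] += alpha * (r + gamma * Q[c].max() - Q[c, a])

        print("preferred path per class:", Q.argmax(axis=1))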

    Blockchain-based secure authentication with improved performance for fog computing

    Advancement in the Internet of Things (IoT) and cloud computing has escalated the number of connected edge devices in a smart city environment. Having billions more devices has contributed to security concerns, and an attack-proof authentication mechanism is needed to sustain the IoT environment. Securing all devices can be a huge task, requires substantial computational power, and can become a bottleneck for devices with limited computational resources. To improve the authentication mechanism, many researchers have proposed decentralized approaches such as blockchain technology for securing fog and IoT environments. Ethereum is considered a popular blockchain platform and is used by researchers to implement authentication mechanisms due to its programmable smart contracts. In this research, we propose a secure authentication mechanism with improved performance. Neo is a blockchain platform whose properties can provide improved security and faster execution, and this research utilizes the intrinsic properties of the Neo blockchain to develop a secure authentication mechanism. The proposed authentication mechanism is compared with existing algorithms and is shown to be 20 to 90 per cent faster in execution time, with a 30 to 70 per cent reduction in registration and authentication time compared to existing methods.
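
    The general register-then-authenticate pattern behind such a mechanism can be sketched as follows. This is only an illustrative outline with an in-memory dictionary standing in for on-chain smart-contract storage; it is not the paper's Neo contract code, and the device identifiers and hashing choices are assumptions.

        import hashlib
        import secrets

        ledger = {}   # stands in for smart-contract storage on the blockchain

        def register(device_id: str, secret: str) -> None:
            salt = secrets.token_hex(16)
            digest = hashlib.sha256((salt + secret).encode()).hexdigest()
            ledger[device_id] = (salt, digest)       # a contract would persist this on-chain

        def authenticate(device_id: str, secret: str) -> bool:
            record = ledger.get(device_id)
            if record is None:
                return False
            salt, digest = record
            return hashlib.sha256((salt + secret).encode()).hexdigest() == digest

        register("edge-node-42", "s3cr3t")
        print(authenticate("edge-node-42", "s3cr3t"))    # True
        print(authenticate("edge-node-42", "wrong"))     # False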

    Quality of Experience Experimentation Prediction Framework through Programmable Network Management

    Quality of experience (QoE) metrics can be used to assess user perception and satisfaction with data services delivered over the Internet. QoE metrics are end-to-end because QoE depends on both the user's perception and the service used. Traditionally, network optimization has focused on improving network properties such as quality of service (QoS). In this paper, we examine adaptive streaming over a software-defined network environment. We aim to evaluate and study the media streams, the aspects affecting the streams, and the network, in order to analyse the network's features and their direct relationship with the perceived QoE. We then use machine learning to build a prediction model based on subjective user experiments. This helps to eliminate future physical experiments and automate the process of predicting QoE.
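
    The prediction step pairs network-level features with subjective scores. The sketch below is a minimal illustration on synthetic data, not the study's measured dataset: QoS features (throughput, delay, loss) are mapped to a mean opinion score with a Random Forest regressor, and the feature ranges and scoring formula are assumptions made for the example.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import mean_absolute_error
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for a subjective-experiment dataset:
        # features = [throughput_mbps, delay_ms, loss_pct], target = mean opinion score in [1, 5].
        rng = np.random.default_rng(1)
        throughput = rng.uniform(0.5, 20, 500)
        delay = rng.uniform(10, 300, 500)
        loss = rng.uniform(0, 5, 500)
        X = np.column_stack([throughput, delay, loss])
        mos = np.clip(1 + 4 * (throughput / 20) - 0.005 * delay - 0.3 * loss
                      + rng.normal(0, 0.2, 500), 1, 5)

        X_train, X_test, y_train, y_test = train_test_split(X, mos, random_state=0)
        model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
        print("MAE on held-out samples:", mean_absolute_error(y_test, model.predict(X_test)))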

    Big data workflows: Locality-aware orchestration using software containers

    The emergence of the Edge computing paradigm has shifted data processing from centralised infrastructures to heterogeneous and geographically distributed infrastructures. Therefore, data processing solutions must consider data locality to reduce the performance penalties from data transfers among remote data centres. Existing Big Data processing solutions provide limited support for handling data locality and are inefficient in processing the small and frequent events specific to Edge environments. This article proposes a novel architecture and a proof-of-concept implementation for software container-centric Big Data workflow orchestration that puts data locality at the forefront. The proposed solution considers the available data locality information, leverages long-lived containers to execute workflow steps, and handles the interaction with different data sources through containers. We compare the proposed solution with Argo Workflows and demonstrate a significant performance improvement in execution speed when processing the same data units. Finally, we carry out experiments with the proposed solution under different configurations and analyse individual aspects affecting the performance of the overall solution.
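
    The central idea is deciding where each workflow step runs based on where its input data already resides. The sketch below is a minimal, hypothetical placement rule, not the paper's orchestrator: a step is assigned to the node that holds its input, falling back to the least-loaded node; the node names, datasets, and overload threshold are assumptions.

        # Minimal locality-aware placement sketch (illustrative only).
        data_location = {"sensor-batch-1": "edge-a", "sensor-batch-2": "edge-b"}
        node_load = {"edge-a": 0, "edge-b": 0, "cloud-1": 0}

        def place_step(step_name: str, input_dataset: str) -> str:
            """Prefer the node that already holds the input; otherwise pick the least-loaded node."""
            node = data_location.get(input_dataset)
            if node is None or node_load[node] > 5:      # crude overload threshold
                node = min(node_load, key=node_load.get)
            node_load[node] += 1
            return node

        for step, dataset in [("filter", "sensor-batch-1"),
                              ("aggregate", "sensor-batch-2"),
                              ("train-model", "unknown-batch")]:
            print(step, "->", place_step(step, dataset))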
