
    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, so every new need required a new protocol built from scratch. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and an Internet culture in which expediency and pragmatism are favored over formal correctness. Fortunately, recent work on clean-slate Internet design---especially the software-defined networking (SDN) paradigm---offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, and a survey of its applications to networking.

    Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials

    A review of the use of artificial intelligence methods in infrastructure systems

    The artificial intelligence (AI) revolution offers significant opportunities to capitalise on the growth of digitalisation and has the potential to enable the ‘system of systems’ approach required in increasingly complex infrastructure systems. This paper reviews the extent to which research in economic infrastructure sectors has engaged with fields of AI, to investigate the specific AI methods chosen and the purposes to which they have been applied both within and across sectors. Machine learning is found to dominate the research in this field, with methods such as artificial neural networks, support vector machines, and random forests among the most popular. The automated reasoning technique of fuzzy logic has also seen widespread use, due to its ability to incorporate uncertainties in input variables. Across the infrastructure sectors of energy, water and wastewater, transport, and telecommunications, the main purposes to which AI has been applied are network provision, forecasting, routing, maintenance and security, and network quality management. The data-driven nature of AI offers significant flexibility, and work has been conducted across a range of network sizes and at different temporal and geographic scales. However, there remains a lack of integration of planning and policy concerns, such as stakeholder engagement and quantitative feasibility assessment, and the majority of research focuses on a specific type of infrastructure, with an absence of work beyond individual economic sectors. To enable solutions to be implemented in real-world infrastructure systems, research will need to move away from a siloed view and adopt a more interdisciplinary perspective that considers the increasing interconnectedness of these systems.

    A Cognitive Routing framework for Self-Organised Knowledge Defined Networks

    This study investigates the applicability of machine learning methods to routing protocols for achieving rapid convergence in self-organized knowledge-defined networks. The research explores the constituents of the Self-Organized Networking (SON) paradigm for 5G and beyond, aiming to design a routing protocol that complies with the SON requirements. Further, it exploits a contemporary discipline called Knowledge-Defined Networking (KDN) to extend the routing capability by calculating the “Most Reliable” path rather than the shortest one. The research identifies the potential key areas and possible techniques to meet these objectives by surveying the state of the art of the relevant fields, such as QoS-aware routing, hybrid SDN architectures, intelligent routing models, and service migration techniques. The design phase focuses primarily on the mathematical modelling of the routing problem and approaches the solution by optimizing at the structural level. The work contributes the Stochastic Temporal Edge Normalization (STEN) technique, which fuses link and node utilization for cost calculation; MRoute, a hybrid routing algorithm for SDN that leverages STEN to provide constant-time convergence; and Most Reliable Route First (MRRF), which uses a Recurrent Neural Network (RNN) to approximate route reliability as the routing metric. Additionally, the research outcomes include a cross-platform SDN integration framework (SDN-SIM) and a secure migration technique for containerized services in a Multi-access Edge Computing environment using Distributed Ledger Technology. The work now looks toward the development of 6G standards and compliance with Industry 5.0, aiming to enhance the present outcomes in light of Deep Reinforcement Learning and Quantum Computing.
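    The abstract does not spell out STEN's formula, but the core idea it describes, fusing link and node utilization into a single path cost and then selecting the most reliable route, can be illustrated with a small sketch. The cost function, weights, and toy topology below are assumptions made for illustration, not the thesis's actual definitions.

```python
# Minimal sketch of utilization-aware path selection: fuse link and node
# utilization into one edge cost and run Dijkstra. The cost formula is a
# hypothetical stand-in for STEN, whose exact definition is not given here.
import heapq

def edge_cost(link_util, node_util, alpha=0.5):
    # Higher utilization -> higher cost (less "reliable" in this toy model).
    return alpha * link_util + (1 - alpha) * node_util

def most_reliable_path(graph, node_util, src, dst):
    # graph: {u: [(v, link_util), ...]}, node_util: {v: utilization in [0, 1]}
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, link_util in graph.get(u, []):
            nd = d + edge_cost(link_util, node_util[v])
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the selected path from dst back to src.
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path))

graph = {"A": [("B", 0.2), ("C", 0.7)], "B": [("D", 0.3)], "C": [("D", 0.1)], "D": []}
node_util = {"A": 0.1, "B": 0.4, "C": 0.8, "D": 0.2}
print(most_reliable_path(graph, node_util, "A", "D"))  # ['A', 'B', 'D']
```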

    Optimizing Gradual SDN Upgrades in ISP Networks

    Nowadays, there is a fast-paced shift from legacy telecommunication systems to novel software-defined network (SDN) architectures that can support on-the-fly network reconfiguration, thereby empowering advanced traffic engineering mechanisms. Despite this momentum, migration to SDN cannot be realized at once, especially in high-end networks of Internet service providers (ISPs). It is expected that ISPs will gradually upgrade their networks to SDN over a period that spans several years. In this paper, we study the SDN upgrading problem in an ISP network: which nodes to upgrade and when. We consider a general model that captures different migration costs and network topologies, and two plausible ISP objectives: 1) the maximization of the traffic that traverses at least one SDN node, and 2) the maximization of the number of dynamically selectable routing paths enabled by SDN nodes. We leverage the theory of submodular and supermodular functions to devise algorithms with provable approximation ratios for each objective. Using real-world network topologies and traffic matrices, we evaluate the performance of our algorithms and show up to 54% gains over state-of-the-art methods. Moreover, we describe the interplay between the two objectives; maximizing one may cause a factor of 2 loss to the other. We also study the dual upgrading problem, i.e., minimizing the upgrading cost for the ISP while ensuring specific performance goals. Our analysis shows that our proposed algorithm can achieve up to 2.5 times lower cost to ensure performance goals over state-of-the-art methods.

    EC/H2020/679158/EU/Resolving the Tussle in the Internet: Mapping, Architecture, and Policy Making/ResolutioNe
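    The first objective, maximizing the traffic that traverses at least one SDN node, is a coverage-style objective and hence monotone submodular, which is why greedy node selection carries the classic (1 - 1/e) approximation guarantee. The sketch below illustrates that greedy heuristic on toy data; it is not the paper's exact algorithm or evaluation setup.

```python
# Minimal sketch: greedily pick k nodes to upgrade so as to maximize the
# total traffic demand that traverses at least one SDN node. Flows and
# node names are toy data, not the paper's topologies or traffic matrices.

def covered_traffic(upgraded, flows):
    # flows: list of (demand, path_nodes); a flow is covered if any node
    # on its path has been upgraded to SDN.
    return sum(d for d, path in flows if any(n in upgraded for n in path))

def greedy_upgrade(nodes, flows, k):
    upgraded = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        base = covered_traffic(upgraded, flows)
        for n in nodes - upgraded:
            gain = covered_traffic(upgraded | {n}, flows) - base
            if gain > best_gain:
                best, best_gain = n, gain
        if best is None:
            break  # no further marginal gain
        upgraded.add(best)
    return upgraded

flows = [(10, ["a", "b", "c"]), (5, ["b", "d"]), (8, ["c", "e"]), (3, ["d", "e"])]
nodes = {"a", "b", "c", "d", "e"}
print(greedy_upgrade(nodes, flows, 2))  # {'c', 'd'}: covers all 26 demand units
```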

    A DDoS Attack Detection and Mitigation with Software-Defined Internet of Things Framework

    With the spread of Internet of Things (IoT) applications, security has become extremely important. A recent distributed denial-of-service (DDoS) attack revealed the ubiquity of vulnerabilities in the IoT, with many IoT devices unwittingly contributing to the attack. The emerging software-defined anything (SDx) paradigm provides a way to safely manage IoT devices. In this paper, we first present a general framework for the software-defined Internet of Things (SD-IoT) based on the SDx paradigm. The proposed framework consists of a controller pool containing SD-IoT controllers, SD-IoT switches integrated with an IoT gateway, and IoT devices. We then propose an algorithm for detecting and mitigating DDoS attacks within the SD-IoT framework: the cosine similarity of the vectors of packet-in message rates at boundary SD-IoT switch ports is used to determine whether a DDoS attack is occurring in the IoT. Finally, experimental results show that the proposed algorithm performs well and that the proposed framework is well suited to strengthening the security of an IoT composed of heterogeneous and vulnerable devices.
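    The detection step named in the abstract, cosine similarity over vectors of packet-in message rates at boundary switch ports, can be sketched as follows. The baseline-comparison policy and the threshold value are assumptions for illustration; the paper's actual decision rule and parameters may differ.

```python
# Minimal sketch: flag a possible DDoS attack when the current vector of
# per-port packet-in rates diverges (low cosine similarity) from a baseline.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def detect_ddos(baseline_rates, current_rates, threshold=0.8):
    # Rates are packet-in messages per second, one entry per boundary port.
    # A large drop in similarity indicates an abrupt shift in traffic shape.
    return cosine_similarity(baseline_rates, current_rates) < threshold

baseline = [12.0, 9.5, 11.0, 10.2]        # normal per-port packet-in rates
suspicious = [12.0, 9.5, 11.0, 480.0]     # one port's rate spikes
print(detect_ddos(baseline, suspicious))  # True -> trigger mitigation
```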

    A Survey of Machine Learning Techniques for Video Quality Prediction from Quality of Delivery Metrics

    A growing number of video streaming networks are incorporating machine learning (ML) applications. The growth of video streaming services places enormous pressure on network and video content providers, who need to proactively maintain high levels of video quality. ML has been applied to predict the quality of video streams. Quality of delivery (QoD) measurements, which capture the end-to-end performance of network services, have been leveraged in video quality prediction. The drive for end-to-end encryption, for privacy and digital rights management, has brought about a lack of visibility for operators who desire insights from video quality metrics. In response, numerous solutions have been proposed to tackle the challenge of video quality prediction from QoD-derived metrics. This survey reviews studies that focus on ML techniques for predicting video quality from QoD metrics in video streaming services. In the context of video quality measurements, we focus on QoD metrics, which are not tied to a particular type of video streaming service. Unlike previous reviews in the area, this contribution considers papers published between 2016 and 2021. Approaches for predicting video quality from QoD are grouped under the following headings: (1) video quality prediction under QoD impairments, (2) prediction of video quality from encrypted video streaming traffic, (3) predicting video quality in HTTP adaptive streaming (HAS) applications, (4) predicting video quality in SDN applications, (5) predicting video quality in wireless settings, and (6) predicting video quality in WebRTC applications. Throughout the survey, research challenges and directions in this area are discussed, including (1) machine learning over deep learning; (2) adaptive deep learning for improved video delivery; (3) computational cost and interpretability; and (4) self-healing networks and failure recovery. The survey findings reveal that traditional ML algorithms are the most widely adopted models for solving video quality prediction problems. This family of algorithms has considerable potential because such models are well understood, easy to deploy, and have lower computational requirements than deep learning techniques.
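    As a concrete illustration of the approach the survey identifies as dominant, the sketch below trains a traditional ML model (a random forest) to predict a video quality score from QoD-style network metrics. The feature set, synthetic data, and quality target are placeholders, not taken from any surveyed study.

```python
# Minimal sketch: a random forest regressor mapping QoD-style features
# (throughput, RTT, loss, jitter) to a MOS-like video quality score.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2000
throughput = rng.uniform(0.5, 25.0, n)   # Mbit/s
rtt = rng.uniform(10, 300, n)            # ms
loss = rng.uniform(0.0, 0.05, n)         # packet loss ratio
jitter = rng.uniform(0, 50, n)           # ms
X = np.column_stack([throughput, rtt, loss, jitter])
# Synthetic MOS-like target in [1, 5]: improves with throughput and
# degrades with delay, loss, and jitter (placeholder relationship).
mos = np.clip(1 + 4 * (throughput / 25) - 0.004 * rtt - 20 * loss
              - 0.01 * jitter + rng.normal(0, 0.2, n), 1, 5)

model = RandomForestRegressor(n_estimators=100, random_state=0)
print("Cross-validated R^2:", cross_val_score(model, X, mos, cv=5).mean())
```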
