Applying Formal Methods to Networking: Theory, Techniques and Applications
Despite its great importance, modern network infrastructure is remarkable for
the lack of rigor in its engineering. The Internet, which began as a research
experiment, was never designed to handle the users and applications it hosts
today. The lack of formalization of the Internet architecture meant limited
abstractions and modularity, especially for the control and management planes,
thus requiring a new protocol built from scratch for every new need. This led
to an unwieldy, ossified Internet architecture resistant to any attempts at
formal verification, and an Internet culture where expediency and pragmatism
are favored over formal correctness. Fortunately, recent work in the space of
clean slate Internet design---especially, the software defined networking (SDN)
paradigm---offers the Internet community another chance to develop the right
kind of architecture and abstractions. This has also led to a great resurgence
of interest in applying formal methods to the specification, verification, and
synthesis of networking protocols and applications. In this paper, we present a
self-contained tutorial of the formidable amount of work that has been done in
formal methods, and present a survey of their applications to networking.
Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials
A review of the use of artificial intelligence methods in infrastructure systems
The artificial intelligence (AI) revolution offers significant opportunities to capitalise on the growth of digitalisation and has the potential to enable the "system of systems" approach required in increasingly complex infrastructure systems. This paper reviews the extent to which research in economic infrastructure sectors has engaged with fields of AI, to investigate the specific AI methods chosen and the purposes to which they have been applied both within and across sectors. Machine learning is found to dominate the research in this field, with methods such as artificial neural networks, support vector machines, and random forests among the most popular. The automated reasoning technique of fuzzy logic has also seen widespread use, due to its ability to incorporate uncertainties in input variables. Across the infrastructure sectors of energy, water and wastewater, transport, and telecommunications, the main purposes to which AI has been applied are network provision, forecasting, routing, maintenance and security, and network quality management. The data-driven nature of AI offers significant flexibility, and work has been conducted across a range of network sizes and at different temporal and geographic scales. However, there remains a lack of integration of planning and policy concerns, such as stakeholder engagement and quantitative feasibility assessment, and the majority of research focuses on a specific type of infrastructure, with an absence of work beyond individual economic sectors. To enable solutions to be implemented into real-world infrastructure systems, research will need to move away from a siloed perspective and adopt a more interdisciplinary perspective that considers the increasing interconnectedness of these systems.
A Cognitive Routing framework for Self-Organised Knowledge Defined Networks
This study investigates the applicability of machine learning methods to routing protocols for achieving rapid convergence in self-organized knowledge-defined networks. The research explores the constituents of the Self-Organized Networking (SON) paradigm for 5G and beyond, aiming to design a routing protocol that complies with SON requirements. Further, it exploits a contemporary discipline called Knowledge-Defined Networking (KDN) to extend routing capability by calculating the "Most Reliable" path rather than the shortest one.
The research identifies the potential key areas and possible techniques to meet the objectives by surveying the state of the art of the relevant fields, such as QoS-aware routing, hybrid SDN architectures, intelligent routing models, and service migration techniques. The design phase focuses primarily on the mathematical modelling of the routing problem and approaches the solution by optimizing at the structural level. The work contributes the Stochastic Temporal Edge Normalization (STEN) technique, which fuses link and node utilization for cost calculation; MRoute, a hybrid routing algorithm for SDN that leverages STEN to provide constant-time convergence; and Most Reliable Route First (MRRF), which uses a Recurrent Neural Network (RNN) to approximate route reliability as its routing metric. Additionally, the research outcomes include a cross-platform SDN integration framework (SDN-SIM) and a secure migration technique for containerized services in a Multi-access Edge Computing
environment using Distributed Ledger Technology.
The research now looks ahead to the development of 6G standards and compliance with Industry 5.0, aiming to enhance the present outcomes in the light of Deep Reinforcement Learning and Quantum Computing.
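The MRRF approach above routes on an approximated reliability metric rather than hop count. One way to sketch the idea: treat per-link success probabilities as multiplicative, convert them to additive costs via negative logarithms, and run Dijkstra. The topology, reliability values, and function names below are illustrative assumptions, not the thesis's implementation (which learns the reliability metric with an RNN).

```python
import heapq
import math

def most_reliable_path(graph, src, dst):
    """Dijkstra over -log(reliability) link costs: minimising the additive
    cost maximises the product of per-link success probabilities."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, reliability in graph.get(u, []):
            cost = d - math.log(reliability)   # reliability assumed in (0, 1]
            if cost < dist.get(v, float("inf")):
                dist[v] = cost
                prev[v] = u
                heapq.heappush(heap, (cost, v))
    if dst not in dist:
        return None, 0.0
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], math.exp(-dist[dst])

# Illustrative topology: node -> [(neighbour, link reliability)]
graph = {
    "A": [("B", 0.99), ("C", 0.90)],
    "B": [("D", 0.80)],
    "C": [("D", 0.99)],
}
path, rel = most_reliable_path(graph, "A", "D")
```

Here A-C-D (0.90 x 0.99 = 0.891) beats the equally short A-B-D (0.99 x 0.80 = 0.792), which is exactly the situation where a reliability metric and a hop-count metric disagree.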
Optimizing Gradual SDN Upgrades in ISP Networks
Nowadays, there is a fast-paced shift from legacy telecommunication systems to novel software-defined network (SDN) architectures that can support on-the-fly network reconfiguration, therefore empowering advanced traffic engineering mechanisms. Despite this momentum, migration to SDN cannot be realized at once, especially in high-end networks of Internet service providers (ISPs). It is expected that ISPs will gradually upgrade their networks to SDN over a period that spans several years. In this paper, we study the SDN upgrading problem in an ISP network: which nodes to upgrade and when. We consider a general model that captures different migration costs and network topologies, and two plausible ISP objectives: 1) the maximization of the traffic that traverses at least one SDN node, and 2) the maximization of the number of dynamically selectable routing paths enabled by SDN nodes. We leverage the theory of submodular and supermodular functions to devise algorithms with provable approximation ratios for each objective. Using real-world network topologies and traffic matrices, we evaluate the performance of our algorithms and show up to 54% gains over state-of-the-art methods. Moreover, we describe the interplay between the two objectives; maximizing one may cause a factor of 2 loss to the other. We also study the dual upgrading problem, i.e., minimizing the upgrading cost for the ISP while ensuring specific performance goals. Our analysis shows that our proposed algorithm can achieve up to 2.5 times lower cost to ensure performance goals over state-of-the-art methods.
EC/H2020/679158/EU/Resolving the Tussle in the Internet: Mapping, Architecture, and Policy Making/ResolutioNe
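The first objective above, maximising the traffic that traverses at least one SDN node, is a coverage-style monotone submodular function, for which greedy selection carries the classic (1 - 1/e) approximation guarantee. A minimal sketch under assumed inputs; the toy flow list and function names are illustrative, not the paper's cost model:

```python
def covered_traffic(upgraded, flows):
    """Total volume of flows whose path crosses at least one upgraded node."""
    return sum(vol for path, vol in flows if any(n in upgraded for n in path))

def greedy_upgrade(nodes, flows, budget):
    """Pick, at each step, the node with the largest marginal coverage gain.
    For monotone submodular objectives this greedy rule is a classic
    (1 - 1/e)-approximation under a cardinality budget."""
    upgraded = set()
    for _ in range(budget):
        base = covered_traffic(upgraded, flows)
        best, best_gain = None, 0
        for n in sorted(nodes - upgraded):   # sorted for deterministic ties
            gain = covered_traffic(upgraded | {n}, flows) - base
            if gain > best_gain:
                best, best_gain = n, gain
        if best is None:          # no remaining node adds coverage
            break
        upgraded.add(best)
    return upgraded

# Toy input: each flow is (node path, traffic volume)
flows = [(["a", "b", "c"], 10), (["b", "d"], 5), (["e", "c"], 3)]
picked = greedy_upgrade({"a", "b", "c", "d", "e"}, flows, budget=2)
```

With a budget of two upgrades, the greedy rule first picks "b" (covering 15 units of traffic) and then "c" (adding the remaining flow), illustrating how marginal gains shrink as coverage grows.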
Threat Landscape and Good Practice Guide for Software Defined Networks/5G
5G represents the next major phase of mobile telecommunication systems and network architectures beyond the current 4G standards, aiming at extreme broadband and ultra-robust, low latency connectivity, to enable programmable connectivity for the Internet of Everything. Despite the significant debate on the technical specifications and the technological maturity of 5G, which are under discussion in various fora, 5G is expected to affect positively and significantly several industry sectors ranging from ICT to car and other manufacturing, health and agriculture in the period up to and beyond 2020. 5G will be driven by the influence of software on network functions, known as Software Defined Networking (SDN) and Network Function Virtualization (NFV). The key concept that underpins SDN is the logical centralization of network control functions by decoupling the control and packet forwarding functionality of the network. NFV complements this vision through the virtualization of these functionalities based on recent advances in general server and enterprise IT virtualization. Considering the technological maturity of the technologies that 5G can leverage, SDN is the one that is moving faster from development to production. To realize the business potential of SDN/5G, a number of technical issues related to the design and operation of Software Defined Networks need to be addressed. Amongst them, SDN/5G security is one of the key issues that need to be addressed comprehensively in order to avoid missing the business opportunities arising from SDN/5G. In this report, we review threats and potential compromises related to the security of SDN/5G networks. More specifically, this report contains a review of the emerging threat landscape of 5G networks, with particular focus on Software Defined Networking. It also considers the security of NFV and radio network access.
To provide a comprehensive account of the emerging SDN/5G threat landscape, this report has identified related network assets and the security threats, challenges and risks arising for these assets. Driven by the identified threats and risks, this report has also reviewed and identified existing security mechanisms and good practices for SDN/5G/NFV, and based on these it has analysed gaps and provided technical, policy and organizational recommendations for proactively enhancing the security of SDN/5G.
A DDoS Attack Detection and Mitigation with Software-Defined Internet of Things Framework
With the spread of Internet of Things (IoT) applications, security has become extremely important. A recent distributed denial-of-service (DDoS) attack revealed the ubiquity of vulnerabilities in the IoT, with many IoT devices unwittingly contributing to the attack. The emerging software-defined anything (SDx) paradigm provides a way to safely manage IoT devices. In this paper, we first present a general framework for the software-defined Internet of Things (SD-IoT) based on the SDx paradigm. The proposed framework consists of a controller pool containing SD-IoT controllers, SD-IoT switches integrated with an IoT gateway, and IoT devices. We then propose an algorithm for detecting and mitigating DDoS attacks using the proposed SD-IoT framework: the cosine similarity of the vectors of packet-in message rates at boundary SD-IoT switch ports is used to determine whether a DDoS attack is occurring in the IoT. Finally, experimental results show that the proposed algorithm performs well, and the proposed framework adapts to strengthen the security of an IoT with heterogeneous and vulnerable devices.
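The detection step described above can be sketched as follows. The rate vectors, threshold value, and function names are illustrative assumptions; the abstract does not give the paper's exact vector construction or threshold.

```python
import math

def cosine_similarity(u, v):
    """Standard cosine similarity between two equal-length rate vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    if nu == 0 or nv == 0:
        return 0.0
    return dot / (nu * nv)

def ddos_suspected(baseline_rates, current_rates, threshold=0.8):
    """Flag an attack when the current packet-in rate vector diverges from
    the learned baseline (similarity drops below the threshold)."""
    return cosine_similarity(baseline_rates, current_rates) < threshold

baseline = [10, 12, 9, 11]    # packet-in msgs/sec per boundary switch port
normal   = [11, 12, 10, 10]   # small fluctuation: vector direction preserved
attack   = [10, 12, 9, 500]   # one port flooding packet-in messages

normal_flag = ddos_suspected(baseline, normal)
attack_flag = ddos_suspected(baseline, attack)
```

The flooding port dominates the vector's direction, so the similarity collapses even though three of the four ports behave normally; this is what makes a direction-based measure robust to uniform load growth.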
A Survey of Machine Learning Techniques for Video Quality Prediction from Quality of Delivery Metrics
A growing number of video streaming networks are incorporating machine learning (ML) applications. The growth of video streaming services places enormous pressure on network and video content providers who need to proactively maintain high levels of video quality. ML has been applied to predict the quality of video streams. Quality of delivery (QoD) measurements, which capture the end-to-end performances of network services, have been leveraged in video quality prediction. The drive for end-to-end encryption, for privacy and digital rights management, has brought about a lack of visibility for operators who desire insights from video quality metrics. In response, numerous solutions have been proposed to tackle the challenge of video quality prediction from QoD-derived metrics. This survey provides a review of studies that focus on ML techniques for predicting the QoD metrics in video streaming services. In the context of video quality measurements, we focus on QoD metrics, which are not tied to a particular type of video streaming service. Unlike previous reviews in the area, this contribution considers papers published between 2016 and 2021. Approaches for predicting QoD for video are grouped under the following headings: (1) video quality prediction under QoD impairments, (2) prediction of video quality from encrypted video streaming traffic, (3) predicting the video quality in HAS applications, (4) predicting the video quality in SDN applications, (5) predicting the video quality in wireless settings, and (6) predicting the video quality in WebRTC applications. Throughout the survey, some research challenges and directions in this area are discussed, including (1) machine learning over deep learning; (2) adaptive deep learning for improved video delivery; (3) computational cost and interpretability; (4) self-healing networks and failure recovery. 
The survey findings reveal that traditional ML algorithms are the most widely adopted models for solving video quality prediction problems. This family of algorithms has a lot of potential because they are well understood, easy to deploy, and have lower computational requirements than deep learning techniques.
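As an illustration of the "traditional ML" baseline the survey highlights, a k-nearest-neighbour regressor mapping QoD features to a quality score fits in a few lines. The features, training samples, scores, and scaling below are invented for the sketch, not taken from any surveyed paper.

```python
import math

# Illustrative QoD samples: (throughput Mbps, RTT ms, loss %) -> quality score
train = [
    ((45, 20, 0.0), 4.8), ((30, 40, 0.2), 4.2), ((15, 80, 1.0), 3.1),
    ((8, 120, 2.5), 2.2), ((3, 180, 4.0), 1.4), ((25, 60, 0.5), 3.8),
]

SCALE = (50.0, 200.0, 5.0)   # normalise features to comparable ranges

def knn_predict(features, k=3):
    """k-nearest-neighbour regression: average the scores of the k training
    sessions closest in normalised QoD feature space."""
    def dist(a, b):
        return math.sqrt(sum(((x - y) / s) ** 2
                             for x, y, s in zip(a, b, SCALE)))
    nearest = sorted(train, key=lambda item: dist(item[0], features))[:k]
    return sum(score for _, score in nearest) / k

good = knn_predict((40, 30, 0.1))   # fast, low-latency session
poor = knn_predict((4, 170, 3.5))   # congested, lossy session
```

The model needs no training phase and its predictions are directly traceable to specific neighbours, two of the interpretability and deployment advantages the survey attributes to traditional ML over deep learning.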
Abstractions and optimisations for model-checking software-defined networks
Software-Defined Networking introduces a new programmatic abstraction layer by shifting distributed network functions (NFs) from silicon chips (ASICs) to a logically centralized (controller) program. Yet controller programs are a common source of bugs that can cause performance degradation, security exploits and poor reliability in networks. Assuring that a controller program satisfies its specification is thus highly desirable, yet the size of the network and the complexity of the controller make this a challenging effort.
This thesis presents a highly expressive, optimised SDN model (code-named MoCS) that can be reasoned about and verified formally in an acceptable timeframe. In it, we introduce reusable abstractions that (i) come with a rich semantics for capturing subtle real-world bugs that are hard to track down, and (ii) are formally proved correct. In addition, MoCS deals with timeouts of flow table entries, thus supporting automatic state refresh (soft state) in the network. The optimisations are achieved by (1) contextually analysing the model for possible partial-order reductions in view of the concrete control program, network topology and specification property in question, (2) pre-computing packet equivalence classes, and (3) indexing the packets and rules that exist in the model and bit-packing (compressing) them.
Each of these developments is demonstrated on a set of real-world controller programs implemented in network topologies of varying size, and publicly released under an open-source license.
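The bit-packing optimisation mentioned above, compressing indexed packets and rules into machine integers so model states hash and compare cheaply, can be sketched as follows. The field names and widths are illustrative assumptions, not MoCS's actual encoding.

```python
# Hedged sketch of bit-packing: encode a packet's header fields into a
# single integer by concatenating fixed-width fields.
FIELDS = [("src", 8), ("dst", 8), ("port", 16)]   # (name, bit width)

def pack(packet):
    """Compress a packet dict into one int by concatenating its fields."""
    value = 0
    for name, width in FIELDS:
        field = packet[name]
        assert 0 <= field < (1 << width), f"{name} out of range"
        value = (value << width) | field
    return value

def unpack(value):
    """Invert pack(): recover the field dict from the packed integer."""
    packet = {}
    for name, width in reversed(FIELDS):
        packet[name] = value & ((1 << width) - 1)
        value >>= width
    return packet

p = {"src": 3, "dst": 7, "port": 8080}
packed = pack(p)
assert unpack(packed) == p   # lossless round trip
```

A model checker storing millions of states benefits because a packed state is one small integer rather than a dictionary: hashing, equality checks and set membership all become single-word operations.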