Traffic Management Applications for Stateful SDN Data Plane
The successful OpenFlow approach to Software Defined Networking (SDN) allows
network programmability through a central controller able to orchestrate a set
of dumb switches. However, the simple match/action abstraction of OpenFlow
switches constrains the evolution of the forwarding rules to be fully managed
by the controller. This can be particularly limiting for a number of
applications that are affected by the delay of the slow control path, like
traffic management applications. Some recent proposals push toward an
evolution of the OpenFlow abstraction that lets forwarding policies evolve
directly in the data plane, driven by state machines and local events. In
this paper, we present two traffic management applications that exploit a
stateful data plane, together with their prototype implementations based on
OpenState, an OpenFlow evolution that we recently proposed.
Comment: 6 pages, 9 figures
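The stateful abstraction the abstract refers to can be pictured as a Mealy machine evaluated per flow inside the switch: a lookup maps (current state, event) to an action and a next state, without involving the controller. The sketch below is purely illustrative (the class, method names, and the toy port-knocking policy are ours, not the OpenState API), assuming events are pre-extracted packet features.

```python
# Illustrative sketch of a stateful match/action table: forwarding behavior
# evolves per flow as a state machine, entirely in the data plane.
# All names here are hypothetical, not the actual OpenState interface.

class StatefulTable:
    def __init__(self, transitions, default_state="DEFAULT"):
        # transitions: {(state, event): (action, next_state)}
        self.transitions = transitions
        self.default_state = default_state
        self.flow_state = {}  # per-flow state table

    def process(self, flow_key, event):
        state = self.flow_state.get(flow_key, self.default_state)
        # Unknown (state, event) pairs drop the packet and keep the state.
        action, next_state = self.transitions.get((state, event),
                                                  ("drop", state))
        self.flow_state[flow_key] = next_state  # state evolves locally
        return action

# Toy port-knocking policy: a host must "knock" on ports 10 then 20
# before its packets to port 22 are forwarded.
table = StatefulTable({
    ("DEFAULT", "pkt_to_10"): ("drop", "STAGE1"),
    ("STAGE1", "pkt_to_20"): ("drop", "OPEN"),
    ("OPEN", "pkt_to_22"): ("forward", "OPEN"),
})
```

Because the state transition happens on the fast path, policies like this avoid the controller round trip that the abstract identifies as the bottleneck.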
SPIDER: Fault Resilient SDN Pipeline with Recovery Delay Guarantees
When dealing with node or link failures in Software Defined Networking (SDN),
the network capability to establish an alternative path depends on controller
reachability and on the round trip times (RTTs) between controller and involved
switches. Moreover, current SDN data plane abstractions for failure detection
(e.g. OpenFlow "Fast-failover") do not allow programmers to tweak switches'
detection mechanism, thus leaving SDN operators still relying on proprietary
management interfaces (when available) to achieve guaranteed detection and
recovery delays. We propose SPIDER, an OpenFlow-like pipeline design that
provides i) a detection mechanism based on switches' periodic link probing and
ii) fast reroute of traffic flows even in case of distant failures, regardless
of controller availability. SPIDER can be implemented using stateful data plane
abstractions such as OpenState or Open vSwitch, and it offers guaranteed
short (i.e. millisecond-scale) failure detection and recovery delays, with a
configurable trade-off between overhead and failover responsiveness. We
present here the SPIDER pipeline design, its behavioral model, and an
analysis of its flow table memory impact. We also implemented and
experimentally validated SPIDER using OpenState (an OpenFlow 1.3 extension
for stateful packet processing), showing numerical results on its
performance in terms of recovery latency and packet losses.
Comment: 8 pages
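The detection-and-reroute idea can be sketched in a few lines: each switch probes a link periodically and, once replies stop arriving within a configurable timeout, locally redirects traffic to a backup port, with no controller involvement. This is a minimal model of the mechanism, not SPIDER's actual pipeline; the class, port numbers, and timeout below are illustrative.

```python
# Hedged sketch of probe-based failure detection with local fast reroute:
# a link is declared failed when probe replies go unanswered past a timeout,
# and traffic falls back to a backup port. Names/values are illustrative.

class LinkMonitor:
    def __init__(self, primary_port, backup_port, timeout_ms):
        self.primary_port = primary_port
        self.backup_port = backup_port
        self.timeout_ms = timeout_ms  # tunes overhead vs. responsiveness
        self.last_reply_ms = 0
        self.failed = False

    def on_probe_reply(self, now_ms):
        # A reply marks the link as (re)detected up.
        self.last_reply_ms = now_ms
        self.failed = False

    def output_port(self, now_ms):
        # Failure is detected locally once the timeout elapses.
        if now_ms - self.last_reply_ms > self.timeout_ms:
            self.failed = True
        return self.backup_port if self.failed else self.primary_port
```

The timeout captures the trade-off the abstract mentions: a shorter probing period and timeout detect failures faster but cost more probe traffic.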
Towards approximate fair bandwidth sharing via dynamic priority queuing
We tackle the problem of a network switch enforcing fair bandwidth sharing of the same link among many TCP-like senders. Most mechanisms for this problem are based on complex scheduling algorithms, whose implementation becomes very expensive at today's line rate requirements, i.e. 10-100 Gbit/s per port. We propose a new scheme, called FDPA, in which we do not modify the scheduler; instead, we use an array of rate estimators to dynamically assign traffic flows to an existing strict-priority scheduler serving only a few queues. FDPA is inspired by recent advances in programmable stateful data planes. We propose a design that uses primitives common to data plane abstractions such as P4 and OpenFlow. We conducted experiments on a physical 10 Gbit/s testbed, and we present preliminary results showing that FDPA achieves fairness comparable to scheduling-based approaches.
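The core of the scheme described above is a mapping from a flow's estimated sending rate to one of a few strict-priority queues, so that heavy senders are demoted and the unmodified scheduler approximates fair sharing. The sketch below is our illustration of that idea, assuming hypothetical rate thresholds and a simple moving-average rate estimator; it is not the FDPA design itself.

```python
# Illustrative sketch: per-flow rate estimates steer flows into a small set
# of strict-priority queues (queue 0 = highest priority). Thresholds and
# the estimator below are hypothetical, not FDPA's actual parameters.

def assign_queue(flow_rate_bps, thresholds_bps):
    # thresholds_bps sorted ascending; lighter flows get higher priority.
    for queue, threshold in enumerate(thresholds_bps):
        if flow_rate_bps <= threshold:
            return queue
    # The heaviest flows land in the lowest-priority queue.
    return len(thresholds_bps)

def ewma_rate(prev_rate, sample_rate, alpha=0.2):
    # Exponentially weighted moving average as a simple rate estimator.
    return (1 - alpha) * prev_rate + alpha * sample_rate
```

The appeal of this structure, as the abstract notes, is that both pieces (a counter-based estimator and a queue-selection rule) map onto primitives already available in stateful data plane abstractions, so no new scheduler is needed.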
Analyzing Performance of OpenState in Software Defined Networks with Multiple Failure Scenarios
Software Defined Networking (SDN) is an emerging architecture that decouples the control plane from the data plane. Like other networks, an SDN must recover when link or node failures occur. OpenFlow is considered the most popular standard used in SDN. In OpenFlow, detecting a failure and communicating with the controller to recompute an alternative path results in long recovery times. However, there is a limit on the time allowed for recovering from failures: if recovery takes more than 50 msec, many packets are lost, and the communication overhead and Round Trip Time (RTT) between switch and controller may be high. OpenState is an OpenFlow extension that allows a programmer to specify how forwarding rules should adapt in a stateful fashion; it had previously been tested only under single failures. This research conducts experiments based on an OpenState pipeline design that provides a detection mechanism based on the switches' periodic link probing and fast reroute of traffic flows even when the controller is not reachable. The experiments use the Mininet network emulator to analyse and evaluate the performance of OpenState under multiple failure scenarios, comparing communication overhead, switch-controller RTT, and packet loss between OpenFlow and OpenState. On average, OpenState achieves zero packet loss when the recovery time is at most 70 msec, with a communication overhead of 60 packet-in messages; OpenFlow achieves zero packet loss when the recovery time is at most 85 msec, with an overhead of 100 packet-in messages. Finally, the average RTTs for OpenState and OpenFlow are 65 msec and 90 msec respectively. Based on the results obtained, it can be concluded that OpenState performs better than OpenFlow.
Self-healing and SDN: bridging the gap
Achieving high programmability has become an essential aim of network research due to ever-increasing internet traffic. Software-Defined Networking (SDN) is an emerging architecture aimed at addressing this need. However, maintaining accurate knowledge of the network after a failure is one of the largest challenges in SDN. Motivated by this reality, this paper focuses on the use of self-healing properties to boost SDN robustness. This approach, unlike traditional schemes, is not based on proactively configuring multiple (and memory-intensive) backup paths in each switch, or on performing a reactive and time-consuming routing computation at the controller level. Instead, the control paths are quickly recovered by local switch actions and subsequently optimized using global controller knowledge. The obtained results show that the proposed approach recovers the control topology effectively, in terms of time and message load, over a wide range of generated networks. Consequently, the scalability issues of traditional fault recovery strategies are avoided.
Postprint (published version)
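The two-phase recovery the abstract describes (a fast local repair, later refined by the controller) can be condensed into a small sketch. This is a toy model under our own assumptions, not the paper's algorithm: the function, its arguments, and the dictionary-based "controller view" are all hypothetical.

```python
# Minimal, illustrative sketch of two-phase self-healing: the switch falls
# back locally at once; the controller, once reachable, replaces the detour
# with a globally chosen path. All names here are hypothetical.

def self_heal(failed_port, backup_ports, controller_view=None):
    # Phase 1: immediate local switch action -- use any live backup port.
    local_choice = backup_ports[0] if backup_ports else None
    if controller_view is None:
        # Controller unreachable: the local repair stands for now.
        return local_choice
    # Phase 2: controller optimizes using its global topology knowledge.
    return controller_view.get(failed_port, local_choice)
```

The design point worth noting is that neither phase requires per-switch precomputed backup paths for every failure, which is where the memory savings over proactive schemes come from.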
5G Network Slicing using SDN and NFV: A Survey of Taxonomy, Architectures and Future Challenges
In this paper, we provide a comprehensive review and updated solutions
related to 5G network slicing using SDN and NFV. Firstly, we present 5G service
quality and business requirements followed by a description of 5G network
softwarization and slicing paradigms including essential concepts, history and
different use cases. Secondly, we provide a tutorial of 5G network slicing
technology enablers including SDN, NFV, MEC, cloud/Fog computing, network
hypervisors, virtual machines & containers. Thirdly, we comprehensively survey
different industrial initiatives and projects that are pushing forward the
adoption of SDN and NFV in accelerating 5G network slicing. A comparison of
various 5G architectural approaches in terms of practical implementations,
technology adoptions and deployment strategies is presented. Moreover, we
provide a discussion on various open source orchestrators and proof of concepts
representing industrial contribution. The work also investigates the
standardization efforts in 5G networks regarding network slicing and
softwarization. Additionally, the article presents the management and
orchestration of network slices in a single domain followed by a comprehensive
survey of management and orchestration approaches in 5G network slicing across
multiple domains while supporting multiple tenants. Furthermore, we highlight
the future challenges and research directions regarding network softwarization
and slicing using SDN and NFV in 5G networks.
Comment: 40 pages, 22 figures, published in Computer Networks (Open Access)