39 research outputs found
Software-defined networking: guidelines for experimentation and validation in large-scale real world scenarios
Part 1: IIVC Workshop. This article thoroughly details large-scale real-world experiments using Software-Defined Networking in a testbed setup. More precisely, it describes the foundation technology behind these experiments, which is focused on OpenFlow and the OFELIA testbed. Preliminary experiments were performed on this testbed in order to tune settings and procedures, analysing the problems encountered and their respective solutions. A methodology consisting of five large-scale experiments is proposed in order to properly validate and improve the evaluation techniques used in OpenFlow scenarios.
SDN based testbeds for evaluating and promoting multipath TCP
Multipath TCP is an experimental transport protocol with a remarkable recent past and non-negligible future potential. It has recently been standardized; however, evaluation studies focus only on a limited set of isolated use cases, and a comprehensive analysis or a feasible path to Internet-wide adoption is still missing. This is mostly because, in current networking practice, it is unusual to configure multiple paths between the endpoints of a connection. Therefore, conducting and precisely controlling multipath experiments over the real “internet” is a challenging task for some experimenters and impossible for others. In this paper, we invoke SDN technology to make this control possible and exploit large-scale internet testbeds to conduct end-to-end MPTCP experiments. More specifically, we establish a special-purpose control and measurement framework on top of two distinct internet testbeds. First, using the OpenFlow support of GÉANT, we build a testbed enabling measurements with real traffic. Second, we design and establish a publicly available large-scale multipath-capable measurement framework on top of PlanetLab Europe and show the challenges of such a system. Furthermore, we present measurement results with MPTCP in both testbeds to gain insight into its behavior in such under-explored environments.
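The core MPTCP mechanism that these experiments exercise, striping one byte stream over several paths and reassembling it by data sequence number, can be sketched in simplified form. This is an illustrative toy model only, not the paper's measurement framework; the segment size and path count are arbitrary assumptions.

```python
# Toy multipath scheduler: stripe fixed-size segments across subflows
# round-robin, then reassemble by data sequence number, as an MPTCP
# receiver would regardless of which path carried each segment.

def stripe(data: bytes, seg_size: int, n_paths: int):
    """Return a list of (data_seq, path_id, segment) tuples."""
    out = []
    for i in range(0, len(data), seg_size):
        seg = data[i:i + seg_size]
        out.append((i, (i // seg_size) % n_paths, seg))
    return out

def reassemble(segments):
    """Reorder segments by data sequence number and rebuild the stream."""
    return b"".join(seg for _, _, seg in sorted(segments))

payload = b"multipath transport demo payload"
segs = stripe(payload, seg_size=5, n_paths=2)
# Even if one path delivers late (segments arrive out of order),
# the data sequence numbers make reassembly order-independent.
assert reassemble(list(reversed(segs))) == payload
```

The point of the sketch is that path asymmetry (here, delivering the segments in reverse) does not corrupt the stream, which is exactly the property the testbed measurements probe under real network conditions.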
Federation of Internet experimentation facilities: architecture and implementation
Realistic experimentation facilities are indispensable to accelerate the design of novel Future Internet systems. As many of these ground-breaking new applications and services cover multiple innovation areas, the need for these solutions to be tested on cross-domain facilities with both novel infrastructure technologies and newly emerging service platforms is rising. The Fed4FIRE project therefore aims at federating otherwise isolated experimentation facilities in order to foster synergies between research communities. Currently the federation includes over 15 facilities from the Future Internet Research and Experimentation (FIRE) initiative, covering wired, wireless and sensor networks, SDN and OpenFlow, cloud computing, smart city services, etc. This paper presents the architecture and implementation details of the federation, based on an extensive set of requirements coming from infrastructure owners, service providers and support communities.
Building the Future Internet through FIRE
The Internet as we know it today is the result of continuous activity to improve network communications, end-user services, computational processes and information technology infrastructures. The Internet has become a critical infrastructure for humanity, offering complex networking services and end-user applications that together have transformed all aspects, mainly economic, of our lives. Recently, with the advent of new paradigms and the progress in wireless technology, sensor networks and information systems, and with the inexorable shift towards an everything-connected paradigm, first known as the Internet of Things and lately envisioned as the Internet of Everything, a data-driven society has been created. In a data-driven society, productivity, knowledge and experience are dependent on increasingly open, dynamic, interdependent and complex Internet services. The challenge for the design of the Future Internet is to build robust enabling technologies, implement and deploy adaptive systems, and create business opportunities, considering increasing uncertainties and emergent systemic behaviors where humans and machines seamlessly cooperate.
An architecture for dynamic QoS management at Layer 2 for DOCSIS access networks using OpenFlow
Over the last few years, Software-Defined Networking (SDN) has emerged as one of the most disruptive and profitable novelties in networking. SDN was originally conceived to improve performance and reduce costs in Ethernet-based networks, and it has been widely adopted in data center and campus networks. Similarly, thanks to the introduction of SDN concepts, access networks will benefit from SDN's finer-grained control, lower maintenance costs and better remote access to devices. However, its application to access networks is not straightforward and imposes great challenges on vendors and network operators, since current SDN technologies are not prepared to handle the provisioning of user equipment, specific port management or the QoS requirements of common access networks. Most recent trends dealing with the SDN-ization of access networks advocate the use of simple devices at the customer premises and the virtualization of the networking functionalities, requiring the provisioning of Layer 2 services in many cases. In such a scenario, this paper presents an architecture that brings SDN to common access networks using legacy equipment. In a nutshell, the architecture is based on the abstraction of the access network as a wide-area OpenFlow switch where QoS-enabled pipes are dynamically created, leveraging the high granularity of the OpenFlow protocol for packet classification. Furthermore, the OpenFlow protocol itself has been extended in order to support the advanced QoS requirements that are common to most access networks. The architecture has been implemented for DOCSIS access networks and it has been validated and evaluated using a real testbed deployed at our laboratory.
The obtained results show that the architecture remains compliant with the ITU-T QoS recommendations and that the cost of introducing the elements required by the architecture, in terms of service performance, is negligible.
European Commission, Seventh Framework Programme, through the ALIEN (317880) project.
Spanish Ministry of Economy and Competitiveness, under the Secure deployment of services over SDN and NFV based networks project, S&NSEC TEC2013-47960-C4-3-
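The central abstraction in this architecture, classifying packets into dynamically created QoS-enabled pipes as an OpenFlow switch would, can be sketched with a toy flow table. The match field, queue identifiers and rates below are illustrative assumptions, not the paper's actual rule set or extended protocol.

```python
# Simplified model of QoS-enabled "pipes": a flow table that classifies
# traffic (here only by destination port) and maps each match to a
# rate-limited egress queue, in the spirit of an OpenFlow controller
# treating the access network as one wide-area switch.

from dataclasses import dataclass

@dataclass
class Pipe:
    match_dst_port: int   # OpenFlow-style match field (illustrative)
    queue_id: int         # egress queue implementing the QoS class
    max_rate_kbps: int    # rate limit configured on that queue

FLOW_TABLE = [
    Pipe(match_dst_port=5060, queue_id=1, max_rate_kbps=512),    # e.g. VoIP
    Pipe(match_dst_port=80,   queue_id=2, max_rate_kbps=20000),  # e.g. web
]

def classify(dst_port: int) -> int:
    """Return the queue for a packet, or 0 (best effort) on no match."""
    for pipe in FLOW_TABLE:
        if pipe.match_dst_port == dst_port:
            return pipe.queue_id
    return 0
```

A real deployment would match on far richer header fields and install these rules on the devices via OpenFlow; the sketch only shows the classification-to-queue mapping that makes per-service QoS possible.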
Hybrid SDN Evolution: A Comprehensive Survey of the State-of-the-Art
Software-Defined Networking (SDN) is an evolutionary networking paradigm which has been adopted by large network and cloud providers, among them the tech giants. However, embracing a new and futuristic paradigm as an alternative to the well-established and mature legacy networking paradigm requires a lot of time along with considerable financial resources and technical expertise. Consequently, many enterprises cannot afford it. A compromise solution, then, is a hybrid networking environment (a.k.a. Hybrid SDN (hSDN)) in which SDN functionalities are leveraged while existing traditional network infrastructures are retained. Recently, hSDN has been seen as a viable networking solution for a diverse range of businesses and organizations. Accordingly, the body of literature on hSDN research has grown remarkably. On this account, we present this paper as a comprehensive state-of-the-art survey which examines hSDN from many different perspectives.
Seamless Application Delivery Using Software Defined Exchanges
One of the main challenges in delivering content over the Internet today is the absence of a centralized monitoring and control system [38]. Software Defined Networking has paved the way to provide much-needed control over network traffic. OpenFlow is now being standardized as part of the Open Networking Foundation, and Software Defined Exchanges (SDXes) provide a framework to use OpenFlow for multi-domain routing. Prototype deployments of Software Defined Exchanges have recently come into existence as a platform for Future Internet Architecture to eliminate the need for the core routing technology used in today's Internet. In this work, we look at how applications, in particular Dynamic Adaptive Streaming over HTTP (DASH) and Nowcasting, take advantage of a Software Defined Exchange. We compare unsophisticated controllers to more sophisticated ones, which we call a “load balancer”, and find that implementing a good controller for inter-domain routing can result in better network utilization and application performance. We then design, develop and evaluate a prototype for a Content Distribution Network (CDN) that uses resources at SDXes to provide higher-quality bitrates for a DASH client.
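The contrast the abstract draws between an unsophisticated controller and a "load balancer" can be sketched as a simple egress-selection policy: instead of a fixed choice, pick the least-utilised inter-domain path. The path names and capacities below are hypothetical, and a real SDX controller would of course act on live measurements rather than a static table.

```python
# Toy "load balancer" egress selection at an SDX: among candidate
# inter-domain paths, choose the one with the lowest utilisation ratio,
# rather than a fixed (unsophisticated) default.

def pick_egress(paths: dict) -> str:
    """paths maps path name -> (used_mbps, capacity_mbps).
    Returns the path with the lowest utilisation ratio."""
    return min(paths, key=lambda name: paths[name][0] / paths[name][1])

links = {
    "peer-A": (800, 1000),   # 80% utilised
    "peer-B": (200, 1000),   # 20% utilised
}
assert pick_egress(links) == "peer-B"
```

For a DASH client, routing its segments over the less-loaded egress is what translates into sustained higher-quality bitrates.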
OpenCache: a content delivery platform for the modern internet
Since its inception, the World Wide Web has revolutionised the way we share information, keep in touch with each other and consume content. In the latter case, it is now used by thousands of simultaneous users to consume video, surpassing physical media as the primary means of distribution. With the rise of on-demand services and, more recently, high-definition media, this popularity has not waned. To support this consumption, the underlying infrastructure has been forced to evolve at a rapid pace. This includes the technology and mechanisms to facilitate the transmission of video, which is now offered at varying levels of quality and resolution. Content delivery networks are often deployed in order to scale the distribution provision. These vary in nature and design, from third-party providers running entirely as a service to others, to in-house solutions owned by the content service providers themselves. However, recent innovations in networking and virtualisation, namely Software Defined Networking and Network Function Virtualisation, have paved the way for new content delivery infrastructure designs. In this thesis, we discuss the motivation behind OpenCache, a next-generation content delivery platform. We examine how we can leverage these emerging technologies to provide a more flexible and scalable solution to content delivery. This includes analysing the feasibility of novel redirection techniques, and how these compare to existing means. We also investigate the creation of a unified interface from which a platform can be precisely controlled, allowing new applications to be created that operate in harmony with the infrastructure provision. Developments in distributed virtualisation platforms also enable functionality to be spread throughout a network, influencing the design of OpenCache. Through a prototype implementation, we evaluate each of these facets in a number of different scenarios, made possible through deployment on large-scale testbeds.
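The redirection decision at the heart of such a platform, sending a client to a nearby node that actually holds the requested content, can be sketched as follows. This is a minimal illustrative model under assumed inputs, not OpenCache's actual interface or algorithm; node names and latencies are hypothetical.

```python
# Toy cache redirection: pick, among the nodes holding the content,
# the one with the lowest measured latency to the client; fall back
# to the origin (None) when no cache holds it.

def redirect(content_id: str, caches: dict, latency_ms: dict):
    """caches maps node -> set of cached content ids;
    latency_ms maps node -> measured latency to the client."""
    holders = [node for node, items in caches.items() if content_id in items]
    if not holders:
        return None  # cache miss everywhere: serve from origin
    return min(holders, key=lambda node: latency_ms[node])

caches = {"edge-1": {"video-a"}, "edge-2": {"video-a", "video-b"}}
latency = {"edge-1": 12, "edge-2": 5}
assert redirect("video-a", caches, latency) == "edge-2"
```

In practice the decision could be enforced transparently via DNS, HTTP redirects or SDN flow rules, which is precisely the design space of redirection techniques the thesis analyses.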