All-path bridging: Path exploration as an efficient alternative to path computation in bridging standards
This work was presented at the IEEE International Conference on Communications (ICC) Workshops, Second Workshop on Telecommunication Standards: From Research to Standards, 9-13 June 2013, Budapest, Hungary.

Link-state routing protocols are dominant in Shortest Path Bridging (IEEE 802.1aq) and in IETF TRILL RBridges. Both standards propose a hybrid of switch and router, adding a link-state routing protocol at layer two that computes shortest paths between bridges. Surprisingly, path exploration mechanisms have not yet been considered by the standardization bodies, in spite of some outstanding advantages: simplicity, instantaneous path adaptation to traffic load with load-adaptive routing, and low latency. We have developed All-path, a family of protocols based on simple path exploration through full flooding of a single frame, as an alternative to the "beaten trail" of path computation. Path exploration (either instantaneous or periodical, proactive or reactive) is an efficient alternative to path computation for bridged networks, because the processing cost of address learning from broadcast frames at bridges is very low, and Ethernet links provide very high capacity, so the extra broadcasts do not impact load significantly. Standardization groups should consider applying path exploration mechanisms in Audio Video Bridging and in generic bridged networks, such as campus and data center networks, to find redundant paths, achieve low latency, and distribute load.

This work was supported in part by grants from Comunidad de Madrid through Project MEDIANET-CM (S-2009/TIC-1468).
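The exploration-and-learning idea can be sketched as a toy simulation (an illustration of the mechanism, not the standardized All-path protocol): a single probe frame flooded from a source traverses every link; each bridge keeps only the first copy to arrive and learns the neighbor it came from, which is the port of the currently fastest path back towards the source.

```python
import heapq

def explore(graph, src):
    """Simulate flooding one probe frame from `src`.
    graph: {bridge: {neighbor: current link latency}} (illustrative units).
    Returns {bridge: neighbor to use towards src}."""
    learned = {}                     # bridge -> port (neighbor) towards src
    frames = [(0.0, src, None)]      # (arrival time, bridge, came from)
    while frames:
        t, node, via = heapq.heappop(frames)
        if node in learned:
            continue                 # later copies are simply discarded
        learned[node] = via
        for nbr, latency in graph[node].items():
            if nbr not in learned:
                heapq.heappush(frames, (t + latency, nbr, node))
    del learned[src]                 # the source itself needs no entry
    return learned
```

With `graph = {'A': {'B': 1, 'C': 5}, 'B': {'A': 1, 'C': 1}, 'C': {'A': 5, 'B': 1}}`, bridge C learns to reach A via B (total latency 2) rather than over its direct but slower link, showing how a single flood yields load- and latency-adaptive paths with no explicit path computation.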
Deliverable DJRA1.2. Solutions and protocols proposal for the network control, management and monitoring in a virtualized network context
This deliverable presents several research proposals for the FEDERICA network on different subjects, such as monitoring, routing, signalling, resource discovery, and isolation. For each topic, one or more possible solutions are elaborated, explaining the background, operation, and implications of the proposed solutions. This deliverable goes further into the research aspects of FEDERICA. First, the architecture of the control plane for the FEDERICA infrastructure is defined. Several possibilities could be implemented, using the basic FEDERICA infrastructure as a starting point. The focus of this document is on the intra-domain aspects of the control plane and their properties, although some inter-domain aspects are also addressed. The main objective of this deliverable is to create and implement the prototype/tool for the FEDERICA slice-oriented control system using an appropriate framework. It goes deeply into the definition of the containers between entities and their syntax, preparing the tool for the future implementation of any kind of control-plane algorithm, whether applying UPB policies or configuring the system by hand. We opt for an open solution despite the real-time limitations we could face (for instance, opening web-service connections or applying fast recovery mechanisms). The application being developed is the central element of the control plane, and additional features must be added to it. From a functional point of view, this control plane is composed of several procedures that provide a reliable application and include mechanisms or algorithms to discover and assign resources to the user. To achieve this, several topics must be researched in order to propose new protocols for the virtual infrastructure.

The topics and necessary features covered in this document include resource discovery, resource allocation, signalling, routing, isolation, and monitoring. All of these must be researched in order to find a good solution for the FEDERICA network. Some of these algorithms have begun to be analyzed and will be expanded upon in the next deliverable. Current standardization efforts and existing solutions have been investigated. Resource discovery is an important issue within the FEDERICA network, as manual resource discovery is not an option due to scalability requirements; furthermore, no standard exists, so knowledge must be obtained from related work. Ideally, the proposed solutions for these topics should not only be adequate for this specific infrastructure, but should also be applicable to other virtualized networks.
Critical Ethernet based on OpenFlow
Master's in Computer and Telematics Engineering.

Nowadays, we place immense value on Ethernet networks, especially for data center operations powering cloud environments or large network infrastructures in general. However, it is not always possible to guarantee 100% uptime, since redundancy in Ethernet has long been an unresolved problem, given the large amount of network resources to be managed. Many solutions have been developed over the years to address this issue, only to fail to provide proper support.
Software-Defined Networking (SDN) is a novel paradigm: a dynamic, configurable mechanism whose programmable nature lets developers implement solutions that may finally resolve the identified issues. Through programmable open interfaces, controlling and managing network behavior is becoming easier and less error-prone.

The main objective of this dissertation was the implementation and evaluation of a fail-safe SDN-based solution for critical communications, that is, for fault management in redundant Ethernet technologies in a typical data center management scenario.

This dissertation presents the developed solution and the main phases of its implementation. The implemented solution uses a redundant L2 network and an SDN controller to compute the network topology, and makes use of extensions to both the OpenFlow protocol and the OpenDaylight controller's modules.
During the evaluation stage, different scenarios involving topology changes were tested. The results show that the proposed solution behaves satisfactorily whenever a link fails, with no packet loss observed. In conclusion, given the adaptation times obtained, the solution shows promise for critical data center operations.
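The controller-side failover idea can be sketched as follows (a minimal illustration using plain BFS over the learned topology; the dissertation's actual implementation extends OpenFlow and OpenDaylight modules):

```python
from collections import deque

def shortest_path(topology, src, dst):
    """BFS shortest path in an undirected L2 topology.
    topology: {switch: [neighbors]}, neighbor lists kept sorted so the
    result is deterministic. Returns the hop list, or None if unreachable."""
    prev, seen, queue = {}, {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nbr in topology[node]:
            if nbr not in seen:
                seen.add(nbr)
                prev[nbr] = node
                queue.append(nbr)
    return None

def reroute(topology, src, dst, failed_link):
    """Recompute a path after a link failure, as a controller would do
    on receiving a port-down notification for that link."""
    a, b = failed_link
    trimmed = {sw: [n for n in nbrs if {sw, n} != {a, b}]
               for sw, nbrs in topology.items()}
    return shortest_path(trimmed, src, dst)
```

On a redundant ring `A-B-C-D-A`, the primary path from A to C is `A-B-C`; when link B-C fails, `reroute` immediately yields the backup `A-D-C`, which is the kind of fast adaptation the evaluation measures.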
Software defined utility: A step towards a flexible, reliable and low-cost smart grid
The Smart Grid relies on Information and Communication Technologies (ICT), but their deployment usually still lacks integration: they are designed as separate systems and managed that way too. In addition, changes in the electric network are complex and depend on a very rigid hardware architecture. Based on the work done in the European project FINESCE, this paper presents the "Software Defined Utility" (SDU) concept, which advocates migrating the utility infrastructure to software systems instead of relying on complex and rigid hardware-based systems. This new approach provides a prospective view of the evolution of power systems, which will benefit from software systems and high-speed data network infrastructures. More concretely, as a first SDU building block, the paper proposes a data storage and management system based on a hybrid cloud infrastructure to meet the storage requirements of electric utilities. In this regard, the following dimensions have been analysed: the most appropriate methodology for selecting where data resources should be allocated, and the security requirements and threats, taking into account deployment in a critical infrastructure such as a Smart Grid.
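A data-placement methodology of the kind mentioned above could look like the following sketch. The attributes and thresholds here are illustrative assumptions, not the FINESCE methodology itself:

```python
def place(dataset):
    """Decide whether a utility dataset belongs in the private or public
    part of a hybrid cloud. `critical` and `max_latency_ms` are
    hypothetical dataset attributes used only for illustration."""
    if dataset["critical"]:
        return "private"     # operational/control data stays on-premises
    if dataset["max_latency_ms"] < 20:
        return "private"     # latency-sensitive telemetry
    return "public"          # bulk historical data and analytics archives
```

A real methodology would weigh more dimensions (cost, regulation, data volume), but the shape is the same: per-dataset rules mapping requirements onto the hybrid infrastructure.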
Implementing SDN into Computer Network Lessons
This paper describes the issue of introducing SDN to students of computer networks. The most important theoretical knowledge is summarized in the form of key points that students should know. Practical experience is presented in the area of SDN deployment in data centers, with the aim of connecting it to existing knowledge of traditional computer networks. This connection is illustrated through the problems of traditional networks in data centers and how SDN mitigates them. The material is then extended with a practical demo application in the Mininet environment, which shows a possible use of SDN for making a data center more power-efficient. The application is put in context by the theory of power consumption of data center devices, which can be significantly reduced if SDN is used. The purpose of the application is to motivate students to continue researching the SDN area.
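The core of such a power-efficiency demo can be captured in a few lines (a toy sketch, not the paper's Mininet application; the per-switch wattage is an assumed illustrative figure): once the controller knows which path each flow takes, any switch carrying no flow can be put to sleep.

```python
SWITCH_IDLE_POWER_W = 150   # assumed per-switch draw, illustrative only

def idle_switches(switches, flow_paths):
    """Switches that appear on no flow path are candidates for sleep.
    flow_paths: list of paths, each a list of switch names."""
    active = {sw for path in flow_paths for sw in path}
    return sorted(set(switches) - active)

def estimated_savings_w(switches, flow_paths):
    """Upper-bound power saving from sleeping every idle switch."""
    return SWITCH_IDLE_POWER_W * len(idle_switches(switches, flow_paths))
```

For example, with four switches and two flows routed over `s1-s2` and `s2-s4`, switch `s3` is idle, so the controller could power it down; a traditional distributed control plane has no such global view.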
Studies of layer-two multipath techniques for new demands in modern datacenter networks
In recent years, datacenter networks have experienced a significant shift in their data traffic patterns. Requirements such as high bandwidth availability and fault tolerance have become indispensable, prompting new topological organization models and new operating protocols to replace the traditional ones. This work presents a study of the changes observed in datacenter traffic patterns and the problems caused by conventional models and protocols, and evaluates alternative models and protocols that aim to circumvent these problems and meet the new demands and requirements of such environments. A theoretical study and practical comparative experiments were carried out in specific scenarios, using both the protocols proposed as solutions and the traditional ones, demonstrating improved network operation efficiency in the context presented.
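One ingredient common to the layer-two multipath protocols studied here is ECMP-style flow hashing, which a spanning-tree network cannot offer because it forwards everything over a single tree. A minimal sketch of the idea (illustrative, not any specific protocol's hash): all frames of one flow hash to the same path, avoiding reordering, while distinct flows spread across the equal-cost paths the fabric provides.

```python
import hashlib

def ecmp_pick(paths, flow_id):
    """Deterministically map a flow identifier (e.g. an address/port
    5-tuple string) onto one of several equal-cost paths."""
    digest = hashlib.sha256(flow_id.encode()).digest()
    return paths[digest[0] % len(paths)]
```

Because the choice depends only on the flow identifier, every packet of a flow takes the same path, yet a large population of flows is spread across the redundant links instead of idling on blocked spanning-tree ports.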
Optimisation of a Hadoop cluster based on SDN in cloud computing for big data applications
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.

Big data has received a great deal of attention from many sectors, including academia, industry and government. The Hadoop framework has emerged to support its storage and analysis using the MapReduce programming model. However, this framework is a complex system with more than 150 parameters, some of which can exert a considerable effect on the performance of a Hadoop job. Optimally tuning the Hadoop parameters is a difficult and time-consuming task. In this thesis, an optimisation approach is presented to improve the performance of the Hadoop framework by setting the values of the Hadoop parameters automatically. Specifically, genetic programming is used to construct a fitness function that represents the interrelations among the Hadoop parameters. Then, a genetic algorithm is employed to search for the optimum or near-optimum values of the Hadoop parameters. A Hadoop cluster is configured on two servers at Brunel University London to evaluate the performance of the proposed optimisation approach. The experimental results show that the performance of a Hadoop MapReduce job for 20 GB on the WordCount application is improved by 69.63% and 30.31% compared to the default settings and the state of the art, respectively; on the TeraSort application, it is improved by 73.39% and 55.93%. For further optimisation, SDN is also employed to improve the performance of a Hadoop job. The experimental results show that the performance of a Hadoop job in an SDN network for 50 GB is improved by 32.8% compared to a traditional network; on the TeraSort application, the improvement for 50 GB is 38.7% on average. An effective computing platform is also presented in this thesis to support solar irradiation data analytics. It is built on RHIPE to provide fast analysis and calculation for solar irradiation datasets.

The performance of RHIPE is compared with the R language in terms of accuracy, scalability and speedup. The speedup of RHIPE is evaluated using Gustafson's Law, which is revised to enhance the performance of parallel computation on intensive irradiation datasets in a cluster computing environment such as Hadoop. The performance of the proposed work is evaluated using a Hadoop cluster based on the Microsoft Azure cloud, and the experimental results show that RHIPE provides considerable improvements over the R language. Finally, an effective routing algorithm based on SDN is presented to improve the performance of a Hadoop job in a large-scale cluster in a data centre network. The proposed algorithm improves the performance of a Hadoop job during the shuffle phase by allocating efficient paths for each shuffling flow, according to the network resource demand of each flow as well as their size and number. Furthermore, it also allocates alternative paths for each shuffling flow in the case of any link crash or failure. The algorithm is evaluated on two network topologies, namely fat-tree and leaf-spine, built with the EstiNet emulator software. The experimental results show that the proposed approach improves the performance of a Hadoop job in a data centre network.

This work was supported by the Ministry of Higher Education and Scientific Research.
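The genetic-programming-plus-genetic-algorithm pipeline can be illustrated with a toy sketch (assumptions throughout: `job_time` stands in for the fitness function that genetic programming would learn from measurements, and the two parameters are hypothetical, not real Hadoop configuration keys):

```python
import random

def job_time(sort_mb, reduce_tasks):
    """Stand-in fitness model: pretend job time is minimised around
    sort_mb = 200 and reduce_tasks = 24 (illustrative values)."""
    return (sort_mb - 200) ** 2 / 100 + (reduce_tasks - 24) ** 2

def tune(generations=60, pop_size=20, seed=0):
    """Small elitist genetic algorithm over the two parameters."""
    rng = random.Random(seed)
    pop = [(rng.randint(50, 400), rng.randint(1, 64)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: job_time(*p))      # rank by fitness
        survivors = pop[:pop_size // 2]           # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = (a[0], b[1])                  # one-point crossover
            if rng.random() < 0.3:                # occasional mutation
                child = (child[0] + rng.randint(-10, 10),
                         max(1, child[1] + rng.randint(-4, 4)))
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda p: job_time(*p))
```

Because the fittest half always survives, the best candidate can only improve from one generation to the next; the thesis applies the same search structure to the real, genetic-programming-derived fitness function over the full Hadoop parameter space.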