Future Trends and Challenges for Mobile and Convergent Networks
Traffic characteristics such as real-time, location-based, and
community-driven flows, together with the exponential growth of data traffic in
mobile networks, are challenging the academic and standardization communities
to manage these networks in completely novel and intelligent ways; otherwise,
current network infrastructures cannot offer a connection service of
acceptable quality for both the emerging traffic demand and application requirements.
A highly relevant research problem that therefore needs to be addressed is how
a heterogeneous wireless access infrastructure should be controlled to offer
network access with an adequate level of quality for diverse flows terminating at
multi-mode devices in mobile scenarios. The current chapter reviews recent
research and standardization work developed under the most used wireless access
technologies and mobile access proposals. It comprehensively outlines the
impact of deploying those technologies in future networking
environments, not only on network performance but also on how the most
important requirements of several relevant players, such as content providers,
network operators, and users/terminals, can be addressed. Finally, the chapter
concludes by highlighting the most notable aspects of how the environment of future
networks is expected to evolve, including technology convergence, service
convergence, terminal convergence, market convergence, environmental awareness,
energy efficiency, and self-organized and intelligent infrastructure, as well as
the most important functional requirements to be addressed by that
infrastructure, such as flow mobility, data offloading, load balancing, and
vertical multihoming.
Comment: In book 4G & Beyond: The Convergence of Networks, Devices and
Services, Nova Science Publishers, 201
Performance Improvement of Multicommodity Flow of Tactile and Best Effort Packet in Internet Network
Datacenter Traffic Control: Understanding Techniques and Trade-offs
Datacenters provide cost-effective and flexible access to scalable compute
and storage resources necessary for today's cloud computing needs. A typical
datacenter is made up of thousands of servers connected with a large network
and usually managed by one operator. To provide quality access to the variety
of applications and services hosted on datacenters and to maximize performance, it
is necessary to use datacenter networks effectively and efficiently.
Datacenter traffic is often a mix of several classes with different priorities
and requirements. This includes user-generated interactive traffic, traffic
with deadlines, and long-running traffic. To this end, custom transport
protocols and traffic management techniques have been developed to improve
datacenter network performance.
In this tutorial paper, we review the general architecture of datacenter
networks, various topologies proposed for them, their traffic properties,
general traffic control challenges in datacenters and general traffic control
objectives. The purpose of this paper is to bring out the important
characteristics of traffic control in datacenters, not to survey all
existing solutions (which is virtually impossible given the massive body of
existing research). We hope to give readers a broad view of the options and
factors to consider when evaluating a variety of traffic control mechanisms. We discuss
various characteristics of datacenter traffic control including management
schemes, transmission control, traffic shaping, prioritization, load balancing,
multipathing, and traffic scheduling. Next, we point to several open challenges
as well as new and interesting networking paradigms. At the end of this paper,
we briefly review inter-datacenter networks, which connect geographically
dispersed datacenters, have been receiving increasing attention recently,
and pose interesting and novel research problems.
Comment: Accepted for Publication in IEEE Communications Surveys and Tutorials
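The multipathing and load-balancing techniques such surveys cover often start from ECMP-style flow hashing. As a minimal illustrative sketch (not taken from the paper; the function and path names are hypothetical), hashing a flow's 5-tuple to pick among equal-cost paths keeps each flow's packets on one path, avoiding reordering, while spreading distinct flows across the fabric:

```python
import hashlib

def ecmp_path(flow, paths):
    """Pick one of several equal-cost paths by hashing the flow's
    5-tuple; every packet of the same flow maps to the same path."""
    key = "|".join(str(field) for field in flow).encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(paths)
    return paths[index]

# Hypothetical flow (src, dst, sport, dport, proto) and spine switches.
flow = ("10.0.0.1", "10.0.0.2", 5000, 80, "tcp")
paths = ["spine-1", "spine-2", "spine-3", "spine-4"]
chosen = ecmp_path(flow, paths)  # deterministic for this flow
```

Real switches use hardware hash functions rather than SHA-256, and production schemes the survey discusses (e.g. flowlet switching) refine this basic idea to rebalance long flows.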
Hybrid SDN Evolution: A Comprehensive Survey of the State-of-the-Art
Software-Defined Networking (SDN) is an evolutionary networking paradigm
which has been adopted by large network and cloud providers, among which are
Tech Giants. However, embracing a new and futuristic paradigm as an alternative
to the well-established and mature legacy networking paradigm requires considerable
time, financial resources, and technical expertise.
Consequently, many enterprises cannot afford it. A compromise solution then is
a hybrid networking environment (a.k.a. Hybrid SDN (hSDN)) in which SDN
functionalities are leveraged while existing traditional network
infrastructures are acknowledged. Recently, hSDN has been seen as a viable
networking solution for a diverse range of businesses and organizations.
Accordingly, the body of literature on hSDN research has grown remarkably.
On this account, we present this paper as a comprehensive state-of-the-art
survey which examines hSDN from many different perspectives.
Multimedia delivery in the future internet
The term 'Networked Media' implies that all kinds of media, including text, images, 3D graphics, audio,
and video, are produced, distributed, shared, managed, and consumed online through various networks,
such as the Internet, fiber, WiFi, WiMAX, GPRS, 3G, and so on, in a convergent manner [1]. This white
paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the
challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted
with a bewildering range of media, services, and applications, and with technological innovations concerning
media formats, wireless networks, and terminal types and capabilities. There is little evidence that the pace
of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more
than 100 million users have downloaded at least one (multi)media file, and over 47 million of them do so
regularly, searching through more than 160 exabytes of content. In the near future these numbers are expected
to rise exponentially. Internet content is expected to increase by at least a factor of 6, rising
to more than 990 exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged
that in the near to mid term the Internet will provide the means to share and distribute (new)
multimedia content and services with superior quality and striking flexibility, in a trusted and personalized
way, improving citizens' quality of life, working conditions, edutainment, and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer
in-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content, as well as
community networks and the use of peer-to-peer (P2P) overlays are expected to generate new models of
interaction and cooperation, and to support enhanced perceived quality of experience (PQoE) and
innovative applications 'on the move', such as virtual collaboration environments, personalised
services/media, virtual sport groups, on-line gaming, and edutainment. In this context, interaction with content,
combined with interactive multimedia search capabilities across distributed repositories, opportunistic P2P
networks, and dynamic adaptation to the characteristics of diverse mobile terminals, is expected to
contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects in Framework Programme 6 (FP6)
and Framework Programme 7 (FP7), a group of experts and technology visionaries have voluntarily
contributed to this white paper, which aims to describe the status, the state of the art, the challenges,
and the way ahead in the area of content-aware media delivery platforms.
Telecommunications Networks
This book guides readers from the basics of rapidly emerging networks to more advanced concepts and future expectations of telecommunications networks. It identifies and examines the most pressing research issues in telecommunications, and it contains chapters written by leading researchers, academics, and industry professionals. Telecommunications Networks - Current Status and Future Trends covers surveys of recent publications that investigate key areas of interest such as IMS, eTOM, 3G/4G, optimization problems, modeling, simulation, and quality of service. The book, which is suitable for both PhD and master's students, is organized into six sections: New Generation Networks, Quality of Services, Sensor Networks, Telecommunications, Traffic Engineering and Routing.
A consistent and fault-tolerant data store for software defined networks
Master's thesis in Information Security, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2013. The success of the Internet is indisputable. Nevertheless, serious criticisms of its architecture have long been made. Researchers believe that the main problem with this architecture lies in the fact that network devices embed distinct and complex functions that go beyond the packet-forwarding purpose for which they were created [1]. The best example is the distributed (and complex) routing protocols that routers run in order to guarantee packet forwarding. Among the consequences are the complexity of traditional networks in terms of both innovation and maintenance. The result is networks that are expensive and not very resilient. To address this problem, a different network architecture has been adopted by both the scientific community and industry. In these new networks, known as Software Defined Networks (SDN), the control plane is physically separated from the data plane. That is, all network control logic and state is removed from the network devices and executed on a logically centralized controller which, with a global, logical, and consistent view of the network, can control it dynamically. With this delegation of functions to the controller, network devices can dedicate themselves exclusively to their essential function of forwarding data packets. The devices therefore remain simple and cheaper, and the controller can implement simplified (and possibly more effective) control functions thanks to its global view of the network. However, a logically centralized programming model does not imply a centralized system.
In fact, the need to guarantee adequate levels of performance, scalability, and resilience precludes a centralized control plane. Instead, production-grade SDN networks use distributed control planes, and the architects of these systems must face the fundamental trade-offs of distributed systems, namely the appropriate balance between consistency and availability. In this work we propose an architecture for a distributed, fault-tolerant, and consistent controller. The central element of this architecture is a replicated, fault-tolerant data store that keeps the network state consistent, so that the network control applications residing on the controller can operate on a consistent view of the network that guarantees coordination and consequently simplifies application development. The drawback of this approach is a decrease in performance, which limits the controller's responsiveness and its scalability. Even accepting these consequences, an important conclusion of our study is that the proposed goals (i.e., strong consistency and fault tolerance) can be achieved while keeping performance at an acceptable level for certain types of networks. Regarding fault tolerance, in an SDN architecture faults can occur in three different domains: the data plane (failures of network equipment), the control plane (failures of the connection between the controller and the network equipment), and, finally, the controller itself. The last is of particular importance, since its failure can disrupt the network as a whole (i.e., hosts lose connectivity). It is therefore essential that production-grade SDN networks have mechanisms that can handle the various types of faults and guarantee availability close to 100%.
Recent work on SDN has explored the question of consistency at different levels. Programming languages such as Frenetic [2] offer consistency in the composition of network policies and can resolve inconsistencies in forwarding rules automatically. Another line of related work proposes abstractions that guarantee network consistency while the forwarding tables of the equipment are being updated. The goal of both is to guarantee consistency after the forwarding policy has been decided. Onix (a frequently cited SDN controller [3]) guarantees a different kind of consistency: one that matters before the forwarding policy is decided. This controller offers two levels of consistency for storing network state: eventual consistency and strong consistency. Our work uses only strong consistency, and demonstrates that it can be guaranteed with performance superior to that of Onix. Currently, distributed SDN controllers (Onix and HyperFlow [4]) use non-transparent distribution models with weak properties such as eventual consistency, which demand greater care in the development of network control applications on the controller. This stems from the (in our view unfounded) idea that properties such as strong consistency significantly limit the scalability of the controller. Yet a strongly consistent controller translates into a programming model that is simpler and transparent with respect to the controller's distribution. In this work we argue that it is possible to use well-known replication techniques based on distributed state machines [5] to build an SDN controller that not only guarantees fault tolerance and strong consistency, but also does so with acceptable performance. The main contribution of this dissertation is thus to show that a data store built with the aforementioned techniques (such as those provided by BFT-SMaRt [6]), integrated with an existing open-source controller (such as Floodlight), can efficiently handle various kinds of workloads produced by network control applications. The main contributions of our work can be summarized as follows: 1. The proposal of a distributed controller architecture based on the properties of strong consistency and fault tolerance; 2. Since the proposed architecture is based on a replicated data store, a study of the workload produced on the data store by three control applications; 3. To evaluate the feasibility of our architecture, an analysis of the capacity of the replication middleware to process that workload.
This study determines the following variables: (a) how many events per second the middleware can process, and (b) the latency involved in processing such events, for each of the applications mentioned and for each of the possible network events processed by those applications. These two variables are important for understanding the scalability and performance of the proposed architecture. Our work, namely the study of the application workloads (on a first version of our data-store integration) and of the middleware's capacity, resulted in a publication: Fábio Botelho, Fernando Ramos, Diego Kreutz and Alysson Bessani; On the feasibility of a consistent and fault-tolerant data store for SDNs, in Second European Workshop on Software Defined Networks, Berlin, October 2013. This dissertation was submitted about five months after that paper and therefore contains a much more thorough and improved study. Even if traditional data networks are very successful, they exhibit considerable complexity, manifested in the configuration of network devices and the development of network protocols.
Researchers argue that this complexity derives from the fact that network devices are responsible for both processing control functions such as distributed routing protocols and forwarding packets. This work is motivated by the emergent network architecture of Software Defined Networks where the control functionality is removed from the network devices and delegated to a server (usually called controller) that is responsible for dynamically configuring the network devices present in the infrastructure. The controller has the advantage of logically
centralizing the network state, in contrast to the previous model where state was distributed across the network devices. Despite this logical centralization, the control plane (where the controller operates) must be distributed in order to avoid being a single point of failure. However, this distribution introduces several challenges due to the heterogeneous, asynchronous, and faulty environment in which the controller operates. Current distributed controllers lack transparency due to the eventual consistency properties employed in the distribution of the controller. This results in a complex programming model for the development of network control applications. This work proposes a fault-tolerant distributed controller with strong consistency properties that allows a transparent distribution of the control plane. The drawback of this approach is an increase in overhead and delay, which limits responsiveness and scalability. However, despite being fault-tolerant and strongly consistent, we show that this controller is able to provide performance results (in some cases) superior to those available in the literature.
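The state-machine replication idea underlying such a controller can be illustrated with a toy sketch. This is not the actual BFT-SMaRt or Floodlight API, and the class and key names are hypothetical; it only shows the core property the thesis relies on: if every replica deterministically applies the same totally ordered log of commands, all replicas converge to the same state, which is what makes the replicated data store strongly consistent.

```python
class Replica:
    """Toy state-machine replica: deterministically applies commands
    from a totally ordered log, so all replicas converge to the same
    state. Real systems (e.g. BFT-SMaRt) add the agreement protocol
    and fault tolerance around this core idea."""

    def __init__(self):
        self.store = {}  # replicated network state (key -> value)

    def apply(self, command):
        op, key, *args = command
        if op == "put":
            self.store[key] = args[0]
        elif op == "delete":
            self.store.pop(key, None)

# The same ordered log delivered to two replicas...
log = [("put", "sw1:flow42", "fwd:port2"),
       ("put", "sw2:flow42", "fwd:port7"),
       ("delete", "sw1:flow42")]
r1, r2 = Replica(), Replica()
for cmd in log:
    r1.apply(cmd)
    r2.apply(cmd)
# ...leaves both with identical state.
assert r1.store == r2.store == {"sw2:flow42": "fwd:port7"}
```

The cost the abstract mentions (overhead and delay) comes precisely from the agreement step omitted here: replicas must first agree on the total order of the log before applying it.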