34 research outputs found

    Next Generation Network Routing and Control Plane

    New Challenges in Quality of Services Control Architectures in Next Generation Networks

    The steady growth of the Internet and IP networks, and their integration into society and corporations, has brought with it increased expectations of new converged services as well as greater demands on communication quality. Next Generation Networks (NGNs) respond to these new needs and represent the new Internet paradigm arising from IP convergence. One of the least developed aspects of NGNs is Quality of Service (QoS) control, which is especially critical for multimedia communication across heterogeneous networks and/or different operators. Furthermore, NGNs natively incorporate the IPv6 protocol which, despite the shortcomings of IPv4 and the depletion of its address space, has not yet received its definitive push. This thesis takes a practical approach. Hence, in order to carry out research over testbeds that support IPv6 with performance guarantees, an in-depth study was conducted of the IPv6 protocol, its degree of implementation, and the existing conformance and interoperability tests that evaluate the quality of these implementations. Next, the quality of five IPv6 implementations was evaluated through a conformance test, and the basic IPv6 testbed on which the research is carried out was built with the implementation offering the strongest guarantees. The QoS Broker is the main contribution of this thesis: an integrated framework including an automated system for managing QoS control across multi-domain/multi-operator systems following the NGN recommendations.
The system automates the mechanisms associated with QoS configuration within a single domain (autonomous system) through policy-based management, and automates dynamic QoS negotiation between peer QoS Brokers belonging to different domains, thereby guaranteeing seamless end-to-end QoS. This architecture is validated over a multi-domain testbed that uses the DiffServ QoS mechanism and supports IPv6. The architecture defined in the NGN allows QoS management at layer 3 (IP) as well as at layer 2 (e.g. Ethernet, WiFi), so it also facilitates the management of PLC networks. Through the use of a QoS Broker, this thesis proposes a theoretical approach for applying this control architecture to the newly standardized PLC networks, and discusses the possibilities of applying it over the future communication networks of the Smart Grids. Finally, a module for managing traffic engineering, which optimizes network domains through artificial intelligence techniques, is integrated into the QoS Broker. Validation through simulations and over a Cisco router testbed demonstrates that hybrid genetic algorithms are an effective option in this area. Overall, the advances and key insights provided in this thesis help advance our understanding of QoS functioning in NGNs and prepare these systems to face the increasingly complex problems that abound in current industrial and scientific applications.
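The hybrid (memetic) genetic-algorithm approach mentioned above for traffic-engineering optimization can be illustrated with a minimal sketch. The toy topology, demands, and GA parameters below are invented for illustration and are not the thesis's actual setup; the "hybrid" aspect is the local-search refinement applied to each offspring.

```python
import random

# Hypothetical toy topology: links identified by name, with capacities in Mb/s.
CAPACITY = {"A-B": 100, "A-C": 100, "B-D": 100, "C-D": 100}

# Each demand is (bandwidth, candidate paths); a genome picks one path per demand.
DEMANDS = [
    (60, [["A-B", "B-D"], ["A-C", "C-D"]]),
    (60, [["A-B", "B-D"], ["A-C", "C-D"]]),
]

def max_utilisation(genome):
    """Fitness: the worst link utilisation produced by the path choices."""
    load = {link: 0.0 for link in CAPACITY}
    for (bw, paths), choice in zip(DEMANDS, genome):
        for link in paths[choice]:
            load[link] += bw
    return max(load[l] / CAPACITY[l] for l in CAPACITY)

def local_search(genome):
    """Hill-climbing step that makes the GA 'hybrid': try each alternative gene value."""
    best = list(genome)
    for i in range(len(best)):
        for alt in range(len(DEMANDS[i][1])):
            trial = list(best)
            trial[i] = alt
            if max_utilisation(trial) < max_utilisation(best):
                best = trial
    return best

def hybrid_ga(pop_size=8, generations=20, seed=1):
    rnd = random.Random(seed)
    pop = [[rnd.randrange(len(d[1])) for d in DEMANDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=max_utilisation)
        parents = pop[: pop_size // 2]           # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rnd.sample(parents, 2)
            cut = rnd.randrange(1, len(a) + 1)   # one-point crossover
            child = a[:cut] + b[cut:]
            if rnd.random() < 0.2:               # mutation
                g = rnd.randrange(len(child))
                child[g] = rnd.randrange(len(DEMANDS[g][1]))
            children.append(local_search(child)) # memetic refinement
        pop = parents + children
    return min(pop, key=max_utilisation)
```

With the two 60 Mb/s demands placed on disjoint paths, the worst link utilisation drops to 0.6, which is what the local-search step converges to here.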

    A Survey on the Contributions of Software-Defined Networking to Traffic Engineering

    Since the appearance of OpenFlow back in 2008, software-defined networking (SDN) has gained momentum. Although there are some discrepancies between the standards developing organizations working with SDN about what SDN is and how it is defined, they all outline traffic engineering (TE) as a key application. One of the most common objectives of TE is congestion minimization, where techniques such as traffic splitting among multiple paths or advanced reservation systems are used. In this scenario, this manuscript surveys the role of a comprehensive list of SDN protocols in TE solutions, in order to assess how these protocols can benefit TE. The SDN protocols have been categorized using the SDN architecture proposed by the Open Networking Foundation, which differentiates among data-controller plane interfaces, application-controller plane interfaces, and management interfaces, in order to state how the interface type in which they operate influences TE. In addition, the impact of the SDN protocols on TE has been evaluated by comparing them with the path computation element (PCE)-based architecture. The PCE-based architecture was selected to measure the impact of SDN on TE because it is the most novel TE architecture to date, and because it already defines a set of metrics to measure the performance of TE solutions. We conclude that using the three types of interfaces simultaneously will result in more powerful and enhanced TE solutions, since they benefit TE in complementary ways. This work was supported by the European Commission through the Horizon 2020 Research and Innovation Programme (GN4) under Grant 691567, and by the Spanish Ministry of Economy and Competitiveness under the Secure Deployment of Services Over SDN and NFV-based Networks Project S&NSEC under Grant TEC2013-47960-C4-3-
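The congestion-minimization technique the survey highlights, splitting traffic among multiple paths, can be sketched with a proportional split that equalises path utilisation. The capacities and demand values below are made up; in practice an SDN controller would install such ratios through one of the surveyed southbound interfaces (e.g. as weighted forwarding buckets).

```python
def split_traffic(demand, path_capacities):
    """Split a demand across parallel paths in proportion to capacity.

    Equalising utilisation across the paths minimises the maximum
    (bottleneck) utilisation for this fixed set of paths.
    """
    total = sum(path_capacities)
    return [demand * c / total for c in path_capacities]

def utilisations(demand, path_capacities):
    """Per-path utilisation that results from the proportional split."""
    shares = split_traffic(demand, path_capacities)
    return [s / c for s, c in zip(shares, path_capacities)]
```

For example, `split_traffic(90, [100, 50])` yields `[60.0, 30.0]`, leaving both paths at 60 % utilisation instead of overloading either one.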

    Intelligent Network Infrastructures: New Functional Perspectives on Leveraging Future Internet Services

    The Internet experience of the 21st century is very different from that of the early '80s. The Internet has adapted itself to become what it really is today, a very successful business platform of global scale. Like every highly successful technology, the Internet has suffered from a natural process of ossification. Over the last 30 years, the technical solutions adopted to leverage emerging applications can be divided into two categories: first, the addition of new functionalities, either by patching existing protocols or by adding new upper layers; second, accommodating traffic growth with higher-bandwidth links. Unfortunately, this approach is not suitable to provide the proper ground for a wide gamut of new applications. To be deployed, these future Internet applications require from the network layer advanced capabilities that the TCP/IP stack and its derived protocols cannot provide by design in a robust, scalable fashion. NGNs (Next Generation Networks) on top of intelligent telecommunication infrastructures are being envisioned to support future Internet services. This thesis contributes three proposals toward this ambitious goal. The first proposal presents a preliminary architecture to allow NGNs to seamlessly request advanced services from layer 1 transport networks, such as QoS-guaranteed point-to-multipoint circuits. This architecture is based on virtualization techniques applied to layer 1 networks, and hides from NGNs all the complexities of interdomain provisioning. Moreover, the economic aspects involved were also considered, making the architecture attractive to carriers. The second contribution is a framework to develop DiffServ-MPLS capable networks based exclusively on open source software and commodity PCs. The developed DiffServ-MPLS flexible software router was designed to allow NGN prototyping, using pseudo virtual circuits and assured QoS as a starting point for development.
The third proposal presents a state-of-the-art routing and wavelength assignment (RWA) algorithm for photonic networks. This algorithm considers physical layer impairments to fully guarantee the requested QoS profile, even in the case of single network failures. A number of novel techniques were applied to offer lower blocking probability compared with recently proposed algorithms, without impacting setup delay time.
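The core of the RWA problem described above can be illustrated with a bare-bones first-fit assignment under the wavelength-continuity constraint. The hop-count budget below is a crude stand-in for the physical-layer impairment checks the thesis considers, and all names and numbers are illustrative assumptions.

```python
# Toy impairment-aware RWA sketch: first-fit wavelength assignment with a
# wavelength-continuity constraint along the whole lightpath.
NUM_WAVELENGTHS = 4

def first_fit_wavelength(path_links, usage, max_hops=3):
    """Return the lowest-index wavelength free on every link of the path.

    `usage` maps link name -> set of wavelengths already in use. Paths longer
    than `max_hops` are rejected, a stand-in for a real impairment budget.
    Returns None when the request is blocked.
    """
    if len(path_links) > max_hops:          # crude physical-layer check
        return None
    for w in range(NUM_WAVELENGTHS):
        if all(w not in usage.get(link, set()) for link in path_links):
            for link in path_links:         # commit the reservation
                usage.setdefault(link, set()).add(w)
            return w
    return None                             # all wavelengths busy: blocked
```

Lowering blocking probability in a real algorithm then amounts to smarter path and wavelength choices than this greedy first-fit baseline.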

    Architectures and technologies for quality of service provisioning in next generation networks

    An NGN is a telecommunication network that differs from classical dedicated networks in its capability to provide voice, video, data and cellular services on the same infrastructure (Quadruple-Play). The ITU-T standardization body has defined the NGN architecture in three distinct and well-defined strata: the transport stratum, which takes care of maintaining end-to-end connectivity; the service stratum, which is responsible for enabling the creation and delivery of services; and finally the application stratum, where applications can be created and executed. The most important separation in this architecture is between the transport and service strata. The aim is to enable the flexibility to add, maintain and remove services without any impact on the transport layer; to enable the flexibility to add, maintain and remove transport technologies without any impact on access to services, applications, content and information; and finally the efficient coexistence of multiple terminals, access technologies and core transport technologies. The Service Oriented Architecture (SOA) is a paradigm often used in systems deployment and integration for organizing and utilizing distributed capabilities under the control of different ownership domains. In this thesis, SOA technologies in network architectures are surveyed following the NGN functional architecture as defined by the ITU-T. Within each stratum, the main logical functions that have been the subject of investigation according to a service-oriented approach are highlighted. Moreover, a new definition of the NGN transport stratum functionalities according to the SOA paradigm is proposed; an implementation of the relevant service interfaces, analyzed through experimental results, gives some insight into the potential of the proposed strategy.
Within the NGN architecture research topic, especially in IP-based network architectures, Traffic Engineering (TE) refers to a set of policies and algorithms aimed at balancing network traffic load so as to improve network resource utilization and guarantee the service-specific end-to-end QoS. DS-TE technology extends TE functionalities to a per-class basis by introducing a higher level of traffic classification which associates with each class type (CT) a constraint on bandwidth utilization. These constraints are set by defining and configuring a bandwidth constraint (BC) model which drives resource utilization, aiming at better load balancing, higher QoS performance and a lower call blocking rate. Default TE implementations rely on a centralized approach to bandwidth and routing management, requiring external management entities that periodically collect network status information and provide management actions. However, due to increasing network complexity, it is desirable that nodes automatically discover their environment, self-configure and update themselves to adapt to changes. In this thesis, the bandwidth management problem is addressed with an autonomic and distributed approach. Each node has a self-management module, which monitors the unreserved bandwidth in adjacent nodes and adjusts the local bandwidth constraints so as to reduce the differences in the unreserved bandwidth of neighbor nodes. With this distributed and autonomic algorithm, BCs are dynamically modified to drive routing decisions toward traffic balancing while respecting the QoS constraints of each class type's traffic requests. Finally, Video on Demand (VoD) is a service that provides a video whenever the customer requests it.
Realizing a VoD system over the Internet requires architectures tailored to video features such as guaranteed bandwidths and constrained transmission delays: these are hard to provide in the traditional Internet architecture, which is not designed to deliver adequate quality of service (QoS) and quality of experience (QoE) to the final user. Typical VoD solutions can be grouped into four categories: centralized, proxy-based, Content Delivery Network (CDN) and hybrid architectures. Hybrid architectures combine the employment of a centralized server with that of a Peer-to-Peer (P2P) network. This approach can effectively reduce the server load and avoid network congestion close to the server site, because the peers support the delivery of the video to other peers using a cache-and-relay strategy that makes use of their upload bandwidth. However, in a peer-to-peer network each peer is free to join and leave the network without notice, giving rise to the phenomenon of peer churn. These dynamics are dangerous for VoD architectures, affecting the integrity and retainability of the service. In this thesis, a study aimed at evaluating the impact of peer churn on system performance is proposed. Starting from important relationships between system parameters such as playback buffer length, peer request rate, peer average lifetime and server upload rate, four different analytic models are proposed.
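The balance the abstract draws between peer request rate, average lifetime and server upload rate can be illustrated with a back-of-the-envelope calculation based on Little's law. This is a simplification for intuition only, not one of the four analytic models the thesis proposes, and the rates used are invented.

```python
def server_upload_needed(request_rate, mean_lifetime, stream_rate, peer_upload):
    """Little's law gives the mean number of concurrent peers as
    request_rate * mean_lifetime; the server must then supply whatever part
    of the aggregate streaming rate the peers' own uploads cannot cover.

    Rates are in requests/s and Mb/s; lifetime in seconds.
    """
    peers = request_rate * mean_lifetime            # mean peers in the system
    shortfall_per_peer = max(0.0, stream_rate - peer_upload)
    return peers * shortfall_per_peer
```

For example, 2 requests/s with a 600 s mean lifetime gives 1200 concurrent peers on average; at a 1.0 Mb/s stream rate and 0.8 Mb/s peer upload, the server must cover roughly 240 Mb/s, and higher churn (shorter lifetimes) shifts this balance further.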

    Control scheme for class-based multicast networks

    Doctoral thesis in Electrical Engineering. The expectations of citizens from Information Technologies (ITs) are increasing, as ITs have become an integral part of our society, serving all kinds of activities, whether professional, leisure, safety-critical applications or business. Hence, the limitations of traditional network designs in providing innovative and enhanced services and applications motivated a consensus to integrate all services over packet-switching infrastructures, using the Internet Protocol, so as to leverage flexible control and economic benefits in the Next Generation Networks (NGNs). However, the Internet is not capable of treating services differently, while each service has its own requirements (e.g., Quality of Service - QoS). Therefore, the need for more evolved forms of communication has driven radical changes in architectural and layering designs, which demand appropriate solutions for service admission and network resource control. This Thesis addresses QoS and network control issues, aiming to improve overall control performance in current and future networks which classify services into classes. The Thesis is divided into three parts. In the first part, we propose two resource over-reservation algorithms, a Class-based bandwidth Over-Reservation (COR) and an Enhanced COR (ECOR). Over-reservation means reserving more bandwidth than a Class of Service (CoS) needs, so that the QoS reservation signalling rate is reduced. COR and ECOR allow for dynamically defining over-reservation parameters for CoSs based on the resource conditions of network interfaces; they aim to reduce QoS signalling and the related overhead without incurring CoS starvation or waste of bandwidth. ECOR differs from COR by further optimizing the minimization of control overhead.
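The over-reservation idea behind COR can be sketched as a simple admission routine: reserving more than the immediate shortfall lets later requests in the same CoS be admitted without new signalling. The data structure and `over_factor` below are illustrative assumptions, not the actual COR algorithm.

```python
def reserve(cos_state, request_bw, over_factor=2.0):
    """Class-based over-reservation sketch for one CoS on one interface.

    When a request exceeds the residual reservation, reserve over_factor
    times the shortfall so that subsequent requests can be admitted locally.
    Returns True if a signalling message to the network was needed.
    """
    residual = cos_state["reserved"] - cos_state["used"]
    cos_state["used"] += request_bw
    if request_bw <= residual:
        return False                                 # admitted locally, no signalling
    cos_state["reserved"] += over_factor * (request_bw - residual)
    return True                                      # one signalling round trip
```

Starting from an empty state, the first 10 Mb/s request triggers signalling and reserves 20 Mb/s, so a second 10 Mb/s request is admitted with no signalling at all; the trade-off, which COR and ECOR tune dynamically, is between signalling rate and bandwidth sitting idle in the over-reservation.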
Further, we propose a centralized control mechanism called the Advanced Centralization Architecture (ACA), which uses a single stateful Control Decision Point (CDP) that maintains a good view of its underlying network topology and the related link resource statistics on a real-time basis in order to control the overall network. It is important to mention that, in this Thesis, we use multicast trees as the basis for session transport, not only for group communication purposes, but mainly to pin the packets of a session mapped to a tree so that they follow the desired tree. Our simulation results show a drastic reduction of QoS control signalling and the related overhead without QoS violation or waste of resources. In addition, we provide a general-purpose analytical model to assess the impact of various parameters (e.g., link capacity, session dynamics, etc.) that generally challenge resource over-provisioning control. In the second part of this Thesis, we propose a decentralized control mechanism called Advanced Class-based resource OverpRovisioning (ACOR), which aims to achieve better scalability than the ACA approach. ACOR enables multiple CDPs, distributed at the network edge, to cooperate and exchange appropriate control data (e.g., tree and bandwidth usage information) such that each CDP is able to maintain a good knowledge of the network topology and the related link resource statistics on a real-time basis. From a scalability perspective, ACOR cooperation is selective, meaning that control information is exchanged dynamically only among the CDPs which are concerned (correlated). Moreover, synchronization is carried out through our proposed concept of the Virtual Over-Provisioned Resource (VOPR), which is a share of the over-reservations of each interface assigned to each tree that uses the interface. Thus, each CDP can process several session requests over a tree without requiring synchronization between the correlated CDPs as long as the VOPR of the tree is not exhausted.
Analytical and simulation results demonstrate that aggregate over-reservation control in decentralized scenarios keeps signalling low without QoS violations or waste of resources. We also introduce a control signalling protocol called the ACOR Protocol (ACOR-P) to support the centralized and decentralized designs in this Thesis. Further, we propose an Extended ACOR (E-ACOR), which aggregates the VOPRs of all trees that originate at the same CDP, so that more session requests can be processed without synchronization compared with ACOR. In addition, E-ACOR introduces a mechanism to efficiently track network congestion information in order to prevent unnecessary synchronization during congestion periods, when VOPRs would be exhausted upon every session request. The performance evaluation, through analytical and simulation results, proves the superiority of E-ACOR in minimizing overall control signalling overhead while keeping all the advantages of ACOR, that is, without incurring QoS violations or waste of resources. The last part of this Thesis includes the Survivable ACOR (SACOR) proposal, which supports stable operation of the QoS and network control mechanisms in case of failures and recoveries (e.g., of links and nodes). The performance results show flexible survivability, characterized by fast convergence time and differentiated traffic re-routing under efficient resource utilization, i.e., without wasting bandwidth.
In summary, the QoS and architectural control mechanisms proposed in this Thesis provide efficient and scalable support for network control key sub-systems (e.g., QoS and resource control, traffic engineering, multicasting, etc.), and thus allow for optimizing network overall control performance.À medida que as Tecnologias de Informação (TIs) se tornaram parte integrante da nossa sociedade, a expectativa dos cidadãos relativamente ao uso desses serviços também demonstrou um aumento, seja no âmbito das atividades profissionais, de lazer, aplicações de segurança crítica ou negócios. Portanto, as limitações dos projetos de rede tradicionais quanto ao fornecimento de serviços inovadores e aplicações avançadas motivaram um consenso quanto à integração de todos os serviços e infra-estruturas de comutação de pacotes, utilizando o IP, de modo a extrair benefícios económicos e um controlo mais flexível nas Redes de Nova Geração (RNG). Entretanto, tendo em vista que a Internet não apresenta capacidade de diferenciação de serviços, e sabendo que cada serviço apresenta as suas necessidades próprias, como por exemplo, a Qualidade de Serviço - QoS, a necessidade de formas mais evoluídas de comunicação tem-se tornado cada vez mais visível, levando a mudanças radicais na arquitectura das redes, que exigem soluções adequadas para a admissão de serviços e controlo de recursos de rede. Sendo assim, este trabalho aborda questões de controlo de QoS e rede com o objetivo de melhorar o desempenho do controlo de recursos total em redes atuais e futuras, através da análise dos serviços de acordo com as suas classes de serviço. Esta Tese encontra-se dividida em três partes. Na primeira parte são propostos dois algoritmos de sobre-reserva, o Class-based bandwidth Over-Reservation (COR) e uma extensão melhorada do COR denominado de Enhanced COR (ECOR). 
A sobre-reserva significa a reserva de uma largura de banda maior para o serviço em questão do que uma classe de serviço (CoS) necessita e, portanto, a quantidade de sinalização para reserva de recursos é reduzida. COR e ECOR consideram uma definição dinâmica de sobre-reserva de parâmetros para CoSs com base nas condições da rede, com vista à redução da sobrecarga de sinalização em QoS sem que ocorra desperdício de largura de banda. O ECOR, por sua vez, difere do COR por permitir a otimização com minimização de controlo de overhead. Além disso, nesta Tese é proposto também um mecanismo de controlo centralizado chamado Advanced Centralization Architecture (ACA) , usando um único Ponto de Controlo de Decisão (CDP) que mantém uma visão ampla da topologia de rede e de análise dos recursos ocupados em tempo real como base de controlo para a rede global. Nesta Tese são utilizadas árvores multicast como base para o transporte de sessão, não só para fins de comunicação em grupo, mas principalmente para que os pacotes que pertençam a uma sessão que é mapeada numa determinada árvore sigam o seu caminho. Os resultados obtidos nas simulações dos mecanismos mostram uma redução significativa da sobrecarga da sinalização de controlo, sem a violação dos requisitos de QoS ou desperdício de recursos. Além disso, foi proposto um modelo analítico no sentido de avaliar o impacto provocado por diversos parâmetros (como por exemplo, a capacidade da ligação, a dinâmica das sessões, etc), no sobre-provisionamento dos recursos. Na segunda parte desta tese propôe-se um mecanismo para controlo descentralizado de recursos denominado de Advanced Class-based resource OverprRovisioning (ACOR), que permite obter uma melhor escalabilidade do que o obtido pelo ACA. 
ACOR distributes the control decision points (CDPs) at the edge of the network; they cooperate by exchanging appropriate control data (e.g., tree locations and bandwidth-usage information) so that each CDP maintains good knowledge of the network topology and its links. For scalability, ACOR cooperation is selective, meaning that control information is exchanged dynamically only among the CDPs concerned. Moreover, synchronization relies on the proposed concept of Virtual Over-Provisioned Resources (VOPR), which shares the reservations of each interface among the trees that use that interface. Each CDP can thus process session requests on one or more trees without synchronizing with the correlated CDPs as long as the tree's VOPR is not exhausted. Analytical and simulation results demonstrate that over-reservation control scales in decentralized scenarios, keeping QoS signaling low without wasting bandwidth. A signaling control protocol called the ACOR Protocol (ACOR-P) is also developed to support both the centralized and decentralized architectures of this work. Extended ACOR (E-ACOR) aggregates the VOPR of all trees originating at the same CDP, so that more session requests can be processed without synchronization than with ACOR. In addition, E-ACOR introduces a mechanism to control information about network congestion, preventing unnecessary synchronization during congestion periods, when VOPRs become exhausted upon each session request.
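The VOPR idea, local admission against a pre-agreed per-tree share of an interface's capacity, can be sketched minimally. The topology, the 60/40 share split, and all names here are assumptions, not the thesis' data structures.

```python
# Sketch of the VOPR idea: the capacity of a network interface is split into
# per-tree shares agreed among the edge CDPs. A CDP admits sessions on its
# tree against the local share, with no inter-CDP synchronization, until the
# share is exhausted.

capacity = 100.0                           # interface capacity (Mbps)
shares = {"tree_a": 60.0, "tree_b": 40.0}  # VOPR split among trees (assumed)
used = {"tree_a": 0.0, "tree_b": 0.0}

def admit(tree, demand):
    """Local admission control at the CDP owning `tree`."""
    if used[tree] + demand <= shares[tree]:
        used[tree] += demand               # no synchronization needed
        return True
    return False                           # would require a share renegotiation

assert admit("tree_a", 25.0)
assert admit("tree_b", 30.0)
assert not admit("tree_b", 15.0)           # tree_b's VOPR share is exhausted
assert sum(shares.values()) <= capacity    # shares never oversubscribe the link
```

Only the last request would force the CDPs to synchronize and renegotiate shares; every other admission is a purely local decision, which is why VOPR reduces inter-CDP signaling.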
Performance evaluation, through analytical and simulation results, shows the superiority of E-ACOR in minimizing the overall control signaling load while keeping all the advantages of ACOR, without QoS violations or resource waste. The last part of the Thesis proposes a failure-recovery mechanism, Survivability ACOR (SACOR), which maintains stable QoS upon link and node failures. The performance results show flexible survivability, characterized by fast convergence times and traffic differentiation with efficient resource utilization. In summary, the resource control mechanisms proposed in this Thesis provide efficient and scalable support for network control and for its key sub-systems (e.g., QoS and resource control, traffic engineering, multicast), thus enabling optimization of overall network control performance.
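The survivability behavior described for SACOR, moving affected sessions onto a path that avoids the failed element, can be illustrated with a minimal re-routing sketch; the topology and function names are assumptions, not the SACOR mechanism itself.

```python
# Minimal sketch of failure recovery: when a link fails, recompute a path for
# the affected traffic that avoids the failed element.
from collections import deque

adj = {"s": ["a", "b"], "a": ["d"], "b": ["d"], "d": []}  # assumed topology

def find_path(src, dst, failed_links):
    """Breadth-first search over the topology, skipping failed links."""
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in adj[node]:
            if (node, nxt) not in failed_links and nxt not in path:
                queue.append(path + [nxt])
    return None

primary = find_path("s", "d", failed_links=set())
backup = find_path("s", "d", failed_links={("s", "a")})  # link s->a fails
print(primary, backup)
```

A real mechanism would also preserve the sessions' QoS classes on the backup path; this sketch only shows the convergence step of finding a route around the failure.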

    Solution strategies of service fulfilment Operation Support Systems for Next Generation Networks

    Get PDF
    A Finnish Operation Support Systems (OSS) vendor provides solutions for service activation, network inventory and event mediation. These solutions have mostly been deployed in mobile environments. This thesis studies how feasible it is to use similar solutions for service fulfilment in Next Generation Networks (NGN). NGN is a broad term for a set of key architectural evolutions in telecommunication core and access networks that will be deployed over the next 5 to 10 years. In these networks, service activations, e.g. Triple Play or a Virtual Private Network (VPN), require an extensive service fulfilment process that must be supported by first-class OSS. After introducing the NGN technologies, the research compares a reference product portfolio to available service fulfilment frameworks and evaluates its applicability. The study analyses the current state of the service fulfilment OSS market and evaluates various solution strategies. It concludes that the most interesting and adequate solution scenario is residential broadband, including value-added IP services.

    Partage d'infrastructures et convergence fixe/mobile dans les réseaux 3GPP de prochaine génération

    Get PDF
ABSTRACT Fourth generation cellular network trials began in the first half of 2010, notably in Sweden and Norway. As a first step, these networks offer only Internet access and rely on existing second- and third-generation networks to provide telephony and text messaging. Only after the deployment of the IP Multimedia Subsystem (IMS) will all services be supported on the new all-IP architecture. Fourth generation mobile networks should let end users benefit from data throughputs of at least 100 Mbps on the downlink, when the user is stationary, and from Quality of Service (QoS) support with guarantees on throughput, maximum delay, maximum jitter and packet loss rate. These networks will efficiently support applications that rely on geolocation to improve the user's Quality of Experience (QoE). Today's terminals can communicate over several radio technologies. In addition to the cellular modem, terminals often support Bluetooth, which is used to connect hands-free devices and headsets. Moreover, most cell phones feature a Wi-Fi interface that lets users transfer large volumes of data without congesting the cellular network. However, Wi-Fi connectivity is often restricted to the user's home network or workplace. Finally, a vertical handover is nearly always manual and forces the terminal to change its IP address, which ultimately disrupts all active data sessions. A trend known as Fixed-Mobile Convergence (FMC) has emerged over the past few years in the mobile communications industry: providing Internet access and telephony on a single device capable of switching between local- and wide-area networks.
At this time, very few operators (e.g., NTT Docomo) offer terminals capable of switching to another access automatically, and the access point must belong to the user or be installed at his workplace. At the same time, another kind of convergence has begun, in which dedicated public-safety networks (such as police, fire prevention and ambulances) are being progressively migrated, because of their high operational costs, toward a single highly reliable and redundant network. These services exhibit QoS requirements similar to residential customers', except that they need prioritized access, which can terminate a non-priority user's session during congestion. In addition to the public services that seek to reduce their operational costs by sharing commercial communications networks, the network operators themselves have entered a cost-reduction phase. This situation results from the high degree of maturity the mobile communications industry has reached: the branding or coverage an operator offers is no longer a sufficient sales argument to enroll new subscribers, so operators must distinguish themselves from the competition with a superior service offering. Some operators have already started to outsource their less profitable business activities in order to concentrate on their key functions. As a complement to this trend, operators have begun to share an ever-increasing portion of their physical infrastructure with their competitors. At first, infrastructure sharing was limited to base station sites and antenna masts. Later, shelters were shared to further reduce cooling and hosting costs. Then operators started to share radio equipment, each operating on its own frequency bands. Infrastructure sharing beyond the first core network node is not currently supported in standardization.
An additional trend in the mobile communications industry is the specialization of operators (i.e., the identification of target customers). As a result, such operators experience disjoint traffic peaks because their customer bases behave differently, and they have a strong incentive to share infrastructure: network dimensioning mostly depends on peak demand, so sharing raises the average traffic load without significantly raising the peak load, since the peaks occur at different times. This boosts the operators' return on investment. Every existing Next Generation Network (NGN) architecture proposal features an all-IP core network, offers QoS to applications, and provides downlink bandwidth on the order of 100 Mbps. Moreover, these NGNs propose Policy and Charging Control (PCC) mechanisms that determine how services are delivered to subscribers and which charging method to apply. There are three main categories of policies: those related to the subscriber (e.g., gold/silver/bronze subscription, prepaid vs. billed access), those that apply to services (e.g., for a given service, bandwidth limitation, QoS class assignment, allocation and retention priority of resources), and those that depend on the current state of the network (e.g., congestion level, traffic engineering). In a first paper, entitled "A Potential Evolution of the Policy and Charging Control/QoS Architecture for the 3GPP IETF-based Evolved Packet Core", FMC and Core Network (CN) sharing are treated together, because the logical PCC architecture must reflect the industry trends described above.
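The three policy categories above can be sketched as inputs to a single admission decision. This is a hypothetical illustration; the field names, thresholds and tier rules are assumptions, not the 3GPP PCC rule format.

```python
# Sketch of the three PCC policy categories combined into one decision:
# subscriber policy, service policy, and network-state policy (all assumed).

subscriber = {"tier": "gold", "prepaid": False}            # subscriber policy
service = {"max_bw": 8.0, "qos_class": "video", "arp": 2}  # service policy
network = {"congestion": 0.7}                              # network-state policy

def decide(requested_bw):
    """Return the bandwidth granted to a request, or 0.0 if rejected."""
    bw = min(requested_bw, service["max_bw"])   # service policy caps bandwidth
    if network["congestion"] > 0.9 and subscriber["tier"] != "gold":
        return 0.0        # under heavy congestion, only top-tier admissions pass
    if subscriber["tier"] == "bronze":
        bw = min(bw, 2.0)  # subscriber policy applies a tier-specific cap
    return bw

print(decide(10.0))  # capped by the service policy
```

The point of the sketch is only that all three categories contribute to one enforcement decision, which is what a PCC function evaluates per bearer.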
Following the description of these trends, the paper presents four requirements for a PCC architecture: service convergence (the ability to use a service from any type of access), CN sharing that allows several Mobile Virtual Network Operators (MVNOs) to coexist, the creation of local access-network policies, and efficient micro-mobility in roaming scenarios. Two NGN architectures were then evaluated against these requirements. The evaluation concluded that a hybrid solution (based on the key features of each architecture but without their respective drawbacks) offers a very promising foundation for a complete solution. The proposed solution achieves its goal through a clearer separation of the business roles (e.g., access and network providers) and the introduction of a Network Policy Function (NPF) for the management of the CN. The business roles that were defined allow the creation of distinct policy/QoS and administrative domains. These roles become mandatory in infrastructure-sharing scenarios; otherwise, they remain compatible with the current vertically integrated operator model, in which one operator plays all of the roles. Introducing the NPF into the CN separates CN policy management from the policy management related to subscribers, services and access networks. Additionally, the NPF allows the CN to be shared by multiple Network Service Providers (NSPs) while respecting the Service Level Agreements (SLAs) that link the IP Aggregation Network (IPAN) to the NSPs, as well as those that tie the IPAN to the Access Network Providers (ANPs). Another benefit of the NPF is that it can share advanced functions among several NSPs, including audio/video transcoding, file caches (e.g., for multimedia content delivery), Deep Packet Inspection (DPI) antivirus, etc.
The main advantage of integrating those infrastructure services at the IP transport level is that both IMS and non-IMS applications can benefit from them. A second paper, entitled "A Network Policy Function Node for a Potential Evolution of the 3GPP Evolved Packet Core", extends the first paper, which described the industry trends and two existing PCC architectures with their characteristics, and offered an overview of the proposed solution. The second paper thoroughly describes the impacts of the proposal on the existing 3GPP PCC architecture. A significant contribution of this second paper is an extensive list of the potential simplifications the proposed solution allows. Its main contribution is that the proposed solution can now be deployed over an existing PCC architecture with minimal impact: a small modification to the NPF's reference points enables this enhancement and makes the solution compatible with both PCC architecture variants, based on either the GPRS Tunneling Protocol (GTP) or Proxy Mobile IPv6 (PMIPv6). A last contribution of the second paper is a demonstration of the NPF's internals when it controls an IPAN based on a tunneling mechanism such as Multi-Protocol Label Switching (MPLS) or Provider Backbone Bridge-Traffic Engineering (PBB-TE). A traffic-engineering process allows traffic flow aggregates to route around a congested node, balances the load between network elements, and ensures that the QoS requirements are respected at all times. The third paper, entitled "A MultiAccess Resource ReSerVation Protocol (MARSVP) for the 3GPP Evolved Packet System", deals with QoS provisioning in FMC scenarios, especially for applications that are not directly supported by the network.
For example, peer-to-peer applications (such as online gaming) that represent a small fraction of the total peer-to-peer traffic, or applications that are new and relatively unknown. Second- and third-generation networks were designed such that the User Equipment (UE) provides the network with the application's QoS parameters. However, the number of possible combinations of QoS parameters was very large and too complex to manage, so for the fourth generation it was decided that application servers in the network would provide the PCC architecture with the right QoS parameters. In addition, a limited number of QoS classes was defined, which greatly simplified QoS management. When FMC aspects are taken into account, it becomes clear that the above mechanism only applies to 3GPP accesses. Each access type uses its own mechanisms, which must often be controlled by the network instead of the user, and some accesses do not feature a control channel on which QoS reservation requests could be carried. Moreover, existing QoS protocols are often heavyweight and defined end-to-end, and are therefore not appropriate for the intended use. Consequently, the proposed solution is a new multi-access resource reservation protocol. MARSVP uses the data channel available on every access and confines the message exchange between the user and the first IP node. QoS needs are expressed in terms of QoS Class Indicators (QCIs), which makes MARSVP simple to use. Once a resource reservation request is accepted, the network configures the access and returns to the terminal the information required to send packets (at layers 2 and 3).
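A MARSVP-style exchange, a QCI-based request over the ordinary data channel answered by the first IP node with layer-2/3 sending information, can be sketched as follows. The message fields, the QCI labels, and the returned parameters are assumptions for illustration, not the paper's message format.

```python
# Hypothetical sketch of a MARSVP-style exchange: the terminal sends a
# QCI-based reservation request over the data channel, and the first IP node
# answers with the layer-2/3 information needed to send packets.

QCI_TABLE = {1: "conversational voice", 2: "conversational video", 9: "best effort"}

def first_ip_node(request):
    """Admission decision at the first IP node (stub for the network logic)."""
    if request.get("qci") not in QCI_TABLE:
        return {"status": "reject", "reason": "unknown QCI"}
    return {
        "status": "accept",
        "l2": {"vlan": 12, "pcp": 5},  # layer-2 sending parameters (assumed)
        "l3": {"dscp": 46},            # layer-3 marking (assumed)
    }

reply = first_ip_node({"qci": 1, "bitrate_kbps": 64})
print(reply["status"])  # the exchange is confined to the UE / first-node hop
```

Because the request rides the data channel and terminates at the first IP node, this works on accesses that lack a dedicated QoS control channel, which is the scenario the abstract describes.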