54 research outputs found

    NFV orchestration in edge and fog scenarios

    Mención Internacional en el título de doctor

Current network infrastructures handle a diverse range of network services such as video on demand, video conferencing, social networks, educational systems, and photo storage. These services have been embraced by a significant share of the world population and are used on a daily basis. Cloud providers' and network operators' infrastructures accommodate the traffic that these services generate, and their management tasks involve not only traffic steering but also the processing of the network services' traffic. Traditionally, traffic processing has been carried out by applications deployed on servers exclusively dedicated to a specific task, such as packet inspection.
However, in recent years network services have started to be virtualized, which has led to the Network Function Virtualization (NFV) paradigm, in which the network functions of a service run in containers or virtual machines decoupled from the hardware infrastructure. As a result, traffic processing has become more flexible thanks to the loose coupling between software and hardware and the possibility of sharing common network functions, such as firewalls, across multiple network services. NFV eases the automation of network operations, since scaling and migration tasks are typically performed by a set of commands predefined by the virtualization technology, whether containers or virtual machines. However, it is still necessary to decide the traffic steering and processing of every network service. In other words, which servers will perform the traffic processing, and which network links should be traversed so that users' requests reach the final servers, i.e., the network embedding problem. Under the umbrella of NFV, this problem is known as Virtual Network Embedding (VNE), and this thesis uses the term "NFV orchestration algorithms" for the algorithms solving it. The VNE problem is NP-hard, meaning that no algorithm is known to find optimal solutions in polynomial time as the network grows. As a consequence, the research and telecommunications communities rely on heuristics that find solutions more quickly than off-the-shelf optimization solvers. Traditionally, NFV orchestration algorithms have tried to minimize the deployment costs derived from their solutions. For example, they try not to exhaust the network bandwidth and use short paths so as to consume fewer network resources.
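The embedding decision just described can be illustrated with a toy greedy heuristic (a sketch for intuition only, not an algorithm from the thesis): each VNF of a service chain is placed on the feasible server with the most spare CPU, and consecutive VNFs are connected along hop-count shortest paths. The topology, capacities, and demands below are invented for the example.

```python
from collections import deque

def bfs_path(adj, src, dst):
    """Shortest path by hop count; returns a list of nodes or None."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None  # dst unreachable

def greedy_embed(adj, cpu, chain):
    """Place each VNF of `chain` (a list of CPU demands) on the server
    with the most remaining CPU, then connect consecutive placements
    with BFS paths. Returns (placement, paths) or None if rejected."""
    placement, paths = [], []
    for demand in chain:
        candidates = [n for n in cpu if cpu[n] >= demand]
        if not candidates:
            return None  # embedding rejected: no server fits the VNF
        best = max(candidates, key=lambda n: cpu[n])
        cpu[best] -= demand
        if placement:
            path = bfs_path(adj, placement[-1], best)
            if path is None:
                return None
            paths.append(path)
        placement.append(best)
    return placement, paths

# Toy substrate: 4 servers in a ring, and a 2-VNF chain to embed.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
cpu = {0: 4, 1: 2, 2: 8, 3: 1}
result = greedy_embed(adj, dict(cpu), [3, 3])
```

Real VNE heuristics also account for link bandwidth, latency, and rejection costs; this sketch only shows the two coupled decisions (node mapping and link mapping) that make the problem hard.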
Additionally, a recent tendency has led the research community towards algorithms that minimize the energy consumption of the deployed services, either by selecting more energy-efficient devices or by turning off network devices that remain unused. VNE problem constraints were typically summarized as a set of resource and energy constraints, and solutions differed in the objective function they pursued. But that was before the 5th generation of mobile networks (5G) was considered in the VNE problem. With the appearance of 5G, new network services and use cases started to emerge. The standards talked about Ultra-Reliable and Low-Latency Communications (URLLC) with latencies below a few milliseconds and 99.999% reliability, enhanced Mobile Broadband (eMBB) with significant data-rate increases, and even massive Machine-Type Communications (mMTC) among Internet of Things (IoT) devices. Moreover, paradigms such as edge and fog computing blended with 5G technology to introduce the idea of placing computing devices closer to the end users. As a result, the VNE problem had to incorporate the new requirements as constraints, and every solution had to satisfy low latency, high reliability, or larger data rates, as applicable. This thesis studies the VNE problem and proposes heuristics that tackle the constraints related to 5G services in edge and fog scenarios; that is, the proposed solutions decide the assignment of Virtual Network Functions to servers and the traffic steering across 5G infrastructures that include edge and fog devices. To evaluate the performance of the proposed solutions, the thesis first studies the generation of graphs that represent 5G networks. The proposed graph-generation mechanisms serve to represent diverse 5G scenarios.
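As a sketch of what such graph generation might look like (the tiers, attribute names, and parameter ranges here are illustrative assumptions, not those used in the thesis), the following builds a toy topology with one core node, edge servers, and battery-limited fog devices:

```python
import random

def make_5g_graph(n_edge=3, fog_per_edge=2, seed=0):
    """Toy two-tier 5G topology: a core node, edge servers attached to
    it, and battery-limited fog devices attached to each edge server.
    Returns (nodes, edges) where nodes maps name -> attribute dict."""
    rng = random.Random(seed)  # seeded for reproducible graphs
    nodes = {"core": {"type": "core"}}
    edges = []
    for e in range(n_edge):
        en = f"edge{e}"
        nodes[en] = {"type": "edge", "cpu": rng.randint(8, 16)}
        edges.append(("core", en))
        for f in range(fog_per_edge):
            fn = f"fog{e}_{f}"
            nodes[fn] = {"type": "fog",
                         "cpu": rng.randint(1, 4),
                         "battery": rng.uniform(0.2, 1.0)}  # fraction left
            edges.append((en, fn))
    return nodes, edges

nodes, edges = make_5g_graph()
```

A generator matching the thesis would additionally vary link capacities and topology with population density and area type (industrial, highway, urban), which this sketch omits.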
In particular, they cover federation scenarios in which several domains share resources among themselves. The generated graphs also represent edge servers, as well as fog devices with limited battery capacity. Additionally, these graphs take into account the standards' requirements and the expected demand in 5G networks. Moreover, the graphs differ depending on the population density and the area of study, i.e., whether it is an industrial area, a highway, or an urban area. After detailing the generation of graphs representing 5G networks, this thesis proposes several NFV orchestration algorithms to tackle the VNE problem. First, it focuses on federation scenarios in which network services must be assigned not only to a single domain's infrastructure but also to the shared resources of the federation of domains. Two different problems are studied: one is the VNE problem itself over a federated infrastructure, and the other is the delegation of network services, that is, whether a network service should be deployed in the local domain or in the pool of resources of the federation of domains, knowing that the latter charges the local domain for hosting the network service. Second, the thesis proposes OKpi, an NFV orchestration algorithm to meet the quality of service of 5G network slices. Conceptually, network slicing consists of partitioning the network so that each network service is treated differently based on the slice it belongs to. For example, an eHealth network slice will allocate the network resources necessary to achieve the low latencies required by services such as remote surgery. Each network slice is devoted to specific services with very concrete requirements, such as high reliability, location constraints, or latencies of 1 ms. OKpi is an NFV orchestration algorithm that meets the network service requirements across the different slices. It is based on a multi-constrained shortest-path heuristic, and its solutions satisfy latency, reliability, and location constraints.
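The multi-constrained shortest-path idea behind OKpi can be illustrated with a toy search that prunes any path violating an additive latency budget or a multiplicative reliability floor. This is exhaustive enumeration for clarity, not the OKpi heuristic itself, and the nodes, link values, and thresholds are invented.

```python
def feasible_paths(adj, src, dst, max_latency, min_rel):
    """Enumerate simple src->dst paths meeting an additive latency
    budget and a multiplicative reliability floor.
    adj[u] -> list of (neighbor, link_latency, link_reliability)."""
    out = []
    def dfs(u, path, lat, rel):
        if lat > max_latency or rel < min_rel:
            return  # prune: constraints already violated
        if u == dst:
            out.append((path[:], lat, rel))
            return
        for v, l, r in adj[u]:
            if v not in path:  # keep paths simple (no loops)
                path.append(v)
                dfs(v, path, lat + l, rel * r)
                path.pop()
    dfs(src, [src], 0.0, 1.0)
    return sorted(out, key=lambda t: t[1])  # lowest latency first

# Toy slice: a gNB reaching the cloud either directly or via an edge node.
adj = {
    "gNB":   [("edge", 1.0, 0.999), ("cloud", 8.0, 0.9999)],
    "edge":  [("cloud", 6.0, 0.999)],
    "cloud": [],
}
best = feasible_paths(adj, "gNB", "cloud", max_latency=10.0, min_rel=0.998)
```

With the per-slice thresholds above, both routes qualify and the edge route wins on latency; tightening the reliability floor would instead force the direct, more reliable link, which is exactly the kind of trade-off a multi-constrained path algorithm arbitrates.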
After presenting OKpi, the thesis tackles the VNE problem in 5G networks with static and moving fog devices. The presented NFV orchestration algorithm takes into account the limited computing resources of fog devices, as well as the out-of-coverage problems derived from the devices' mobility. To conclude, this thesis studies the scaling of Vehicle-to-Network (V2N) services, which require low latencies for network services such as collision avoidance, hazard warning, and remote driving. For these services, traffic jams and high vehicular congestion can lead to violations of the latency requirements. Hence, it is necessary to anticipate such circumstances by using time-series techniques that forecast the incoming vehicular traffic in the next minutes or hours, so as to scale the V2N service accordingly.

The 5G Exchange (5GEx) project (2015-2018) was an EU-funded project (H2020-ICT-2014-2 grant agreement 671636). The 5G-TRANSFORMER project (2017-2019) is an EU-funded project (H2020-ICT-2016-2 grant agreement 761536). The 5G-CORAL project (2017-2019) is an EU-Taiwan project (H2020-ICT-2016-2 grant agreement 761586).

Programa de Doctorado en Ingeniería Telemática, Universidad Carlos III de Madrid. Thesis committee: President: Ioannis Stavrakakis. Secretary: Pablo Serrano Yáñez-Mingot. Member: Paul Horatiu Patra
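The forecast-then-scale loop for V2N services described in the abstract above can be sketched minimally as follows; the moving average stands in for the thesis's time-series models, and the traffic figures and per-instance capacity are invented.

```python
def forecast_next(history, window=3):
    """Naive moving-average forecast of the next vehicular traffic
    sample (a stand-in for proper time-series models such as ARIMA)."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def instances_needed(vehicles, per_instance=100):
    """Toy scale-out rule: one V2N service instance per
    `per_instance` vehicles, rounded up."""
    return -(-vehicles // per_instance)  # ceiling division

# Vehicles per minute observed at a road segment, rising toward a jam.
history = [180, 220, 260, 320, 410]
pred = forecast_next(history)          # forecast the next sample
scale = instances_needed(int(pred))    # instances to provision now
```

Scaling on the forecast rather than on the current load is the point: instances are provisioned before the congestion peak arrives, so the V2N latency requirement is not violated while new instances boot.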

    Development and Evaluation of a Software System for Fire Risk Prediction

    Master's thesis in Software Development (Programutvikling) in collaboration with HVL. PROG399MAMN-PRO

    Infrastructure sharing and fixed/mobile convergence in next-generation 3GPP networks

    ABSTRACT Fourth-generation cellular network trials began in the first half of 2010, notably in Sweden and Norway. As a first step, these networks only offer Internet access and rely on existing second- and third-generation networks for telephony and text messaging. Only after the deployment of the IP Multimedia Subsystem (IMS) will all services be supported on the new all-IP architecture. Fourth-generation mobile networks should enable end users to benefit from data throughputs of at least 100 Mbps on the downlink, when the user is stationary, and from Quality of Service (QoS) support that allows guarantees on throughput, maximum delay, maximum jitter, and an upper-bounded packet loss rate. These networks will efficiently support applications that rely on geolocation to improve the user's Quality of Experience (QoE). Today's terminals can communicate using several radio technologies. Indeed, in addition to the cellular modem, terminals often support the Bluetooth technology used for connecting hands-free devices and headsets. Moreover, most cell phones feature a Wi-Fi interface that enables users to transfer huge volumes of data without congesting the cellular network. However, Wi-Fi connectivity is often restricted to the user's home network or workplace. Finally, a vertical handover is nearly always done manually and forces the terminal to change its IP address, which ultimately disrupts all active data sessions. A trend known as Fixed-Mobile Convergence (FMC) has emerged in recent years in the mobile communications industry. FMC aims to provide Internet access and telephony on a single device capable of switching between local- and wide-area access networks.
At this time, very few operators (e.g., NTT Docomo) offer terminals capable of switching to another access automatically. However, the access point must belong to the user or be installed in his or her workplace. At the same time, another kind of convergence has begun, in which the dedicated networks for public safety (such as police, fire prevention, and ambulance services) are being progressively migrated (because of their high operational costs) toward a single highly reliable and redundant network. Indeed, these services exhibit QoS requirements similar to residential customers', except that they need prioritized access, which can terminate a non-priority user's session during congestion. In addition to the public services that seek to reduce their operational costs by sharing commercial communications networks, the network operators have also entered a cost-reduction phase. This situation is a result of the high degree of maturity that the mobile communications industry has reached. For example, branding or coverage alone is no longer a sufficient sales argument to enroll new subscribers. Operators must now distinguish themselves from their competition with a superior service offering. Some operators have already started to outsource their less profitable business activities in order to concentrate on their key functions. As a complement to this trend, operators have begun to share an ever-increasing portion of their physical infrastructure with their competitors. As a first step, infrastructure sharing was limited to base station sites and antenna masts. Later, shelters were shared to further reduce the cooling and hosting costs of the equipment. Then, operators started to share radio equipment, each of them operating on its own frequency bands. Infrastructure sharing beyond the first core network node is not currently supported in standardization.
There is an additional trend in the mobile communications industry: the specialization of operators (i.e., the identification of target customers by the operators). As a result, these operators experience disjoint traffic peaks because their customer bases behave differently. They therefore have a strong incentive to share infrastructure, because network dimensioning mostly depends on peak demand. Consequently, sharing infrastructure increases the average traffic load without significantly increasing the peak load, because the peaks occur at different times. This allows operators to boost their return on investment. Every existing Next Generation Network (NGN) architecture proposal features an all-IP core network, offers QoS to applications, and provides downlink bandwidth on the order of 100 Mbps. Moreover, these NGNs propose a number of Policy and Charging Control (PCC) mechanisms that determine how services are delivered to subscribers and which charging method to apply. There are three main categories of policies: those related to the subscriber (e.g., gold/silver/bronze subscription, prepaid vs. billed access), those that apply to services (e.g., for a given service, bandwidth limitation, QoS class assignment, and allocation and retention priority of resources), and finally policies that depend on the current state of the network (e.g., congestion level, traffic engineering, etc.). In a first paper, entitled "A Potential Evolution of the Policy and Charging Control/QoS Architecture for the 3GPP IETF-based Evolved Packet Core", FMC and Core Network (CN) sharing aspects are treated simultaneously because it is important that the logical PCC architecture reflect the realities of the industry trends described above.
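The three policy categories can be sketched as plain data types feeding one combined decision. This is a hypothetical admission rule for illustration only; the type and field names are invented and do not come from the 3GPP PCC specifications.

```python
from dataclasses import dataclass

@dataclass
class SubscriberPolicy:
    tier: str          # e.g. "gold" / "silver" / "bronze"
    prepaid: bool      # prepaid vs. billed access

@dataclass
class ServicePolicy:
    max_bandwidth_mbps: int
    qos_class: int     # QoS class assigned to the service
    arp: int           # allocation and retention priority (lower = higher)

@dataclass
class NetworkPolicy:
    congestion_level: float   # 0.0 (idle) .. 1.0 (saturated)

def admit(sub: SubscriberPolicy, svc: ServicePolicy,
          net: NetworkPolicy) -> bool:
    """Toy admission rule combining the three policy categories:
    under heavy congestion, only high-priority (low-ARP) requests
    are admitted; otherwise everything is admitted."""
    if net.congestion_level > 0.8:
        return svc.arp <= 2
    return True

ok = admit(SubscriberPolicy("gold", False),
           ServicePolicy(50, 1, arp=1),
           NetworkPolicy(congestion_level=0.9))
```

The point of the separation is that each category can be owned and updated by a different business role (subscriber management, service catalogue, network operations) while the PCC function combines them at decision time.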
Following the description of these industry trends, a list of four requirements for a PCC architecture was presented: service convergence (the capacity to use a service from any type of access), CN sharing that allows several Mobile Virtual Network Operators (MVNOs) to coexist, the creation of local access network policies, as well as efficient micro-mobility in roaming scenarios. As a second step, two NGN architectures were evaluated against the requirements mentioned above. This evaluation concluded that a hybrid solution (based on the key features of each architecture but without their respective drawbacks) would offer a very promising foundation for a complete solution. The proposed solution achieved its goal with a clearer separation of the business roles (e.g., access and network providers) and the introduction of a Network Policy Function (NPF) for the management of the CN. Indeed, the business roles that were defined allow the creation of distinct policy/QoS and administrative domains. The roles become mandatory in infrastructure sharing scenarios. Otherwise, they maintain compatibility with the current vertically-integrated operator model; the latter then plays all of the business roles. Introducing the NPF into the CN enables the CN policy management to be separated from policy management related to subscribers, services and access networks. Additionally, the NPF allows the CN to be shared by multiple Network Service Providers (NSPs) while respecting the Service Level Agreements (SLAs) that link the IP Aggregation Network (IPAN) to the NSPs, as well as those that tie the IPAN to the Access Network Providers (ANPs). Another benefit of the NPF is that it can share a number of advanced functions between several NSPs. Those functions include audio/video transcoding, file caches (e.g., that can be used for multimedia content delivery), Deep Packet Inspection (DPI), antivirus, etc.
The main advantage of integrating those infrastructure services at the IP transport level is to allow both IMS and non-IMS applications to benefit from them. A second paper entitled “A Network Policy Function Node for a Potential Evolution of the 3GPP Evolved Packet Core” constitutes an extension of the first paper, which extensively described the industry trends, two existing PCC architectures and their characteristics, and finally offered an overview of the proposed solution. The second paper, on the other hand, thoroughly describes all of the impacts that the proposal has on the existing 3GPP PCC architecture. Indeed, a significant contribution of this second paper is that it provides an extensive list of potential simplifications that the proposed solution allows. The main contribution of the second paper is that the proposed solution can now be deployed over an existing PCC architecture with minimal impact. Indeed, a small modification to the NPF's reference points enables this enhancement. As a consequence, this enhancement provides a solution that is compatible with both PCC architecture variants, based on either GPRS Tunneling Protocol (GTP) or Proxy Mobile IPv6 (PMIPv6). A last contribution of the second paper is to demonstrate the NPF's internals when it is controlling an IPAN based on tunneling mechanisms such as Multi-Protocol Label Switching (MPLS) or Provider Backbone Bridge-Traffic Engineering (PBB-TE). A traffic engineering process allows traffic flow aggregates to be routed around a congested node, to better balance the load between the network elements and to make sure that the QoS requirements are respected at all times. The third paper entitled “A MultiAccess Resource ReSerVation Protocol (MARSVP) for the 3GPP Evolved Packet System” deals with QoS provisioning in FMC scenarios, especially for applications that are not directly supported by the network.
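The traffic engineering step described above, steering a flow aggregate around a congested node, can be sketched as a path computation that excludes that node. The topology, node names and link weights below are hypothetical; this is an illustrative model, not the thesis's actual mechanism.

```python
import heapq

# Invented example topology: adjacency map with link costs.
GRAPH = {
    "A": {"B": 1, "C": 4},
    "B": {"D": 1},
    "C": {"D": 1},
    "D": {"E": 1},
}

def shortest_path(graph, src, dst, excluded=frozenset()):
    """Dijkstra's algorithm that skips nodes in `excluded`
    (e.g., routers reported as congested)."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen or node in excluded:
            continue
        seen.add(node)
        for nxt, weight in graph.get(node, {}).items():
            if nxt not in seen and nxt not in excluded:
                heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return None  # destination unreachable under the exclusion

print(shortest_path(GRAPH, "A", "E"))                  # (3, ['A', 'B', 'D', 'E'])
print(shortest_path(GRAPH, "A", "E", excluded={"B"}))  # (6, ['A', 'C', 'D', 'E'])
```

Rerouting around the congested node "B" yields a more expensive but still valid path, which is the essence of balancing load while keeping QoS commitments.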
Examples include peer-to-peer applications (such as online gaming) that represent a small fraction of the total peer-to-peer traffic, or applications that are new and relatively unknown. Second and third generation networks were designed such that the User Equipment (UE) would provide the network with the application's QoS parameters. However, the number of possible combinations of QoS parameters was very large and too complex to manage. As a result, for the fourth generation of networks, an application server would provide the PCC architecture with the right QoS parameters. In addition, a limited number of QoS classes were defined, which in the end greatly simplified QoS management. When FMC aspects are taken into account, it becomes evident that the above mechanism only applies to 3GPP accesses. Indeed, each access type uses its own mechanisms that must often be controlled by the network instead of the user. Moreover, some accesses don't feature a control channel on which QoS reservation requests would be carried. Also, existing QoS protocols are often too heavy to support and apply
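The simplification described above, replacing arbitrary QoS parameter combinations with a small set of predefined classes, can be sketched as a lookup. The class names and parameter values below are invented for illustration, loosely inspired by 3GPP QoS class concepts:

```python
# Hypothetical predefined QoS classes: a handful of classes instead of
# free-form per-application parameter combinations.
QOS_CLASSES = {
    "conversational": {"delay_budget_ms": 100,  "guaranteed_bitrate": True},
    "streaming":      {"delay_budget_ms": 300,  "guaranteed_bitrate": True},
    "interactive":    {"delay_budget_ms": 300,  "guaranteed_bitrate": False},
    "background":     {"delay_budget_ms": 1000, "guaranteed_bitrate": False},
}

def classify(app_type):
    """Map an application type onto one of the predefined classes;
    unknown applications fall back to best-effort 'background'."""
    mapping = {"voip": "conversational", "video": "streaming",
               "web": "interactive", "email": "background"}
    return QOS_CLASSES[mapping.get(app_type, "background")]

print(classify("voip")["delay_budget_ms"])  # 100
```

Management now amounts to assigning each application to one of a few classes, rather than validating every combination of raw QoS parameters.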

    Redes de nova geração e o serviço universal de telecomunicações em Portugal

    Doutoramento em Engenharia Eletrónica. This thesis addresses the issue of Universal Service for telecommunications in the context of next generation access networks. This work aims to contribute to the redefinition of the concept of universal telecommunications service, focusing primarily on extending it to broadband services as an economic and social development factor, and taking into account the degree of dependence that modern societies currently have on the different communication and information services. Complementarily, it is also intended to meet some of the challenges set out in the European 2020 agenda. Universal Service is defined here as access to a telecommunications network (with obligations in terms of type and quality of service for the operator) by all citizens, at any of the country's geographical locations, at uniform and affordable prices. The approach adopted is that of the State as a mentor for social equity, respectful of the liberalized market dynamics but also knowledgeable of the requirements of modern telecommunications services and their relationship with the different technologies available. The possibility of subsidization is assumed. The provision of Universal Service is subject to a tender open to all operators, which are assumed to possess other profitable businesses besides the Universal Service, using technologies similar to those prescribed in the respective Universal Service provision tender. Although the work has components of economic and financial analysis, the approach is that of engineering, seeking to help identify technical and organizational solutions that offer prospects for the dissemination and adoption of next generation network solutions. As a point of departure, the work gives an overview of the state of the art in access networks, trying to identify the differences between this reality and possible scenarios for a next generation network potentially accessible to the general population.
The case of the Portuguese reality is given special attention, taking into account its specific characteristics in terms of geography, demography, economics and market dynamics. The main results of this work are:
• Identification of possible scenarios for the evolution of existing networks, in particular in areas with deficient coverage.
• Identification of possible operating and business models for the materialization of the above scenarios, and their economic analysis in an attempt to determine the critical factors associated with their sustainability and/or need for subsidies.
• Contribution to the regulatory framework of new generation networks from the point of view of the constraints of the technologies and the specifics of the Universal Service.

    Design of Multi-Gigabit Network Interconnect Elements and Protocols for a Data Acquisition System in Radiation Environments

    Modern High Energy Physics (HEP) experiments explore the fundamental nature of matter in more depth than ever before and thereby benefit greatly from the advances in the field of communication technology. The huge data volumes generated by the increasingly precise detector setups pose severe problems for the Data Acquisition Systems (DAQ), which are used to process and store this information. In addition, detector setups and their read-out electronics need to be synchronized precisely to allow a later, accurately timed correlation of experiment events. Moreover, the substantial presence of charged particles from accelerator-generated beams results in strong ionizing radiation levels, which have a severe impact on the electronic systems. This thesis recommends an architecture for unified network protocol IP cores with custom developed physical interfaces for the use of reliable data acquisition systems in strong radiation environments. Specially configured serial bidirectional point-to-point interconnects are proposed to realize high speed data transmission, slow control access, synchronization and global clock distribution on unified links, in order to reduce costs and to obtain compact and efficient read-out setups. Special features are the radiation hardened functional units developed against single and multiple bit upsets, and the common interface for statistical error and diagnosis information, which integrates well into the protocol capabilities and eases error handling in large experiment setups. Many innovative designs for several custom FPGA and ASIC platforms have been implemented and are described in detail. Special focus is placed on the physical layers and network interface elements, from high-speed serial LVDS interconnects up to 20 Gb/s SSTL links in state-of-the-art process technology. The developed IP cores are fully tested both by an adapted verification environment for electronic design automation tools and in live application.
They are available in a global repository, allowing broad usage within further HEP experiments.
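One common protection technique against the single bit upsets mentioned above is triple modular redundancy (TMR) with majority voting. The following is an illustrative software model of the idea, not the thesis's actual FPGA/ASIC implementation:

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority vote over three redundant register copies:
    each output bit is 1 iff at least two of the three copies agree on 1."""
    return (a & b) | (a & c) | (b & c)

original = 0b10110101
copy_a = original
copy_b = original ^ 0b00000100   # single-event upset flips one bit
copy_c = original

# The voter restores the original value despite the upset in copy_b.
print(bin(tmr_vote(copy_a, copy_b, copy_c)))  # 0b10110101
```

The scheme corrects any single upset per bit position; simultaneous upsets of the same bit in two copies defeat it, which is why the thesis also addresses multiple bit upsets with further hardening measures.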