
    Managing inequality: the political ecology of a small-scale fishery, Mweru-Luapula, Zambia

    This paper starts from the perspective that resource management approaches are based upon a body of environmental knowledge. By analysing fisheries management in Mweru-Luapula, Zambia, we argue that this body of environmental knowledge (i) has remained largely unchanged throughout the recent shift to co-management and (ii) is to a great extent based upon general paradigmatic conventions regarding common property regimes. We therefore studied the historical trajectories of both resource management and the political ecology of Mweru-Luapula's fishing economy. Using a relational perspective, looking at the interaction of the local fishing economy with external developments as well as at socioeconomic relations between individual actors, this study exposes constraints and incentives within the local fishing economy that are not captured by the current co-management regime. These findings challenge both policy goals and community-based resource management itself. We therefore argue that governance of small-scale fisheries, in order to close the gap between locally based understandings, policy and legislation, should always be built upon all the dimensions (social, economic, ecological, political) that define a fisheries system.

    Managing ubiquitous eco cities: the role of urban telecommunication infrastructure networks and convergence technologies

    A successful urban management system for a Ubiquitous Eco City requires an integrated approach. This integration means bringing together economic, socio-cultural and urban development with a well-orchestrated, transparent and open decision-making mechanism and the necessary infrastructure and technologies. Information and telecommunication technologies and their platforms, which developed rapidly in the late 20th Century, improve urban management and enhance the quality of life and place. Telecommunication technologies provide an important base for monitoring and managing activities over wired, wireless or fibre-optic networks. In particular, technology convergence creates new ways in which information and telecommunication technologies are used. The 21st Century is an era of converged information, in which people can access a variety of services, including internet and location-based services, through multi-functional devices such as mobile phones; this convergence provides opportunities for the management of Ubiquitous Eco Cities. This paper discusses recent developments in telecommunication networks and trends in convergence technologies, their implications for the management of Ubiquitous Eco Cities, and how this technological shift is likely to improve the quality of life and place. The paper also introduces recent approaches to urban management systems, such as intelligent urban management systems, that are suitable for Ubiquitous Eco Cities.

    Experimental SDN Control Solutions for Automatic Operations and Management of 5G Services in a Fixed Mobile Converged Packet-Optical Network

    5G networks will require operators to accommodate services with heterogeneous and stringent requirements in terms of increased bandwidth, reduced latency, higher availability, etc., as well as emerging capabilities such as network slicing. Operators will then be forced to make notable investments in their infrastructure, but revenue is not envisaged to grow proportionally, so they are seeking more cost-effective solutions to remain competitive. An appealing solution is to integrate all (broadband) services, both fixed and mobile, in a convergent way. This is referred to as Fixed Mobile Convergence (FMC). FMC allows any kind of access service to be served seamlessly over the same network infrastructure (access, aggregation and core), relying on a common set of control and operation functions. To this end, FMC leverages the benefits provided by Software Defined Networking (SDN) and Network Function Virtualization (NFV). First, we discuss some of the explored FMC solutions and technologies, from both structural and functional perspectives. Next, focusing on a Multi-Layer (Packet and Optical) Aggregation Network, we report two implemented and experimentally validated SDN/NFV orchestration architectures. This work has been partially funded by the Spanish Ministry MINECO projects DESTELLO (TEC2015-69256-R) and 5G-REFINE (TEC2017-88373-R), and the EU H2020 5G TRANSFORMER project (grant no. 761536)
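    The orchestration pattern the abstract describes can be pictured as a northbound request to an SDN/NFV orchestrator that provisions a converged service over the shared packet-optical aggregation network. The sketch below is purely illustrative: the endpoint URL, the intent fields, and the response format are hypothetical stand-ins, not the interfaces implemented in the paper.

```python
import json
import urllib.request

# Hypothetical northbound endpoint of an SDN/NFV orchestrator.
ORCH_URL = "http://orchestrator.example:8080/services"

def request_converged_service(service_type, bandwidth_mbps, max_latency_ms):
    """Submit a service intent; the orchestrator is assumed to compute and
    provision a multi-layer (packet/optical) path for fixed or mobile access."""
    intent = {
        "type": service_type,              # e.g. "fixed-broadband" or "5g-slice"
        "bandwidth_mbps": bandwidth_mbps,
        "max_latency_ms": max_latency_ms,
    }
    req = urllib.request.Request(
        ORCH_URL,
        data=json.dumps(intent).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a server behind ORCH_URL):
# request_converged_service("5g-slice", bandwidth_mbps=100, max_latency_ms=10)
```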

    Thermal Aware Design Automation of the Electronic Control System for Autonomous Vehicles

    Autonomous vehicle (AV) technology, with its tremendous social and economic benefits, will transform the world over the coming decades. However, significant technical challenges remain before AVs can be deployed safely, reliably, and at scale. Temperature plays a key role in the safety and reliability of an AV, not only because a vehicle is subjected to extreme operating temperatures but also because increasing computational demands require more powerful IC chips, which can lead to higher operating temperatures and large thermal gradients. In particular, artificial intelligence (AI), the underpinning technology for AVs, requires substantially increased computation and memory resources, which have grown exponentially in recent years and further exacerbate the thermal problems. High operating temperature and large thermal gradients can reduce performance, degrade reliability, and even cause an IC to fail catastrophically. We believe thermal issues must be coupled closely with the design phase of the AV's electronic control system (ECS). To this end, first, we study how to map vehicle applications to an ECS with a heterogeneous architecture so as to satisfy peak temperature constraints and optimize latency and system-level reliability. We present a mathematical programming model to bound the peak temperature of the ECS. We also develop a genetic-algorithm-based approach to bound the peak temperature under varying execution time scenarios and optimize the system-level reliability of the ECS, and we present several computationally efficient techniques for system-level mean-time-to-failure (MTTF) computation that show several orders of magnitude speed-up over the state-of-the-art method. Second, we study the thermal impacts of AI techniques: specifically, how thermally induced memory bit flips affect the prediction accuracy of a deep neural network (DNN). We develop a neuron-level analytical sensitivity estimation framework to quantify this impact and study its effectiveness with popular DNN architectures. Third, we study how to incorporate thermal impacts when mapping DNN neuron parameters to memory banks to improve prediction accuracy. Based on our sensitivity metric, we develop a bin-packing-based approach to map DNN neuron parameters to memory banks with different temperature profiles, and we study how to identify the optimal temperature profiles for memory systems that minimize the thermal impacts. We show that thermal-aware mapping of DNN neuron parameters onto memory banks significantly improves prediction accuracy in high-temperature ranges compared with thermal-ignorant mapping for state-of-the-art DNNs
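    As a rough illustration of the sensitivity-driven mapping described above, the sketch below greedily assigns the most error-sensitive DNN parameters to the coolest memory banks (first-fit decreasing). The sensitivity scores, the bank model, and the bit-flip probabilities are invented placeholders; the dissertation's actual formulation is a bin-packing optimization over measured temperature profiles.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryBank:
    name: str
    capacity: int          # number of parameters the bank can hold
    flip_prob: float       # assumed bit-flip probability at the bank's temperature
    assigned: list = field(default_factory=list)

def thermal_aware_mapping(sensitivities, banks):
    """Map neuron parameters to memory banks, first-fit decreasing:
    the most error-sensitive parameters go to the coolest (lowest
    flip-probability) banks, subject to capacity."""
    banks = sorted(banks, key=lambda b: b.flip_prob)            # coolest first
    order = sorted(sensitivities, key=sensitivities.get, reverse=True)
    for param in order:                                          # most sensitive first
        for bank in banks:
            if len(bank.assigned) < bank.capacity:
                bank.assigned.append(param)
                break
        else:
            raise ValueError("insufficient total bank capacity")
    return {b.name: b.assigned for b in banks}

# Toy example: three banks at different temperatures, five parameters.
banks = [MemoryBank("hot", 2, 1e-4), MemoryBank("warm", 2, 1e-5),
         MemoryBank("cool", 2, 1e-6)]
sens = {"w1": 0.9, "w2": 0.7, "w3": 0.2, "w4": 0.1, "w5": 0.05}
print(thermal_aware_mapping(sens, banks))
```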

    Edge caching with mobility prediction in virtualized LTE mobile networks

    Mobile Edge Computing enables the deployment of services, applications, content storage and processing in close proximity to mobile end users. This highly distributed computing environment can be used to provide ultra-low latency, precise positional awareness and agile applications, which could significantly improve user experience. Achieving this requires next-generation paradigms such as Information-Centric Networking and Cloud Computing, integrated with upcoming 5th Generation network access. A cohesive end-to-end architecture is proposed that fully exploits Information-Centric Networking together with the Mobile Follow-Me Cloud approach to enhance the migration of content caches located at the edge of cloudified mobile networks. The chosen content-relocation algorithm attains content-availability improvements of up to 500 when a mobile user performs a request, compared against other existing solutions. The evaluation considers a realistic core network, with functional and non-functional measurements, including deployment of the entire system and computation and allocation/migration of resources. The results reveal that the proposed architecture is beneficial not only from the users' perspective but also from the providers' point of view, as providers may be able to optimize their resources and achieve significant bandwidth savings
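    One way to picture the mobility-predicting cache relocation is a first-order Markov model over attachment points that pre-copies a user's content to the most likely next edge cache before the handover happens. The sketch below is a simplified, assumption-laden illustration, not the paper's content-relocation algorithm.

```python
from collections import Counter, defaultdict

class MobilityPredictor:
    """First-order Markov predictor over attachment points (cells/APs);
    a hypothetical stand-in for the paper's mobility-prediction step."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, prev_cell, next_cell):
        self.transitions[prev_cell][next_cell] += 1

    def predict(self, current_cell):
        counts = self.transitions[current_cell]
        return counts.most_common(1)[0][0] if counts else None

def maybe_migrate_cache(predictor, current_cell, edge_caches, content):
    """Pre-copy the user's content to the cache of the predicted next
    cell so it is already local when the handover occurs."""
    nxt = predictor.predict(current_cell)
    if nxt is not None and content not in edge_caches[nxt]:
        edge_caches[nxt].add(content)   # placeholder for a real transfer
        return nxt
    return None

p = MobilityPredictor()
for a, b in [("cellA", "cellB"), ("cellA", "cellB"), ("cellA", "cellC")]:
    p.observe(a, b)
caches = {"cellA": {"video1"}, "cellB": set(), "cellC": set()}
print(maybe_migrate_cache(p, "cellA", caches, "video1"))  # -> cellB
```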

    Enabling Mobile Service Continuity across Orchestrated Edge Networks

    Edge networking has become an important technology for providing low-latency services to end users. However, deploying an edge network does not by itself guarantee continuous service for mobile users: mobility can cause frequent interruptions and network delays as users leave the initially serving edge. In this paper, we propose a solution that provides transparent service continuity for mobile users in large-scale WiFi networks. The contribution of this work is threefold. First, we propose the ARNAB architecture for mobile service continuity. The term ARNAB means rabbit in Arabic and stands for Architecture for Transparent Service Continuity via Double-tier Migration: the first tier migrates user connectivity, while the second tier migrates the user's containerized applications, so that mobile services hop through the WiFi infrastructure the way a rabbit hops. Second, we identify the root causes of prolonged container migration downtime. Finally, we enhance the container migration scheme by improving system response time. Our experimental results show that the downtime of the ARNAB container migration solution is 50% shorter than that of state-of-the-art migration. This work has been partially funded by the H2020 Europe/Taiwan joint action 5G-DIVE (Grant #859881) and partially by the Ministry of Science and Technology, under Grant Number MOST 108-2634-F-009-006, through the Pervasive Artificial Intelligence Research (PAIR) Labs, Taiwan
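    The double-tier idea can be sketched as two cooperating steps: re-anchor connectivity first, then move the containerized application, with a pre-copy phase so that only the final freeze/restore window counts as downtime. Everything below (function names, print-based stubs) is illustrative; the actual ARNAB implementation migrates real WiFi associations and containers.

```python
import time

def migrate_connectivity(user, src_ap, dst_ap):
    """Tier 1: re-anchor the user's WiFi association / flow rules (stub)."""
    print(f"{user}: connectivity {src_ap} -> {dst_ap}")

def migrate_container(app, src_edge, dst_edge, pre_copy=True):
    """Tier 2: move the user's containerized app between edge hosts.
    Pre-copying bulk state while the app keeps running is one way to
    shrink the downtime the paper measures."""
    if pre_copy:
        print(f"{app}: pre-copy rootfs {src_edge} -> {dst_edge} (app still running)")
    t0 = time.monotonic()
    print(f"{app}: freeze, transfer dirty state, restore on {dst_edge}")
    return time.monotonic() - t0   # only the freeze/restore window counts

def handover(user, app, src, dst):
    migrate_connectivity(user, src["ap"], dst["ap"])
    return migrate_container(app, src["edge"], dst["edge"])

print(handover("alice", "webapp",
               {"ap": "ap1", "edge": "edge1"},
               {"ap": "ap2", "edge": "edge2"}))
```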

    Infrastructure sharing and fixed-mobile convergence in next-generation 3GPP networks

    SUMMARY Deployment of fourth-generation cellular technology began with a few pilot projects, notably in Sweden and Norway, in the first half of 2010. These networks initially offer Internet access only and rely on existing second- and third-generation networks to support telephony and text messaging. Only with the advent of the IP Multimedia Subsystem (IMS) will all services be supported by the new, entirely IP-based architecture. Fourth-generation mobile networks promise users transfer rates beyond 100 Mbit/s on the downlink when the user is stationary, together with quality-of-service support offering guarantees on throughput, maximum delay, maximum jitter, and an upper-bounded packet loss rate. These networks will efficiently support applications that use geolocation to improve the user experience. Today's terminals offer a range of radio technologies. In addition to the cellular modem, terminals often support Bluetooth, used among other things to connect hands-free devices and headsets. Moreover, most cell phones include WiFi access, letting the user transfer large volumes of data without congesting the cellular network; however, this access is often restricted to the user's home network or workplace. Finally, a vertical handover is almost always manual and forces the mobile to change IP address, which ultimately disconnects ongoing sessions. In recent years an industry trend known as fixed-mobile convergence (FMC) has emerged, aiming in the longer term to offer Internet access and telephony from a single terminal that can connect to either a local access network or the cellular network. To date, very few operators (e.g., NTT Docomo) offer terminals able to change access points, and the access point must belong to the user or be located at their workplace. In a parallel convergence movement, the various networks used by emergency services (police, fire, ambulance) are progressively being migrated, because of their prohibitive costs, onto a single network offering very high redundancy and reliability. Emergency services have QoS needs similar to those of consumers, except that they require priority access, which may disconnect a non-priority user during congestion. Besides public services seeking to cut operating costs by sharing access to commercial communication networks, the operators of those networks have themselves entered a cost-reduction phase. This results from the high level of maturity the mobile communications industry has now reached: branding or coverage alone no longer suffices to attract new customers, so operators must distinguish themselves through a superior service offering.
Operators have therefore begun outsourcing non-critical operations in order to concentrate on the most profitable aspects of their business. In parallel, they have started sharing an ever larger portion of their physical infrastructure with competitors. Sharing was first limited to base station sites and the masts supporting the antennas; then came shared shelters, to reduce cooling and equipment-hosting costs; operators then began sharing radio equipment, each nevertheless controlling its own frequency bands. Sharing physical infrastructure beyond the first core network node is not currently supported in standardization. Existing proposals for next-generation network architectures have in common an all-IP core network, QoS offered to applications, and performance on the order of 100 Mbit/s. They also propose policy management mechanisms defining how subscribers may use the offered services and how network resource usage is accounted for. Policies fall into three broad categories: those attached to the user (e.g., gold/silver/bronze subscriptions, billed vs. prepaid access), those that depend on the requested service (e.g., for a given service, the maximum bandwidth, the service class, and the allocation and retention priority of resources), and those related to network state (e.g., congestion level, distribution of traffic aggregates, etc.). In a first article, entitled "A Potential Evolution of the Policy and Charging Control/QoS Architecture for the 3GPP IETF-based Evolved Packet Core", FMC and core network sharing are treated jointly, since the PCC architecture must reflect the industry trends described above. After describing those trends, the article presents the requirements that make possible, for a PCC architecture, service convergence (the ability to use a service from any access), core network sharing among several mobile virtual network operators, the creation of policies specific to each access network, and efficient user micro-mobility in roaming scenarios. Two NGN architectures were then evaluated against these requirements. This study determined that a hybrid solution (with the advantages of each but without their respective drawbacks) was a promising direction, and it served as the basis for our proposal. The proposed solution achieves its goal through a better distribution of business roles and the introduction of a central control entity, the Network Policy Function (NPF), within the IP transport network. The defined business roles (access, core network, and service providers) allow distinct policy and administrative domains to be created. These roles become necessary when infrastructure is shared; otherwise they remain compatible with the current vertically integrated operator model, in which a single operator plays all the roles.
Introducing the NPF into the core network separates the management of the policies governing the IP transport network from those concerning users, services, and access networks. It also allows the core network to be shared in a way that respects the service agreements binding it to each virtual operator and to the access network(s). The NPF further makes it possible to add advanced services to the core network and share them among several operators, including audio/video transcoding functions, file caches (e.g., for movie distribution), and antivirus based on deep packet inspection. Introducing these services at the transport level lets both IMS and other applications benefit from them. The second article, entitled "A Network Policy Function Node for a Potential Evolution of the 3GPP Evolved Packet Core", extends the first article, which described in detail the industry trends and the existing policy management architectures and their characteristics, and gave an overview of the solution. In contrast, the second article addresses in much greater detail the impacts of the proposed solution on the existing architecture. A significant contribution of this second article is an exhaustive list of all the potential simplifications the proposed architecture allows. Its major contribution is that the proposed solution can be deployed immediately with minimal impact: a small modification to the architecture proposed in the first article, at the level of the NPF's interfaces, enabled this advance, and it reconciles the two current architecture variants based on the GPRS Tunneling Protocol (GTP) and Proxy Mobile IPv6 (PMIPv6). The last important contribution of the second article is a demonstration of the NPF's internal operation when it controls a transport network based on tunneling mechanisms such as Multi-Protocol Label Switching (MPLS) or Provider Backbone Bridge-Traffic Engineering (PBB-TE). A traffic engineering process allows traffic flows to bypass a congested area, balances the network load better, and ensures that QoS requirements are always met. The third article, entitled "A MultiAccess Resource ReSerVation Protocol (MARSVP) for the 3GPP Evolved Packet System", deals with QoS in FMC scenarios, particularly for applications not supported by the network, for example peer-to-peer applications that represent a tiny portion of the total traffic volume attributed to that application type, or applications that are emerging and still little known. Second- and third-generation networks were designed so that the user supplies the network with the application's QoS parameters; however, the number of combinations of QoS parameters was very high and too complex to manage. For the fourth generation it was therefore decided that application servers in the network would henceforth supply these QoS parameters, and a restricted number of service classes was defined, which greatly simplified QoS management.
When FMC concepts are considered, it becomes clear that the mechanism described above applies only to 3GPP accesses. Each access type defines its own mechanisms, which must often be controlled by the network rather than the user, and some accesses have no control channel over which QoS requests could be carried. Likewise, existing QoS protocols are often heavyweight and defined end to end, so they are not suited to the intended use. The proposed solution is therefore a new multi-access resource reservation protocol. MARSVP uses the data channel found on every access and confines message exchanges between the user and the first IP node. QoS needs are expressed in terms of QoS Class Indicators (QCIs), which makes MARSVP simple to use. Once the network accepts a resource reservation request, it configures the access and returns to the terminal the information required for sending packets (at layers 2 and 3).----------ABSTRACT Fourth-generation cellular network trials began in the first half of 2010, notably in Sweden and Norway. As a first step, these networks offer only Internet access and rely on existing second- and third-generation networks for telephony and text messaging. Only after the deployment of the IP Multimedia Subsystem (IMS) will all services be supported on the new all-IP architecture. Fourth-generation mobile networks should let end users benefit from data throughputs of at least 100 Mbps on the downlink, when the user is stationary, and from Quality of Service (QoS) support that allows guarantees on throughput, maximum delay, maximum jitter and packet loss rate. These networks will efficiently support applications that rely on geolocation to improve the user's Quality of Experience (QoE). Today's terminals can communicate using several radio technologies. In addition to the cellular modem, terminals often support Bluetooth, which is used for connecting hands-free devices and headsets. Moreover, most cell phones feature a Wi-Fi interface that enables users to transfer huge volumes of data without congesting the cellular network. However, Wi-Fi connectivity is often restricted to the user's home network or workplace. Finally, a vertical handover is nearly always done manually and forces the terminal to change its IP address, which ultimately disrupts all active data sessions. A trend has emerged in recent years in the mobile communications industry known as Fixed-Mobile Convergence (FMC), which aims to provide Internet access and telephony on a single device capable of switching between local- and wide-area networks. At this time, very few operators (e.g., NTT Docomo) offer terminals capable of switching to another access automatically, and the access point must belong to the user or be installed at their workplace. At the same time, another kind of convergence has begun in which the dedicated networks for public safety (such as police, fire prevention and ambulances) are being progressively migrated, because of their high operational costs, toward a single highly reliable and redundant network. Indeed, these services exhibit QoS requirements similar to residential customers' except that they need prioritized access, which can terminate a non-priority user's session during congestion.
In addition to the public services that seek to reduce their operational costs by sharing commercial communications networks, the network operators have also entered a cost reduction phase. This situation results from the high degree of maturity the mobile communications industry has reached: for example, branding or coverage alone is no longer a sufficient sales argument to enroll new subscribers, so operators must now distinguish themselves from their competition with a superior service offering. Some operators have already started to outsource their less profitable business activities in order to concentrate on their key functions. As a complement to this trend, operators have begun to share an ever increasing portion of their physical infrastructures with their competitors. As a first step, infrastructure sharing was limited to the base station sites and antenna masts. Later, the shelters were shared to further reduce the cooling and hosting costs of the equipment. Then, operators started to share radio equipment, but each of them operated on different frequency bands. Infrastructure sharing beyond the first core network node is not currently supported in standardization. There is an additional trend in the mobile communications industry: the specialization of operators (i.e., the identification of target customers). As a result, these operators experience disjoint traffic peaks because their customer bases behave differently, which gives them a strong incentive to share infrastructure, since network dimensioning mostly depends on peak demand. Sharing infrastructure thus increases the average traffic load without significantly increasing the peak load, because the peaks occur at different times, allowing operators to boost their return on investment. Every existing Next Generation Network (NGN) architecture proposal features an all-IP core network, offers QoS to applications, and provides downlink bandwidth on the order of 100 Mbps. Moreover, these NGNs propose a number of Policy and Charging Control (PCC) mechanisms that determine how services are delivered to subscribers and what charging method to apply. There are three main categories of policies: those related to the subscriber (e.g., gold/silver/bronze subscription, prepaid vs. billed access), those that apply to services (e.g., for a given service, bandwidth limitation, QoS class assignment, and allocation and retention priority of resources), and those that depend on the current state of the network (e.g., congestion level, traffic engineering, etc.). In a first paper entitled "A Potential Evolution of the Policy and Charging Control/QoS Architecture for the 3GPP IETF-based Evolved Packet Core", FMC and Core Network (CN) sharing aspects are treated simultaneously because the logical PCC architecture must reflect the industry trends described above. Following the description of these trends, the paper presents four requirements for a PCC architecture: service convergence (the capacity to use a service from any type of access), CN sharing that allows several Mobile Virtual Network Operators (MVNOs) to coexist, the creation of local access network policies, and efficient micro-mobility in roaming scenarios. As a second step, two NGN architectures were evaluated against the requirements mentioned above.
This evaluation concluded that a hybrid solution (based on the key features of each architecture but without their respective drawbacks) would offer a very promising foundation for a complete solution. The proposed solution achieves its goal with a clearer separation of the business roles (e.g., access and network providers) and the introduction of a Network Policy Function (NPF) for the management of the CN. Indeed, the defined business roles allow the creation of distinct policy/QoS and administrative domains. These roles become mandatory in infrastructure sharing scenarios; otherwise, they maintain compatibility with the current vertically-integrated operator model, in which one operator plays all of the business roles. Introducing the NPF into the CN enables CN policy management to be separated from policy management related to subscribers, services and access networks. Additionally, the NPF allows the CN to be shared by multiple Network Service Providers (NSPs) while respecting the Service Level Agreements (SLAs) that link the IP Aggregation Network (IPAN) to the NSPs, as well as those that tie the IPAN to the Access Network Providers (ANPs). Another benefit of the NPF is that it can share a number of advanced functions between several NSPs, including audio/video transcoding, file caches (e.g., for multimedia content delivery), Deep Packet Inspection (DPI) antivirus, etc. The main advantage of integrating those infrastructure services at the IP transport level is to allow both IMS and non-IMS applications to benefit from them. A second paper entitled "A Network Policy Function Node for a Potential Evolution of the 3GPP Evolved Packet Core" extends the first paper, which extensively described the industry trends, two existing PCC architectures and their characteristics, and offered an overview of the proposed solution. The second paper, in turn, thoroughly describes the impacts of the proposal on the existing 3GPP PCC architecture. Indeed, a significant contribution of this second paper is an extensive list of the potential simplifications that the proposed solution allows. Its main contribution is that the proposed solution can be deployed over an existing PCC architecture with minimal impact: a small modification to the NPF's reference points enables this enhancement and makes the solution compatible with both PCC architecture variants, based on either the GPRS Tunneling Protocol (GTP) or Proxy Mobile IPv6 (PMIPv6). A last contribution of the second paper is a demonstration of the NPF's internals when it controls an IPAN based on tunneling mechanisms such as Multi-Protocol Label Switching (MPLS) or Provider Backbone Bridge-Traffic Engineering (PBB-TE). A traffic engineering process allows traffic flow aggregates to route around a congested node, better balances the load between network elements, and makes sure that QoS requirements are respected at all times. The third paper entitled "A MultiAccess Resource ReSerVation Protocol (MARSVP) for the 3GPP Evolved Packet System" deals with QoS provisioning in FMC scenarios, especially for applications that are not directly supported by the network, for example peer-to-peer applications (such as online gaming) that represent a small fraction of the total peer-to-peer traffic, or applications that are new and relatively unknown.
Second- and third-generation networks were designed such that the User Equipment (UE) would provide the network with the application's QoS parameters. However, the number of possible combinations of QoS parameters was very large and too complex to manage. As a result, for the fourth generation of networks, an application server provides the PCC architecture with the right QoS parameters. In addition, a limited number of QoS classes were defined, which greatly simplified QoS management. When FMC aspects are taken into account, it becomes evident that the above mechanism only applies to 3GPP accesses. Indeed, each access type uses its own mechanisms, which must often be controlled by the network instead of the user. Moreover, some accesses do not feature a control channel on which QoS reservation requests could be carried, and existing QoS protocols are often heavyweight and defined end to end, making them unsuitable for the intended use. Consequently, the proposed solution is a new multi-access resource reservation protocol: MARSVP uses the data channel available on every access and confines the message exchanges between the UE and the first IP node. QoS needs are expressed in terms of QoS Class Indicators (QCIs), which makes MARSVP simple to use. Once the network accepts a resource reservation request, it configures the access and returns to the terminal the information required for sending packets (at layers 2 and 3).
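    To make the MARSVP exchange concrete, the sketch below models the behavior the abstract describes: the terminal sends a reservation carrying a QCI over the ordinary data channel, and the first IP node either rejects it or admits it and returns the layer-2/layer-3 marking information the terminal should use. The message fields and the admission rule are assumptions for illustration; the thesis defines the actual protocol.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MarsvpRequest:
    flow_id: str
    qci: int               # requested QoS Class Indicator
    bitrate_kbps: int

@dataclass
class MarsvpResponse:
    accepted: bool
    l2_tag: Optional[int]  # e.g. an 802.1p-like layer-2 marking to apply
    dscp: Optional[int]    # layer-3 DSCP value to apply

class FirstIpNode:
    """Hypothetical first-hop IP node terminating MARSVP signalling;
    in the thesis the exchange is confined between the UE and this node."""
    def __init__(self, capacity_kbps):
        self.free = capacity_kbps

    def handle(self, req):
        if req.bitrate_kbps > self.free:
            return MarsvpResponse(False, None, None)
        self.free -= req.bitrate_kbps
        # The node would configure the access here, then tell the UE how
        # to mark its packets at layers 2 and 3 (placeholder values).
        return MarsvpResponse(True, l2_tag=5, dscp=46)

node = FirstIpNode(capacity_kbps=10_000)
print(node.handle(MarsvpRequest("voip-1", qci=1, bitrate_kbps=64)))
```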

    Routing optimization algorithms in integrated fronthaul/backhaul networks supporting multitenancy

    International Mention in the doctoral degree. This thesis aims to help in the definition and design of the fifth generation of telecommunications networks (5G) through mathematical modelling of the different features that characterize them. Overall, the ambition of these models is to optimize the networks, leveraging their newly acquired capabilities to improve the efficiency of future deployments for both users and operators. The thesis was carried out during the period in which 5G networks were being researched and defined, and thus in parallel with, and in the context of, several European H2020 projects. The different parts of the work presented in this document therefore address challenges that appeared during the definition of 5G and within the scope of those projects, considering feedback and problems from the points of view of end users, operators and providers. The first challenge concerns the core network, in particular how to integrate fronthaul and backhaul traffic over the same transport stratum. The proposed solution is an optimization framework for routing and resource placement, developed under delay, capacity and path constraints, that maximizes the degree of Distributed Unit (DU) deployment while minimizing the supporting Central Unit (CU) pools. The framework and the heuristic algorithms developed (to reduce computational complexity) are validated and applied to both small-scale and large-scale (production-level) networks, making them useful to network operators both for network planning and for dynamically adjusting network operations on their (virtualized) infrastructure. Moving closer to the users, the second challenge concerns the placement of services in cloud/edge environments, in particular selecting the best location for each Virtual Network Function (VNF) composing a service in cloud robotics environments, which involve strict delay bounds and reliability constraints. Robots, vehicles and other end devices provide significant capabilities, such as actuators, sensors and local computation, that are essential for some services. On the other hand, these devices are continuously on the move and can lose network connectivity or run out of battery, which further challenges service delivery in this dynamic environment. The analysis and the proposed solution therefore address mobility and battery constraints. The temporal aspects and the conflicting goals of reliability and low latency must also be taken into account when deploying services over a volatile network, where mobile compute nodes act as an extension of the cloud and edge computing infrastructure. The problem is formulated as a cost-minimizing VNF placement optimization, and an efficient heuristic is also proposed. The algorithms are evaluated extensively, from several angles, by simulation on scenarios that reflect reality in detail.
Finally, the last challenge concerns supporting edge-based services, in particular machine learning (ML) in distributed Internet of Things (IoT) scenarios. The traditional approach to distributed ML adapts the learning algorithms to the network, for example by reducing updates to curb overhead. Networks based on an intelligent edge instead make the opposite approach possible: defining the logical network topology around the learning task to be performed, so as to achieve the desired learning outcome. The proposed solution includes a system model that captures these aspects in the context of supervised ML, accounting for both learning nodes (which perform the computations) and information nodes (which provide data). The problem is formulated to select (i) which learning and information nodes should cooperate to complete the learning task and (ii) the number of iterations to perform, so as to minimize the learning cost while meeting the prediction error and execution time targets. The solution also includes a heuristic algorithm, evaluated on a real network topology for both classification and regression tasks, whose solutions come very close to the optimum and outperform the alternatives found in the literature. This thesis aims to help in the definition and design of the 5th generation of telecommunications networks (5G) by modelling the different features that characterize them through several mathematical models. Overall, the aim of these models is to perform a wide optimization of the network elements, leveraging their newly-acquired capabilities in order to improve the efficiency of the future deployments both for the users and the operators. The timeline of this thesis corresponds to the timeline of the research and definition of 5G networks, and thus runs in parallel with, and in the context of, several European H2020 programs. Hence, the different parts of the work presented in this document match and provide a solution to different challenges that appeared during the definition of 5G and within the scope of those projects, considering the feedback and problems from the point of view of all the end users, operators and providers. Thus, the first challenge to be considered focuses on the core network, in particular on how to integrate fronthaul and backhaul traffic over the same transport stratum. The solution proposed is an optimization framework for routing and resource placement that has been developed taking into account delay, capacity and path constraints, maximizing the degree of Distributed Unit (DU) deployment while minimizing the supporting Central Unit (CU) pools. The framework and the developed heuristics (to reduce the computational complexity) are validated and applied to both small- and large-scale (production-level) networks. They can be useful to network operators both for network planning and for adjusting their (virtualized) infrastructure dynamically during network operation. Moving closer to the user side, the second challenge considered focuses on the allocation of services in cloud/edge environments. In particular, the problem tackled consists of selecting the best location for each Virtual Network Function (VNF) that composes a service in cloud robotics environments, which imply strict delay bounds and reliability constraints.
Robots, vehicles and other end-devices provide significant capabilities such as actuators, sensors and local computation which are essential for some services. On the negative side, these devices are continuously on the move and might lose network connection or run out of battery, which further challenges service delivery in this dynamic environment. Thus, the performed analysis and proposed solution tackle the mobility and battery restrictions. We further need to account for the temporal aspects and conflicting goals of reliable, low-latency service deployment over a volatile network, where mobile compute nodes act as an extension of the cloud and edge computing infrastructure. The problem is formulated as a cost-minimizing VNF placement optimization and an efficient heuristic is proposed. The algorithms are extensively evaluated from various aspects by simulation on detailed real-world scenarios. Finally, the last challenge analyzed focuses on supporting edge-based services, in particular Machine Learning (ML) in distributed Internet of Things (IoT) scenarios. The traditional approach to distributed ML is to adapt learning algorithms to the network, e.g., reducing updates to curb overhead. Networks based on an intelligent edge, instead, make it possible to follow the opposite approach, i.e., to define the logical network topology around the learning task to perform, so as to meet the desired learning performance. The proposed solution includes a system model that captures such aspects in the context of supervised ML, accounting for both learning nodes (which perform computations) and information nodes (which provide data). The problem is formulated to select (i) which learning and information nodes should cooperate to complete the learning task, and (ii) the number of iterations to perform, in order to minimize the learning cost while meeting the target prediction error and execution time. The solution also includes a heuristic algorithm that is evaluated on a real-world network topology, considering both classification and regression tasks; it closely matches the optimum and outperforms state-of-the-art alternatives. This work has been supported by IMDEA Networks Institute. Doctoral Programme in Telematic Engineering, Universidad Carlos III de Madrid. Committee: President, Pablo Serrano Yáñez-Mingot; Secretary, Andrés García Saavedra; Member, Luca Valcarenghi.
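    The cost-minimizing VNF placement in the second challenge lends itself to a compact greedy illustration: place each VNF of a chain on the cheapest node that still has CPU capacity and keeps the accumulated delay within budget. This sketch, with made-up node data, only gestures at the thesis's heuristic, which additionally handles mobility, battery and reliability constraints.

```python
def place_chain(vnfs, nodes, delay_budget_ms):
    """Greedy sketch: place each (vnf, cpu_needed) of a chain on the
    cheapest feasible node. nodes maps a node name to a dict with
    "cpu" (free cores), "cost" (price per placement) and "delay_ms"."""
    placement, spent_delay = {}, 0.0
    for vnf, cpu_needed in vnfs:
        candidates = [
            (spec["cost"], name) for name, spec in nodes.items()
            if spec["cpu"] >= cpu_needed
            and spent_delay + spec["delay_ms"] <= delay_budget_ms
        ]
        if not candidates:
            return None                      # no feasible placement
        _, best = min(candidates)            # cheapest feasible node
        nodes[best]["cpu"] -= cpu_needed
        spent_delay += nodes[best]["delay_ms"]
        placement[vnf] = best
    return placement

# Toy topology: a nearby robot, an edge host, and a distant cheap cloud.
nodes = {"edge1": {"cpu": 4, "cost": 3.0, "delay_ms": 1.0},
         "robot1": {"cpu": 2, "cost": 1.0, "delay_ms": 0.5},
         "cloud": {"cpu": 64, "cost": 0.5, "delay_ms": 12.0}}
print(place_chain([("fw", 2), ("slam", 2)], nodes, delay_budget_ms=5.0))
```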

    Jurnal INKOM Vol 9 No 2, 2015
