9 research outputs found

    A reference architecture for cloud-edge meta-operating systems enabling cross-domain, data-intensive, ML-assisted applications: architectural overview and key concepts

    Future data-intensive intelligent applications are required to traverse the cloud-to-edge-to-IoT continuum, where cloud and edge resources coordinate elegantly alongside sensor networks and data. However, current technical solutions can only partially handle the data outburst associated with the IoT proliferation of recent years, mainly because of their hierarchical architectures. In this context, this paper presents a reference architecture of a meta-operating system (RAMOS), targeted to enable a dynamic, distributed and trusted continuum capable of facilitating next-generation smart applications at the edge. RAMOS is domain-agnostic and supports heterogeneous devices in various network environments. Furthermore, the proposed architecture can place data at its origin in a secure and trusted manner. Based on a layered structure, the building blocks of RAMOS are thoroughly described, and the interconnection and coordination between them are fully presented. Five practical scenarios then illustrate how the proposed reference architecture and its characteristics could fit key industrial and societal applications that will require more power at the edge in the future, focusing on the distributed-intelligence and privacy-preservation principles promoted by RAMOS, as well as on the concept of environmental footprint minimization. Finally, the business potential of an open edge ecosystem and the societal impacts of climate net neutrality are also illustrated. For UPC authors: this research was funded by the Spanish Ministry of Science, Innovation and Universities and FEDER, grant number PID2021-124463OB-100. Peer reviewed. Postprint (published version).

    The troubled journey of QoS: From ATM to content networking, edge-computing and distributed internet governance

    Network Quality of Service (QoS) and the associated user Quality of Experience (QoE) have always been the networking “holy grail” and have been pursued through various approaches and networking technologies over the last decades. Despite the substantial effort invested in the area, there has been very little actual deployment of mechanisms to guarantee QoS in the Internet; as a result, the Internet largely operates on a “best effort” basis in terms of QoS. Here, we attempt a historical overview in order to better understand how we got to where we are today and to consider the evolution of QoS/QoE in the future. As we move towards more demanding networking environments where enormous amounts of data are produced at the edge of the network (e.g., from IoT devices), computation will also need to migrate to the edge in order to guarantee QoS. In turn, we argue that distributed computing at the edge of the network will inevitably require infrastructure decentralisation. That said, trust in the infrastructure provider becomes more difficult to guarantee, and new components need to be incorporated into the Internet landscape in order to support emerging applications while also achieving acceptable service quality. We start from the first steps of ATM and related IP-based technologies, consider recent proposals for content-oriented and Information-Centric Networking, mobile edge and fog computing, and finally examine how distributed Internet governance through Distributed Ledger Technology and blockchains can influence QoS in future networks.

    Opportunistic Deployment of Distributed Edge Clouds for Latency-critical Applications

    The growing number of latency-critical applications is posing novel challenges for network operators, cloud/hosting companies, and application providers. Edge Computing is the strongest candidate for providing low-latency responses, but it is not yet clear what edge infrastructures will look like. This paper introduces a new platform for enabling an edge infrastructure based on a disaggregated distributed cloud architecture and an opportunistic model built on bare-metal providers. Results from a multi-server online gaming application deployed on a real geo-distributed edge infrastructure show the feasibility, performance and cost efficiency of the solution.
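    As a minimal, hypothetical illustration of why geo-distributed edge sites matter for latency-critical sessions (this is not the paper's platform; the site names, RTT figures and the pick_site helper are invented for illustration), the sketch below simply routes a client to the lowest-latency site that fits a response-time budget.

```python
# Hypothetical helper (not the paper's platform): pick the bare-metal edge site
# with the lowest measured round-trip time that still fits a latency budget.
# Site names and RTT figures below are invented for illustration.

def pick_site(rtt_ms, budget_ms=50.0):
    """Return the lowest-RTT site within budget_ms, or None if none qualifies."""
    site, rtt = min(rtt_ms.items(), key=lambda kv: kv[1])
    return site if rtt <= budget_ms else None

measurements = {"edge-site-a": 12.4, "edge-site-b": 31.0, "central-cloud": 58.2}
print(pick_site(measurements))   # -> "edge-site-a"
```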

    Resource Provisioning and Allocation in Function-as-a-Service Edge-Clouds

    Edge computing has emerged as a new paradigm that brings cloud applications closer to users for increased performance. Unlike back-end cloud systems, which consolidate their resources in a centralized data center with virtually unlimited capacity, edge-clouds comprise distributed resources at various computation spots, each with very limited capacity. In this paper, we consider Function-as-a-Service (FaaS) edge-clouds where application providers deploy latency-critical functions that process user requests under strict response-time deadlines. In this setting, we investigate the problem of resource provisioning and allocation. After formulating the optimal solution, we propose resource allocation and provisioning algorithms spanning the spectrum from fully centralized to fully decentralized. We evaluate the performance of these algorithms in terms of their ability to utilize CPU resources and meet request deadlines under various system parameters. Our results indicate that practical decentralized strategies, which require no coordination among computation spots, achieve performance close to that of the optimal fully centralized strategy, which incurs coordination overheads.
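    The abstract does not reproduce the paper's algorithms; as a rough, hypothetical sketch of the fully decentralized end of the spectrum, the following admits a function invocation at a computation spot only if its response-time deadline can still be met given the spot's current CPU backlog. The Request and ComputationSpot classes and all parameters are assumptions introduced here for illustration.

```python
# Hypothetical sketch of a fully-decentralized, deadline-aware admission policy
# for a FaaS computation spot. Names and parameters are illustrative only.

from dataclasses import dataclass

@dataclass
class Request:
    func_id: str
    cpu_time: float   # seconds of CPU work the function needs
    deadline: float   # allowed response time in seconds

@dataclass
class ComputationSpot:
    cpu_capacity: int      # number of identical cores
    backlog: float = 0.0   # queued CPU-seconds not yet executed

    def admit(self, req: Request) -> bool:
        """Accept the request only if its deadline can still be met locally."""
        # Approximate completion time: current backlog spread across the cores,
        # plus the request's own execution time.
        est_response = self.backlog / self.cpu_capacity + req.cpu_time
        if est_response <= req.deadline:
            self.backlog += req.cpu_time
            return True
        return False           # reject (e.g., forward to the cloud instead)

    def tick(self, dt: float) -> None:
        """Drain the backlog as the cores execute queued work for dt seconds."""
        self.backlog = max(0.0, self.backlog - dt * self.cpu_capacity)

# Example: a 4-core spot receiving a burst of 50 ms functions with 200 ms deadlines.
spot = ComputationSpot(cpu_capacity=4)
accepted = sum(spot.admit(Request("f", 0.05, 0.2)) for _ in range(30))
print(f"accepted {accepted}/30 requests without violating deadlines")
```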

    Network Function Virtualization Service Delivery In Future Internet

    This dissertation investigates Network Function Virtualization (NFV) service delivery in the future Internet. With the emerging Internet of Everything, 5G communication and multi-access edge computing, a tremendous number of end-user devices are connected to the Internet, enabling various services between those devices and cloud/edge servers. To improve service quality and agility, NFV is applied: the customer's traffic from these services passes through multiple Service Functions (SFs) for processing or analysis. Unlike traditional point-to-point data transmission, a particular set of SFs and customized service requirements must be applied to the customer's traffic flow, so traditional point-to-point transmission methods cannot be used directly, and novel mechanisms are needed to deliver NFV services with customized requirements effectively. This dissertation therefore proposes a series of such mechanisms. First, we study how to deliver traditional NFV service with a provable boundary in unique function networks. Second, considering both forward and backward traffic, we investigate how to deliver the NFV service effectively when the SFs required by the forward and backward traffic are not the same. Third, we investigate how to deliver the NFV service efficiently when the required SFs have specific execution-order constraints. We also provide detailed analysis and discussion of the proposed mechanisms and validate their performance via extensive simulations. The results demonstrate that the proposed mechanisms can deliver NFV services efficiently and effectively under different requirements and networking conditions. Finally, we propose two future research topics for further investigation: the first focuses on parallelism-aware service function chaining and embedding, and the second investigates the survivability of NFV services.
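    The dissertation's own mechanisms are only summarized above. A common textbook baseline for the ordering-constrained case is to route over a layered graph whose state (node, stage) records how many SFs of the chain have already been applied; the sketch below, with an invented route_chain helper and a toy topology, finds a minimum-cost path that visits nodes hosting the required SFs in the prescribed order. It is illustrative only, not the dissertation's algorithm.

```python
# Illustrative baseline: shortest path that traverses required service functions
# in a fixed order, via Dijkstra over states (node, number of SFs applied).

import heapq

def route_chain(graph, hosts, chain, src, dst):
    """graph: {node: {neighbour: link_cost}}; hosts: {node: set of SFs it runs};
    chain: ordered list of required SFs. Returns (cost, node_path) or None."""
    start, goal = (src, 0), (dst, len(chain))
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        cost, state = heapq.heappop(pq)
        if state == goal:
            path, s = [], state
            while s in prev:
                path.append(s[0])
                s = prev[s]
            nodes = [src] + path[::-1]
            # Collapse consecutive duplicates introduced by "apply SF here" steps.
            hops = [nodes[0]] + [n for i, n in enumerate(nodes[1:], 1) if n != nodes[i - 1]]
            return cost, hops
        if cost > dist.get(state, float("inf")):
            continue
        node, stage = state
        # Option 1: apply the next required SF if this node hosts it (zero cost).
        if stage < len(chain) and chain[stage] in hosts.get(node, set()):
            nxt = (node, stage + 1)
            if cost < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = cost, state
                heapq.heappush(pq, (cost, nxt))
        # Option 2: forward the flow to a neighbouring node.
        for nbr, w in graph.get(node, {}).items():
            nxt = (nbr, stage)
            if cost + w < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = cost + w, state
                heapq.heappush(pq, (cost + w, nxt))
    return None

# Toy example: the flow must pass a firewall, then a NAT, on its way from A to D.
g = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
print(route_chain(g, {"B": {"fw"}, "C": {"nat"}}, ["fw", "nat"], "A", "D"))
# -> (3.0, ['A', 'B', 'C', 'D'])
```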

    Cooperative scheduling and load balancing techniques in fog and edge computing

    Fog and Edge Computing are two models that reached maturity in the last decade. Today they are solid concepts, and a large body of literature has developed them further. Supported by the evolution of technologies such as 5G, they can now be considered de facto standards for building low- and ultra-low-latency applications, privacy-oriented solutions, Industry 4.0 and smart-city infrastructures. The common trait of Fog and Edge computing environments is their inherently distributed and heterogeneous nature, in which multiple (Fog or Edge) nodes interact with each other with the essential purpose of pre-processing data gathered by the countless sensors to which they are connected, including by running significant ML models and relying on specialised processors (TPUs). However, nodes are often placed in a geographic domain, such as a smart city, and the dynamics of traffic during the day may cause some nodes to be overwhelmed by requests while others become completely idle. To achieve optimal usage of the system and to guarantee the best possible QoS for all users connected to the Fog or Edge nodes, load balancing and scheduling algorithms must be designed; in particular, a reasonable solution is to enable nodes to cooperate. This capability represents the main objective of this thesis: the design of fully distributed algorithms and solutions that balance the load across all the nodes, while also following, where possible, QoS requirements in terms of latency or imposing constraints on power consumption when the nodes are powered by green energy sources. Unfortunately, when a central orchestrator is missing, a crucial difficulty is that nodes need to know the state of the others in order to make the best possible scheduling decision; however, retrieving that state introduces further latency while serving the request, and the retrieved information is always stale, so decisions always rely on imprecise data. In this thesis the problem is circumvented in two main ways. The first considers randomised algorithms that avoid probing all of the neighbouring nodes in favour of at most two nodes picked at random; this is proven to bring an exponential improvement in performance with respect to probing a single node (see the sketch below). The second approach considers Reinforcement Learning as a technique for inferring the state of the other nodes from the reward received by the agents when requests are forwarded. Moreover, the thesis also focuses on the energy aspect of Edge devices: in particular, it analyses a Green Edge Computing scenario, where devices are powered only by photovoltaic panels, and a mobile-offloading scenario targeting ML image-inference applications. Lastly, a final glance is given to a series of infrastructural studies, which provide the foundations for implementing the proposed algorithms on real devices, in particular Single Board Computers (SBCs). A structural scheme of a testbed of Raspberry Pi boards is presented, together with a fully-fledged framework called "P2PFaaS" which allows the implementation of load balancing and scheduling algorithms based on the Function-as-a-Service (FaaS) paradigm.
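    The randomised strategy mentioned above follows the spirit of the classical "power of two choices" result. The sketch below is not the thesis's algorithm and uses arbitrary node and task counts, but it shows how forwarding each task to the less loaded of two randomly probed nodes, instead of a single random node, sharply reduces the maximum load.

```python
# Illustrative "power of two choices" comparison between fog/edge nodes.
# Not the thesis's algorithm; node and task counts are arbitrary.

import random

def simulate(num_nodes: int, num_tasks: int, probes: int, seed: int = 42) -> int:
    """Assign each task to the least-loaded of `probes` randomly sampled nodes."""
    rng = random.Random(seed)
    load = [0] * num_nodes
    for _ in range(num_tasks):
        candidates = rng.sample(range(num_nodes), probes)
        target = min(candidates, key=lambda n: load[n])
        load[target] += 1
    return max(load)   # worst-case queue length across nodes

nodes, tasks = 100, 10_000
print("max load, 1 random probe :", simulate(nodes, tasks, probes=1))
print("max load, 2 random probes:", simulate(nodes, tasks, probes=2))
# With two probes the maximum load concentrates much closer to the mean
# (tasks / nodes = 100), matching the improvement noted in the abstract.
```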

    Improved planning and resource management in next generation green mobile communication networks

    In upcoming years, mobile communication networks will experience a disruptive reinvention process through the deployment of post-5th Generation (5G) mobile networks. Profound impacts are expected on network planning processes, maintenance and operations, on mobile services, on subscribers, whose data consumption and generation behaviours will change substantially, and on the devices themselves, with a myriad of different equipment communicating over such networks. Post-5G will be characterized by a profound transformation of several aspects: process, technological, economic, social, but also environmental, with energy efficiency and carbon neutrality playing an important role. It will represent a network of networks, where different types of access networks coexist with an increasing diversity of devices, massive cloud computing utilization and subscribers with unprecedented data-consumption behaviours, all at greater throughput and quality of service than in previous generations. The present research work uses the latest 5G New Radio (NR) release as a baseline for developing the research activities, with future post-5G NR networks in focus. Two approaches were followed: i) method re-engineering, to propose new mechanisms and overcome existing or foreseeable limitations, and ii) concept design and innovation, to propose innovative methods or mechanisms that enhance the design, planning, operation, maintenance and optimization of 5G networks. Four main research areas were addressed, focusing on the optimization and enhancement of future 5G NR networks, the usage of edge virtualized functions, subscriber behaviour regarding data generation, and a carbon sequestration model aiming to achieve carbon neutrality. Several contributions have been made and demonstrated, through models or methodologies that, in each of the research areas, provide significant improvements from the planning phase to the operational phase, always focusing on optimizing resource management. All the contributions are backward compatible with 5G NR and can also be applied to what is starting to be foreseen as future mobile networks. From the subscriber's perspective, with the ultimate goal of providing the best possible quality of experience, while also considering the mobile network operator's (MNO) perspective, the different proposed or developed approaches resulted in optimization methods for the numerous problems identified throughout the work; overall, these contributions, individually and as a whole, improve future mobile networks. Therefore, an answer to the main question was provided: how to further optimize a next-generation network - developed with optimization in mind - making it even more efficient while simultaneously becoming neutral with respect to carbon emissions. The developed model for MNOs to achieve carbon neutrality through CO2 sequestration, together with the subscriber behaviour model - topics still little explored today - are two of the main contributions of this thesis and of utmost importance for post-5G networks.

    Conflict-Free Replicated Data Types in Dynamic Environments

    Over the years, mobile devices have become increasingly popular and gained improved computation capabilities, allowing them to perform more complex tasks such as collaborative applications. Given the weak properties of mobile networks, which are highly dynamic environments where users may experience regular involuntary disconnection periods, the question arises of how to maintain data consistency. This issue is most pronounced in collaborative environments where multiple users interact with each other, sharing a replicated state that may diverge due to concurrency conflicts and loss of updates. To maintain consistency, one of today's best solutions is Conflict-Free Replicated Data Types (CRDTs), which ensure low latency and automatic conflict resolution, guaranteeing eventual consistency of the shared data. However, a limitation often found in CRDTs and the systems that employ them is the need to know the replicas to which state changes must be disseminated. This constitutes a problem, since it is impractical to maintain such knowledge in an environment where clients may join and leave at any time and get disconnected because of the unreliability of mobile network communications. In this thesis, we present the study and extension of the CRDT concept to dynamic environments by introducing the P/S-CRDTs model, in which CRDTs are coupled with the publish/subscribe interaction scheme and additional mechanisms to ensure that users are able to cooperate and maintain consistency while accounting for the volatile behaviour of mobile networks. The experimental results show that, in volatile disconnection scenarios, mobile users in collaborative activity maintain consistency among themselves and that, compared with other available CRDT models, the P/S-CRDTs model decouples dissemination from the knowledge of whom updates must be delivered to, while keeping network traffic at appropriate levels.
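    The P/S-CRDTs model itself is not detailed in the abstract. As a generic, hedged sketch of the underlying idea (a state-based CRDT whose state is published to a topic, so replicas never need to address each other directly), the following implements a grow-only counter over a toy in-memory broker; the GCounter and Topic classes are illustrative assumptions, not the thesis's implementation.

```python
# Generic state-based CRDT (G-Counter) disseminated over a toy publish/subscribe
# topic, so replicas never need to know each other's identities. A sketch of the
# general idea only, not the P/S-CRDTs model from the thesis.

from collections import defaultdict

class GCounter:
    """Grow-only counter: per-replica counts merged with element-wise max."""
    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.counts = defaultdict(int)

    def increment(self, n: int = 1):
        self.counts[self.replica_id] += n

    def value(self) -> int:
        return sum(self.counts.values())

    def merge(self, remote_counts: dict):
        # Element-wise max is commutative, associative and idempotent,
        # so replicas converge regardless of delivery order or duplicates.
        for rid, c in remote_counts.items():
            self.counts[rid] = max(self.counts[rid], c)

class Topic:
    """Minimal in-memory broker: every subscriber receives each published state."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, replica: GCounter):
        self.subscribers.append(replica)

    def publish(self, replica: GCounter):
        for sub in self.subscribers:
            if sub is not replica:
                sub.merge(dict(replica.counts))

# Two mobile replicas converge without ever addressing each other directly.
topic = Topic()
a, b = GCounter("a"), GCounter("b")
topic.subscribe(a); topic.subscribe(b)
a.increment(3); topic.publish(a)
b.increment(2); topic.publish(b)
print(a.value(), b.value())   # -> 5 5
```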