
    Cooperative scheduling and load balancing techniques in fog and edge computing

    Fog and Edge Computing are two models that have reached maturity in the last decade. Today they are well-established concepts, and a large body of literature has contributed to their development. Supported by the development of enabling technologies such as 5G, they can now be considered de facto standards for building low- and ultra-low-latency applications, privacy-oriented solutions, Industry 4.0, and smart city infrastructures. The common trait of Fog and Edge computing environments is their inherently distributed and heterogeneous nature, in which the multiple (Fog or Edge) nodes interact with each other with the essential purpose of pre-processing the data gathered by the countless sensors to which they are connected, even by running substantial ML models on dedicated processors (e.g. TPUs). However, nodes are often deployed across a geographic domain, such as a smart city, and the traffic dynamics during the day may cause some nodes to be overwhelmed by requests while others become completely idle. To achieve optimal usage of the system and to guarantee the best possible QoS for all the users connected to the Fog or Edge nodes, load balancing and scheduling algorithms are needed; in particular, a reasonable solution is to enable the nodes to cooperate. This capability is the main objective of this thesis: the design of fully distributed algorithms and solutions that balance the load across all the nodes, also following, where possible, QoS requirements in terms of latency, or imposing constraints on power consumption when the nodes are powered by green energy sources. Unfortunately, when a central orchestrator is missing, a crucial element that makes the design of such algorithms difficult is that nodes need to know the state of the others in order to make the best possible scheduling decision. However, that state cannot be retrieved without introducing further latency into the service of the request, and the retrieved state information is always stale, so decisions always rely on imprecise data. In this thesis, the problem is circumvented in two main ways. The first considers randomised algorithms that avoid probing all of the neighbour nodes in favour of at most two nodes picked at random; this is proven to bring an exponential improvement in performance with respect to probing a single node. The second considers Reinforcement Learning as a technique for inferring the state of the other nodes from the reward received by the agents when requests are forwarded. Moreover, the thesis also focuses on the energy aspects of Edge devices: it analyses a Green Edge Computing scenario, where devices are powered only by photovoltaic panels, and a mobile offloading scenario targeting ML image inference applications. Lastly, a series of infrastructural studies lays the foundations for implementing the proposed algorithms on real devices, in particular Single Board Computers (SBCs): a structural scheme of a testbed of Raspberry Pi boards is presented, together with a fully-fledged framework called "P2PFaaS" that allows the implementation of load balancing and scheduling algorithms based on the Function-as-a-Service (FaaS) paradigm.
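    The "at most two nodes picked at random" strategy mentioned in the abstract is essentially the classic power-of-two-choices scheme. The sketch below is a minimal Python illustration of that idea only; the Node class, probe_load, and schedule names are illustrative assumptions and do not reflect the thesis's actual P2PFaaS API.

```python
import random

class Node:
    """Hypothetical edge/fog node with a local request queue."""
    def __init__(self, name, queue_len=0):
        self.name = name
        self.queue_len = queue_len

    def probe_load(self):
        # In a real deployment this would be a network round-trip,
        # so the value returned is already slightly stale.
        return self.queue_len


def schedule(request, neighbours, local):
    """Sample two neighbours at random, probe only those, and send the
    request to the least loaded of {local, a, b}."""
    a, b = random.sample(neighbours, 2)
    target = min((local, a, b), key=lambda n: n.probe_load())
    target.queue_len += 1
    return target.name


if __name__ == "__main__":
    nodes = [Node(f"edge-{i}", random.randint(0, 10)) for i in range(8)]
    print(schedule("req-1", neighbours=nodes[1:], local=nodes[0]))
```

    Probing only two random neighbours keeps the per-request overhead constant while, as the abstract notes, still yielding an exponential improvement over probing a single node.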

    Enabling 5G Edge Native Applications


    Edge/Fog Computing Technologies for IoT Infrastructure

    The prevalence of smart devices and cloud computing has led to an explosion in the amount of data generated by IoT devices. Moreover, emerging IoT applications, such as augmented and virtual reality (AR/VR), intelligent transportation systems, and smart factories, require ultra-low latency for data communication and processing. Fog/edge computing is a new computing paradigm in which fully distributed fog/edge nodes located near end devices provide computing resources. By analyzing, filtering, and processing data at local fog/edge resources instead of transferring tremendous amounts of data to centralized cloud servers, fog/edge computing can reduce processing delay and network traffic significantly. With these advantages, fog/edge computing is expected to be one of the key enabling technologies for building the IoT infrastructure. Aiming to explore recent research and development on fog/edge computing technologies for building an IoT infrastructure, this book collects 10 articles. The selected articles cover diverse topics such as resource management, service provisioning, task offloading and scheduling, container orchestration, and security on edge/fog computing infrastructure, which can help readers grasp recent trends as well as state-of-the-art algorithms in fog/edge computing technologies.

    Edge Computing for Internet of Things

    The Internet of Things is becoming an established technology, with devices being deployed in homes, workplaces, and public areas at an increasingly rapid rate. IoT devices are the core technology of smart homes, smart cities, and intelligent transport systems, and they promise to optimise travel, reduce energy usage, and improve quality of life. With the prevalence of IoT, the problem of how to manage the vast volumes, wide variety, and erratic generation patterns of the data produced is becoming increasingly clear and challenging. This Special Issue focuses on solving this problem through the use of edge computing. Edge computing offers a solution to managing IoT data by processing it close to the location where it is generated. Edge computing allows computation to be performed locally, thus reducing the volume of data that needs to be transmitted to remote data centres and Cloud storage. It also allows decisions to be made locally without having to wait for Cloud servers to respond.
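    As a rough illustration of the local-processing idea described above (not drawn from any article in the Special Issue), the following Python sketch aggregates raw sensor readings at the edge and forwards only a compact summary upstream; the function name and threshold are hypothetical.

```python
def aggregate_at_edge(readings, threshold=30.0):
    """Summarise a batch of sensor readings locally, keeping only
    out-of-range values as raw data to forward to the cloud."""
    anomalies = [r for r in readings if r > threshold]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings) if readings else None,
        "anomalies": anomalies,  # the only raw values sent upstream
    }

summary = aggregate_at_edge([21.5, 22.0, 35.2, 22.3])
# `summary` (a few bytes) is transmitted instead of the full stream,
# and threshold-based decisions can be taken without waiting for the cloud.
```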

    Robust resource management for time-critical tasks in the cloud-edge continuum

    As an emerging distributed computing paradigm, the Cloud-edge continuum (CEC) leverages the strengths of both cloud computing and edge computing to provide efficient and effective services to end-users. CEC enables faster processing of data and provides multiple benefits, including scalability, data security, and improved quality of service. With the increasing demand for real-time data processing, the proliferation of Internet of Things (IoT) devices, and the growing need for data privacy and security, CEC has been developing, evolving, and adapting quickly. Cloud computing provides scalable and flexible computing infrastructure, while edge computing offers low latency and location-awareness capabilities. How to schedule tasks in a CEC among its rapidly growing pool of resources is a challenge for both service providers and users. QoS (quality of service) and QoE (quality of experience) are metrics that describe this process and are often adopted as the optimization objective. Among all kinds of resource management optimization approaches, learning-based task scheduling and offloading have gained popularity in recent years, as researchers have turned to machine learning techniques to develop more intelligent and adaptive resource management algorithms. However, machine learning-based methods in CEC also face several challenges: 1. the performance of learning-based resource management is difficult to maintain when the pattern of time-critical tasks is dynamically changing; 2. learning-based resource management strategies are difficult to adapt when continuum resources are highly heterogeneous; 3. learning-based resource management suffers from low robustness when optimizing multiple objectives. This thesis tackles these challenges by proposing a Meta-Learning-based resource management framework to deal with time-critical requests spanning from independent tasks to complex workflows in a dynamic cloud-edge continuum. The goal is to improve the robustness and adaptivity of the resource management framework in highly changing environments.
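    To make "learning-based task scheduling and offloading" more concrete, here is a minimal epsilon-greedy sketch in Python that learns, per task class, whether running locally, at the edge, or in the cloud yields the lowest observed latency. This is only a generic illustration of the idea, not the Meta-Learning framework proposed in the thesis; all names and parameters are hypothetical.

```python
import random

ACTIONS = ["local", "edge", "cloud"]

class OffloadAgent:
    """Epsilon-greedy offloading policy over observed (negative) latency."""
    def __init__(self, epsilon=0.1, lr=0.2):
        self.q = {}           # (task_class, action) -> estimated reward
        self.epsilon = epsilon
        self.lr = lr

    def choose(self, task_class):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)          # explore
        return max(ACTIONS, key=lambda a: self.q.get((task_class, a), 0.0))

    def update(self, task_class, action, reward):
        key = (task_class, action)
        old = self.q.get(key, 0.0)
        self.q[key] = old + self.lr * (reward - old)

agent = OffloadAgent()
action = agent.choose("image-inference")
latency_ms = 42.0                                  # measured after execution
agent.update("image-inference", action, reward=-latency_ms)
```

    A meta-learning variant would additionally adapt the learned policy quickly when the task pattern or the available continuum resources change, which is the robustness problem the thesis targets.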

    System Abstractions for Scalable Application Development at the Edge

    Recent years have witnessed an explosive growth of Internet of Things (IoT) devices, which collect or generate huge amounts of data. Given diverse device capabilities and application requirements, data processing takes place across a range of settings, from on-device to a nearby edge server/cloud and the remote cloud. Consequently, edge-cloud coordination has been studied extensively from the perspectives of job placement, scheduling, and joint optimization. Typical approaches focus on performance optimization for individual applications. This often requires domain knowledge of the applications and leads to application-specific solutions, so application development and deployment over diverse scenarios incur repetitive manual effort. There are two overarching challenges in providing system-level support for application development at the edge. First, there is inherent heterogeneity at the device hardware level: execution settings may range from a small cluster serving as an edge cloud to on-device inference on embedded devices, differing in hardware capability and programming environments. Moreover, application performance requirements vary significantly, making it even more difficult to map different applications onto already heterogeneous hardware. Second, there are trends towards incorporating edge and cloud and multi-modal data. Together, these add further dimensions to the design space and increase the complexity significantly. In this thesis, we propose a novel framework to simplify application development and deployment over a continuum of edge to cloud. Our framework provides key connections between different dimensions of design considerations, corresponding to the application abstraction, data abstraction, and resource management abstraction, respectively. First, our framework masks hardware heterogeneity with abstract resource types through containerization and abstracts the application processing pipelines into generic flow graphs. It further supports a notion of degradable computing for application scenarios at the edge that are driven by multimodal sensory input. Next, as video analytics is the killer app of edge computing, we include a generic data management service between video query systems and a video store to organize video data at the edge; we propose a video data unit abstraction based on a notion of distance between objects in the video, quantifying the semantic similarity among video data. Last, considering concurrent application execution, our framework supports multi-application offloading with device-centric control, using a userspace scheduler service that wraps over the operating system scheduler.
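    To give the flow-graph and abstract-resource-type ideas a concrete shape, the Python sketch below shows one possible form such an abstraction could take: processing stages annotated with an abstract resource type and a degradability flag, connected into a graph. All class and field names are hypothetical and do not reflect the thesis's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    resource_type: str        # abstract type, e.g. "cpu-small", "gpu", "edge-tpu"
    degradable: bool = False  # stage may be skipped/approximated under load

@dataclass
class FlowGraph:
    stages: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_stage(self, stage):
        self.stages[stage.name] = stage

    def connect(self, src, dst):
        self.edges.append((src, dst))

# A multimodal pipeline: decode video, detect objects, fuse with other sensors.
g = FlowGraph()
g.add_stage(Stage("decode", "cpu-small"))
g.add_stage(Stage("detect", "gpu", degradable=True))
g.add_stage(Stage("fuse", "cpu-small"))
g.connect("decode", "detect")
g.connect("detect", "fuse")
```

    A placement layer could then map each abstract resource type onto whatever containerized hardware is available, degrading marked stages when the edge is overloaded.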