Edge Offloading in Smart Grid
The energy transition supports the shift towards more sustainable energy
alternatives, paving the way toward decentralized smart grids, where energy is
generated closer to the point of use. Decentralized smart grids foresee novel
data-driven, low-latency applications for improving resilience and
responsiveness, such as peer-to-peer energy trading, microgrid control, fault
detection, or demand response. However, traditional cloud-based smart grid
architectures are unable to meet the requirements of these emerging
applications, such as low latency and high reliability, so alternative
architectures such as edge, fog, or hybrid need to be adopted. Moreover, edge
offloading can play a pivotal role for next-generation smart grid AI
applications because it enables the efficient utilization of computing
resources and addresses the challenge of the increasing data generated by IoT
devices, optimizing response time, energy consumption, and network
performance. However, a comprehensive overview of the current state of research
is needed to support sound decisions regarding offloading energy-related
applications from cloud to fog or edge, focusing on smart grid open challenges
and potential impacts. In this paper, we delve into smart grid and
computational distribution architectures, including edge-fog-cloud models,
orchestration architecture, and serverless computing, and analyze the
decision-making variables and optimization algorithms used to assess the
efficiency of edge offloading. Finally, the work contributes to a comprehensive
understanding of edge offloading in the smart grid, providing a SWOT analysis
to support decision making.
Comment: to be submitted to journal
Multi-Layer Latency Aware Workload Assignment of E-Transport IoT Applications in Mobile Sensors Cloudlet Cloud Networks
These days, with emerging developments in wireless communication technologies, such as 5G and 6G, and Internet of Things (IoT) sensors, the usage of E-Transport applications has been increasing progressively. These applications, such as E-Bus, E-Taxi, self-driving cars, E-Train, and E-Ambulance, are latency-sensitive workloads executed in distributed cloud networks. However, many delays are present in cloudlet-based cloud networks, such as communication delay, round-trip delay, and migration delay during workload execution. Moreover, the distributed execution of workloads at different computing nodes during assignment is a challenging task. This paper proposes a novel Multi-Layer Latency (e.g., communication delay, round-trip delay, and migration delay) Aware Workload Assignment Strategy (MLAWAS) to allocate the workload of E-Transport applications to optimal computing nodes. MLAWAS consists of different components, such as Q-learning-aware assignment and an iterative method, which distribute workload in a dynamic environment where runtime changes of overloading and overheating remain controlled. Workload migration and VM migration are also part of MLAWAS. The goal is to minimize the average response time of applications. Simulation results demonstrate that MLAWAS attains the minimum average response time compared with two other existing strategies.
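The Q-learning-aware assignment described in this abstract can be sketched in miniature as follows. This is an illustrative toy, not the paper's MLAWAS implementation: the load levels, delay numbers, and hyperparameters are all assumptions, and the states/rewards are deliberately simplified to show how a learned policy picks a computing node that minimizes response time.

```python
import random

random.seed(0)  # deterministic run for this sketch

NODES = ["mobile", "cloudlet", "cloud"]   # candidate computing nodes
LOAD_LEVELS = ["low", "medium", "high"]   # coarsely discretized system load
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2     # assumed Q-learning hyperparameters

# Q-table over (load state, placement action), initialized to zero.
Q = {(s, a): 0.0 for s in LOAD_LEVELS for a in NODES}

def response_time(load, node):
    """Hypothetical delay model (ms): communication delay plus
    processing delay scaled by current load."""
    comm = {"mobile": 0, "cloudlet": 10, "cloud": 60}[node]
    proc = {"mobile": 120, "cloudlet": 40, "cloud": 15}[node]
    factor = {"low": 1.0, "medium": 1.5, "high": 2.5}[load]
    return comm + proc * factor

def choose(load):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(NODES)                    # explore
    return max(NODES, key=lambda a: Q[(load, a)])      # exploit

for episode in range(5000):
    load = random.choice(LOAD_LEVELS)
    node = choose(load)
    reward = -response_time(load, node)   # minimizing delay = maximizing reward
    next_load = random.choice(LOAD_LEVELS)
    best_next = max(Q[(next_load, a)] for a in NODES)
    Q[(load, node)] += ALPHA * (reward + GAMMA * best_next - Q[(load, node)])

# Learned placement per load level: under this toy delay model the
# nearby cloudlet wins at low/medium load, the faster cloud at high load.
best = {s: max(NODES, key=lambda a: Q[(s, a)]) for s in LOAD_LEVELS}
print(best)
```

The key design point the abstract hints at is that the reward folds the multi-layer delays into a single scalar, so the same learning loop adapts as runtime conditions (here, the load level) change.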
An SOA-Based Framework of Computational Offloading for Mobile Cloud Computing
Mobile computing is a technology that allows transmission of audio, video, and other types of data via a computer or any other wireless-enabled device without having to be connected to a fixed physical link. Despite the increasing usage of mobile computing, exploiting its full potential is difficult due to its inherent problems such as resource scarcity, connection instability, and limited computational power. In particular, connecting mobile devices to the Internet offers the possibility of offloading computation- and data-intensive tasks from mobile devices to remote cloud servers for efficient execution. This thesis develops an algorithm that uses an objective function to adaptively decide strategies for computational offloading according to changing context information. By following the style of Service-Oriented Architecture (SOA), the proposed framework brings cloud computing to mobile devices so that mobile applications can benefit from remote execution of tasks in the cloud. This research discusses the algorithm and framework, along with the results of experiments with a newly developed system for self-driving vehicles, and points out the anticipated advantages of adaptive computational offloading.
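An objective function of the kind this abstract describes can be sketched as a weighted time/energy cost compared between local and remote execution. The function name, weights, and cost model below are illustrative assumptions, not the thesis's actual SOA framework; the point is only how changing context (here, bandwidth) flips the decision.

```python
# Minimal sketch of an objective-function offloading decision.
# All parameters are assumed: f = CPU frequency (Hz), p = power draw (W).

def offload_decision(cycles, data_bits, bandwidth_bps,
                     f_local=1e9, f_remote=8e9,
                     p_compute=0.9, p_tx=1.3,
                     w_time=0.5, w_energy=0.5):
    """Return 'remote' if the weighted time/energy objective favors offloading."""
    t_local = cycles / f_local           # local execution time (s)
    e_local = p_compute * t_local        # energy burned computing locally (J)

    t_tx = data_bits / bandwidth_bps     # time to upload the task's data (s)
    t_remote = t_tx + cycles / f_remote  # transfer plus remote execution
    e_remote = p_tx * t_tx               # radio energy while transmitting (J)

    cost_local = w_time * t_local + w_energy * e_local
    cost_remote = w_time * t_remote + w_energy * e_remote
    return "remote" if cost_remote < cost_local else "local"

# Context changes flip the decision: a good link favors the cloud,
# a poor link keeps the task on the device.
print(offload_decision(2e9, 8e6, bandwidth_bps=50e6))   # 'remote' (fast link)
print(offload_decision(2e9, 8e6, bandwidth_bps=0.5e6))  # 'local' (poor link)
```

Re-evaluating this objective as context information changes is what makes the offloading adaptive rather than a one-time static partitioning.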
Mobile cloud computing for computation offloading: Issues and challenges
Despite the evolution and enhancements that mobile devices have experienced, they are still considered limited computing devices. Today, users are more demanding and expect to execute computationally intensive applications on their smartphones. Therefore, Mobile Cloud Computing (MCC) integrates mobile computing and Cloud Computing (CC) in order to extend the capabilities of mobile devices using offloading techniques. Computation offloading tackles limitations of Smart Mobile Devices (SMDs), such as limited battery lifetime, limited processing capabilities, and limited storage capacity, by offloading execution and workload to richer systems with better performance and resources. This paper presents the current offloading frameworks and computation offloading techniques and analyzes them along with their main critical issues. In addition, it explores the important parameters on which the frameworks are based, such as the offloading method and the level of partitioning. Finally, it summarizes the issues in offloading frameworks in the MCC domain that require further research.
A comprehensive survey on reinforcement-learning-based computation offloading techniques in Edge Computing Systems
In recent years, the number of embedded computing devices connected to the Internet has increased exponentially. At the same time, new applications are becoming more complex and computationally demanding, which can be a problem for devices, especially when they are battery powered. In this context, the concepts of computation offloading and edge computing, which allow applications to be fully or partially offloaded and executed on servers close to the devices in the network, have arisen and received increasing attention. The design of algorithms to decide which applications or tasks should be offloaded, and where to execute them, is therefore crucial. One of the options that has been gaining momentum lately is the use of Reinforcement Learning (RL) and, in particular, Deep Reinforcement Learning (DRL), which enables learning optimal or near-optimal offloading policies adapted to each particular scenario. Although the use of RL techniques to solve the computation offloading problem in edge systems has been covered by some surveys, it has been done in a limited way. For example, some surveys have analysed the use of RL to solve various networking problems, with computation offloading being one of them, but not the primary focus. Other surveys, on the other hand, have reviewed techniques to solve the computation offloading problem, with RL being just one of the approaches considered. To the best of our knowledge, this is the first survey that specifically focuses on the use of RL and DRL techniques for computation offloading in edge computing systems. We present a comprehensive and detailed survey, where we analyse and classify the research papers in terms of use cases, network and edge computing architectures, objectives, RL algorithms, decision-making approaches, and time-varying characteristics considered in the analysed scenarios. In particular, we include a series of tables to help researchers identify relevant papers based on specific features, and analyse which scenarios and techniques are most frequently considered in the literature. Finally, this survey identifies a number of research challenges, future directions, and areas for further study.
Funding: Consejería de Educación de la Junta de Castilla y León and FEDER (VA231P20); Ministerio de Ciencia e Innovación and Agencia Estatal de Investigación (projects PID2020-112675RB-C42, PID2021-124463OBI00, and RED2018-102585-T, funded by MCIN/AEI/10.13039/501100011033).
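The classification dimensions this survey uses (states, actions, objectives) share a common underlying shape: offloading posed as a Markov decision process. A minimal, assumed formulation is sketched below; the field names, action set, and weights are illustrative, not drawn from any particular surveyed paper.

```python
from dataclasses import dataclass

# Assumed minimal MDP formulation of computation offloading, of the kind
# the surveyed RL/DRL approaches instantiate with different details.

@dataclass(frozen=True)
class State:
    queue_length: int        # tasks waiting on the device
    channel_gain_db: float   # current wireless channel quality
    battery_pct: float       # remaining device battery

# The agent's action: where to run the next task.
ACTIONS = ("execute_locally", "offload_to_edge", "offload_to_cloud")

def reward(latency_s: float, energy_j: float,
           w_latency: float = 0.6, w_energy: float = 0.4) -> float:
    """Negative weighted cost: an RL agent maximizing this reward
    is simultaneously driving latency and energy down."""
    return -(w_latency * latency_s + w_energy * energy_j)
```

Most of the variation the survey catalogues fits into this template: richer states (multiple servers, time-varying channels), larger action spaces (partial offloading ratios), and differently weighted or multi-objective rewards.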
Emerging Edge Computing Technologies for Distributed Internet of Things (IoT) Systems
The ever-increasing growth in the number of connected smart devices and
various Internet of Things (IoT) verticals is leading to a crucial challenge:
handling the massive amount of raw data generated by distributed IoT systems
and providing real-time feedback to end-users. Although the existing
cloud-computing paradigm has an enormous amount of virtual computing power and
storage capacity, it is not suitable for latency-sensitive applications and
distributed systems due to the involved latency and its centralized mode of
operation. To this end, edge/fog computing has recently emerged as the next
generation of computing systems for extending cloud-computing functions to the
edges of the network. Despite several benefits of edge computing, such as
geo-distribution, mobility support, and location awareness, various
communication- and computing-related challenges need to be addressed in
realizing edge computing technologies for future IoT systems. In this regard,
this paper provides a holistic view of the current issues and effective
solutions by classifying the emerging technologies with regard to the joint
coordination of radio and computing resources, system optimization, and
intelligent resource management. Furthermore, an optimization framework for
edge-IoT systems is proposed to enhance various performance metrics such as
throughput, delay, resource utilization, and energy consumption. Finally, a
Machine Learning (ML) based case study is presented along with some numerical
results to illustrate the significance of edge computing.
Comment: 16 pages, 4 figures, 2 tables, submitted to IEEE Wireless Communications Magazine
Towards Autonomous Computer Networks in Support of Critical Systems
The abstract is in the attachment.