Edge Offloading in Smart Grid
The energy transition supports the shift towards more sustainable energy
alternatives, paving the way for decentralized smart grids in which energy is
generated closer to the point of use. Decentralized smart grids enable novel
data-driven, low-latency applications for improving resilience and
responsiveness, such as peer-to-peer energy trading, microgrid control, fault
detection, and demand response. However, traditional cloud-based smart grid
architectures cannot meet the low-latency and high-reliability requirements of
these emerging applications, so alternative architectures such as edge, fog,
or hybrid ones need to be adopted. Moreover, edge offloading can play a
pivotal role for next-generation smart grid AI applications because it enables
the efficient utilization of computing resources and addresses the challenge
of the growing volume of data generated by IoT devices, optimizing response
time, energy consumption, and network performance. A comprehensive overview of
the current state of research is nevertheless needed to support sound
decisions on offloading energy-related applications from the cloud to the fog
or edge, with a focus on open smart grid challenges and potential impacts. In
this paper, we delve into smart grid and computational distribution
architectures, including edge-fog-cloud models, orchestration architecture,
and serverless computing, and analyze the decision-making variables and
optimization algorithms used to assess the efficiency of edge offloading.
Finally, the work contributes to a comprehensive understanding of edge
offloading in the smart grid, providing a SWOT analysis to support decision
making.
Comment: to be submitted to journal
Continuous QoS-compliant Orchestration in the Cloud-Edge Continuum
The problem of managing multi-service applications on top of Cloud-Edge
networks in a QoS-aware manner has been thoroughly studied in recent years
from a decision-making perspective. However, only a few studies have addressed
the problem of actively enforcing such decisions while orchestrating
multi-service applications and accounting for infrastructure and application
variations. In this article, we propose a next-gen orchestrator prototype
based on Docker that achieves continuous, QoS-compliant management of
multi-service applications on top of geographically distributed Cloud-Edge
resources, in continuity with CI/CD pipelines and infrastructure monitoring
tools. Finally, we assess our proposal over a geographically distributed
testbed across Italy.
Comment: 25 pages, 8 figures
Matching-Based Virtual Network Function Embedding for SDN-Enabled Power Distribution IoT
The power distribution Internet of Things (PD-IoT) has a complex network architecture, various emerging services, and an enormous number of terminal devices, which poses rigid requirements on the substrate network infrastructure. However, the traditional PD-IoT is characterized by a single network function, difficulties in management and maintenance, and poor service flexibility, which makes it hard to meet the differentiated quality of service (QoS) requirements of different services. In this paper, we propose a software-defined networking (SDN)-enabled PD-IoT framework to improve network compatibility and flexibility, and investigate the virtual network function (VNF) embedding problem of service orchestration in the PD-IoT. To resolve preference conflicts among different VNFs over network function virtualization (NFV) nodes and to provide differentiated service for services of various priorities, a matching-based priority-aware VNF embedding (MPVE) algorithm is proposed to reduce energy consumption while minimizing the total task processing delay. Simulation results demonstrate that MPVE significantly outperforms the existing matching algorithm and the random matching algorithm in terms of delay and energy consumption while ensuring the task processing requirements of high-priority services.
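Matching-based embedding of the kind this abstract describes is typically built on deferred acceptance: VNFs propose to nodes in preference order, and an overloaded node keeps only its highest-priority proposers. The sketch below is a generic priority-aware variant under assumed inputs (delay tables, capacities, numeric priorities), not the MPVE algorithm itself.

```python
def priority_matching(vnfs, nodes, delay, capacity):
    """Many-to-one deferred acceptance: each VNF proposes to nodes in
    ascending-delay order; a node tentatively holds the highest-priority
    proposers up to its capacity and bumps the rest. `vnfs` maps VNF id
    -> priority (lower number = higher priority). Illustrative only.
    """
    prefs = {v: sorted(nodes, key=lambda n: delay[v][n]) for v in vnfs}
    next_choice = {v: 0 for v in vnfs}     # index of next node to try
    held = {n: [] for n in nodes}          # VNFs a node tentatively accepts
    free = list(vnfs)
    while free:
        v = free.pop()
        if next_choice[v] >= len(prefs[v]):
            continue                       # v has exhausted all nodes
        n = prefs[v][next_choice[v]]
        next_choice[v] += 1
        held[n].append(v)
        held[n].sort(key=lambda x: vnfs[x])    # best priority first
        if len(held[n]) > capacity[n]:
            free.append(held[n].pop())         # bump the worst proposer
    return {v: n for n, vs in held.items() for v in vs}

# Hypothetical instance: firewall outranks NAT outranks IDS.
placement = priority_matching(
    vnfs={"fw": 0, "nat": 1, "ids": 2},
    nodes=["n1", "n2"],
    delay={"fw": {"n1": 1, "n2": 5},
           "nat": {"n1": 2, "n2": 3},
           "ids": {"n1": 1, "n2": 4}},
    capacity={"n1": 1, "n2": 2})
```

All three VNFs prefer n1, but its single slot goes to the highest-priority firewall, so NAT and IDS settle on n2; this conflict resolution by priority is exactly what a plain greedy placement lacks.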
Deep Reinforcement Learning for Resource Management in Network Slicing
Network slicing has emerged as a promising business for operators, allowing
them to sell customized slices to various tenants at different prices. In
order to provide better-performing and cost-efficient services, network
slicing involves challenging technical issues and urgently calls for
intelligent innovations to make resource management consistent with users'
activities per slice. In that regard, deep reinforcement learning (DRL),
which focuses on how to interact with the environment by trying alternative
actions and reinforcing the tendency towards actions that produce more
rewarding consequences, is assumed to be a promising solution. In this paper,
after briefly reviewing the fundamental concepts of DRL, we investigate the
application of DRL to some typical resource management problems in network
slicing scenarios, including radio resource slicing and priority-based core
network slicing, and demonstrate the advantage of DRL over several competing
schemes through extensive simulations. Finally, we also discuss the possible
challenges of applying DRL to network slicing from a general perspective.
Comment: The manuscript has been accepted by IEEE Access in Nov. 201
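The RL loop the abstract describes (try allocations, reinforce the more rewarding ones) can be illustrated on a toy slicing problem. The sketch below uses tabular Q-learning as a deliberately simplified stand-in for the paper's deep RL agents; the action space, reward shape, and all hyperparameters are assumptions for illustration.

```python
import random

def train_slice_agent(episodes=2000, eps=0.1, alpha=0.1, seed=0):
    """Toy Q-learning for radio resource slicing: a single state whose
    actions are bandwidth splits (units 0..10 granted to slice A, the
    remainder to slice B). The reward is a made-up utility that peaks
    at a 7/3 split. Tabular, not deep, to keep the sketch tiny.
    """
    rng = random.Random(seed)
    actions = range(11)
    q = [0.0] * 11                        # one Q-value per split

    def reward(a):                        # hypothetical tenant utility
        return -(a - 7) ** 2 + rng.gauss(0, 0.1)

    for _ in range(episodes):
        # Epsilon-greedy: mostly exploit the best-known split.
        if rng.random() < eps:
            a = rng.randrange(11)
        else:
            a = max(actions, key=lambda x: q[x])
        # Running-average update toward the observed reward.
        q[a] += alpha * (reward(a) - q[a])
    return max(actions, key=lambda x: q[x])

best_split = train_slice_agent()
```

The agent converges on granting 7 units to slice A, the split the synthetic utility rewards most; the DQN agents in the paper replace the Q-table with a neural network so the same idea scales to large state spaces.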
COSCO: container orchestration using co-simulation and gradient based optimization for fog computing environments
Intelligent task placement and management of tasks in large-scale fog platforms is challenging due to the highly volatile nature of modern workload applications and sensitive user requirements of low energy consumption and response time. Container orchestration platforms have emerged to alleviate this problem, with prior art either using heuristics to quickly reach scheduling decisions or AI-driven methods like reinforcement learning and evolutionary approaches to adapt to dynamic scenarios. The former often fail to quickly adapt in highly dynamic environments, whereas the latter have run-times that are slow enough to negatively impact response time. Therefore, there is a need for scheduling policies that are both reactive, to work efficiently in volatile environments, and have low scheduling overheads. To achieve this, we propose a Gradient Based Optimization Strategy using Back-propagation of gradients with respect to Input (GOBI). Further, we leverage the accuracy of predictive digital-twin models and simulation capabilities by developing a Coupled Simulation and Container Orchestration Framework (COSCO). Using this, we create a hybrid simulation-driven decision approach, GOBI*, to optimize Quality of Service (QoS) parameters. Co-simulation and the back-propagation approaches allow these methods to adapt quickly in volatile environments. Experiments conducted using real-world data on fog applications using the GOBI and GOBI* methods show a significant improvement in terms of energy consumption, response time, Service Level Objective violations, and scheduling time by up to 15, 40, 4, and 82 percent, respectively, when compared to the state-of-the-art algorithms.
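GOBI's core move, taking gradients of a predicted QoS cost with respect to the scheduling *input* and descending on it, can be sketched without a neural surrogate. The snippet below substitutes central finite differences for back-propagation and a hand-written convex cost for the learned model, so every name and number here is an assumption rather than the COSCO implementation.

```python
def gobi_step(x, cost, lr=0.05, h=1e-5, iters=200):
    """Gradient-based optimization over the input: estimate d(cost)/dx
    by central differences (a stand-in for back-propagation through a
    learned surrogate) and run plain gradient descent on the placement
    variables x. Illustrative sketch only.
    """
    x = list(x)
    for _ in range(iters):
        for i in range(len(x)):
            xp = x[:]; xp[i] += h
            xm = x[:]; xm[i] -= h
            g = (cost(xp) - cost(xm)) / (2 * h)   # central difference
            x[i] -= lr * g                        # descend on the input
    return x

# Stand-in QoS cost: quadratic penalty around hypothetical "ideal"
# per-host utilisations; a real deployment would use the surrogate
# model's predicted energy/response-time objective instead.
target = [0.6, 0.3, 0.8]
qos_cost = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, target))
opt = gobi_step([0.0, 0.0, 0.0], qos_cost)
```

Descent drives each placement variable to its cost-minimizing value; in GOBI proper, differentiating a trained neural surrogate gives the same gradients in one backward pass, which is what keeps the scheduling overhead low.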