588 research outputs found

    Edge Offloading in Smart Grid

    The energy transition supports the shift towards more sustainable energy alternatives, paving the way towards decentralized smart grids, where energy is generated closer to the point of use. Decentralized smart grids foresee novel data-driven, low-latency applications for improving resilience and responsiveness, such as peer-to-peer energy trading, microgrid control, fault detection, or demand response. However, traditional cloud-based smart grid architectures are unable to meet the requirements of these emerging applications, such as low latency and high reliability, so alternative architectures such as edge, fog, or hybrid need to be adopted. Moreover, edge offloading can play a pivotal role for next-generation smart grid AI applications because it enables the efficient utilization of computing resources and addresses the challenge of the increasing data generated by IoT devices, optimizing response time, energy consumption, and network performance. However, a comprehensive overview of the current state of research is needed to support sound decisions on offloading energy-related applications from cloud to fog or edge, focusing on open smart grid challenges and potential impacts. In this paper, we delve into smart grid and computational distribution architectures, including edge-fog-cloud models, orchestration architecture, and serverless computing, and analyze the decision-making variables and optimization algorithms used to assess the efficiency of edge offloading. Finally, the work contributes to a comprehensive understanding of edge offloading in the smart grid, providing a SWOT analysis to support decision making.
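
    To make the abstract's decision-making variables concrete, the following sketch compares local edge execution against cloud offloading using a weighted latency/energy cost; all CPU speeds, link rates, and weights are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch: should a smart-grid task run on the local edge node or
# be offloaded to the cloud? Latency and energy act as the decision variables.
# Every parameter below is an assumption made for this example.
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float      # CPU cycles required by the task
    data_bits: float   # input size to transmit if offloaded

def local_cost(task, edge_cpu_hz, edge_joule_per_cycle):
    latency = task.cycles / edge_cpu_hz
    energy = task.cycles * edge_joule_per_cycle
    return latency, energy

def offload_cost(task, uplink_bps, cloud_cpu_hz, tx_power_w):
    tx_time = task.data_bits / uplink_bps
    latency = tx_time + task.cycles / cloud_cpu_hz
    energy = tx_power_w * tx_time          # the device only pays for transmission
    return latency, energy

def decide(task, w_latency=0.7, w_energy=0.3):
    l_lat, l_en = local_cost(task, edge_cpu_hz=2e9, edge_joule_per_cycle=1e-9)
    o_lat, o_en = offload_cost(task, uplink_bps=50e6, cloud_cpu_hz=10e9, tx_power_w=0.5)
    local_score = w_latency * l_lat + w_energy * l_en
    offload_score = w_latency * o_lat + w_energy * o_en
    return "offload" if offload_score < local_score else "local"

print(decide(Task(cycles=5e8, data_bits=8e6)))  # -> "offload" for these numbers
```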

    A Review on Computational Intelligence Techniques in Cloud and Edge Computing

    Cloud computing (CC) is a centralized computing paradigm that accumulates resources centrally and provides these resources to users through the Internet. Although CC holds a large number of resources, it may not be acceptable for real-time mobile applications, as it is usually far away from users geographically. On the other hand, edge computing (EC), which distributes resources to the network edge, enjoys increasing popularity in applications with low-latency and high-reliability requirements. EC provides resources in a decentralized manner, which can respond to users' requirements faster than normal CC, but with limited computing capacity. As both CC and EC are resource-sensitive, several major issues arise, such as how to conduct job scheduling, resource allocation, and task offloading, which significantly influence the performance of the whole system. To tackle these issues, many optimization problems have been formulated. These optimization problems usually have complex properties, such as non-convexity and NP-hardness, which may not be addressed by traditional convex-optimization-based solutions. Computational intelligence (CI), consisting of a set of nature-inspired computational approaches, has recently exhibited great potential in addressing these optimization problems in CC and EC. This article provides an overview of research problems in CC and EC and recent progress in addressing them with the help of CI techniques. Informative discussions and future research trends are also presented, with the aim of offering insights to the readers and motivating new research directions.
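
    As a concrete illustration of the kind of nature-inspired technique the review covers, the sketch below applies a minimal genetic algorithm to a binary task-offloading problem; the cost model, population sizes, and timing figures are assumptions made for this example, not content from the article.

```python
# Illustrative sketch: a tiny genetic algorithm for binary task offloading.
# Gene i decides whether task i runs locally (0) or on the edge server (1);
# fitness is the resulting makespan under invented cost figures.
import random

random.seed(0)
N_TASKS = 12
local_time = [random.uniform(2, 6) for _ in range(N_TASKS)]   # seconds on each device
edge_time = [random.uniform(0.5, 2) for _ in range(N_TASKS)]  # compute time on the edge
tx_time = [random.uniform(0.5, 3) for _ in range(N_TASKS)]    # upload delay per task

def makespan(plan):
    # Local tasks run in parallel on their own devices; edge tasks queue on one server.
    local_part = max((local_time[i] for i in range(N_TASKS) if plan[i] == 0), default=0)
    edge_part = sum(tx_time[i] + edge_time[i] for i in range(N_TASKS) if plan[i] == 1)
    return max(local_part, edge_part)

def evolve(pop_size=30, generations=100, mutation=0.1):
    pop = [[random.randint(0, 1) for _ in range(N_TASKS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        survivors = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_TASKS)    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation:
                flip = random.randrange(N_TASKS)  # bit-flip mutation
                child[flip] ^= 1
            children.append(child)
        pop = survivors + children
    best = min(pop, key=makespan)
    return best, makespan(best)

plan, cost = evolve()
print(plan, round(cost, 2))
```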

    Towards More Efficient 5G Networks via Dynamic Traffic Scheduling

    5G communications adopt various advanced technologies, such as mobile edge computing and unlicensed-band operation, to meet the goals of 5G services such as enhanced Mobile Broadband (eMBB) and Ultra-Reliable Low-Latency Communications (URLLC). Specifically, by placing cloud resources at the edge of the radio access network, in the so-called mobile edge cloud, mobile devices can be served with lower latency compared to traditional remote-cloud-based services. In addition, by utilizing unlicensed spectrum, 5G can mitigate the scarcity of spectrum resources and thus realize higher-throughput services. To enhance user-experienced service quality, however, the aforementioned approaches should be fine-tuned by considering various network performance metrics together. For instance, the mechanisms for mobile edge computing, e.g., computation offloading to the edge cloud, should not be optimized from the perspective of a single metric such as latency, since actual user satisfaction comes from multi-domain factors including latency, throughput, and monetary cost. Moreover, blindly combining unlicensed spectrum resources with licensed ones does not always guarantee a performance enhancement, since it is crucial for unlicensed-band operation to achieve peaceful but efficient coexistence with other competing technologies (e.g., Wi-Fi). This dissertation proposes a focused resource management framework for more efficient 5G network operations as follows. First, Quality of Experience (QoE) is adopted to quantify user satisfaction in mobile edge computing, and an optimal transmission scheduling algorithm is derived to maximize user QoE in computation offloading scenarios. Next, regarding unlicensed-band operation, two efficient mechanisms are introduced to improve the coexistence performance between LTE-LAA and Wi-Fi networks. In particular, we develop a dynamic energy-detection thresholding algorithm for LTE-LAA so that LTE-LAA devices can detect Wi-Fi frames in a lightweight way. In addition, we propose an AI-based network configuration for an LTE-LAA network with which an LTE-LAA operator can fine-tune its coexistence parameters (e.g., the CCA threshold) to better protect coexisting Wi-Fi while achieving better performance than the legacy LTE-LAA in the standards. Via extensive evaluations using computer simulations and a USRP-based testbed, we have verified that the proposed framework can enhance the efficiency of 5G.
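
    The abstract does not give the details of the dynamic energy-detection thresholding algorithm, so the sketch below only illustrates the general idea of adapting an ED threshold from observed Wi-Fi collisions; the update rule, target rate, and dBm bounds are assumptions, not the dissertation's design.

```python
# Hedged sketch of dynamic energy-detection (ED) thresholding for LTE-LAA/Wi-Fi
# coexistence. All constants and the additive update rule are illustrative.
def adapt_ed_threshold(ed_dbm, wifi_collision_rate,
                       target_rate=0.05, step_db=1.0,
                       floor_dbm=-82.0, ceil_dbm=-62.0):
    """Lower the ED threshold (become more sensitive) when Wi-Fi frames keep
    colliding with LAA transmissions; relax it when the channel looks clean."""
    if wifi_collision_rate > target_rate:
        ed_dbm -= step_db      # back off: detect weaker Wi-Fi signals, defer more often
    else:
        ed_dbm += step_db      # channel looks clean: reclaim transmission opportunities
    return max(floor_dbm, min(ceil_dbm, ed_dbm))

threshold = -72.0
for observed in [0.12, 0.09, 0.04, 0.02, 0.07]:
    threshold = adapt_ed_threshold(threshold, observed)
    print(f"collision rate {observed:.2f} -> ED threshold {threshold:.1f} dBm")
```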

    Towards a proper service placement in combined Fog-to-Cloud (F2C) architectures

    The Internet of Things (IoT) has empowered the development of a plethora of new services, fueled by the deployment of devices located at the edge that provide multiple capabilities in terms of connectivity as well as data collection and processing. With the inception of the Fog Computing paradigm, aimed at diminishing the distance between edge devices and the IT premises running IoT services, the perceived service latency and even the security risks can be reduced, while simultaneously optimizing network usage. When put together, Fog and Cloud computing (recently coined as fog-to-cloud, F2C) can be used to maximize the advantages of future computer systems, with the whole being greater than the sum of its parts. However, the specifics of cloud and fog resource models require new strategies to manage the mapping of novel IoT services onto suitable resources. Although a few proposals for service offloading between fog and cloud systems are slowly gaining momentum in the research community, many issues in service placement, both when a service is first admitted for execution and when it is offloaded from Cloud to Fog, and vice versa, are new and largely unsolved. In this paper, we provide some insights into the relevant features of service placement in F2C scenarios, highlighting the main challenges in current systems towards the deployment of next-generation IoT services.
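
    The following sketch shows one way a simple F2C placement heuristic could look, mapping latency-sensitive services to fog and the rest to cloud under a capacity limit; the services, capacity, and RTT values are hypothetical and not taken from the paper.

```python
# Minimal sketch of a greedy fog-to-cloud (F2C) service-placement heuristic.
# Service list, fog capacity, and RTT figures are illustrative assumptions.
services = [
    {"name": "fault-detection", "latency_ms": 10,   "cpu": 2},
    {"name": "billing-report",  "latency_ms": 500,  "cpu": 4},
    {"name": "demand-response", "latency_ms": 50,   "cpu": 3},
    {"name": "analytics-batch", "latency_ms": 2000, "cpu": 8},
]

FOG_CAPACITY = 6                 # CPU units available at the fog layer (assumed)
FOG_RTT_MS, CLOUD_RTT_MS = 5, 80 # assumed round-trip times to each layer

def place(services):
    placement, fog_used = {}, 0
    # Most latency-sensitive services first, so they get the scarce fog resources.
    for s in sorted(services, key=lambda s: s["latency_ms"]):
        if s["latency_ms"] < CLOUD_RTT_MS and fog_used + s["cpu"] <= FOG_CAPACITY:
            placement[s["name"]] = "fog"
            fog_used += s["cpu"]
        else:
            placement[s["name"]] = "cloud"
    return placement

print(place(services))
```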

    DeepBrain: Experimental Evaluation of Cloud-Based Computation Offloading and Edge Computing in the Internet-of-Drones for Deep Learning Applications

    Unmanned Aerial Vehicles (UAVs) have been very effective in collecting aerial image data for various Internet-of-Things (IoT)/smart-city applications such as search and rescue, surveillance, vehicle detection and counting, and intelligent transportation systems, to name a few. However, the real-time processing of collected data at the edge in the context of the Internet-of-Drones remains an open challenge because UAVs have limited energy capabilities, while computer vision techniques consume excessive energy and require abundant resources. This is even more critical when deep learning algorithms, such as convolutional neural networks (CNNs), are used for classification and detection. In this paper, we first propose a system architecture of computation offloading for Internet-connected drones. Then, we conduct a comprehensive experimental study to evaluate the performance, in terms of energy, bandwidth, and delay, of the cloud computation offloading approach versus the edge computing approach for deep learning applications in the context of UAVs. In particular, we experimentally investigate the tradeoff between the communication cost and the computation cost of the two candidate approaches. The main results demonstrate that the computation offloading approach provides much higher throughput (i.e., frames per second) compared to the edge computing approach, despite the larger communication delays.
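
    The communication/computation tradeoff studied in the paper can be sketched with a back-of-the-envelope model like the one below, comparing on-board edge inference with pipelined cloud offloading; all frame sizes, link rates, and inference times are illustrative assumptions, not the paper's measurements.

```python
# Back-of-the-envelope sketch: throughput and per-frame delay of on-board edge
# inference versus offloading frames to a cloud GPU. Numbers are assumptions.
def edge_pipeline(infer_s_per_frame):
    fps = 1.0 / infer_s_per_frame
    delay = infer_s_per_frame
    return fps, delay

def cloud_pipeline(frame_bits, uplink_bps, rtt_s, cloud_infer_s_per_frame):
    tx_s = frame_bits / uplink_bps
    delay = rtt_s + tx_s + cloud_infer_s_per_frame
    # With pipelining, throughput is limited by the slowest stage, not total delay.
    fps = 1.0 / max(tx_s, cloud_infer_s_per_frame)
    return fps, delay

edge_fps, edge_delay = edge_pipeline(infer_s_per_frame=0.5)   # slow on-board processor
cloud_fps, cloud_delay = cloud_pipeline(frame_bits=4e6, uplink_bps=20e6,
                                        rtt_s=0.08, cloud_infer_s_per_frame=0.02)
print(f"edge:  {edge_fps:.1f} fps, {edge_delay * 1000:.0f} ms/frame")
print(f"cloud: {cloud_fps:.1f} fps, {cloud_delay * 1000:.0f} ms/frame")
# Higher cloud fps despite larger per-frame delay mirrors the tradeoff in the abstract.
```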