
    Edge Offloading in Smart Grid

    The energy transition supports the shift towards more sustainable energy alternatives, paving the way towards decentralized smart grids, where energy is generated closer to the point of use. Decentralized smart grids foresee novel data-driven, low-latency applications for improving resilience and responsiveness, such as peer-to-peer energy trading, microgrid control, fault detection, or demand response. However, traditional cloud-based smart grid architectures are unable to meet the requirements of these emerging applications, such as low latency and high reliability, so alternative architectures such as edge, fog, or hybrid models need to be adopted. Moreover, edge offloading can play a pivotal role for next-generation smart grid AI applications because it enables the efficient utilization of computing resources and addresses the challenge of the increasing volume of data generated by IoT devices, optimizing response time, energy consumption, and network performance. However, a comprehensive overview of the current state of research is needed to support sound decisions about offloading energy-related applications from the cloud to the fog or edge, focusing on open smart grid challenges and potential impacts. In this paper, we delve into smart grid and computational distribution architectures, including edge-fog-cloud models, orchestration architecture, and serverless computing, and analyze the decision-making variables and optimization algorithms used to assess the efficiency of edge offloading. Finally, the work contributes to a comprehensive understanding of edge offloading in the smart grid, providing a SWOT analysis to support decision making.
    Comment: to be submitted to journal
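    As a toy illustration of the kind of decision-making variables such surveys analyze, the following Python sketch compares a weighted latency/energy cost of executing a task locally against offloading it to an edge server. All parameter names and default values are illustrative assumptions, not taken from the paper.

        # Minimal offloading-decision sketch: choose local vs. edge execution
        # by comparing weighted latency/energy costs. All parameters are
        # illustrative assumptions, not values from the surveyed papers.

        def offload_decision(task_cycles, data_bits,
                             f_local=1e9,      # local CPU speed (cycles/s)
                             f_edge=10e9,      # edge CPU speed (cycles/s)
                             rate=20e6,        # uplink rate (bits/s)
                             p_tx=0.5,         # transmit power (W)
                             p_cpu=2.0,        # local CPU power (W)
                             w_lat=0.5, w_en=0.5):
            # Local execution: compute time and energy spent on the device.
            t_local = task_cycles / f_local
            e_local = p_cpu * t_local
            cost_local = w_lat * t_local + w_en * e_local

            # Edge execution: transmission time/energy plus remote compute time.
            t_tx = data_bits / rate
            t_edge = t_tx + task_cycles / f_edge
            e_edge = p_tx * t_tx              # device only pays for transmission
            cost_edge = w_lat * t_edge + w_en * e_edge

            return "edge" if cost_edge < cost_local else "local"

        print(offload_decision(task_cycles=5e8, data_bits=2e6))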

    Towards Mobile Edge Computing: Taxonomy, Challenges, Applications and Future Realms

    The realm of cloud computing has revolutionized access to cloud resources and their utilization in applications over the Internet. However, deploying cloud computing for delay-critical applications and reducing the delay in accessing the resources are challenging. The Mobile Edge Computing (MEC) paradigm is one of the effective solutions: it brings cloud computing services into the proximity of the edge network and leverages the available resources. This paper presents a survey of the latest, state-of-the-art algorithms, techniques, and concepts of MEC. The work is unique in considering the most recent algorithms, which are not covered by existing surveys. Moreover, the selected literature is classified in terms of performance metrics, describing the realms of promising performance and the regions where a margin of improvement exists for future investigation. This also eases the choice of a particular algorithm for a particular application. In contrast to existing surveys, a bibliometric overview is provided, which is helpful for researchers, engineers, and scientists seeking thorough insight, application selection, and future directions for improvement. In addition, applications related to the MEC platform are presented. Open research challenges, future directions, and lessons learned in the area of MEC are provided for further investigation.

    An Optimized Multi-Layer Resource Management in Mobile Edge Computing Networks: A Joint Computation Offloading and Caching Solution

    Data caching is increasingly used as a high-speed data storage layer in mobile edge computing networks that employ flow control methodologies. This study shows how to discover the best architecture for backhaul networks with caching capability using a distributed offloading technique. The article uses continuous power flow analysis to obtain the optimal load constraints, wherein the power of macro base stations with various caching capacities is supplied either by an intelligent grid network or by renewable energy systems. This work proposes ubiquitous connectivity for users at the cell edge and offloading of the macro cells, so as to provide features the macro cell itself cannot cope with, such as extreme changes in the required user data rate and energy efficiency. The offloading framework is then reformulated into a neural weighted framework that considers the convergence and Lyapunov stability requirements of mobile edge computing under Karush-Kuhn-Tucker (KKT) optimization constraints in order to obtain accurate solutions. The cell-layer performance is analyzed at the boundary and at the center point of the cells. The analytical and simulation results show that the suggested method outperforms other energy-saving techniques. Also, compared to other solutions studied in the literature, the proposed approach shows a two- to three-fold increase in both the throughput of cell-edge users and the aggregate throughput per cluster.
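    As a generic illustration of KKT-based resource allocation of the kind invoked above, the sketch below solves the classic water-filling power-allocation problem directly from its KKT conditions. This is a standard textbook example under assumed channel gains and power budget, not the paper's neural weighted framework.

        # Water-filling power allocation: a textbook example of a resource-
        # allocation problem solved via its Karush-Kuhn-Tucker (KKT) conditions.
        # Generic illustration only; not the paper's proposed method.
        import numpy as np

        def water_filling(gains, p_total):
            """Split p_total across channels with the given gains to maximize
            sum log(1 + g_i * p_i); KKT yields p_i = max(0, mu - 1/g_i)."""
            inv = 1.0 / np.asarray(gains, dtype=float)
            # Bisect on the water level mu until the power budget is met.
            lo, hi = 0.0, p_total + inv.max()
            for _ in range(100):
                mu = 0.5 * (lo + hi)
                if np.maximum(0.0, mu - inv).sum() > p_total:
                    hi = mu
                else:
                    lo = mu
            return np.maximum(0.0, lo - inv)

        print(water_filling([2.0, 1.0, 0.5], p_total=3.0))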

    Smart Decision-Making via Edge Intelligence for Smart Cities

    Smart cities are an ambitious vision for future urban environments. The ultimate aim of smart cities is to use modern technology to optimize city resources and operations while improving the overall quality of life of their citizens. Realizing this ambitious vision will require embracing advancements in information communication technology, data analysis, and other technologies. Because smart cities naturally produce vast amounts of data, recent artificial intelligence (AI) techniques are of interest due to their ability to transform raw data into insightful knowledge to inform decisions (e.g., using live road traffic data to control traffic lights based on current traffic conditions). However, training and serving these AI applications is non-trivial and requires sufficient computing resources. Traditionally, cloud computing infrastructure has been used to process computationally intensive tasks; however, due to the time-sensitivity of many of these smart city applications, novel computing hardware and technologies are required. The recent advent of edge computing provides a promising computing infrastructure to support the needs of the smart cities of tomorrow. Edge computing pushes compute resources close to end users to provide reduced latency and improved scalability, making it a viable candidate to support smart cities. However, it comes with hardware limitations that must be considered. This thesis explores the use of the edge computing paradigm for smart city applications and how to make efficient, smart decisions with the available resources, while considering the quality-of-service provided to end users. The work can be seen as four parts. First, it addresses how to optimally place and serve AI-based applications on edge computing infrastructure to maximize quality-of-service to end users; this is cast as an optimization problem and solved with efficient algorithms that approximate the optimal solution. Second, it investigates the applicability of compression techniques to reduce offloading costs for AI-based applications in edge computing systems. Third and fourth, the thesis demonstrates how edge computing can support AI-based solutions for smart city applications, namely smart energy and smart traffic, approached using the recent paradigm of federated learning. The contributions of this thesis include the design of novel algorithms and system design strategies for the placement and scheduling of AI-based services on edge computing systems, a formal formulation of the trade-offs between delivered AI model performance and latency, compression of offloading decisions for communication reduction, and an evaluation of federated learning-based approaches for smart city applications.
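    As a minimal sketch of the placement problem described above (not the thesis's actual algorithms), the following Python function greedily places AI services on capacity-limited edge nodes by utility per unit of CPU demand. All service names, demands, utilities, and node capacities are hypothetical.

        # Greedy service placement: put AI services on capacity-limited edge
        # nodes to maximize total utility (a stand-in for quality-of-service).
        # Illustrative sketch; the thesis uses its own formal model.

        def greedy_placement(services, nodes):
            """services: list of (name, cpu_demand, utility);
            nodes: dict node -> remaining CPU capacity."""
            placement = {}
            # Highest utility per unit of demanded CPU first.
            for name, demand, utility in sorted(
                    services, key=lambda s: s[2] / s[1], reverse=True):
                # Pick the feasible node with the most remaining capacity.
                feasible = [n for n, cap in nodes.items() if cap >= demand]
                if feasible:
                    node = max(feasible, key=nodes.get)
                    nodes[node] -= demand
                    placement[name] = node
            return placement

        print(greedy_placement(
            [("detector", 4, 10.0), ("translator", 2, 4.0), ("planner", 3, 9.0)],
            {"edge-1": 5, "edge-2": 4}))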

    Smart performance optimization of energy-aware scheduling model for resource sharing in 5G green communication systems

    This paper presents an analysis of the performance of the Energy Aware Scheduling Algorithm (EASA) in 5G green communication systems, which rely on such scheduling to manage resource sharing. The aim of the proposed model is to improve the efficiency and energy consumption of resource sharing in 5G green communication systems; the main objective is to address the challenges of achieving optimal resource utilization while minimizing energy consumption. To achieve this goal, the study proposes a novel energy-aware scheduling model that takes into consideration the specific characteristics of 5G green communication systems. The model incorporates intelligent techniques for optimizing resource allocation and scheduling decisions while also respecting energy consumption constraints. The methodology combines mathematical analysis with simulation studies: the mathematical analysis is used to formulate the optimization problem and design the scheduling model, while the simulations are used to evaluate its performance in various scenarios. The proposed model reached a 91.58% false discovery rate, a 64.33% false omission rate, a 90.62% prevalence threshold, and a 91.23% critical success index. The results demonstrate the effectiveness of the proposed model in reducing energy consumption while maintaining a high level of resource utilization.
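    The four reported figures are standard confusion-matrix statistics. The sketch below shows how each is computed from true/false positive and negative counts; the counts used here are made-up examples, not the paper's data.

        # Standard confusion-matrix metrics as reported in the abstract.
        # The example counts are illustrative, not from the paper.
        import math

        def scheduling_metrics(tp, fp, tn, fn):
            tpr = tp / (tp + fn)                  # sensitivity / recall
            fpr = fp / (fp + tn)                  # fall-out
            return {
                "false_discovery_rate": fp / (fp + tp),
                "false_omission_rate":  fn / (fn + tn),
                "prevalence_threshold": (math.sqrt(tpr * fpr) - fpr) / (tpr - fpr),
                "critical_success_index": tp / (tp + fn + fp),
            }

        print(scheduling_metrics(tp=95, fp=10, tn=80, fn=15))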

    Optimization Mobility in Sensor Actor Networks: A Comprehensive Framework

    This work combines theoretical understanding with real-world applications to optimize mobility in sensor-actor networks. An extensive literature study examines the field of wireless sensor networks, covering everything from sophisticated threat detection for mobile devices to methods for reducing congestion. Using TinkerCAD for simulation and Arduino for hardware control, the suggested methodology emphasizes practicality and highlights servo motors as vital components for actuation. This method incorporates hardware control, providing a concrete connection between theoretical concepts and practical implementations, and offers a comprehensive view of the operational elements of sensor-actor networks by addressing mobility optimization concerns in dynamic situations. The suggested model is demonstrated in Figures 1 through 5 and includes TinkerCAD visualizations, servo motor control, and an Arduino setup. The hardware specifications and code samples provided show how mobility optimization is actually achieved in practice. The findings highlight the importance of real-world applications, improving the comprehension and suitability of sensor-actor systems in dynamic environments. This paper highlights the value of practical experimentation by providing a novel approach to sensor-actor network mobility optimization. For practitioners looking to improve the efficiency and efficacy of dynamic networked systems, the suggested model is a useful tool.
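    As a minimal sketch of the mobility-optimization idea (a simplification, not the paper's Arduino/TinkerCAD implementation), the following Python function nudges an actor toward the urgency-weighted centroid of recent sensor events so that expected response distance shrinks. All coordinates, weights, and the step size are illustrative assumptions.

        # Move an actor a fraction `step` toward the urgency-weighted centroid
        # of recent sensor events. Illustrative sketch only.

        def relocate_actor(actor_xy, events, step=0.5):
            """events: list of ((x, y), urgency). Returns the new actor
            position after moving toward the weighted event centroid."""
            total = sum(w for _, w in events)
            cx = sum(x * w for (x, _), w in events) / total
            cy = sum(y * w for (_, y), w in events) / total
            ax, ay = actor_xy
            return (ax + step * (cx - ax), ay + step * (cy - ay))

        print(relocate_actor((0.0, 0.0),
                             [((4.0, 2.0), 3.0), ((1.0, 5.0), 1.0)]))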

    Resource Management Techniques in Cloud-Fog for IoT and Mobile Crowdsensing Environments

    The unpredictable and huge volume of data generated nowadays by smart devices in IoT and mobile crowd sensing applications (sensors, smartphones, Wi-Fi routers) requires processing power and storage. The cloud provides these capabilities to serve organizations and customers, but its use comes with some limitations, the most important of which concern resource allocation and task scheduling. Resource allocation is a mechanism that ensures virtual machines are allocated when multiple applications require various resources such as CPU, I/O, and memory, whereas scheduling is the process of determining the sequence in which tasks arrive at and depart from the resources in order to maximize efficiency. In this paper we highlight the most relevant difficulties that cloud computing now faces and present a comprehensive review of resource allocation and scheduling techniques to overcome these limitations. Finally, the reviewed techniques and strategies for allocation and scheduling are compared in a table together with their drawbacks.
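    As a minimal illustration of the scheduling problem reviewed above, the sketch below assigns tasks to virtual machines greedily by earliest finish time, a common baseline rather than any specific technique from the survey. Task lengths and VM speeds are made-up values.

        # Greedy earliest-finish-time scheduling: each task goes to the VM on
        # which it would complete soonest. Baseline sketch for illustration.

        def min_finish_schedule(task_lengths, vm_speeds):
            """Returns a list of (task_index, vm_index) assignments."""
            free_at = [0.0] * len(vm_speeds)      # time each VM becomes free
            schedule = []
            for t, length in enumerate(task_lengths):
                # Choose the VM on which this task would finish earliest.
                vm = min(range(len(vm_speeds)),
                         key=lambda v: free_at[v] + length / vm_speeds[v])
                free_at[vm] += length / vm_speeds[vm]
                schedule.append((t, vm))
            return schedule

        print(min_finish_schedule([8.0, 4.0, 6.0, 2.0], [1.0, 2.0]))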

    A Decade of Research in Fog computing: Relevance, Challenges, and Future Directions

    Recent developments in the Internet of Things (IoT) and real-time applications have led to unprecedented growth in connected devices and the data they generate. Traditionally, this sensor data is transferred to and processed in the cloud, and the control signals are sent back to the relevant actuators as part of the IoT applications. This cloud-centric IoT model resulted in increased latencies and network load, and compromised privacy. To address these problems, Fog Computing was coined by Cisco in 2012, a decade ago; it utilizes proximal computational resources for processing the sensor data. Ever since its proposal, fog computing has attracted significant attention, and the research community has focused on addressing different challenges such as fog frameworks, simulators, resource management, placement strategies, quality-of-service aspects, and fog economics. However, after a decade of research, we still do not see large-scale deployments of public/private fog networks that can be utilized in realizing interesting IoT applications. In the literature, we see only pilot case studies, small-scale testbeds, and the use of simulators to demonstrate the scale of the proposed models addressing the respective technical challenges. There are several reasons for this; most importantly, fog computing has not yet presented a clear business case for companies and participating individuals. This paper summarizes the technical, non-functional, and economic challenges that have posed hurdles to adopting fog computing, consolidating them into different clusters. The paper also summarizes the relevant academic and industrial contributions addressing these challenges, and provides future research directions for realizing real-time fog computing applications, also considering emerging trends such as federated learning and quantum computing.
    Comment: Accepted for publication at Wiley Software: Practice and Experience journal