
    μ-DDRL: A QoS-Aware Distributed Deep Reinforcement Learning Technique for Service Offloading in Fog Computing Environments

    Full text link
    Fog and Edge computing extend cloud services to the proximity of end users, enabling many Internet of Things (IoT) use cases, particularly latency-critical applications. Smart devices, such as traffic and surveillance cameras, often lack the resources to process computation-intensive and latency-critical services, so the constituent parts of services can be offloaded to nearby Edge/Fog resources for processing and storage. However, making offloading decisions for complex services in highly stochastic and dynamic environments is an important yet difficult task. Recently, Deep Reinforcement Learning (DRL) has been applied to many complex service offloading problems; however, existing techniques are best suited to centralized environments, and their convergence to the most suitable solutions is slow. In addition, constituent parts of services often have predefined data dependencies and quality-of-service constraints, which further increase the complexity of service offloading. To address these issues, we propose a distributed DRL technique that follows the actor-critic architecture and builds on Asynchronous Proximal Policy Optimization (APPO) to achieve efficient and diverse distributed generation of experience trajectories. We also employ PPO clipping and V-trace for off-policy correction, yielding faster convergence to the most suitable service offloading solutions. The results demonstrate that our technique converges quickly, offers high scalability and adaptability, and outperforms its counterparts by improving the execution time of heterogeneous services.
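
    The PPO clipping and V-trace corrections mentioned in this abstract are standard building blocks, so a minimal, self-contained sketch may help make them concrete. This is not the paper's implementation: the function names, batching, and placeholder data below are illustrative assumptions, and the arrays merely stand in for whatever offloading states, actions, and rewards the actual agent would use.

    import numpy as np

    def ppo_clipped_objective(log_prob_new, log_prob_old, advantages, clip_eps=0.2):
        """Clipped surrogate objective from PPO (to be maximized)."""
        ratio = np.exp(log_prob_new - log_prob_old)           # importance ratio pi_new / pi_old
        unclipped = ratio * advantages
        clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
        return np.mean(np.minimum(unclipped, clipped))        # pessimistic (clipped) bound

    def vtrace_targets(rewards, values, bootstrap_value, rhos,
                       gamma=0.99, rho_bar=1.0, c_bar=1.0):
        """V-trace value targets (Espeholt et al., 2018) for off-policy correction."""
        T = len(rewards)
        clipped_rhos = np.minimum(rhos, rho_bar)              # truncated importance weights
        clipped_cs = np.minimum(rhos, c_bar)
        values_ext = np.append(values, bootstrap_value)       # V(x_0), ..., V(x_T)
        vs = values_ext.copy()                                # vs_T = V(x_T)
        for t in reversed(range(T)):
            delta = clipped_rhos[t] * (rewards[t] + gamma * values_ext[t + 1] - values_ext[t])
            vs[t] = values_ext[t] + delta + gamma * clipped_cs[t] * (vs[t + 1] - values_ext[t + 1])
        return vs[:-1]

    # Toy usage with random placeholder data (shapes only; not real offloading traces).
    rng = np.random.default_rng(0)
    print(ppo_clipped_objective(rng.normal(size=8), rng.normal(size=8), rng.normal(size=8)))
    print(vtrace_targets(rng.normal(size=5), rng.normal(size=5), 0.0, np.abs(rng.normal(size=5))))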

    A lightweight secure adaptive approach for internet-of-medical-things healthcare applications in edge-cloud-based networks

    Get PDF
    Mobile-cloud-based healthcare applications are increasingly common in practice; healthcare, transport, and shopping applications, for instance, are designed on the basis of the mobile cloud. Offloading and scheduling are fundamental mechanisms for executing mobile-cloud applications. However, mobile healthcare workflow applications have been widely ignored by these methods, despite being in demand for healthcare monitoring, live healthcare services, and biomedical firms; existing offloading and scheduling schemes do not consider the execution of workflow applications in their models. This paper develops a lightweight secure efficient offloading scheduling (LSEOS) metaheuristic model. LSEOS consists of lightweight and secure offloading and scheduling methods whose execution offloading delay is lower than that of existing methods. The objective of LSEOS is to run workflow applications on other nodes while minimizing delay and security risk in the system. The LSEOS metaheuristic consists of the following components: adaptive deadlines, sorting, and scheduling with neighborhood search schemes. Computational results revealed that, compared with current strategies for delay and security validation, LSEOS outperformed the available offloading and scheduling methods for workflow applications by 10% in terms of security ratio and by 29% in terms of delay.
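
    As a rough illustration of the general pattern this abstract describes (deadline-aware sorting followed by a neighborhood search over schedules), here is a minimal sketch. It is not the authors' LSEOS code; the task names, cost function, and parameters are hypothetical stand-ins.

    import random
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        exec_time: float   # estimated execution time on a node (s)
        deadline: float    # task deadline (s)

    def schedule_cost(order):
        """Total lateness of a schedule executed sequentially on one node."""
        t, lateness = 0.0, 0.0
        for task in order:
            t += task.exec_time
            lateness += max(0.0, t - task.deadline)
        return lateness

    def neighborhood_search(tasks, iterations=1000, seed=0):
        """Earliest-deadline-first seed, then improve by swapping random task pairs."""
        rng = random.Random(seed)
        best = sorted(tasks, key=lambda t: t.deadline)   # sorting phase (EDF)
        best_cost = schedule_cost(best)
        for _ in range(iterations):
            cand = best[:]
            i, j = rng.sample(range(len(cand)), 2)       # neighborhood move: swap two tasks
            cand[i], cand[j] = cand[j], cand[i]
            cost = schedule_cost(cand)
            if cost < best_cost:                         # accept only improving moves
                best, best_cost = cand, cost
        return best, best_cost

    tasks = [Task("ecg-stream", 2.0, 5.0), Task("image-scan", 4.0, 6.0),
             Task("report-sync", 1.0, 3.0)]
    order, cost = neighborhood_search(tasks)
    print([t.name for t in order], cost)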

    Cloud-Edge Orchestration for the Internet-of-Things: Architecture and AI-Powered Data Processing

    Get PDF
    The Internet-of-Things (IoT) has penetrated deeply into a wide range of important and critical sectors, including smart cities, water, transportation, manufacturing, and smart factories. Massive data are being acquired from a rapidly growing number of IoT devices, and efficient data processing is a necessity for meeting the diversified and stringent requirements of many emerging IoT applications. Due to their constrained computation and storage resources, IoT devices have resorted to powerful cloud computing to process their data. However, centralised, remote cloud computing may introduce unacceptable communication delay, since its physical location is far from the IoT devices. The edge cloud has been introduced to overcome this issue by moving cloud resources into closer proximity to IoT devices. The orchestration of, and cooperation between, the cloud and the edge provides a crucial computing architecture for IoT applications, and artificial intelligence (AI) is a powerful tool for enabling intelligent orchestration within this architecture. This paper first introduces this computing architecture from the perspective of IoT applications. It then investigates state-of-the-art proposals on AI-powered cloud-edge orchestration for the IoT. Finally, a list of potential research challenges and open issues is provided and discussed, which can serve as a useful resource for future research in this area.
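
    To make the cloud-edge trade-off at the heart of this architecture concrete, the toy sketch below chooses between an edge node and a remote cloud by comparing estimated response times (network latency plus processing time). It is an assumption-laden illustration of the general idea, not a method from the surveyed proposals; the node names and numbers are invented.

    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        network_latency_ms: float   # round-trip latency from the IoT device
        cycles_per_ms: float        # processing capacity

    def processing_time_ms(node, task_cycles):
        return task_cycles / node.cycles_per_ms

    def place(task_cycles, edge, cloud):
        """Pick the node with the lower estimated response time (latency + processing)."""
        t_edge = edge.network_latency_ms + processing_time_ms(edge, task_cycles)
        t_cloud = cloud.network_latency_ms + processing_time_ms(cloud, task_cycles)
        return (edge, t_edge) if t_edge <= t_cloud else (cloud, t_cloud)

    edge = Node("edge-cloudlet", network_latency_ms=5.0, cycles_per_ms=1.0e6)
    cloud = Node("remote-cloud", network_latency_ms=80.0, cycles_per_ms=1.0e7)

    for cycles in (1.0e7, 1.0e9):   # a light task and a heavy task
        node, t = place(cycles, edge, cloud)
        print(f"{cycles:.0e} cycles -> {node.name} ({t:.1f} ms)")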

    Multi-Layer Latency Aware Workload Assignment of E-Transport IoT Applications in Mobile Sensors Cloudlet Cloud Networks

    Get PDF
    With the emerging developments in wireless communication technologies, such as 5G and 6G, and in Internet of Things (IoT) sensors, the usage of E-Transport applications has been increasing progressively. These applications, such as E-Bus, E-Taxi, autonomous cars, E-Train, and E-Ambulance, are latency-sensitive workloads executed in distributed cloud networks. Nonetheless, many delays are present in cloudlet-based cloud networks, such as communication delay, round-trip delay, and migration delay during workload execution, and the distributed execution of workloads at different computing nodes during assignment is a challenging task. This paper proposes a novel Multi-Layer Latency (e.g., communication delay, round-trip delay, and migration delay) Aware Workload Assignment Strategy (MLAWAS) to allocate the workload of E-Transport applications to optimal computing nodes. MLAWAS consists of different components, such as Q-learning-aware assignment and an iterative method, which distribute workload in a dynamic environment where runtime changes in overloading and overheating remain controlled. Workload migration and VM migration are also part of MLAWAS. The goal is to minimize the average response time of applications. Simulation results demonstrate that MLAWAS achieves the minimum average response time compared with two other existing strategies.
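
    As a concrete, if greatly simplified, illustration of Q-learning-driven workload assignment, the sketch below learns which of three hypothetical computing nodes minimizes response time for a single workload class. It is not MLAWAS itself; the node names, response-time distributions, and hyperparameters are all assumptions.

    import random

    random.seed(0)

    NODES = ["mobile", "cloudlet", "cloud"]
    # Hypothetical mean response times (ms) per node for one workload class.
    MEAN_RESPONSE = {"mobile": 120.0, "cloudlet": 40.0, "cloud": 90.0}

    q = {n: 0.0 for n in NODES}          # single-state Q-table: Q[node]
    alpha, epsilon, episodes = 0.1, 0.2, 2000

    for _ in range(episodes):
        # epsilon-greedy choice of target node
        if random.random() < epsilon:
            node = random.choice(NODES)
        else:
            node = max(q, key=q.get)
        # observe a noisy response time and reward the agent for low latency
        response = random.gauss(MEAN_RESPONSE[node], 10.0)
        reward = -response
        q[node] += alpha * (reward - q[node])   # bandit-style Q update (single state)

    print("learned Q-values:", {n: round(v, 1) for n, v in q.items()})
    print("preferred node:", max(q, key=q.get))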

    A Decade of Research in Fog computing: Relevance, Challenges, and Future Directions

    Full text link
    Recent developments in the Internet of Things (IoT) and real-time applications have led to unprecedented growth in connected devices and the data they generate. Traditionally, this sensor data is transferred to and processed at the cloud, and control signals are sent back to the relevant actuators as part of IoT applications. This cloud-centric IoT model results in increased latencies and network load, and compromises privacy. To address these problems, Fog Computing was coined by Cisco in 2012, a decade ago; it utilizes proximal computational resources for processing sensor data. Ever since its proposal, fog computing has attracted significant attention, and the research community has focused on addressing different challenges such as fog frameworks, simulators, resource management, placement strategies, quality-of-service aspects, and fog economics. However, after a decade of research, we still do not see large-scale deployments of public/private fog networks that can be utilized to realize interesting IoT applications. In the literature, we only see pilot case studies, small-scale testbeds, and the use of simulators to demonstrate the scale of the proposed models addressing the respective technical challenges. There are several reasons for this; most importantly, fog computing has not yet presented a clear business case for companies and participating individuals. This paper summarizes the technical, non-functional, and economic challenges that have been posing hurdles to the adoption of fog computing, consolidating them across different clusters. The paper also summarizes the relevant academic and industrial contributions in addressing these challenges and provides future research directions for realizing real-time fog computing applications, also considering emerging trends such as federated learning and quantum computing.