5 research outputs found

    A DRL-Based Task Offloading Scheme for Server Decision-Making in Multi-Access Edge Computing

    No full text
    Multi-access edge computing (MEC), built on hierarchical cloud computing, offers abundant resources to support next-generation Internet of Things networks. However, several critical challenges remain open, including offloading methods, network dynamics, resource diversity, and server decision-making. Regarding offloading, most conventional approaches have neglected or oversimplified multi-MEC-server scenarios, fixating on single-MEC instances. This myopic focus fails to adapt computation offloading during MEC server overload, rendering such methods sub-optimal for real-world MEC deployments. To address this deficiency, we propose a deep reinforcement learning solution based on the soft actor-critic (SAC) algorithm for computation offloading and MEC server decision-making in multi-user, multi-MEC-server environments. Numerical experiments were conducted to evaluate the performance of the proposed solution. The results demonstrate that our approach significantly reduces latency, enhances energy efficiency, and achieves rapid, stable convergence, highlighting its superiority over existing methods.
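To illustrate the multi-server offloading decision this abstract describes, the sketch below scores each candidate MEC server by a weighted latency-plus-energy cost and picks the cheapest target. The cost model, weights, and server parameters are hypothetical, and a simple greedy rule stands in for the paper's trained SAC policy.

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    cpu_hz: float      # available compute capacity (cycles/s)
    load: float        # current utilization in [0, 1)
    uplink_bps: float  # link rate from the device to this server

def offload_cost(task_bits, cycles_per_bit, srv,
                 w_latency=1.0, w_energy=0.5, tx_power_w=0.5):
    """Hypothetical cost: transmission time + execution time on the
    remaining server capacity, plus transmit energy. A trained SAC
    actor would replace this hand-built rule."""
    tx_time = task_bits / srv.uplink_bps
    exec_time = task_bits * cycles_per_bit / (srv.cpu_hz * (1.0 - srv.load))
    energy = tx_power_w * tx_time
    return w_latency * (tx_time + exec_time) + w_energy * energy

def choose_server(task_bits, cycles_per_bit, servers):
    """Greedy stand-in for the learned decision: min-cost MEC server."""
    return min(servers, key=lambda s: offload_cost(task_bits, cycles_per_bit, s))

servers = [
    Server("edge-A", cpu_hz=5e9, load=0.9, uplink_bps=50e6),  # overloaded
    Server("edge-B", cpu_hz=3e9, load=0.2, uplink_bps=20e6),  # lightly loaded
]
best = choose_server(8e6, 100, servers)  # picks the lightly loaded server
```

Note how the overload-aware term `1.0 - srv.load` steers tasks away from a saturated server even when its raw CPU is faster, which is the failure mode single-MEC schemes cannot handle.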

    Extended Dual Virtual Paths Algorithm Considering the Timing Requirements of IEC 61850 Substation Message Types

    No full text

    F-DCS: FMI-Based Distributed CPS Simulation Framework with a Redundancy Reduction Algorithm

    No full text
    A cyber-physical system (CPS) is a distributed control system in which the cyber and physical parts are tightly interconnected. A representative CPS is an electric vehicle (EV), which combines a complex physical system with information and communication technology (ICT); because an EV is such a complex CPS, preliminary verification through simulation is essential for performance prediction and quantitative analysis. This paper proposes an FMI-based distributed CPS simulation framework (F-DCS) adopting a redundancy reduction algorithm (RRA) for validating EV simulations. The RRA improves simulation speed and efficiency by predicting and skipping the repeated portions of a given driving cycle while still maintaining accuracy. To evaluate the performance of the proposed F-DCS, an EV model was simulated with the RRA enabled. The results confirm that the F-DCS with the RRA reduced the simulation time by over 30% while maintaining conventional accuracy. Furthermore, the proposed F-DCS with the RRA provided results reflecting real-time sensor information.
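The redundancy-reduction idea, skipping repeated portions of a driving cycle by reusing earlier results, can be sketched as a segment cache. The segment model and toy energy function below are illustrative assumptions, not the paper's FMI co-simulation.

```python
def simulate_segment(segment, sim_calls):
    """Hypothetical stand-in for one co-simulation run over a segment;
    counts real simulator invocations and returns a toy energy value."""
    sim_calls[0] += 1
    return sum(v * 0.1 for v in segment)  # toy energy model

def simulate_cycle_with_rra(cycle, segment_len):
    """RRA-style sketch: identical speed segments are simulated once,
    and the cached result is reused on every repetition."""
    cache, sim_calls, total = {}, [0], 0.0
    for i in range(0, len(cycle), segment_len):
        seg = tuple(cycle[i:i + segment_len])
        if seg not in cache:
            cache[seg] = simulate_segment(seg, sim_calls)
        total += cache[seg]
    return total, sim_calls[0]

# A driving cycle whose second half repeats the first (common in test cycles).
half = [0, 10, 20, 30, 20, 10]
cycle = half * 2
energy, calls = simulate_cycle_with_rra(cycle, segment_len=3)
# Only 2 of the 4 segments are actually simulated; the rest hit the cache.
```

With a fully repeated half-cycle, the simulator runs for only half of the segments, which mirrors the reported >30% time reduction when real driving cycles contain recurring patterns.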

    DRL-OS: A Deep Reinforcement Learning-Based Offloading Scheduler in Mobile Edge Computing

    No full text
    Hardware bottlenecks can throttle smart device (SD) performance when executing computation-intensive, delay-sensitive applications. Task offloading can therefore be used to transfer computation-intensive tasks to an external server or processor in mobile edge computing. However, an offloaded task becomes useless when its processing is significantly delayed or its deadline has expired. Because task processing via offloading is uncertain, it is challenging for each SD to decide how to handle a task (compute locally, offload, or drop it). This study proposes a deep-reinforcement-learning-based offloading scheduler (DRL-OS) that considers the energy balance when selecting among these options. The proposed DRL-OS is based on the double dueling deep Q-network (D3QN) and selects an appropriate action by learning from the task size, deadline, queue state, and residual battery charge. The average battery level, drop rate, and average latency of the DRL-OS were measured in simulations to analyze scheduler performance. The DRL-OS exhibits a lower average battery level (up to 54%) and a lower drop rate (up to 42.5%) than existing schemes, and achieves an average latency that is 0.01 to over 0.25 s lower, despite subtle case-wise differences.
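The two D3QN ingredients named in the abstract are standard and can be shown directly: the dueling aggregation Q(s,a) = V(s) + A(s,a) - mean(A), and the double-DQN target in which the online network selects the next action while the target network evaluates it. The reward, discount, and Q-values below are illustrative, and a hand-written list replaces the learned networks.

```python
def dueling_q(value, advantages):
    """Dueling aggregation used by D3QN: Q(s,a) = V(s) + A(s,a) - mean(A)."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def double_dqn_target(reward, gamma, q_online_next, q_target_next, done=False):
    """Double-DQN target: the online net picks the next action, the target
    net evaluates it, which reduces Q-value overestimation."""
    if done:
        return reward
    a_star = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[a_star]

# The DRL-OS action set from the abstract: local compute, offload, or drop.
ACTIONS = ["local", "offload", "drop"]
q = dueling_q(value=1.0, advantages=[0.5, 2.0, -1.0])  # -> [1.0, 2.5, -0.5]
decision = ACTIONS[max(range(len(q)), key=lambda a: q[a])]  # "offload"
```

Subtracting the mean advantage keeps V and A identifiable, and decoupling action selection from evaluation is what distinguishes double DQN from the vanilla update.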