Cooperative end-edge-cloud computing and resource allocation for digital twin enabled 6G industrial IoT
End-edge-cloud (EEC) collaborative computing is regarded as one of the most promising technologies for the Industrial Internet of Things (IIoT), offering effective solutions for managing computationally intensive and delay-sensitive tasks. Achieving intelligent manufacturing in the context of 6G networks requires efficient resource scheduling schemes. However, improving quality of service and resource management is complicated by challenges such as the time-varying physical operating environments of IIoT, task heterogeneity, and the coupling of different resource types. In this work, we propose a digital twin (DT) assisted EEC collaborative computing scheme, in which the DT monitors the physical operating environment in real time and determines the optimal strategy, while also accounting for the potential deviation between real values and DT estimates. We aim to minimize the system cost by optimizing device association, offloading mode, bandwidth allocation, and task split ratio, subject to the maximum tolerable latency of each task and considering both latency and energy consumption. To solve the collaborative computation and resource allocation (CCRA) problem in the EEC, we propose a DT-assisted algorithm based on Multi-Agent Deep Deterministic Policy Gradient (MADDPG), in which each user equipment (UE) in the DT operates as an independent agent that autonomously determines the optimal offloading decision. Simulation results demonstrate the effectiveness of the proposed scheme, which significantly improves the task success rate compared to benchmark schemes while reducing the latency and energy consumption of task offloading with the assistance of the DT.
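The decentralized-actor / centralized-critic structure behind such a MADDPG scheme can be sketched as follows. This is a minimal illustration, not the paper's method: the linear actors, the two-dimensional action (task split ratio, bandwidth request), the three-agent setup, and the latency cost model are all hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class UEAgent:
    """One UE agent: a deterministic actor mapping its local DT state
    observation to a continuous action (task split ratio, bandwidth
    request). A real MADDPG actor would be a trained neural network;
    here a random linear map stands in for it."""
    def __init__(self, obs_dim, act_dim=2):
        self.W = rng.normal(scale=0.1, size=(act_dim, obs_dim))

    def act(self, obs):
        # Sigmoid squash to (0, 1): valid split ratios / bandwidth fractions.
        return 1.0 / (1.0 + np.exp(-self.W @ obs))

def centralized_critic(all_obs, all_acts):
    """Centralized training half of MADDPG: the critic scores the JOINT
    observations and actions of every agent. The latency model below is
    a toy stand-in for the paper's latency/energy system cost."""
    split = np.array([a[0] for a in all_acts])   # offload ratios
    bw = np.array([a[1] for a in all_acts])
    bw = bw / bw.sum()                           # agents share one channel
    local_delay = (1.0 - split) * 1.0            # cost of local computing
    offload_delay = split / np.maximum(bw, 1e-6) * 0.2
    return -(local_delay + offload_delay).sum()  # higher Q = lower cost

obs_dim = 4
agents = [UEAgent(obs_dim) for _ in range(3)]
observations = [rng.normal(size=obs_dim) for _ in agents]
actions = [ag.act(ob) for ag, ob in zip(agents, observations)]
q_value = centralized_critic(observations, actions)
```

At execution time each agent acts only on its own observation (decentralized execution), while the critic that drives learning sees everything, which is what makes the multi-agent training stable.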
Distributed Digital Twin Migration in Multi-tier Computing Systems
At the network edge, the multi-tier computing framework provides mobile users with efficient cloud-like computing and signal processing capabilities. Deploying digital twins in a multi-tier computing system helps realize ultra-reliable and low-latency interactions between users and their virtual objects. Since users may roam between edge servers with limited coverage, increasing the data synchronization latency to their digital twins, it is crucial to address the digital twin migration problem so as to enable real-time synchronization between digital twins and users. To this end, we formulate a joint digital twin migration, communication, and computation resource management problem to minimize the data synchronization latency, taking time-varying network states and user mobility into account. By decoupling edge servers under a deterministic migration strategy, we first derive the optimal communication and computation resource management policies at each server using convex optimization methods. We then transform the digital twin migration problem between servers into a decentralized partially observable Markov decision process (Dec-POMDP). To solve it, we propose a novel agent-contribution-enabled multi-agent reinforcement learning (AC-MARL) algorithm that enables distributed digital twin migration, adopting the counterfactual baseline method to characterize the contribution of each agent and facilitate cooperation among agents. In addition, we use embedding matrices to encode agents' actions and states, mitigating the scalability issue caused by the high-dimensional state space in AC-MARL. Simulation results based on two real-world taxi mobility trace datasets show that the proposed digital twin migration scheme reduces data synchronization latency by 23%-30% compared to the benchmark schemes.
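The counterfactual baseline idea can be illustrated with a toy two-user, two-server migration example. This is a sketch of the general technique, not the AC-MARL algorithm itself; the Q-values, uniform policies, and congestion numbers are invented for illustration.

```python
import numpy as np

def counterfactual_advantage(q_fn, joint_action, policies):
    """Counterfactual baseline: agent i's contribution is Q of the joint
    action minus the expected Q obtained by marginalising out ONLY agent
    i's action under its own policy, keeping the others' actions fixed."""
    advantages = []
    q_joint = q_fn(joint_action)
    for i, pi in enumerate(policies):
        baseline = 0.0
        for a_alt, prob in enumerate(pi):   # expectation over agent i's actions
            alt = list(joint_action)
            alt[i] = a_alt
            baseline += prob * q_fn(tuple(alt))
        advantages.append(q_joint - baseline)
    return advantages

# Toy setting: two users each pick a target edge server {0, 1} for their
# digital twin; colocating both twins on server 0 congests it (latency 8).
def q_fn(a):
    latency = {(0, 0): 8.0, (0, 1): 3.0, (1, 0): 3.0, (1, 1): 5.0}[a]
    return -latency

policies = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]
adv = counterfactual_advantage(q_fn, (0, 1), policies)
# Agent 1's advantage is larger: choosing server 1 avoided the congested
# joint action (0, 0), so more of the latency reduction is credited to it.
```

By giving each agent a baseline that varies only its own action, the shared team reward is decomposed into per-agent credit, which is what lets cooperating migration agents learn from a single global latency signal.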
Utility-oriented optimization for video streaming in UAV-aided MEC network: a DRL approach
The integration of unmanned aerial vehicles (UAVs) into future communication networks has received great attention and plays an essential role in many applications, such as military reconnaissance and fire monitoring. In this paper, we consider a UAV-aided video transmission system based on mobile edge computing (MEC). Given the short-latency requirements, the UAV acts both as a MEC server that transcodes the videos and as a relay that forwards the transcoded videos to the ground base station. Subject to constraints on discrete variables and short latency, we aim to maximize the cumulative utility by jointly optimizing the power allocation, video transcoding policy, computational resource allocation, and UAV flight trajectory. This non-convex optimization problem is modeled as a Markov decision process (MDP) and solved with a deep deterministic policy gradient (DDPG) algorithm, which realizes continuous action control through policy iteration. Simulation results show that the DDPG algorithm outperforms the deep Q-network (DQN) and actor-critic (AC) algorithms.
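Two DDPG ingredients this abstract relies on, a deterministic actor whose outputs are squashed into box constraints for continuous control, and Polyak-averaged target networks, can be sketched as follows. The action ranges (transmit power, UAV heading), network sizes, and update rate are hypothetical, not values from the paper.

```python
import numpy as np

TAU = 0.01  # soft-update rate (hypothetical)

def soft_update(target, online, tau=TAU):
    """DDPG-style Polyak averaging: the target network slowly trails the
    online network, stabilising the bootstrapped critic targets."""
    for k in target:
        target[k] = (1 - tau) * target[k] + tau * online[k]
    return target

def actor(params, state):
    """Deterministic policy: squash raw outputs into box constraints so
    continuous controls stay feasible (illustrative ranges)."""
    raw = params["W"] @ state + params["b"]
    power = 0.5 * (np.tanh(raw[0]) + 1) * 2.0   # transmit power in [0, 2] W
    heading = np.tanh(raw[1]) * np.pi           # UAV heading in [-pi, pi]
    return np.array([power, heading])

rng = np.random.default_rng(1)
online = {"W": rng.normal(size=(2, 4)), "b": np.zeros(2)}
target = {k: np.zeros_like(v) for k, v in online.items()}

state = rng.normal(size=4)
# Gaussian exploration noise on top of the deterministic action,
# as is common when training DDPG-style agents.
action = actor(online, state) + rng.normal(scale=0.1, size=2)
target = soft_update(target, online)
```

The tanh squashing is what lets one policy network emit mixed physical quantities (power, heading) on very different scales, which is why DDPG suits joint power/trajectory control better than discrete-action methods such as DQN.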
Cybersecurity in Motion: A Survey of Challenges and Requirements for Future Test Facilities of CAVs
The way we travel is changing rapidly, and Cooperative Intelligent Transportation Systems (C-ITSs) are at the forefront of this evolution. However, the adoption of C-ITSs introduces new risks and challenges, making cybersecurity a top priority for ensuring safety and reliability. Building on this premise, this paper introduces an envisaged Cybersecurity Centre of Excellence (CSCE) designed to bolster research on, testing of, and evaluation of the cybersecurity of C-ITSs. We explore the design, functionality, and challenges of the CSCE's testing facilities, outlining the technological, security, and societal requirements. Through a thorough survey and analysis, we assess the effectiveness of these systems in detecting and mitigating potential threats, highlighting their flexibility to adapt to future C-ITSs. Finally, we identify currently unresolved challenges in various C-ITS domains, with the aim of motivating further research into the cybersecurity of C-ITSs.
Integration of 6G signal processing, communication, and computing based on information timeliness-aware digital twin
6G has emerged as a feasible solution for enabling intelligent electric vehicle (EV) energy management. It can be further combined with a digital twin (DT) to optimize resource management under unobservable information. However, the lack of a reliable information timeliness guarantee increases DT inconsistency and undermines resource management optimality. To address this challenge, we investigate DT-empowered resource management from the perspective of age of information (AoI) optimization. We utilize AoI as an effective information timeliness metric to measure DT consistency and construct an AoI-optimal DT (AoIo-DT) to assist resource management by providing more accurate state estimates. A joint optimization algorithm for integrating signal processing, communication, and computing, based on AoI-aware deep actor-critic (DAC) with DT assistance, is proposed to achieve a balanced tradeoff between DT consistency and the precision of EV energy management. It further improves the learning convergence and optimality of DAC by prioritizing training on data samples with smaller AoI. Numerical results verify its performance gains in AoI minimization and EV energy management optimization.
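The AoI metric itself is simple to compute: the age grows linearly with time and drops, when a status update is delivered, to that update's age on arrival (delivery time minus generation time). A minimal sketch, assuming unit time steps and integer update timestamps (the update times below are hypothetical, not the paper's data):

```python
def age_of_information(updates, horizon):
    """Trace the AoI of a digital twin over `horizon` unit time steps.
    `updates` maps delivery time -> generation time of the delivered
    state sample; stale (out-of-order) updates do not reduce the age."""
    last_gen = None
    trace = []
    for t in range(horizon + 1):
        if t in updates and (last_gen is None or updates[t] > last_gen):
            last_gen = updates[t]   # freshest known state at the DT
        trace.append(t - last_gen if last_gen is not None else float(t))
    return trace

# Update generated at t=1 arrives at t=2; one generated at t=4 arrives
# at t=5. The age resets to the in-flight delay (1) at each delivery.
trace = age_of_information({2: 1, 5: 4}, horizon=6)
```

Averaging such a trace gives the quantity an AoI-optimal DT would minimize: the smaller the mean age, the closer the twin's state estimates track the physical EV.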
Modern computing: Vision and challenges
Over the past six decades, the field of computing systems has experienced significant transformations, profoundly impacting society through transformative developments such as the Internet and the commodification of computing. Underpinned by technological advancements, computer systems, far from being static, have continuously evolved and adapted to cover multifaceted societal niches. This has led to new paradigms such as cloud, fog, and edge computing and the Internet of Things (IoT), which offer fresh economic and creative opportunities. Nevertheless, this rapid change poses complex research challenges, especially in maximizing potential and enhancing functionality. To maintain an economical level of performance that meets ever-tighter requirements, one must understand the drivers of new model emergence and expansion, and how contemporary challenges differ from past ones. To that end, this article investigates and assesses the factors influencing the evolution of computing systems, covering established systems and architectures as well as newer developments such as serverless computing, quantum computing, and on-device AI at the edge. Tracing the technological trajectory reveals several trends: the rapid obsolescence of frameworks due to business and technical constraints, a move towards specialized systems and models, and varying approaches to centralized and decentralized control. This comprehensive review of modern computing systems looks ahead to the future of research in the field, highlighting key challenges and emerging trends, and underscoring their importance in cost-effectively driving technological progress.