
    Cooperative end-edge-cloud computing and resource allocation for digital twin enabled 6G industrial IoT

    End-edge-cloud (EEC) collaborative computing is regarded as one of the most promising technologies for the Industrial Internet of Things (IIoT), offering effective solutions for managing computationally intensive and delay-sensitive tasks. Achieving intelligent manufacturing in the context of 6G networks requires efficient resource scheduling schemes. However, improving quality of service and resource management is complicated by the time-varying physical operating environment of the IIoT, task heterogeneity, and the coupling of different resource types. In this work, we propose a digital twin (DT) assisted EEC collaborative computing scheme, in which the DT monitors the physical operating environment in real time and determines the optimal strategy, while the potential deviation between real values and DT estimates is also taken into account. We aim to minimize the system cost by optimizing device association, offloading mode, bandwidth allocation, and task split ratio, subject to each task's maximum tolerable latency and accounting for both latency and energy consumption. To solve the collaborative computation and resource allocation (CCRA) problem in the EEC, we propose a DT-assisted algorithm based on Multi-Agent Deep Deterministic Policy Gradient (MADDPG), in which each user end (UE) in the DT operates as an independent agent that autonomously determines its offloading decision. Simulation results demonstrate the effectiveness of the proposed scheme, which significantly improves the task success rate compared to benchmark schemes while reducing the latency and energy consumption of task offloading with the assistance of the DT.
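    To make the agent structure concrete, the sketch below shows how a MADDPG-style scheme of this kind is typically wired: each UE has its own actor that maps a locally observed (DT-estimated) state to a continuous action covering offloading mode, bandwidth share, and task split ratio, while a centralized critic scores the joint state-action of all agents during training. The class names, network sizes, state layout, and action encoding are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the actor/critic wiring in a MADDPG-style offloading
# scheme. All names, dimensions, and the action layout are illustrative
# assumptions for exposition, not the paper's implementation.
import torch
import torch.nn as nn

class UEActor(nn.Module):
    """Per-UE actor: maps a DT-estimated local state to a continuous action
    [offload_mode, bandwidth_fraction, task_split_ratio], each in (0, 1)."""
    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Sigmoid squashes each action component into (0, 1); the offload
        # mode is kept continuous here (a relaxation) and would be
        # thresholded or sampled when executing the decision.
        return torch.sigmoid(self.net(state))

class CentralCritic(nn.Module):
    """Centralized critic: scores the joint state-action of all UE agents,
    which is what distinguishes MADDPG training from independent DDPG."""
    def __init__(self, n_agents: int, state_dim: int, action_dim: int = 3,
                 hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_agents * (state_dim + action_dim), hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, states: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
        # states: (batch, n_agents, state_dim); actions: (batch, n_agents, action_dim)
        joint = torch.cat([states.flatten(1), actions.flatten(1)], dim=-1)
        return self.net(joint)
```

    At execution time each UE runs only its own actor on local (DT-estimated) observations; the centralized critic, along with the target networks and exploration noise omitted here, is used only during training.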

    Distributed Digital Twin Migration in Multi-tier Computing Systems

    At the network edge, the multi-tier computing framework provides mobile users with efficient cloud-like computing and signal processing capabilities. Deploying digital twins in a multi-tier computing system helps realize ultra-reliable, low-latency interactions between users and their virtual objects. Since users may roam between edge servers with limited coverage, increasing the data synchronization latency to their digital twins, it is crucial to address the digital twin migration problem to enable real-time synchronization between digital twins and users. To this end, we formulate a joint digital twin migration, communication, and computation resource management problem to minimize the data synchronization latency, taking time-varying network states and user mobility into account. By decoupling edge servers under a deterministic migration strategy, we first derive the optimal communication and computation resource management policies at each server using convex optimization methods. We then transform the digital twin migration problem between servers into a decentralized partially observable Markov decision process (Dec-POMDP). To solve it, we propose a novel agent-contribution-enabled multi-agent reinforcement learning (AC-MARL) algorithm that enables distributed digital twin migration for users, in which a counterfactual baseline is adopted to characterize each agent's contribution and facilitate cooperation among agents. In addition, we use embedding matrices to encode agents' actions and states, alleviating the scalability issue posed by the high-dimensional state space in AC-MARL. Simulation results based on two real-world taxi mobility trace datasets show that the proposed digital twin migration scheme reduces users' data synchronization latency by 23%-30% compared to benchmark schemes.
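    The counterfactual-baseline idea behind AC-MARL's credit assignment can be illustrated with a short sketch in the style of COMA-like methods: each agent's advantage is the critic's Q-value for the action it actually took, minus a policy-weighted average over the actions it could have taken, with the other agents' actions held fixed. The discrete action space (candidate edge servers) and the toy numbers below are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch of a counterfactual-baseline advantage for per-agent
# credit assignment. The action space (candidate edge servers) and the
# toy numbers are illustrative assumptions.
import numpy as np

def counterfactual_advantage(q_values: np.ndarray,
                             policy: np.ndarray,
                             chosen: int) -> float:
    """q_values[a]: centralized critic's Q for this agent taking action a,
    with all other agents' actions held fixed.
    policy[a]: this agent's current probability of taking action a.
    chosen: the migration action the agent actually took.

    The policy-weighted baseline marginalizes out this agent's own action,
    so the advantage reflects only what its choice contributed."""
    baseline = float(np.dot(policy, q_values))
    return float(q_values[chosen]) - baseline

# Toy usage: an agent choosing among 3 candidate edge servers to which
# its user's digital twin could be migrated.
q = np.array([1.2, 0.4, 0.9])    # counterfactual Q-values from the critic
pi = np.array([0.6, 0.1, 0.3])   # agent's current migration policy
print(counterfactual_advantage(q, pi, chosen=0))  # 1.2 - 1.03 ≈ 0.17
```

    Advantages of this form give each agent's migration policy a gradient signal tied to its own contribution rather than the shared team reward alone, which is what facilitates cooperation among agents.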

    Cybersecurity in Motion: A Survey of Challenges and Requirements for Future Test Facilities of CAVs

    The way we travel is changing rapidly, and Cooperative Intelligent Transportation Systems (C-ITSs) are at the forefront of this evolution. However, the adoption of C-ITSs introduces new risks and challenges, making cybersecurity a top priority for ensuring safety and reliability. Building on this premise, this paper introduces an envisaged Cybersecurity Centre of Excellence (CSCE) designed to bolster research on, and the testing and evaluation of, the cybersecurity of C-ITSs. We explore the design, functionality, and challenges of the CSCE's testing facilities, outlining the technological, security, and societal requirements. Through a thorough survey and analysis, we assess the effectiveness of these systems in detecting and mitigating potential threats, highlighting their flexibility to adapt to future C-ITSs. Finally, we identify currently unresolved challenges in various C-ITS domains, with the aim of motivating further research into the cybersecurity of C-ITSs.

    Modern computing: Vision and challenges

    Over the past six decades, the field of computing systems has undergone significant transformations that have profoundly impacted society, with developments such as the Internet and the commodification of computing. Underpinned by technological advancements, computer systems, far from being static, have continuously evolved and adapted to cover multifaceted societal niches. This has led to new paradigms such as cloud, fog, and edge computing and the Internet of Things (IoT), which offer fresh economic and creative opportunities. Nevertheless, this rapid change poses complex research challenges, especially in maximizing potential and enhancing functionality. To maintain an economical level of performance that meets ever-tighter requirements, one must understand the drivers behind the emergence and expansion of new models, and how contemporary challenges differ from past ones. To that end, this article investigates and assesses the factors influencing the evolution of computing systems, covering established systems and architectures as well as newer developments such as serverless computing, quantum computing, and on-device AI at the edge. Tracing the technological trajectory reveals several trends: the rapid obsolescence of frameworks due to business and technical constraints, a move towards specialized systems and models, and varying approaches to centralized and decentralized control. This comprehensive review of modern computing systems looks ahead to the future of research in the field, highlighting key challenges and emerging trends and underscoring their importance in cost-effectively driving technological progress.