
    Cooperative end-edge-cloud computing and resource allocation for digital twin enabled 6G industrial IoT

    End-edge-cloud (EEC) collaborative computing is regarded as one of the most promising technologies for the Industrial Internet of Things (IIoT), offering effective solutions for managing computationally intensive and delay-sensitive tasks. Achieving intelligent manufacturing in the context of 6G networks requires efficient resource scheduling schemes. However, improving quality of service and resource management is complicated by the time-varying physical operating environments of IIoT, task heterogeneity, and the coupling of different resource types. In this work, we propose a digital twin (DT) assisted EEC collaborative computing scheme, in which the DT monitors the physical operating environment in real time and determines the optimal strategy, while accounting for potential deviations between real values and DT estimates. We aim to minimize the system cost by optimizing device association, offloading mode, bandwidth allocation, and task split ratio, subject to the maximum tolerable latency of each task and considering both latency and energy consumption. To solve the collaborative computation and resource allocation (CCRA) problem in the EEC, we propose a DT-assisted algorithm based on Multi-Agent Deep Deterministic Policy Gradient (MADDPG), in which each user equipment (UE) in the DT operates as an independent agent that autonomously determines its offloading decision. Simulation results demonstrate the effectiveness of the proposed scheme, which significantly improves the task success rate compared to benchmark schemes while reducing the latency and energy consumption of task offloading with the assistance of the DT.
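
    The cost model below is a minimal, self-contained sketch of the latency/energy trade-off behind such an offloading objective. It is not the paper's exact formulation: the parameter names, the standard CMOS energy model (kappa * cycles * f^2), the Shannon-rate uplink, and the assumption that the local and offloaded portions run in parallel are all illustrative assumptions. Sweeping the split ratio shows the kind of decision each MADDPG agent learns to make.

```python
import math

# Illustrative sketch only: parameter values and the parallel local/edge
# execution model are assumptions, not the paper's exact system model.
def system_cost(task_bits, split_ratio, f_local=1e9, f_edge=1e10,
                bandwidth_hz=1e6, snr=100.0, cycles_per_bit=1000.0,
                kappa=1e-27, p_tx=0.5, w_lat=0.5, w_en=0.5, deadline_s=0.5):
    """Weighted latency+energy cost of splitting a task between device and edge."""
    local_bits = split_ratio * task_bits
    edge_bits = (1.0 - split_ratio) * task_bits

    # Local computing: latency = cycles / frequency; energy via the CMOS model
    t_local = local_bits * cycles_per_bit / f_local
    e_local = kappa * local_bits * cycles_per_bit * f_local ** 2

    # Offloading: Shannon-rate uplink transmission, then edge computation
    rate_bps = bandwidth_hz * math.log2(1.0 + snr)
    t_tx = edge_bits / rate_bps
    t_edge = edge_bits * cycles_per_bit / f_edge
    e_tx = p_tx * t_tx

    latency = max(t_local, t_tx + t_edge)   # both portions run in parallel
    energy = e_local + e_tx
    if latency > deadline_s:
        return float("inf")                 # maximum tolerable latency violated
    return w_lat * latency + w_en * energy

# Sweep the split ratio: the minimizer traces the latency/energy trade-off
best_cost, best_ratio = min((system_cost(1e6, r / 10), r / 10) for r in range(11))
print(f"best split ratio {best_ratio:.1f} with cost {best_cost:.3f}")
```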

    LIPIcs, Volume 251, ITCS 2023, Complete Volume


    Multi-objective resource optimization in space-aerial-ground-sea integrated networks

    Space-air-ground-sea integrated (SAGSI) networks are envisioned to connect satellite, aerial, ground, and sea networks to provide connectivity everywhere, all the time, in sixth-generation (6G) networks. However, the success of SAGSI networks is constrained by several challenges, including resource optimization when users have diverse requirements and applications. We present a comprehensive review of SAGSI networks from a resource optimization perspective. We discuss use case scenarios and possible applications of SAGSI networks, and frame the resource optimization discussion around the challenges these networks pose. In our review, we categorize resource optimization techniques by objective: throughput and capacity maximization; delay minimization; energy consumption; task offloading; task scheduling; resource allocation or utilization; network operation cost; outage probability and average age of information; joint optimization (data rate difference, storage or caching, CPU cycle frequency); overall network performance and performance degradation; software-defined networking; and intelligent surveillance and relay communication. We then formulate a mathematical framework for maximizing energy efficiency, resource utilization, and user association, optimizing user association subject to constraints on transmit power, data rate, and priority-based user association. A binary decision variable associates users with system resources; since the decision variable is binary and the constraints are linear, the formulated problem is a binary linear programming problem. Based on this framework, we simulate and compare the performance of three algorithms: the branch and bound algorithm, the interior point method, and the barrier simplex algorithm. Simulation results show that the branch and bound algorithm achieves the best results, so we adopt it as the benchmark; however, its complexity grows exponentially with the number of users and stations in the SAGSI network. The interior point method and the barrier simplex algorithm achieve results comparable to the benchmark at much lower complexity. Finally, we discuss future research directions and challenges of resource optimization in SAGSI networks.
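
    As a concrete illustration of the formulated problem class, the toy instance below (an illustrative assumption, not the paper's simulation setup) encodes user association as a binary linear program and solves it with SciPy's milp, whose HiGHS backend applies branch and bound, the paper's benchmark algorithm.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(0)
n_users, n_stations = 6, 3
rate = rng.uniform(1.0, 10.0, size=(n_users, n_stations))  # achievable rates
capacity = np.array([3, 2, 2])                             # users per station

c = -rate.ravel()  # milp minimizes, so negate to maximize total rate

# x[u, s] = 1 iff user u is associated with station s
A_user = np.kron(np.eye(n_users), np.ones(n_stations))      # one station per user
A_station = np.kron(np.ones(n_users), np.eye(n_stations))   # station capacity

constraints = [
    LinearConstraint(A_user, 1, 1),           # exactly one association per user
    LinearConstraint(A_station, 0, capacity), # respect per-station capacity
]
res = milp(c=c, constraints=constraints,
           integrality=np.ones_like(c),       # all variables integer in [0, 1]
           bounds=Bounds(0, 1))
x = res.x.reshape(n_users, n_stations).round().astype(int)
print("association matrix:\n", x)
print("total rate:", -res.fun)
```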

    Investigating the Effects of Network Dynamics on Quality of Delivery Prediction and Monitoring for Video Delivery Networks

    Video streaming over the Internet requires an optimized delivery system, given advances in network architecture such as Software Defined Networks. Machine Learning (ML) models have been deployed to predict the quality of video streams. Some of these efforts predict Quality of Delivery (QoD) metrics of the video stream, measuring stream quality from the network perspective. In most cases, however, these models have either treated the ML algorithms as black boxes or failed to capture the network dynamics of the associated video streams. This PhD thesis investigates the effects of network dynamics on QoD prediction using ML techniques. The hypothesis investigated is that ML techniques that model the underlying network dynamics achieve accurate QoD and video quality predictions and measurements. The results demonstrate that the proposed techniques offer performance gains over approaches that fail to consider network dynamics, and highlight that choosing the correct model, by modelling the dynamics of the network infrastructure, is crucial to the accuracy of ML predictions. These results are significant because the improved performance is achieved at no additional computational or storage cost. The techniques can help network managers, data center operators, and video service providers take proactive and corrective actions for improved network efficiency and effectiveness.
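
    The sketch below illustrates the thesis's central idea on synthetic data (all feature names and the target construction are illustrative assumptions): augment static network snapshots with rolling-window statistics that capture network dynamics, then train a standard regressor to predict a QoD metric.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "throughput_mbps": rng.gamma(4.0, 2.0, n),
    "loss_rate": rng.beta(1.0, 50.0, n),
    "rtt_ms": rng.gamma(3.0, 10.0, n),
})
# Dynamics-aware features: short rolling windows over the measurement series
for col in df.columns.tolist():
    df[f"{col}_mean5"] = df[col].rolling(5, min_periods=1).mean()
    df[f"{col}_std5"] = df[col].rolling(5, min_periods=1).std().fillna(0.0)

# Synthetic QoD target built from the dynamic features, for illustration only
y = df["throughput_mbps_mean5"] * (1 - 5 * df["loss_rate"]) - 0.01 * df["rtt_ms_std5"]

X_tr, X_te, y_tr, y_te = train_test_split(df, y, shuffle=False)  # keep time order
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out window:", model.score(X_te, y_te))
```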

    SAgric-IoT: an IoT-based platform and deep learning for greenhouse monitoring

    The integration of the Internet of Things (IoT) and convolutional neural networks (CNNs) is a growing topic of interest for researchers as a technology that will help transform agriculture. IoT will enable farmers to decide and act based on data collected from sensor nodes about field conditions, rather than purely on experience, thus minimizing the wastage of supplies (seeds, water, pesticides, and fumigants). CNNs complement monitoring systems with tasks such as early detection of crop diseases or predicting the quantity of consumable resources and supplies (water, fertilizers) needed to increase productivity. This paper proposes SAgric-IoT, a technology platform based on IoT and CNNs for precision agriculture, which monitors environmental and physical variables, provides early disease detection, and automatically controls irrigation and fertilization in greenhouses. The results show that SAgric-IoT is a reliable IoT platform with low packet loss that considerably reduces energy consumption, and that its disease identification and classification process reaches an accuracy of over 90%.
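
    The sketch below illustrates the platform's two roles in miniature; the architecture, input size, and moisture threshold are illustrative assumptions rather than the SAgric-IoT design: a small CNN classifies leaf images as healthy or diseased, and a simple rule actuates irrigation from soil-moisture readings reported by sensor nodes.

```python
import torch
import torch.nn as nn

class LeafCNN(nn.Module):
    """Tiny CNN classifier for 64x64 RGB leaf images (illustrative architecture)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)  # 64x64 -> 16x16 after pooling

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def irrigation_command(soil_moisture_pct, threshold=35.0):
    """Open the valve when soil moisture drops below the threshold (assumed value)."""
    return "OPEN_VALVE" if soil_moisture_pct < threshold else "CLOSE_VALVE"

model = LeafCNN()
logits = model(torch.randn(1, 3, 64, 64))   # one fake leaf image
print(logits.shape, irrigation_command(28.0))
```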

    Resource Allocation in Networking and Computing Systems: A Security and Dependability Perspective

    In recent years, there has been a trend to integrate networking and computing systems, whose management is becoming increasingly complex. Resource allocation is one of the crucial aspects of managing such systems and is affected by this increased complexity. Resource allocation strategies aim to maximize performance, system utilization, and profit by considering virtualization technologies, heterogeneous resources, context awareness, and other features. In such a complex scenario, security and dependability are vital concerns that must be considered in future computing and networking systems in order to provide advanced services such as mission-critical applications. This paper provides a comprehensive survey of existing literature that considers security and dependability for resource allocation in computing and networking systems. Current research works are categorized by the type of resource allocated for different technologies, scenarios, issues, attributes, and solutions. The paper covers research on resource allocation that considers security and dependability both singularly and jointly, and discusses future research directions for resource allocation. It shows that only a few works consider security and dependability, even singularly, in resource allocation for future computing and networking systems, and highlights the importance of considering them jointly and the need for intelligent, adaptive, and robust solutions. This paper aims to help researchers effectively consider security and dependability in future networking and computing systems.

    Device-Edge Cooperative Fine-Tuning of Foundation Models as a 6G Service

    Foundation models (FoMos), referring to large-scale AI models, possess human-like capabilities and can perform competitively in domains of human intelligence. The breakthrough in FoMos has inspired researchers to deploy such models in sixth-generation (6G) mobile networks to automate a broad range of tasks in next-generation mobile applications. While the sizes of FoMos are reaching their peak, their next phase is expected to focus on fine-tuning the models for specific downstream tasks. This inspires us to propose the vision of FoMo fine-tuning as a 6G service. Its key feature is the exploitation of existing parameter-efficient fine-tuning (PEFT) techniques to tweak only a small fraction of model weights so that a FoMo becomes customized for a specific task. To materialize this vision, we survey the state-of-the-art in PEFT and then present a novel device-edge fine-tuning (DEFT) framework for providing efficient and privacy-preserving fine-tuning services at the 6G network edge. The framework comprises the following techniques: 1) control of fine-tuning parameter sizes in different transformer blocks of a FoMo; 2) over-the-air computation for realizing neural connections in DEFT; 3) federated DEFT in a multi-device system by downloading a FoMo emulator or gradients; 4) on-the-fly prompt-ensemble tuning; and 5) device-to-device prompt transfer among devices. Experiments are conducted using pre-trained FoMos with up to 11 billion parameters to demonstrate the effectiveness of the DEFT techniques. The article concludes by presenting future research opportunities.
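
    The snippet below is a minimal plain-PyTorch sketch of the PEFT principle the article builds on, using a LoRA-style low-rank adapter (a representative PEFT technique, not necessarily the article's exact method): the pre-trained weights are frozen and only a low-rank update is trained, so a device-edge pair needs to exchange only a tiny fraction of the parameters.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update (LoRA-style)."""
    def __init__(self, base: nn.Linear, rank=8, alpha=16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False            # freeze the FoMo's own weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero-init: no change at start
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(nn.Linear(4096, 4096))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable fraction: {trainable / total:.4%}")  # well under 1% of the weights
```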

    Heterogeneous Acceleration for 5G New Radio Channel Modelling Using FPGAs and GPUs

    Get PDF
    The abstract is in the attachment.