
    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication and resource allocation and scheduling. We then map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems to better understand their goals and their methodology, which helps evaluate their applicability for solving similar problems. This taxonomy also provides a "gap analysis" of the area, through which researchers can identify new issues for investigation. Finally, we hope that the proposed taxonomy and mapping provide an easy way for new practitioners to understand this complex area of research. Comment: 46 pages, 16 figures, Technical Report.

    Communication and Control in Collaborative UAVs: Recent Advances and Future Trends

    The recent progress in unmanned aerial vehicle (UAV) technology has significantly advanced UAV-based applications in military, civil, and commercial domains. Nevertheless, the challenges of establishing high-speed communication links, designing flexible control strategies, and developing efficient collaborative decision-making algorithms for a swarm of UAVs limit their autonomy, robustness, and reliability. Thus, collaborative communication has received growing attention, as it allows a swarm of UAVs to coordinate and communicate autonomously for the cooperative completion of tasks in a short time with improved efficiency and reliability. This work presents a comprehensive review of collaborative communication in multi-UAV systems. We thoroughly discuss the characteristics of intelligent UAVs and their communication and control requirements for autonomous collaboration and coordination. Moreover, we review various UAV collaboration tasks, summarize the applications of UAV swarm networks in dense urban environments, and present use-case scenarios to highlight current developments of UAV-based applications in various domains. Finally, we identify several exciting future research directions that need attention to advance research in collaborative UAVs.

    Dynamic Resource Allocation in Industrial Internet of Things (IIoT) using Machine Learning Approaches

    In today's era of rapid smart equipment development and the Industrial Revolution, the application scenarios for Internet of Things (IoT) technology are expanding widely. The combination of IoT and industrial manufacturing systems gives rise to the Industrial IoT (IIoT). However, due to resource limitations such as computational units and battery capacity in IIoT devices (IIEs), it is crucial to execute computationally intensive tasks efficiently. The dynamic and continuous generation of tasks poses a significant challenge to managing the limited resources in the IIoT environment. This paper proposes a collaborative approach for optimal offloading and resource allocation of highly sensitive industrial IoT tasks. First, the computation-intensive IIoT tasks are transformed into a directed acyclic graph (DAG). Then, task offloading is treated as an optimization problem, taking into account the models of processor resources and energy consumption for the offloading scheme. Lastly, a dynamic resource allocation approach is introduced to allocate computing resources on the edge-cloud server for the execution of computation-intensive tasks. The proposed joint offloading and scheduling (JOS) algorithm builds the task DAG and prepares an offloading queue. The queue is constructed using collaborative Q-learning based reinforcement learning, which allocates optimal resources for executing the tasks in the offloading queue; a machine learning approach is used to predict and allocate these resources. The paper compares conventional and machine learning-based resource allocation methods, and the machine learning approach performs better in terms of response time, delay, and energy consumption. The results show that energy usage increases with task size, and response time increases with the number of users. Among the algorithms compared, JOS has the lowest waiting time, followed by DQN, while Q-learning performs the worst. Based on these findings, the paper recommends adopting the machine learning approach, specifically the JOS algorithm, for joint offloading and resource allocation.
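    The Q-learning-driven offloading decision described above can be sketched in miniature (all task sizes, processing speeds, and energy costs below are invented for illustration, and each task is treated as an independent one-step state rather than a full DAG): a tabular agent learns, per task, whether local execution or edge offloading minimizes a delay-plus-energy cost.

```python
import random

random.seed(0)

# Illustrative task set as (cpu_cycles, data_size) pairs; values are
# hypothetical, not taken from the paper.
tasks = [(2.0, 1.0), (2.0, 4.0), (8.0, 1.0)]

LOCAL_SPEED, EDGE_SPEED = 1.0, 5.0   # CPU cycles processed per second
TX_RATE = 1.0                        # data units transmitted per second
E_LOCAL, E_TX = 0.5, 1.0             # energy per local cycle / per data unit sent

def cost(task, action):
    """Delay + energy of running a task locally (action 0) or at the edge (action 1)."""
    cycles, data = task
    if action == 0:                  # local execution: compute delay + compute energy
        return cycles / LOCAL_SPEED + E_LOCAL * cycles
    # offloading: transmission delay + edge compute delay + transmission energy
    return data / TX_RATE + cycles / EDGE_SPEED + E_TX * data

# Tabular Q-learning: one row per task, two actions (local / offload).
Q = [[0.0, 0.0] for _ in tasks]
alpha, epsilon = 0.1, 0.2
for _ in range(2000):
    for s, task in enumerate(tasks):
        if random.random() < epsilon:
            a = random.randrange(2)            # explore
        else:
            a = Q[s].index(max(Q[s]))          # exploit current estimate
        reward = -cost(task, a)                # lower delay+energy => higher reward
        Q[s][a] += alpha * (reward - Q[s][a])  # bandit-style Q update

policy = [q.index(max(q)) for q in Q]          # 0 = run locally, 1 = offload
print(policy)
```

The learned policy offloads the compute-heavy and small-payload tasks while keeping the data-heavy task local. The paper's full JOS additionally respects DAG precedence constraints and edge-cloud server load, which a realistic implementation would fold into the state and reward.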

    A Survey on Semantic Communications for Intelligent Wireless Networks

    With the deployment of 6G technology, it is envisioned that the competitive edge of wireless networks will be sustained and the next decade's communication requirements will be satisfied. 6G will also aim to aid the development of a ubiquitous and mobile human society, while providing solutions to key challenges such as coverage and capacity. In addition, 6G will focus on providing intelligent use cases and applications using higher data rates over millimeter-wave and terahertz frequencies. However, at higher frequencies, multiple undesired phenomena such as atmospheric absorption and blockage occur, creating a bottleneck owing to resource (spectrum and energy) scarcity. Hence, continuing the trend of reproducing at the receiver the exact information sent by the transmitter will result in a never-ending need for higher bandwidth. A possible solution to this challenge lies in semantic communications, which focuses on the meaning (context) of the received data as opposed to merely reproducing the transmitted data correctly. This in turn requires less bandwidth and reduces the bottleneck caused by the various undesired phenomena. In this respect, the current article presents a detailed survey of recent technological trends in semantic communications for intelligent wireless networks. We focus on the semantic communications architecture, including the model and source and channel coding. Next, we detail cross-layer interaction and various goal-oriented communication applications. We also present overall semantic communications trends in detail and identify challenges that need timely solutions before the practical implementation of semantic communications within 6G wireless technology. Our survey article is an attempt to contribute significantly towards initiating future research directions in the area of semantic communications for intelligent 6G wireless networks.
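    The bandwidth argument behind semantic communications can be made concrete with a toy sketch (the sensor trace, trend-extraction rule, and payload format below are invented for illustration): instead of reproducing every sample bit-exactly at the receiver, a semantic encoder transmits only the task-relevant meaning.

```python
import json
import struct

# Hypothetical sensor trace: 256 temperature samples.
samples = [20.0 + 0.01 * i for i in range(256)]

# Bit-level transmission: reproduce every sample exactly (4 bytes per float32).
raw_payload = struct.pack(f"{len(samples)}f", *samples)

def semantic_encode(xs):
    """Transmit only what the receiver's task needs: the trend and latest value."""
    trend = "rising" if xs[-1] > xs[0] else "stable"
    return json.dumps({"trend": trend, "last": round(xs[-1], 1)}).encode()

sem_payload = semantic_encode(samples)
# The raw payload is 1024 bytes; the semantic payload is a few dozen.
print(len(raw_payload), len(sem_payload))
```

A real semantic encoder would be a learned model (e.g. a neural source-channel code) rather than a hand-written rule, but the trade-off is the same: payload size scales with the meaning the receiver needs, not with the fidelity of the raw signal.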

    Trustworthy Edge Machine Learning: A Survey

    The convergence of Edge Computing (EC) and Machine Learning (ML), known as Edge Machine Learning (EML), has become a highly regarded research area by utilizing distributed network resources to perform joint training and inference in a cooperative manner. However, EML faces various challenges due to resource constraints, heterogeneous network environments, and diverse service requirements of different applications, which together affect the trustworthiness of EML in the eyes of its stakeholders. This survey provides a comprehensive summary of definitions, attributes, frameworks, techniques, and solutions for trustworthy EML. Specifically, we first emphasize the importance of trustworthy EML within the context of Sixth-Generation (6G) networks. We then discuss the necessity of trustworthiness from the perspective of challenges encountered during deployment and real-world application scenarios. Subsequently, we provide a preliminary definition of trustworthy EML and explore its key attributes. Following this, we introduce fundamental frameworks and enabling technologies for trustworthy EML systems, and provide an in-depth literature review of the latest solutions to enhance trustworthiness of EML. Finally, we discuss corresponding research challenges and open issues. Comment: 27 pages, 7 figures, 10 tables.

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential in terms of supporting a broad range of complex, compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in the compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services as well as scenarios of future wireless networks. Comment: 46 pages, 22 figures.