
    Cost functions in optical burst-switched networks

    Optical Burst Switching (OBS) is a new paradigm for an all-optical Internet. It combines the best features of Optical Circuit Switching (OCS) and Optical Packet Switching (OPS) while avoiding the main problems associated with those networks. Namely, it offers good granularity, but its hardware requirements are lower than those of OPS. In a backbone network, a low loss ratio is of particular importance. Also, to meet varying user requirements, it should support multiple classes of service. In Optical Burst-Switched networks both these goals are closely related to the way bursts are arranged in channels. Unlike the case of circuit switching, scheduling decisions affect the loss probability of future bursts. This thesis proposes the idea of a cost function. The cost function is used to judge the quality of a burst arrangement and estimate the probability that this burst will interfere with future bursts. Two applications of the cost function are proposed. A scheduling algorithm uses the value of the cost function to optimize the alignment of the new burst with other bursts in a channel, thus minimising the loss ratio. A cost-based burst dropping algorithm, which can be used as part of a Quality of Service scheme, drops only those bursts whose cost function value indicates that they are most likely to cause contention. Simulation results, obtained using a custom-made OBS extension to the ns-2 simulator, show that the cost-based algorithms improve network performance.
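
    To illustrate how a cost function can guide channel selection, the Python sketch below assigns a new burst to the channel whose resulting arrangement has the lowest cost. The gap-based cost, the Burst and placement_cost helpers, and all parameters are illustrative assumptions for this sketch, not the thesis's actual formulation.

```python
# Minimal sketch: cost-based channel selection for a new burst reservation.
# The cost of a placement is assumed (for illustration) to grow when it
# leaves short, likely-unusable gaps next to existing bursts.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Burst:
    start: float
    end: float

def gap_cost(gap: float, mean_burst_len: float) -> float:
    """Hypothetical cost of a leftover gap: short non-zero gaps are likely wasted."""
    if gap <= 0:
        return 0.0
    return max(0.0, 1.0 - gap / mean_burst_len)

def placement_cost(channel: List[Burst], new: Burst, mean_len: float) -> Optional[float]:
    """Cost of adding `new` to `channel`, or None if it overlaps an existing burst."""
    prev_end, next_start = 0.0, float("inf")
    for b in sorted(channel, key=lambda b: b.start):
        if b.end <= new.start:
            prev_end = b.end                       # latest burst ending before the new one
        elif b.start >= new.end:
            next_start = min(next_start, b.start)  # earliest burst starting after it
        else:
            return None                            # contention: this channel is infeasible
    return gap_cost(new.start - prev_end, mean_len) + gap_cost(next_start - new.end, mean_len)

def schedule(channels: List[List[Burst]], new: Burst, mean_len: float) -> Optional[int]:
    """Pick the feasible channel whose resulting arrangement has the lowest cost."""
    feasible = [(c, i) for i, ch in enumerate(channels)
                if (c := placement_cost(ch, new, mean_len)) is not None]
    return min(feasible)[1] if feasible else None  # None: the burst is dropped
```

    The same cost value could also drive a dropping policy: when no feasible channel exists, the burst whose placement cost is highest is the natural candidate to drop.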

    Explanation-Guided Deep Reinforcement Learning for Trustworthy 6G RAN Slicing

    The complexity of emerging sixth-generation (6G) wireless networks has sparked an upsurge in adopting artificial intelligence (AI) to underpin the challenges in network management and resource allocation under strict service level agreements (SLAs). It inaugurates the era of massive network slicing as a distributive technology, where tenancy would be extended to the final consumer through the pervasive digitalization of vertical immersive use-cases. Despite the promising performance of deep reinforcement learning (DRL) in network slicing, the lack of transparency and interpretability of these opaque models impedes users from trusting the DRL agent's decisions or predictions. This problem becomes even more pronounced when there is a need to provision highly reliable and secure services. Leveraging eXplainable AI (XAI) in conjunction with an explanation-guided approach, we propose an eXplainable reinforcement learning (XRL) scheme to surmount the opaqueness of black-box DRL. The core concept behind the proposed method is the intrinsic interpretability of the reward hypothesis, aiming to encourage DRL agents to learn the best actions for specific network slice states while coping with conflict-prone and complex relations of state-action pairs. To validate the proposed framework, we target a resource allocation optimization problem where multi-agent XRL strives to allocate the optimal available radio resources to meet the SLA requirements of slices. Finally, we present numerical results to showcase the superiority of the adopted XRL approach over the DRL baseline. As far as we know, this is the first work that studies the feasibility of an explanation-guided DRL approach in the context of 6G networks. Comment: 6 pages, 6 figures.
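
    As a purely illustrative sketch of the explanation-guided idea (not the paper's formulation), the Python snippet below augments an SLA reward with a term derived from the entropy of hypothetical feature attributions for the chosen action, so that the agent is nudged toward decisions the explainer can attribute to a few clear slice-state features. The attribution source, the shaping weight, and all names are assumptions.

```python
# Illustrative explanation-guided reward shaping (assumed design, not the paper's).
import numpy as np

def attribution_entropy(attributions: np.ndarray) -> float:
    """Normalised entropy of the absolute feature attributions for one action."""
    p = np.abs(attributions).astype(float)
    p = p / p.sum() if p.sum() > 0 else np.full(len(p), 1.0 / len(p))
    ent = -(p * np.log(p + 1e-12)).sum()
    return float(ent / np.log(len(p))) if len(p) > 1 else 0.0  # in [0, 1]

def shaped_reward(sla_reward: float, attributions: np.ndarray, weight: float = 0.1) -> float:
    """Keep the SLA reward, but penalise actions whose explanation is diffuse
    (high attribution entropy), preferring clearly attributable decisions."""
    return sla_reward - weight * attribution_entropy(attributions)
```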

    Group Scheduling for MultiChannel in OBS Networks

    Group scheduling is a scheduling operation in optical burst switching networks in which the burst header packets arriving in each timeslot schedule their following bursts simultaneously. There have been many proposals for group scheduling (such as OBS-GS, MWIS-OS and LGS), but they mainly consider scheduling the arriving bursts that have the same wavelength onto an output data channel. Another proposal, GreedyOPT, considers group scheduling over multiple channels with the support of full wavelength converters, but it is not optimal. This article proposes another approach to group scheduling over multiple channels that is closer to optimal and has linear complexity.
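
    To make the multichannel setting concrete, here is a minimal Python sketch, not the article's algorithm, in which all bursts announced in one timeslot are scheduled together, each onto the data channel that becomes free earliest, assuming full wavelength conversion. This greedy rule runs in O(n log m) time rather than the linear time claimed for the proposed method, and all names and parameters are illustrative.

```python
# Sketch of group scheduling for one timeslot over multiple data channels
# with full wavelength conversion (illustrative greedy rule).
import heapq
from typing import List, Tuple

def group_schedule(bursts: List[Tuple[float, float]], num_channels: int) -> List[Tuple[int, int]]:
    """bursts: (start, end) reservations announced by the BHPs of one timeslot.
    Returns (burst_index, channel_index) pairs; bursts left out are dropped."""
    # Each heap entry is (time at which the channel becomes free, channel id).
    free_at = [(0.0, ch) for ch in range(num_channels)]
    heapq.heapify(free_at)
    assignments = []
    # Consider the whole group at once, earliest-arriving burst first.
    for i in sorted(range(len(bursts)), key=lambda k: bursts[k][0]):
        start, end = bursts[i]
        free_time, ch = heapq.heappop(free_at)     # channel that frees up earliest
        if free_time <= start:                     # idle when the burst arrives: use it
            assignments.append((i, ch))
            heapq.heappush(free_at, (end, ch))
        else:                                      # no channel is free in time: drop
            heapq.heappush(free_at, (free_time, ch))
    return assignments
```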

    A single-machine scheduling problem with multiple unavailability constraints: A mathematical model and an enhanced variable neighborhood search approach

    This research focuses on a scheduling problem with multiple unavailability periods and distinct due dates. The objective is to minimize the sum of the maximum earliness and tardiness of jobs. In order to solve the problem exactly, a mathematical model is proposed. However, due to computational difficulties for large instances of the considered problem, a modified variable neighborhood search (VNS) is developed. In basic VNS, the search process toward a global or near-global optimum is entirely random, which is known to be one of the weaknesses of this algorithm. To tackle this weakness, the VNS algorithm is combined with a knowledge module. In the proposed VNS, the knowledge module extracts knowledge from good solutions, saves it in memory, and feeds it back to the algorithm during the search process. Computational results show that the proposed algorithm is efficient and effective.
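
    The Python sketch below shows a basic VNS loop whose shaking step is biased by a simple position-frequency memory, as a stand-in for the paper's knowledge module. The objective, neighborhoods, and memory design are assumptions, and the unavailability periods of the actual problem are omitted; jobs are encoded as integers 0..n-1.

```python
# Minimal VNS with a knowledge (memory) module biasing the shaking step.
import random
from typing import Callable, List

def vns(initial: List[int], cost: Callable[[List[int]], float],
        k_max: int = 3, iters: int = 200, seed: int = 0) -> List[int]:
    rng = random.Random(seed)
    n = len(initial)
    best, best_cost = initial[:], cost(initial)
    # Knowledge module: frequency of job j at position p in improving solutions.
    freq = [[1] * n for _ in range(n)]   # freq[position][job]

    def shake(sol: List[int], k: int) -> List[int]:
        s = sol[:]
        for _ in range(k):               # k random insertion moves = neighborhood size
            i = rng.randrange(n)
            job = s[i]
            # Bias the new position toward places where this job appeared
            # in previously found good solutions.
            weights = [freq[p][job] for p in range(n)]
            j = rng.choices(range(n), weights=weights)[0]
            s.insert(j, s.pop(i))
        return s

    def local_search(sol: List[int]) -> List[int]:
        s, c = sol[:], cost(sol)
        improved = True
        while improved:                  # first-improvement adjacent swaps
            improved = False
            for i in range(n - 1):
                s[i], s[i + 1] = s[i + 1], s[i]
                c2 = cost(s)
                if c2 < c:
                    c, improved = c2, True
                else:
                    s[i], s[i + 1] = s[i + 1], s[i]
        return s

    for _ in range(iters):
        k = 1
        while k <= k_max:
            cand = local_search(shake(best, k))
            cand_cost = cost(cand)
            if cand_cost < best_cost:
                best, best_cost = cand, cand_cost
                for p, job in enumerate(best):   # feed the improvement back into memory
                    freq[p][job] += 1
                k = 1                    # restart from the first neighborhood
            else:
                k += 1
    return best
```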

    DALiuGE: A Graph Execution Framework for Harnessing the Astronomical Data Deluge

    The Data Activated Liu Graph Engine - DALiuGE - is an execution framework for processing large astronomical datasets at a scale required by the Square Kilometre Array Phase 1 (SKA1). It includes an interface for expressing complex data reduction pipelines consisting of both data sets and algorithmic components, and an implementation run-time to execute such pipelines on distributed resources. By mapping the logical view of a pipeline to its physical realisation, DALiuGE separates the concerns of multiple stakeholders, allowing them to collectively optimise large-scale data processing solutions in a coherent manner. The execution in DALiuGE is data-activated, where each individual data item autonomously triggers the processing on itself. Such decentralisation also makes the execution framework very scalable and flexible, supporting pipeline sizes ranging from less than ten tasks running on a laptop to tens of millions of concurrent tasks on the second fastest supercomputer in the world. DALiuGE has been used in production for reducing interferometry data sets from the Karl E. Jansky Very Large Array and the Mingantu Ultrawide Spectral Radioheliograph; and is being developed as the execution framework prototype for the Science Data Processor (SDP) consortium of the Square Kilometre Array (SKA) telescope. This paper presents a technical overview of DALiuGE and discusses case studies from the CHILES and MUSER projects that use DALiuGE to execute production pipelines. In a companion paper, we provide in-depth analysis of DALiuGE's scalability to very large numbers of tasks on two supercomputing facilities. Comment: 31 pages, 12 figures; currently under review by Astronomy and Computing.
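
    As a toy illustration of the data-activated principle (and not DALiuGE's actual API), the Python sketch below lets a data "drop" trigger its consumer applications as soon as its data is written, so a small pipeline executes end to end once the first input arrives. All class and function names here are hypothetical.

```python
# Toy data-activated execution: writing a data drop triggers its consumers.
class DataDrop:
    def __init__(self, name):
        self.name, self.data, self.consumers = name, None, []

    def add_consumer(self, app):
        self.consumers.append(app)

    def write(self, data):
        self.data = data
        for app in self.consumers:       # data arrival autonomously triggers execution
            app.run(self)

class AppDrop:
    def __init__(self, func, output):
        self.func, self.output = func, output

    def run(self, input_drop):
        self.output.write(self.func(input_drop.data))

# Usage: raw -> calibrate -> image; each stage fires when its input is written.
raw, cal, img = DataDrop("raw"), DataDrop("cal"), DataDrop("img")
raw.add_consumer(AppDrop(lambda d: [x * 0.5 for x in d], cal))
cal.add_consumer(AppDrop(lambda d: sum(d), img))
raw.write([1.0, 2.0, 3.0])
print(img.data)   # 3.0
```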