28 research outputs found

    Fair bandwidth allocation algorithm for PONs based on network utility maximization

    Network utility maximization (NUM) models have been successfully applied to address multiple resource-allocation problems in communication networks. This paper explores, for the first time to our knowledge, their application to modeling the bandwidth-allocation problem in passive optical networks (PONs) and long-reach PONs. Using the NUM model, we propose the FEx-DBA (fair excess dynamic bandwidth allocation) algorithm, a new DBA scheme that allows a fair and efficient allocation of the upstream channel capacity. The NUM framework provides the mathematical support to formally define the fairness concept in the resource allocation and the guidelines to devise FEx-DBA. A simulation study is conducted in which FEx-DBA is compared to a state-of-the-art proposal. We show that FEx-DBA (i) provides bandwidth guarantees to the users according to the service level agreement (SLA) contracted and fairly distributes the excess bandwidth among them; (ii) has a stable response and fast convergence when traffic or SLAs change, avoiding the oscillations that appear in other proposals; (iii) improves average delay and jitter measures; and (iv) depends only on a reduced set of parameters, which can be easily tuned. This work has been funded by the Spanish Ministry of Science and Innovation (TEC2014-53071-C3-2-P and TEC2015-71932-REDT).
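    As context for the NUM formulation, a minimal Python sketch of a weighted fair-excess allocation in the spirit of the abstract is given below: each ONU first receives its SLA-guaranteed rate and the remaining upstream capacity is split according to SLA weights. The function name, guaranteed rates, and weights are illustrative assumptions, not the paper's FEx-DBA algorithm.

    # Minimal sketch of a NUM-style upstream bandwidth allocation, assuming
    # weighted logarithmic utilities and a fair-excess rule. The guaranteed
    # rates and SLA weights below are illustrative, not the paper's FEx-DBA.
    def num_allocate(capacity, guaranteed, weights):
        """Grant each ONU its guaranteed rate, then split the excess capacity
        in proportion to the SLA weights (a simple weighted fair-excess rule
        in the spirit of the NUM formulation)."""
        excess = capacity - sum(guaranteed)
        if excess < 0:
            # Not enough capacity for all guarantees: scale them down proportionally.
            scale = capacity / sum(guaranteed)
            return [g * scale for g in guaranteed]
        total_w = sum(weights)
        return [g + excess * w / total_w for g, w in zip(guaranteed, weights)]

    # Example: a 1000 Mb/s upstream channel shared by three ONUs with different SLAs.
    print(num_allocate(1000.0, guaranteed=[100.0, 200.0, 300.0], weights=[1, 2, 3]))
    # -> roughly [166.7, 333.3, 500.0] Mb/s: guarantees met, excess split 1:2:3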

    PID controller based on a self-adaptive neural network to ensure QoS bandwidth requirements in passive optical networks

    In this paper, a proportional-integral-derivative (PID) controller integrated with a neural network (NN) is proposed to ensure quality of service (QoS) bandwidth requirements in passive optical networks (PONs). To the best of our knowledge, this is the first time an NN has been used to tune a PID controller to deal with QoS in PONs. In contrast to other tuning techniques such as Ziegler-Nichols (ZN) or genetic algorithms (GA), our proposal allows a real-time adjustment of the tuning parameters according to the network conditions. Thus, the new algorithm provides online control of the tuning process, unlike the ZN and GA techniques, whose tuning parameters are calculated offline. The algorithm, called neural network service level PID (NN-SPID), guarantees minimum bandwidth levels to users depending on their service level agreement, and it is compared with a tuning technique based on genetic algorithms (GA-SPID). The simulation study demonstrates that NN-SPID continuously adapts the tuning parameters, achieving lower fluctuations than GA-SPID in the allocation process. As a consequence, it provides a more stable response than GA-SPID, which needs to launch the GA to obtain new tuning values. Furthermore, NN-SPID guarantees the minimum bandwidth levels faster than GA-SPID. Finally, NN-SPID is more robust than GA-SPID under real-time changes of the guaranteed bandwidth levels, as GA-SPID shows high fluctuations in the allocated bandwidth, especially just after any change is made. Ministerio de Ciencia e Innovación (Projects TEC2014-53071-C3-2-P and TEC2015-71932-REDT).
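    To illustrate the general idea of a PID controller whose gains are adjusted online, a minimal Python sketch in the spirit of NN-SPID follows. The delta-rule style gain update, the learning rate, and the toy plant are assumptions made for illustration; they do not reproduce the paper's neural network architecture or parameters.

    # Minimal sketch of a PID controller with online gain adaptation, loosely
    # inspired by NN-SPID. The delta-rule update and all constants are
    # illustrative assumptions, not the paper's design.
    class AdaptivePID:
        def __init__(self, kp=0.5, ki=0.1, kd=0.05, lr=1e-5):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.lr = lr              # learning rate of the gain-tuning step
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, target_bw, measured_bw):
            """Return a correction to the bandwidth grant and adapt the gains."""
            error = target_bw - measured_bw
            self.integral += error
            derivative = error - self.prev_error
            # Standard discrete PID output.
            u = self.kp * error + self.ki * self.integral + self.kd * derivative
            # Online tuning: nudge each gain using its own PID term as the
            # sensitivity (a delta-rule style update).
            self.kp += self.lr * error * error
            self.ki += self.lr * error * self.integral
            self.kd += self.lr * error * derivative
            self.prev_error = error
            return u

    # Toy usage: drive the allocated bandwidth toward a 100 Mb/s guarantee,
    # modelling the "plant" as a simple integrator of the controller output.
    pid, allocated = AdaptivePID(), 60.0
    for _ in range(50):
        allocated += pid.step(target_bw=100.0, measured_bw=allocated)
    print(round(allocated, 1))  # settles close to 100.0 for this toy plant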

    Artificial intelligence (AI) methods in optical networks: A comprehensive survey

    Artificial intelligence (AI) is an extensive scientific discipline which enables computer systems to solve problems by emulating complex biological processes such as learning, reasoning and self-correction. This paper presents a comprehensive review of the application of AI techniques for improving the performance of optical communication systems and networks. The use of AI-based techniques is first studied in applications related to optical transmission, ranging from the characterization and operation of network components to performance monitoring, mitigation of nonlinearities, and quality of transmission estimation. Then, applications related to optical network control and management are also reviewed, including topics such as optical network planning and operation in both transport and access networks. Finally, the paper presents a summary of opportunities and challenges in optical networking where AI is expected to play a key role in the near future. Ministerio de Economía, Industria y Competitividad (Projects TEC2014-53071-C3-2-P and TEC2015-71932-REDT).

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed solution. This includes a general approach, consisting of four distinct steps, as well as specific action items that are to be performed for every step. The discussion also covers language and tool support and the challenges arising from the transformation.

    Next generation control of transport networks

    It is widely understood by telecom operators and industry analysts that bandwidth demand is increasing dramatically, year on year, with typical growth figures of 50% for Internet-based traffic [5]. This trend means that consumers will have both a wide variety of devices attaching to their networks and a range of high-bandwidth service requirements. The corresponding impact falls on the traffic-engineered network (often referred to as the “transport network”), which must ensure that the current rate of growth of network traffic is supported and that predicted future demands are met. As traffic demands increase and new services continuously arise, novel network elements are needed to provide more flexibility, scalability, resilience, and adaptability in today’s transport network. The transport network provides transparent, traffic-engineered communication of user, application, and device traffic between attached clients (software and hardware), establishing and maintaining point-to-point or point-to-multipoint connections. The research documented in this thesis was based on three initial research questions posed while performing research at British Telecom’s research labs and investigating the control of future transport networks:
    1. How can we meet Internet bandwidth growth yet minimise network costs?
    2. Which enabling network technologies might be leveraged to control network layers and functions cooperatively, instead of using separate control for each network layer and technology?
    3. Is it possible to utilise both centralised and distributed control mechanisms for automation and traffic optimisation?
    This thesis aims to provide the classification, motivation, invention, and evolution of a next generation control framework for transport networks, with special consideration given to delivering broadcast video traffic to UK subscribers. The document outlines pertinent telecoms technology and the current state of the art, how the requirements were gathered and the research conducted, how the functional components of the transport control framework were identified and selected, and how the architecture was implemented and applied to key research projects requiring next generation control capabilities, both at British Telecom and in the wider research community. Finally, in the closing chapters, the thesis outlines the next steps for ongoing research and development of the transport network framework and key areas for further study.

    Radio resource scheduling in homogeneous coordinated multi-point joint transmission of future mobile networks

    A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy (PhD). The demand from mobile users for high data-rate services continues to increase. To satisfy the needs of such mobile users, operators must continue to enhance their existing networks. The radio interface is a well-known bottleneck because the radio spectrum is limited and therefore expensive. Efficient use of the radio spectrum is, therefore, very important. To utilise the spectrum efficiently, any of the channels can be used simultaneously in any of the cells as long as the interference generated by the base stations using the same channels is below an acceptable level. In cellular networks based on Orthogonal Frequency Division Multiple Access (OFDMA), inter-cell interference degrades the link throughput of users close to the cell edge. To improve the performance of cell-edge users, a technique called Coordinated Multi-Point (CoMP) transmission is being researched for use in the next generation of cellular networks. For a network to benefit from CoMP, its utilisation of resources should be scheduled efficiently. The thesis focuses on the development of a resource scheduling algorithm for the CoMP joint transmission scheme in OFDMA-based cellular networks. In addition to the algorithm, the thesis provides an analytical framework for the performance evaluation of the CoMP technique. The system-level simulation results show that the proposed resource scheduling based on joint maximum throughput provides higher spectral efficiency than a joint proportional fairness scheduling algorithm under different traffic loads in the network and under different criteria for making the cell-edge decision. A hybrid model combining the analytical and simulation approaches has been developed to evaluate the average system throughput. The results of the hybrid model are found to be in line with the simulation-based results. The benefit of the model is that the throughput of any possible call state in the system can be evaluated. Two empirical path loss models for an indoor-to-outdoor environment in a residential area have been developed based on measurement data at carrier frequencies of 900 MHz and 2 GHz. The models can be used as analytical expressions to estimate the level of interference caused by a femtocell to a macrocell user in link-level simulations.
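    As a minimal sketch of the comparison the abstract draws, the following Python snippet contrasts the per-resource-block metrics of maximum throughput and proportional fairness scheduling. The rate values and the function interface are illustrative assumptions and do not reproduce the thesis's CoMP joint transmission algorithm.

    # Minimal sketch contrasting the two per-resource-block scheduling metrics
    # compared in the abstract: maximum throughput vs proportional fairness.
    # The rate numbers below are illustrative, not the thesis's setup.
    def schedule_rb(achievable_rate, avg_rate, policy="pf"):
        """Pick the user index to serve on one resource block (RB).
        achievable_rate[i]: instantaneous rate user i would get on this RB.
        avg_rate[i]:        user i's long-term average served rate."""
        if policy == "max_tp":
            # Maximum throughput: always serve the user with the best channel.
            metric = achievable_rate
        else:
            # Proportional fairness: favour users whose current channel is good
            # relative to what they have been receiving on average.
            metric = [r / max(a, 1e-9) for r, a in zip(achievable_rate, avg_rate)]
        return max(range(len(metric)), key=lambda i: metric[i])

    # Two users: user 0 is at the cell edge (low rates), user 1 is near the centre.
    inst = [2.0, 10.0]   # Mb/s achievable on this RB
    avg = [0.5, 8.0]     # Mb/s served so far
    print(schedule_rb(inst, avg, "max_tp"))  # 1 -> centre user wins on raw rate
    print(schedule_rb(inst, avg, "pf"))      # 0 -> edge user wins on relative gain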

    Memorias del Congreso Argentino en Ciencias de la Computación - CACIC 2021

    Papers presented at the XXVII Congreso Argentino de Ciencias de la Computación (CACIC), held in the city of Salta from 4 to 8 October 2021, organized by the Red de Universidades con Carreras en Informática (RedUNCI) and the Universidad Nacional de Salta (UNSA).