
    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions that ensure the proper functioning of the networks. Among these tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches for performing network-data analysis and enabling automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth in network complexity that optical networks have faced in the last few years. This increase in complexity is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes) enabled by the use of coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper, we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature on the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing possible new research directions.
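
    As a concrete illustration of the kind of use case such a survey covers, the sketch below trains a supervised classifier to predict whether a candidate lightpath meets a quality-of-transmission (QoT) threshold from a few path features. This is a minimal, hypothetical example: the feature set, the synthetic data, and the model choice are assumptions for illustration, not the paper's method.

```python
# Illustrative sketch (not from the paper): a supervised classifier for
# lightpath QoT estimation, a common ML use case in optical networking.
# All feature names and the synthetic labels below are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical per-lightpath features: total length (km), span count,
# launch power (dBm), and modulation order (bits/symbol).
X = np.column_stack([
    rng.uniform(50, 3000, n),      # path length
    rng.integers(1, 40, n),        # number of spans
    rng.uniform(-2, 3, n),         # launch power
    rng.choice([2, 3, 4, 6], n),   # modulation order
])
# Toy ground truth: long, high-order paths tend to violate the BER threshold.
y = (X[:, 0] / 1000 + 0.3 * X[:, 3] + rng.normal(0, 0.5, n) < 3.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

    In practice such a classifier would be trained on field measurements or simulation data rather than the toy labels generated here.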

    Resource dimensioning through buffer sampling

    Link dimensioning, i.e., selecting a (minimal) link capacity such that the users' performance requirements are met, is a crucial component of network design. It requires insight into the interrelationship among the traffic offered (in terms of the mean offered load $M$, but also its fluctuation around the mean, i.e., 'burstiness'), the envisioned performance level, and the capacity needed. We first derive, for different performance criteria, theoretical dimensioning formulas that estimate the required capacity $c$ as a function of the input traffic and the performance target. For the special case of Gaussian input traffic, these formulas reduce to $c = M + \alpha V$, where $\alpha$ directly relates to the performance requirement (as agreed upon in a service level agreement) and $V$ reflects the burstiness (at the timescale of interest). We also observe that Gaussianity applies in virtually all realistic scenarios; notably, the Gaussianity assumption is justified already at a relatively low aggregation level. As estimating $M$ is relatively straightforward, the remaining open issue concerns the estimation of $V$. We argue that, particularly at small timescales, it may be inaccurate to estimate $V$ directly from the traffic traces. Therefore, we propose an indirect method that samples the buffer content, estimates the buffer content distribution, and 'inverts' this to the variance $V$. We validate the inversion through extensive numerical experiments (using a sizeable collection of traffic traces from various representative locations); the resulting estimate of $V$ is then inserted in the dimensioning formula. These experiments show that both the inversion and the dimensioning formula are remarkably accurate.
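
    The sketch below illustrates the direct, trace-based route to the Gaussian dimensioning formula $c = M + \alpha V$: estimate $M$ and the timescale-$T$ burstiness $V$ from a traffic trace and evaluate the formula. The abstract's preferred method estimates $V$ indirectly from sampled buffer contents, which this sketch does not reproduce; the synthetic trace, the timescale, and the value of $\alpha$ are illustrative assumptions.

```python
# Minimal sketch of the Gaussian dimensioning formula c = M + alpha * V,
# with M and V estimated directly from a traffic trace (the abstract notes
# this direct estimate of V can be inaccurate at small timescales).
import numpy as np

rng = np.random.default_rng(1)
# Synthetic offered traffic in dimensionless units per slot, mean rate ~1.0.
trace = rng.gamma(shape=2.0, scale=0.5, size=60_000)

k = 100                                   # slots per window: the timescale T
windows = trace[: len(trace) // k * k].reshape(-1, k).mean(axis=1)

M = trace.mean()                          # mean offered load
V = windows.var(ddof=1)                   # burstiness at timescale T
alpha = 2.33                              # illustrative; in the paper alpha
                                          # follows from the SLA performance target
c = M + alpha * V                         # required capacity per the formula
print(f"M = {M:.3f}, V = {V:.5f}, required capacity c = {c:.3f}")
```

    A larger $\alpha$ (a stricter performance target) or a burstier trace (a larger $V$) both push the required capacity further above the mean load, which is the qualitative behavior the formula captures.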

    Squatting and kicking model evaluation for prioritized sliced resource management

    © Elsevier. This manuscript version is made available under the CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0/
    Effective management and allocation of resources remains a challenge for future large-scale networks such as 5G, especially under a network slicing scenario where different services are characterized by differing Quality of Service (QoS) requirements. This makes guaranteeing QoS levels while maximizing resource utilization across such networks a complicated task. Moreover, existing allocation strategies with link sharing tend to suffer from inefficient use of network resources. This work therefore focuses on prioritized sliced resource management; its contribution is the formal definition and evaluation of a self-provisioned resource management scheme, the Squatting and Kicking model (SKM), for multi-class networks. SKM can dynamically allocate network resources such as bandwidth, Label Switched Paths (LSPs), fibers, and slots to different user priority classes. It can also guarantee the correct level of QoS (especially for the higher-priority classes) while optimizing resource utilization across networks. Moreover, in network slicing scenarios the proposed scheme can be employed for admission control. Simulation results show that our model achieves 100% resource utilization in bandwidth-constrained environments while guaranteeing a higher admission ratio for higher-priority classes. SKM provided a 100% acceptance ratio for the highest-priority class under different input traffic volumes, which, as we articulate, cannot be achieved by other existing schemes such as the AllocTC-Sharing model due to priority constraints.
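
    The abstract describes squatting (a blocked high-priority request borrowing idle resources owned by a lower-priority class) and kicking (preempting a lower-priority allocation as a last resort). The toy model below reconstructs that behavior under stated assumptions; it is not the paper's exact SKM algorithm, and the class count, quotas, and tie-breaking rules are invented for illustration (priority 1 is highest).

```python
# Toy squatting-and-kicking sketch (illustrative; not the paper's exact SKM).
# Each priority class owns a quota of resource units; priority 1 is highest.

def make_units(quotas):
    """quotas: {priority_class: units_owned} -> list of unit records."""
    return [{"owner": p, "occupant": None}
            for p, q in quotas.items() for _ in range(q)]

def allocate(units, p):
    """Admit one request of class p; return how it was served."""
    # 1. Use an idle unit the class itself owns.
    for u in units:
        if u["owner"] == p and u["occupant"] is None:
            u["occupant"] = p
            return "own"
    # 2. Squat: borrow an idle unit owned by a lower-priority class
    #    (larger number = lower priority), lowest-priority owner first.
    idle = [u for u in units if u["occupant"] is None and u["owner"] > p]
    if idle:
        u = max(idle, key=lambda u: u["owner"])
        u["occupant"] = p
        return "squat"
    # 3. Kick: preempt the lowest-priority occupant strictly below class p.
    busy = [u for u in units if u["occupant"] is not None and u["occupant"] > p]
    if busy:
        u = max(busy, key=lambda u: u["occupant"])
        u["occupant"] = p
        return "kick"
    return "blocked"

units = make_units({1: 2, 2: 2})          # two classes, two units each
for req in [2, 2, 1, 1, 1, 2]:
    print(f"class {req} request -> {allocate(units, req)}")
```

    Running the demo admits both class-2 requests and all three class-1 requests (the third by kicking a class-2 allocation), then blocks the final class-2 request. This matches the abstract's qualitative claim: the highest-priority class is always admitted, by squatting or kicking once its own quota is exhausted, while lower classes absorb the blocking.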