
    Synergetical use of analytical models and machine-learning for data transport abstraction in open optical networks

    The key operation enabling an effective data transport abstraction in open optical line systems (OLS) is the capability to predict the quality of transmission (QoT), given by the generalized signal-to-noise ratio (GSNR), which includes both the ASE noise and the nonlinear interference (NLI) accumulation. Of the two impairing effects, the estimation of the ASE noise is the more challenging task, because the spectrally resolved working point of the erbium-doped fiber amplifiers (EDFAs) depends on the spectral load for a given overall gain. The computation of the NLI, in contrast, is well addressed by mathematical models based on knowledge of the parameters and spectral load of the fiber spans, so the NLI prediction is mainly impaired by uncertainties in insertion losses and spectral tilting. An accurate and spectrally resolved GSNR estimation makes it possible to optimize power control and to reliably and automatically deploy lightpaths with minimum margin, consequently maximizing the transmission capacity. We address the potential of machine-learning (ML) methods combined with analytic models for the NLI computation to improve the accuracy of the QoT estimation. We also analyze an experimental data set, highlighting the main uncertainties and addressing the use of ML to predict their effect on the QoT estimation.
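The GSNR figure of merit described in this abstract combines the channel power with the ASE and NLI contributions; a minimal sketch of the computation, assuming all powers are given in linear units (function and variable names are illustrative, not from the paper):

```python
import math

def gsnr_db(p_ch_w, p_ase_w, p_nli_w):
    """Generalized SNR: channel power over the sum of ASE noise
    and nonlinear-interference power (all in linear watts)."""
    return 10 * math.log10(p_ch_w / (p_ase_w + p_nli_w))

# Example: 1 mW channel power, 10 uW ASE noise, 5 uW NLI
print(round(gsnr_db(1e-3, 10e-6, 5e-6), 2))  # prints 18.24
```

Because the two noise terms add in linear units, an underestimate of either one directly inflates the predicted GSNR, which is why the abstract ties margin reduction to accurate, spectrally resolved estimation.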

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions pertaining to the proper functioning of the networks. Among these mathematical tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches to perform network-data analysis and enable automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth of network complexity faced by optical networks in the last few years. Such complexity increase is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes, etc.) enabled by the usage of coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper, we provide an overview of the application of ML to optical communications and networking. We classify and survey relevant literature dealing with the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing new possible research directions.

    Harnessing machine learning for fiber-induced nonlinearity mitigation in long-haul coherent optical OFDM

    © 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). Coherent optical orthogonal frequency division multiplexing (CO-OFDM) has attracted a lot of interest in optical fiber communications due to its simplified digital signal processing (DSP) units, high spectral efficiency, flexibility, and tolerance to linear impairments. However, CO-OFDM's high peak-to-average power ratio makes it highly vulnerable to fiber-induced nonlinearities. DSP-based machine learning has been considered a promising approach for fiber nonlinearity compensation without excessive computational complexity. In this paper, we review the existing machine learning approaches for CO-OFDM in a common framework, survey the progress in this area with a focus on practical aspects, and compare them with benchmark DSP solutions.
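The vulnerability discussed above stems from OFDM's high peak-to-average power ratio (PAPR); a small sketch of how PAPR can be measured on a synthetic CO-OFDM symbol, assuming random QPSK subcarrier loading (all parameters are illustrative, not from the paper):

```python
import numpy as np

def papr_db(time_samples):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(time_samples) ** 2
    return 10 * np.log10(power.max() / power.mean())

rng = np.random.default_rng(0)
n_sc = 256  # number of OFDM subcarriers (illustrative choice)
# Random QPSK symbols on each subcarrier, then IFFT to the time domain
qpsk = (rng.choice([-1, 1], n_sc) + 1j * rng.choice([-1, 1], n_sc)) / np.sqrt(2)
ofdm_symbol = np.fft.ifft(qpsk)
print(papr_db(ofdm_symbol))  # typically around 8-12 dB for 256 subcarriers
```

High instantaneous peaks push the signal into the nonlinear Kerr regime of the fiber, which is why PAPR translates directly into nonlinearity sensitivity.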

    Intelligent optical performance monitor using multi-task learning based artificial neural network

    An intelligent optical performance monitor using a multi-task learning based artificial neural network (MTL-ANN) is designed for simultaneous OSNR monitoring and modulation format identification (MFI). Signals' amplitude histograms (AHs) after the constant modulus algorithm are selected as the input features for the MTL-ANN. The experimental results for 20-Gbaud NRZ-OOK, PAM4 and PAM8 signals demonstrate that the MTL-ANN can achieve OSNR monitoring and MFI simultaneously with higher accuracy and stability than single-task learning based ANNs (STL-ANNs). The results show an MFI accuracy of 100% and an OSNR monitoring root-mean-square error of 0.63 dB for the three modulation formats under consideration. Furthermore, the number of neurons needed for the single MTL-ANN is almost half that of the STL-ANNs, enabling reduced-complexity devices for real-time optical performance monitoring.
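The multi-task idea described above, a shared representation feeding one regression head (OSNR) and one classification head (MFI), can be sketched as a single forward pass; the weights below are random placeholders rather than a trained model, and all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def mtl_forward(ah_features, w_shared, w_osnr, w_mfi):
    """One forward pass of a multi-task ANN: a shared hidden layer
    feeding an OSNR-regression head and a softmax MFI head."""
    hidden = np.tanh(ah_features @ w_shared)   # shared representation
    osnr_est = hidden @ w_osnr                 # scalar OSNR estimate (dB)
    logits = hidden @ w_mfi                    # one logit per format
    probs = np.exp(logits - logits.max())      # numerically stable softmax
    return float(osnr_est), probs / probs.sum()

n_bins, n_hidden, n_formats = 100, 24, 3  # AH bins, hidden units, formats
w_shared = rng.normal(size=(n_bins, n_hidden)) * 0.1
w_osnr = rng.normal(size=n_hidden)
w_mfi = rng.normal(size=(n_hidden, n_formats))

ah = rng.random(n_bins)  # stand-in for a measured amplitude histogram
osnr, fmt_probs = mtl_forward(ah, w_shared, w_osnr, w_mfi)
```

Sharing the hidden layer between the two heads is what yields the neuron-count saving the abstract reports relative to training two separate single-task networks.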

    Rotationally-invariant mapping of scalar and orientational metrics of neuronal microstructure with diffusion MRI

    We develop a general analytical and numerical framework for estimating intra- and extra-neurite water fractions and diffusion coefficients, as well as neurite orientational dispersion, in each imaging voxel. By employing a set of rotational invariants and their expansion in the powers of the diffusion weighting, we analytically uncover the nontrivial topology of the parameter estimation landscape, showing that multiple branches of parameters describe the measurement almost equally well, with only one of them corresponding to the biophysical reality. A comprehensive acquisition shows that the branch choice varies across the brain. Our framework reveals hidden degeneracies in MRI parameter estimation for neuronal tissue, provides microstructural and orientational maps in the whole brain without constraints or priors, and connects modern biophysical modeling with clinical MRI.
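The rotational invariants mentioned above can be illustrated with a common construction: for each spherical-harmonic degree l, the L2 norm over the 2l+1 coefficients of that band is unchanged by rotations of the sample. A minimal sketch (the normalization below is one convention among several, not necessarily the paper's exact definition):

```python
import numpy as np

def rot_invariant(sh_band):
    """Rotational invariant of one spherical-harmonic band: the L2 norm
    of the 2l+1 coefficients of degree l, normalized by sqrt(2l+1).
    (One common convention; normalizations vary across papers.)"""
    l = (len(sh_band) - 1) // 2
    return float(np.linalg.norm(sh_band) / np.sqrt(2 * l + 1))

# Degree-0 band (1 coefficient) and a degree-2 band (5 coefficients):
s0 = rot_invariant(np.array([3.0]))                    # -> 3.0
s2 = rot_invariant(np.array([0.2, -0.1, 0.5, 0.0, 0.3]))
```

Reducing each band to such a scalar factors the orientational degrees of freedom out of the fit, which is what lets the framework treat scalar tissue parameters and orientational dispersion separately.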

    Damage identification in structural health monitoring: a brief review from its implementation to the use of data-driven applications

    The damage identification process provides relevant information about the current state of a structure under inspection, and it can be approached from two different points of view. The first approach uses data-driven algorithms, which are usually associated with the collection of data using sensors; the data are subsequently processed and analyzed. The second approach uses models to analyze information about the structure. In the latter case, the overall performance of the approach depends on the accuracy of the model and the information used to define it. Although both approaches are widely used, data-driven algorithms are preferred in most cases because they afford the ability to analyze data acquired from sensors and to provide a real-time solution for decision making; however, these approaches require high-performance processors due to their high computational cost. As a contribution to researchers working with data-driven algorithms and applications, this work presents a brief review of data-driven algorithms for damage identification in structural health-monitoring applications. This review covers damage detection, localization, classification, extension, and prognosis, as well as the development of smart structures. The literature is systematically reviewed according to the natural steps of a structural health-monitoring system. The review also includes information on the types of sensors used and on the development of data-driven algorithms for damage identification.
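As a concrete example of the data-driven flavor reviewed here, a simple baseline-subspace detector (a PCA residual, one of many possible data-driven methods, not taken from the review itself) flags deviations from the healthy condition:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_baseline(healthy, n_components=2):
    """Fit a PCA baseline on feature vectors from the undamaged structure."""
    mean = healthy.mean(axis=0)
    centered = healthy - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]  # principal subspace of healthy data

def damage_index(sample, mean, components):
    """Residual energy outside the baseline subspace: large values
    flag a deviation from the healthy condition (possible damage)."""
    centered = sample - mean
    recon = (centered @ components.T) @ components
    return float(np.linalg.norm(centered - recon))

# Illustrative data: 200 'healthy' feature vectors from 6 sensors
healthy = rng.normal(size=(200, 6))
mean, comps = fit_baseline(healthy)
normal = damage_index(healthy[0], mean, comps)
shifted = damage_index(healthy[0] + 5.0, mean, comps)  # simulated anomaly
```

Detection via a threshold on such an index is the first of the natural steps (detection, localization, classification, extension, prognosis) along which the review organizes the literature.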

    A survey on fiber nonlinearity compensation for 400 Gbps and beyond optical communication systems

    Optical communication systems represent the backbone of modern communication networks. Since their deployment, different fiber technologies have been used to deal with optical fiber impairments, such as dispersion-shifted fibers and dispersion-compensating fibers. In recent years, thanks to the introduction of coherent detection based systems, fiber impairments can be mitigated using digital signal processing (DSP) algorithms. Coherent systems are used in the current 100 Gbps wavelength-division multiplexing (WDM) standard technology. They allow increased spectral efficiency through multi-level modulation formats, and are combined with DSP techniques to combat the linear fiber distortions. In addition to linear impairments, the next generation 400 Gbps/1 Tbps WDM systems are also more affected by fiber nonlinearity due to the Kerr effect. At high input power, the fiber nonlinear effects become more important and their compensation is required to improve the transmission performance. Several approaches have been proposed to deal with the fiber nonlinearity. In this paper, after a brief description of the Kerr-induced nonlinear effects, a survey on fiber nonlinearity compensation (NLC) techniques is provided. We focus on the well-known NLC techniques and discuss their performance, as well as their implementation and complexity. An extension of the inter-subcarrier nonlinear interference canceler approach is also proposed. A performance evaluation of the well-known NLC techniques and the proposed approach is provided in the context of Nyquist and super-Nyquist superchannel systems. Accepted in IEEE Communications Surveys and Tutorials.
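Among the well-known NLC techniques such surveys cover, digital back-propagation inverts the fiber channel by split-step integration of the nonlinear Schrödinger equation with flipped signs; a minimal single-channel sketch (sign conventions for beta2 vary across texts, and all parameters and names are illustrative):

```python
import numpy as np

def dbp_span(rx, fs, beta2, gamma, span_len, steps=10):
    """Digital back-propagation over one fiber span: split-step
    integration with sign-flipped dispersion (beta2) and Kerr
    coefficient (gamma). This sketch assumes the forward channel
    applied the opposite signs with the same step convention."""
    dz = span_len / steps
    omega = 2 * np.pi * np.fft.fftfreq(len(rx), d=1.0 / fs)
    inv_disp = np.exp(1j * 0.5 * beta2 * omega ** 2 * dz)  # inverse linear step
    for _ in range(steps):
        rx = np.fft.ifft(np.fft.fft(rx) * inv_disp)           # undo dispersion
        rx = rx * np.exp(-1j * gamma * np.abs(rx) ** 2 * dz)  # undo Kerr phase
    return rx
```

The number of steps per span trades accuracy against DSP complexity, which is precisely the implementation/complexity trade-off this kind of survey weighs against benchmark solutions.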