
    Performance Analytics of Cloud Networks

    As the world becomes more interconnected and dependent on the Internet, networks become ever more pervasive and the stresses placed upon them more demanding. The expectation that networks maintain a high level of performance has risen accordingly. Network performance is highly important to any business that operates online, depends on web traffic, runs any part of its infrastructure in a cloud environment, or hosts its own network infrastructure. Depending upon the exact nature of a network, whether it be local or wide-area, 10 or 100 Gigabit, it will have distinct performance characteristics, and it is important for a business or individual operating on the network to understand those characteristics and how they affect operations. To better understand our networks, we must test them to measure their performance capabilities and track these metrics over time. In our work, we provide an in-depth analysis of how best to run cloud benchmarks to increase our network intelligence, and of how we can use the results of those benchmarks to predict future performance and identify performance anomalies. To achieve this, we explain how to effectively run cloud benchmarks and propose a scheduling algorithm for running large numbers of cloud benchmarks daily. We then use the performance data gathered by this method to conduct a thorough analysis of the performance characteristics of a cloud network, train neural networks to forecast future throughput based on historical results, and detect performance anomalies as they occur.
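
    The abstract does not give the scheduling algorithm itself, so the following is only a minimal sketch of one plausible approach: spreading all pairwise VM-to-VM benchmarks round-robin across the day's time slots so that concurrent measurements do not contend for the same links. All names and parameters here are illustrative assumptions.

```python
import itertools

def schedule_benchmarks(endpoints, slots_per_day):
    """Spread all endpoint-pair benchmarks evenly over the day's slots."""
    pairs = list(itertools.combinations(endpoints, 2))
    schedule = {slot: [] for slot in range(slots_per_day)}
    for i, pair in enumerate(pairs):
        # Round-robin assignment keeps the number of concurrent
        # benchmarks per slot low, limiting measurement interference.
        schedule[i % slots_per_day].append(pair)
    return schedule

if __name__ == "__main__":
    vms = ["us-east", "us-west", "eu-west", "ap-south"]  # assumed regions
    for slot, pairs in schedule_benchmarks(vms, 3).items():
        print(f"slot {slot}: {pairs}")
```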

    Anomaly Detection in Smart Meter Data using BiLSTMGAN and Performance Analysis in Fog Network

    In recent years, the introduction of smart grids has given us access to a new layer of energy consumption patterns. This consumption data can be analyzed to boost efficiency, prevent power loss, and generate significant economic and environmental benefits. For the analysis of this data layer, a relatively new paradigm has emerged: fog computing, or fog networking. Fog computing seeks to offload a cloud computing architecture by decentralizing data processing from the typical cloud computing platform to edge nodes that have less computing power and are closer to the data source. In this study, we intend to implement an anomaly detection algorithm on smart grid data using a model that combines a generative adversarial network (GAN) with bidirectional long short-term memory (BiLSTM) to classify anomalies in data from smart grid customers. This algorithm will be evaluated on multiple platforms, including edge devices and cloud virtual machines, and run-time metrics will be collected for comparison purposes.
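
    As a rough illustration of the BiLSTM-GAN idea described above, the sketch below defines a generator and a BiLSTM discriminator over fixed-length windows of meter readings in Keras; the layer sizes, window length, and training details are assumptions, not taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

WINDOW, LATENT = 48, 16  # assumed: 48 half-hourly readings per window

# Generator: maps a noise vector to a synthetic consumption window.
generator = tf.keras.Sequential([
    layers.Input(shape=(LATENT,)),
    layers.Dense(WINDOW, activation="relu"),
    layers.Reshape((WINDOW, 1)),
    layers.Bidirectional(layers.LSTM(32, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(1)),
])

# Discriminator: a BiLSTM binary classifier. After adversarial training,
# windows it scores as clearly "fake" can be flagged as anomalous; the
# paper's actual classification scheme may differ.
discriminator = tf.keras.Sequential([
    layers.Input(shape=(WINDOW, 1)),
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")
```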

    A survey of machine learning techniques applied to self-organizing cellular networks

    In this paper, the literature of the past fifteen years on Machine Learning (ML) algorithms applied to self-organizing cellular networks is surveyed. For future networks to overcome the limitations and address the issues of current cellular systems, it is clear that more intelligence needs to be deployed, so that a fully autonomous and flexible network can be enabled. This paper focuses on the learning perspective of Self-Organizing Networks (SON) solutions and provides not only an overview of the most common ML techniques encountered in cellular networks but also a classification of each paper in terms of its learning solution, together with examples. The authors also classify each paper in terms of its self-organizing use case and discuss how each proposed solution performed. In addition, the most commonly found ML algorithms are compared in terms of certain SON metrics, and general guidelines are proposed on when to choose each ML algorithm for each SON function. Lastly, this work provides future research directions and new paradigms that the use of more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular networks domain, fully enabling the concept of SON in the near future.
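
    As a toy illustration of the reinforcement learning family such surveys cover, the following is a minimal tabular Q-learning loop for a hypothetical SON function (tuning a handover offset under varying cell load); the state space, actions, and reward are invented for the example and do not come from any surveyed paper.

```python
import random

offsets = [-2, -1, 0, 1, 2]          # candidate handover-offset tweaks (dB)
states = range(3)                    # toy load states: low / ok / high
q = {(s, a): 0.0 for s in states for a in offsets}
alpha, gamma, eps = 0.1, 0.9, 0.1    # learning rate, discount, exploration

def step(state, action):
    """Stand-in environment: a real SON agent would observe the network."""
    reward = -abs(state - 1) + random.uniform(-0.1, 0.1)
    return random.randrange(3), reward

state = 0
for _ in range(1000):
    action = (random.choice(offsets) if random.random() < eps
              else max(offsets, key=lambda a: q[(state, a)]))
    nxt, reward = step(state, action)
    best_next = max(q[(nxt, a)] for a in offsets)
    # Standard Q-learning temporal-difference update.
    q[(state, action)] += alpha * (reward + gamma * best_next
                                   - q[(state, action)])
    state = nxt
```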

    Forecasting for Network Management with Joint Statistical Modelling and Machine Learning

    Forecasting is a task of ever increasing importance for the operation of mobile networks, where it supports anticipatory decisions by network intelligence and enables emerging zero-touch service and network management models. While current trends in forecasting for anticipatory networking lean towards the systematic adoption of models that are purely based on deep learning approaches, we pave the way for a different strategy to the design of predictors for mobile network environments. Specifically, following recent advances in time series prediction, we consider a hybrid approach that blends statistical modelling and machine learning by means of a joint training process of the two methods. By tailoring this mixed forecasting engine to the specific requirements of network traffic demands, we develop a Thresholded Exponential Smoothing and Recurrent Neural Network (TES-RNN) model. We experiment with TES-RNN in two practical network management use cases, i.e., (i) anticipatory allocation of network resources, and (ii) mobile traffic anomaly prediction. Results obtained with extensive traffic workloads collected in an operational mobile network show that TES-RNN can yield substantial performance gains over current state-of-the-art predictors in both applications considered. This work is partially supported by the European Union's Horizon 2020 research and innovation programme under grant agreement no. 101017109 DAEMON, and by the Spanish Ministry of Economic Affairs and Digital Transformation and the European Union-NextGenerationEU through the UNICO 5G I+D 6GCLARION-OR and AEON-ZERO projects. The authors would like to thank Dario Bega for his contribution to developing forecasting use case (i), and Slawek Smyl for his feedback on the baseline ES-RNN model.
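
    The abstract describes joint training of exponential smoothing with a recurrent network. Below is a minimal sketch of that idea in the spirit of the ES-RNN baseline mentioned above: a learnable smoothing coefficient de-levels the series, a GRU forecasts the normalized values, and one loss trains both parts end to end. The thresholding component of TES-RNN and all real hyper-parameters are omitted; every name here is illustrative.

```python
import torch
import torch.nn as nn

class ESRNN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.raw_alpha = nn.Parameter(torch.tensor(0.0))  # smoothing coef.
        self.rnn = nn.GRU(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, time), positive
        alpha = torch.sigmoid(self.raw_alpha)  # keep alpha in (0, 1)
        level = x[:, 0]
        ratios = []
        for t in range(1, x.size(1)):
            ratios.append(x[:, t] / (level + 1e-8))  # de-level the series
            level = alpha * x[:, t] + (1 - alpha) * level
        z = torch.stack(ratios, dim=1).unsqueeze(-1)
        out, _ = self.rnn(z)
        # Forecast = RNN-predicted ratio re-scaled by the final ES level.
        return self.head(out[:, -1, :]).squeeze(-1) * level

model = ESRNN()
x = torch.rand(8, 24) + 0.5          # toy traffic windows
y = torch.rand(8) + 0.5
loss = nn.functional.l1_loss(model(x), y)
loss.backward()                      # gradients flow into alpha and the RNN
```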

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions pertaining to the proper functioning of the networks. Among these tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches for performing network-data analysis and enabling automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth in network complexity faced by optical networks in recent years. This complexity increase is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes) enabled by the usage of coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature on the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing new possible research directions.
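
    As a concrete instance of the kind of ML use case such overviews cover, the sketch below trains a classifier to estimate whether a candidate lightpath meets a quality-of-transmission threshold from simple path features; the features, the synthetic labelling rule, and the model choice are all assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
# Assumed features: total length (km), number of spans, modulation order.
X = np.column_stack([
    rng.uniform(50, 2000, n),       # lightpath length
    rng.integers(1, 25, n),         # amplified spans
    rng.choice([2, 4, 16, 64], n),  # QAM order
])
# Toy rule standing in for measured BER: long, high-order paths fail.
y = (X[:, 0] / 2000 + np.log2(X[:, 2]) / 6 < 1.0).astype(int)

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict([[800, 10, 16]]))  # toy QoT class for a candidate path
```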

    Attributes of Big Data Analytics for Data-Driven Decision Making in Cyber-Physical Power Systems

    Big data analytics is a relatively new term in power system terminology. The concept concerns the way a massive volume of data is acquired, processed, and analyzed to extract insight from the available data. In particular, big data analytics alludes to applications of artificial intelligence, machine learning techniques, data mining techniques, and time-series forecasting methods. Decision-makers in power systems have long been hampered by the inability of classical methods to deal with large-scale practical cases, owing to the presence of thousands or millions of variables, long solution times, high computational burden, divergence of results, unjustifiable errors, and poor model accuracy. Big data analytics is an ongoing topic that pinpoints how to extract insights from these large data sets. This article enumerates the applications of big data analytics in future power systems across several layers, from grid scale to local scale. Big data analytics has many applications in the areas of smart grid implementation, electricity markets, execution of collaborative operation schemes, enhancement of microgrid operation autonomy, management of electric vehicle operations in smart grids, active distribution network control, district hub system management, multi-agent energy systems, electricity theft detection, stability and security assessment by PMUs, and better exploitation of renewable energy sources. The employment of big data analytics entails some prerequisites, such as the proliferation of IoT-enabled devices, easily accessible cloud space, and blockchain. This paper comprehensively reviews the applications of big data analytics along with the prevailing challenges and solutions.
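
    As a small, concrete example of one listed application class (time-series forecasting on smart-meter data), the sketch below fits a linear model on lagged hourly load readings; the synthetic data and lag length are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
hours = np.arange(24 * 60)           # 60 days of hourly readings
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) \
       + rng.normal(0, 2, hours.size)

LAGS = 24  # predict the next hour from the previous day of readings
X = np.array([load[t - LAGS:t] for t in range(LAGS, load.size)])
y = load[LAGS:]

model = LinearRegression().fit(X[:-100], y[:-100])
print("held-out MAE:", np.abs(model.predict(X[-100:]) - y[-100:]).mean())
```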

    Situation recognition using soft computing techniques

    Recent decades have witnessed the emergence of a large number of devices pervasively launched into our daily lives as systems that produce and collect data from a variety of information sources to provide different services to different users via a variety of applications. These include infrastructure management, business process monitoring, crisis management, and many other system-monitoring activities. Being processed in real time, these information production and collection activities raise interest in live performance monitoring, analysis, and reporting, and call for data-mining methods for recognizing, predicting, and reasoning about the performance of these systems, and for controlling changes in the system and/or deviations from normal operation. In recent years, soft computing methods and algorithms have been applied to data mining to identify patterns and provide new insight into data. This thesis revisits the issue of situation recognition for systems producing massive datasets by assessing the relevance of soft computing techniques for finding hidden patterns in these systems.
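
    One classic soft computing technique for finding hidden patterns in such data is fuzzy c-means clustering, which assigns each observation a soft degree of membership in every cluster. The toy NumPy implementation below is illustrative only and is not the thesis's own method.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100):
    """Return soft memberships U (n x c) and cluster centres (c x d)."""
    rng = np.random.default_rng(0)
    U = rng.dirichlet(np.ones(c), size=len(X))      # each row sums to 1
    for _ in range(iters):
        W = U ** m
        centres = W.T @ X / W.sum(axis=0)[:, None]  # weighted means
        d = np.linalg.norm(X[:, None, :] - centres[None], axis=2) + 1e-9
        # Standard FCM update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / ((d[:, :, None] / d[:, None, :])
                   ** (2 / (m - 1))).sum(axis=2)
    return U, centres

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
    U, centres = fuzzy_c_means(X)
    print(centres.round(1))          # two centres near (0, 0) and (5, 5)
```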