
    From statistical- to machine learning-based network traffic prediction

    Nowadays, due to the exponential and continuous expansion of new paradigms such as the Internet of Things (IoT), Internet of Vehicles (IoV), and 6G, the world is witnessing a tremendous and sharp increase in network traffic. In such large-scale, heterogeneous, and complex networks, the volume of transferred data, as big data, poses a challenge that causes various networking inefficiencies. To overcome these challenges, various techniques, collectively called Network Traffic Monitoring and Analysis (NTMA), are introduced to monitor the performance of networks. Network Traffic Prediction (NTP) is a significant subfield of NTMA focused mainly on predicting future network load and its behavior. NTP techniques can generally be realized in two ways, that is, statistical- and Machine Learning (ML)-based. In this paper, we provide a study of existing NTP techniques by reviewing, investigating, and classifying the recent relevant works conducted in this field. Additionally, we discuss the challenges and future directions of NTP, showing how ML and statistical techniques can be used to solve them.
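As a toy illustration of the two families this survey contrasts, the sketch below compares a purely statistical baseline (a naive last-value forecast) with a minimal ML approach (a lag-1 linear regression fitted by least squares) on a synthetic traffic series; the data and function names are invented for illustration and do not come from the survey itself.

```python
def naive_forecast(series):
    """Statistical baseline: predict each value as the previous observation."""
    return series[:-1]

def lag1_regression(series):
    """ML-style model: fit y_{t+1} = a*y_t + b by closed-form least squares."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return [a * x + b for x in xs]

def mae(pred, true):
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

# Synthetic "traffic" with a steady upward trend.
traffic = [100 + 2 * t for t in range(50)]
naive_err = mae(naive_forecast(traffic), traffic[1:])  # always lags the trend
ml_err = mae(lag1_regression(traffic), traffic[1:])    # learns the trend
```

On trending data the regression recovers the growth rate exactly, while the naive baseline is off by one step's growth at every point, which is the kind of gap the ML-based NTP literature exploits.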

    Application of Artificial Neural Networks for Power Load Prediction in Critical Infrastructure: A Comparative Case Study

    This article aims to assess the effectiveness of state-of-the-art artificial neural network (ANN) models in time series analysis, specifically focusing on their application to prediction tasks for critical infrastructures (CIs). To accomplish this, shallow models with nearly identical numbers of trainable parameters are constructed and examined. The dataset, which includes 120,884 hourly electricity consumption records, is divided into three subsets (25%, 50%, and the entire dataset) to examine the effect of increasing training data. Additionally, the same models are trained and evaluated on univariable and multivariable data to evaluate the impact of including more features. The case study specifically focuses on predicting electricity consumption using load information from Norway. The results of this study confirm that LSTM models emerge as the best-performing models, surpassing the others as data volume and the number of features increase. Notably, for training datasets ranging from 2000 to 22,000 instances, GRU exhibits superior accuracy, while in the 22,000 to 42,000 range, LSTM and BiLSTM are the best. When the training dataset is within 42,000 to 360,000 instances, LSTM and ConvLSTM prove to be good choices in terms of accuracy. Convolution-based models exhibit superior performance in terms of computational efficiency. The univariable 1D convolutional model emerges as a standout choice for scenarios where training time is critical, sacrificing only 0.000105 in accuracy while gaining a threefold improvement in training time. For training datasets smaller than 22,000 instances, feature inclusion does not enhance the performance of any ANN model. In datasets exceeding 22,000 instances, ANN models display no consistent pattern regarding feature inclusion, though LSTM, Conv1D, Conv2D, ConvLSTM, and FCN tend to benefit. BiLSTM, GRU, and Transformer do not benefit from feature inclusion, regardless of the training dataset size.
Moreover, Transformers exhibit inefficiency in time series forecasting due to their permutation-invariant self-attention mechanism, neglecting the crucial role of sequence order, as evidenced by their poor performance across all three datasets in this study. These results provide valuable insights into the capabilities of ANN models and their effective usage in the context of CI prediction tasks.
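The premise of comparing shallow models with nearly identical trainable-parameter counts can be made concrete with the standard parameter formulas for recurrent layers. The layer sizes below are illustrative, not the ones used in the article, and the GRU formula assumes the classic single-bias formulation (some frameworks, e.g. Keras with `reset_after=True`, add an extra bias term per gate):

```python
def lstm_params(inputs, units):
    # 4 gates, each with a kernel (inputs), a recurrent kernel (units), and a bias
    return 4 * (units * (inputs + units) + units)

def gru_params(inputs, units):
    # 3 gates with the same structure (classic formulation, one bias per gate)
    return 3 * (units * (inputs + units) + units)

# Univariate input (1 feature): to keep parameter budgets comparable,
# a GRU needs more units than an LSTM.
lstm = lstm_params(1, 32)  # 4 * (32*33 + 32) = 4352
gru = gru_params(1, 37)    # 3 * (37*38 + 37) = 4329
```

With 32 LSTM units versus 37 GRU units, the two budgets differ by well under 1%, which is the kind of matching the comparison relies on.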

    Predictive Abuse Detection for a PLC Smart Lighting Network Based on Automatically Created Models of Exponential Smoothing

    One of the basic elements of a Smart City is the urban infrastructure management system, in particular, systems for intelligent street lighting control. However, for their reliable operation, such systems require special care for the safety of their critical communication infrastructure. This article presents solutions for the detection of different kinds of abuses in the network traffic of a Smart Lighting infrastructure realized with Power Line Communication technology. Both the structure of the examined Smart Lighting network and its elements are described. The article discusses the key security problems which have a direct impact on the correct performance of the Smart Lighting critical infrastructure. To detect an anomaly/attack, we propose using a statistical model to obtain forecasting intervals. Then, we calculate the differences between the forecast of the estimated traffic model and its real variability so as to detect abnormal behavior (which may be symptomatic of an abuse attempt). Because significant fluctuations can appear in real network traffic, we propose a procedure for updating the statistical models based on an interquartile-range criterion. The results obtained during the experiments confirmed the effectiveness of the presented misuse detection method.
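A minimal sketch of the forecasting-interval idea: flag a new observation as abnormal when it falls outside the forecast plus or minus a multiple of the interquartile range of past residuals. This sketch assumes simple exponential smoothing as the statistical model; the article builds richer, automatically created models, so treat this only as an illustration of the mechanism.

```python
def ses_forecasts(series, alpha=0.5):
    """One-step-ahead forecasts from simple exponential smoothing."""
    level = series[0]
    forecasts = []
    for x in series:
        forecasts.append(level)          # forecast made before observing x
        level = alpha * x + (1 - alpha) * level
    return forecasts, level              # final level = forecast for next step

def iqr(values):
    s = sorted(values)
    n = len(s)
    return s[(3 * n) // 4] - s[n // 4]   # crude quartile approximation

def is_abnormal(series, new_value, alpha=0.5, k=1.5):
    """True if new_value lies outside forecast +/- k * IQR(residuals)."""
    forecasts, next_forecast = ses_forecasts(series, alpha)
    residuals = [x - f for x, f in zip(series, forecasts)]
    return abs(new_value - next_forecast) > k * iqr(residuals)
```

On traffic oscillating between 10 and 11, a reading of 100 is flagged while 10.5 is not; in a real deployment the IQR criterion would also drive the model-update procedure the abstract describes.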

    Infrastructure systems modeling using data visualization and trend extraction

    “Current infrastructure systems modeling literature lacks frameworks that integrate data visualization and trend extraction needed for complex systems decision making and planning. Critical infrastructures such as transportation and energy systems contain interdependencies that cannot be properly characterized without considering data visualization and trend extraction. This dissertation presents two case analyses to showcase the effectiveness and improvements that can be made using these techniques. Case one examines flood management and mitigation of disruption impacts using geospatial characteristics as part of data visualization. Case two incorporates trend analysis and sustainability assessment into energy portfolio transitions. Four distinct contributions are made in this work and divided equally across the two cases. The first contribution identifies trends and flood characteristics that must be included as part of model development. The second contribution uses trend extraction to create a traffic management data visualization system based on the flood influencing factors identified. The third contribution creates a data visualization framework for energy portfolio analysis using a genetic algorithm and fuzzy logic. The fourth contribution develops a sustainability assessment model using trend extraction and time series forecasting of state-level electricity generation in a proposed transition setting. The data visualization and trend extraction tools developed and validated in this research will improve strategic infrastructure planning effectiveness”--Abstract, page iv

    SGA Model for Prediction in Cloud Environment

    With virtual information, cloud computing has made applications available to users everywhere. Efficient workload forecasting could help the cloud achieve maximum resource utilisation. Both the effective utilisation of resources and the reduction of datacentre power consumption depend heavily on load forecasting. Resource allocation and task scheduling in clouds and virtualized systems are significantly impacted by CPU utilisation forecasts. A resource manager uses utilisation projections to distribute workload between physical nodes, improving the effectiveness of resource consumption. When performing a virtual machine distribution job, a good estimate of CPU utilisation enables the migration of one or more virtual servers, preventing overload of the physical machines. In a cloud system, scalability and flexibility are crucial characteristics. Predicting workload and demand aids optimal resource utilisation in a cloud setting. To improve resource allocation and the effectiveness of the cloud service, workload assessment and future workload forecasting can be performed, which motivates the creation of an appropriate statistical method. In this study, a simulation approach and a genetic algorithm were used to forecast workloads. In comparison to earlier techniques, the method is anticipated to produce superior results, with a lower error rate and higher forecasting reliability. The suggested method is examined using statistics from the Bitbrains datacentres. The study then analyses, summarises, and suggests future research paths in cloud environments.
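To illustrate how a genetic algorithm can drive workload forecasting, the sketch below evolves the two parameters of a linear trend to fit a workload history and extrapolates one step ahead. This is a deliberately simplified stand-in, not the SGA model itself: the population, crossover, and mutation settings are invented, and elitism guarantees the fit never worsens across generations.

```python
import random

def mse(params, history):
    """Mean squared error of a linear trend (slope, intercept) on the history."""
    a, b = params
    return sum((a * t + b - y) ** 2 for t, y in enumerate(history)) / len(history)

def ga_forecast(history, pop_size=20, generations=40, seed=42):
    """Evolve (slope, intercept) to fit the workload, then extrapolate one step."""
    rng = random.Random(seed)
    pop = [(rng.uniform(-5, 5), rng.uniform(0, 50)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: mse(p, history))
        elite = pop[: pop_size // 4]           # elitism: best quarter survives
        children = []
        while len(elite) + len(children) < pop_size:
            a1, b1 = rng.choice(elite)
            a2, b2 = rng.choice(elite)
            children.append(((a1 + a2) / 2 + rng.gauss(0, 0.1),   # crossover
                             (b1 + b2) / 2 + rng.gauss(0, 0.5)))  # + mutation
        pop = elite + children
    best = min(pop, key=lambda p: mse(p, history))
    a, b = best
    return a * len(history) + b, mse(best, history)
```

Because the best individual is always carried over, the final fitting error can never exceed that of the best random individual in the initial population.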

    Assessment of an Adaptive Load Forecasting Methodology in a Smart Grid Demonstration Project

    This paper presents the implementation of an adaptive load forecasting methodology in two different power networks from a smart grid demonstration project deployed in the region of Madrid, Spain. The paper contains an exhaustive comparative study of different short-term load forecast methodologies, addressing the methods and variables that are most relevant for smart grid deployment. The evaluation followed in this paper suggests that the performance of the different methods depends on the conditions of the site in which the smart grid is implemented. It is shown that some non-linear methods, such as support vector machines with a radial basis function kernel and extremely randomized forests, offer good performance using only 24 lagged hourly load values, which could be useful when the amount of data available is limited due to communication problems in the smart grid monitoring system. However, it has to be highlighted that, in general, the behavior of different short-term load forecast methodologies is not stable when they are applied to different power networks, and that when there is considerable variability throughout the whole testing period, some methods offer good performance in some situations but fail in others. In this paper, an adaptive load forecasting methodology is proposed to address this issue, improving the forecasting performance through iterative optimization: in each specific situation, the best short-term load forecast methodology is chosen, resulting in minimum prediction errors. This work has been partly funded by the Spanish Ministry of Economy and Competitiveness through the National Program for Research Aimed at the Challenges of Society under the project OSIRIS (RTC-2014-1556-3). The authors would like to thank all of the partners in the OSIRIS project: Unión Fenosa Distribución S.A., Tecnalia, Orbis, Neoris, Ziv Metering Solutions, Telecontrol STM, and Universidad Carlos III de Madrid. The authors would also like to thank Charalampos Chelmis (University at Albany-SUNY) for the valuable discussion.
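The "choose the best method per situation" idea can be sketched with toy candidate forecasters: score each on a recent window and let the winner produce the next forecast. The naive, mean, and drift methods below are assumed for illustration; the article's candidates are full short-term load forecast models, not these.

```python
def naive(h):
    """Predict the last observed value."""
    return h[-1]

def mean(h):
    """Predict the historical average."""
    return sum(h) / len(h)

def drift(h):
    """Extend the average historical slope one step ahead."""
    return h[-1] + (h[-1] - h[0]) / (len(h) - 1)

METHODS = {"naive": naive, "mean": mean, "drift": drift}

def pick_method(history, window=8):
    """Choose the forecaster with the lowest mean absolute error over a recent window."""
    scores = {}
    for name, f in METHODS.items():
        errs = [abs(f(history[:t]) - history[t])
                for t in range(len(history) - window, len(history))]
        scores[name] = sum(errs) / window
    return min(scores, key=scores.get)

def adaptive_forecast(history, window=8):
    return METHODS[pick_method(history, window)](history)
```

On a steadily growing load the drift method is selected automatically; if the site's behavior changed to a flat profile, the same selection rule would switch to the naive or mean forecaster, which is the adaptivity the methodology targets.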

    EVA: Emergency Vehicle Allocation

    Emergency medicine plays a critical role in the development of a community, where the goal is to provide medical assistance in the shortest possible time. Consequently, the systems that support emergency operations need to be robust, efficient, and effective when managing the limited resources at their disposal. To achieve this, operators analyse historical data in search of patterns present in past occurrences that could help predict future call volume. This is a time-consuming and very complex task that could be addressed by machine learning solutions, which have performed well in the context of time series forecasting. Only once future demand is known can the distribution of available assets be optimized to support high-density zones. The current work aims to propose an integrated system capable of supporting decision-making in emergency operations in a real-time environment by allocating a set of available units within a service area based on hourly call volume predictions. The suggested system architecture employs a microservices approach along with event-based communications to enable real-time interactions between every component. This dissertation focuses on the call volume forecasting and allocation optimization components. A combination of traditional time series and deep learning models was used to model historical data from Virginia Beach emergency calls between the years 2010 and 2018, combined with several other features such as weather-related information. Deep learning solutions offered better error metrics, with WaveNet achieving an MAE of 0.04. Regarding the optimization of emergency vehicle locations, the proposed solution is based on a Linear Programming problem that minimizes the number of vehicles in each station, with a neighbour mechanism, entitled EVALP-NM, to add a buffer to stations near a high-density zone.
This solution was also compared against a Genetic Algorithm, which performed significantly worse in terms of execution time and outcomes. The performance of EVALP-NM was tested in simulations with different settings, such as the number of zones, stations, and ambulances.
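The dissertation formulates allocation as a Linear Programming problem; as a stdlib-only stand-in, the sketch below splits a fleet across stations in proportion to predicted hourly call volume (largest-remainder rule) and then mimics the neighbour mechanism by guaranteeing each neighbour of the busiest station at least one unit. Station names, call volumes, and the neighbour map are invented, and this heuristic is not EVALP-NM itself.

```python
def allocate(calls, fleet):
    """Largest-remainder proportional split of `fleet` units by predicted calls."""
    total = sum(calls.values())
    shares = {s: fleet * v / total for s, v in calls.items()}
    alloc = {s: int(share) for s, share in shares.items()}
    leftover = fleet - sum(alloc.values())
    # hand the remaining units to the largest fractional remainders
    order = sorted(calls, key=lambda s: shares[s] - alloc[s], reverse=True)
    for s in order[:leftover]:
        alloc[s] += 1
    return alloc

def with_neighbour_buffer(alloc, neighbours, calls):
    """Ensure each neighbour of the busiest station keeps at least one unit,
    pulling units from the currently best-stocked station."""
    hot = max(calls, key=calls.get)
    out = dict(alloc)
    for n in neighbours.get(hot, []):
        if out[n] == 0:
            donor = max(out, key=out.get)
            out[donor] -= 1
            out[n] += 1
    return out
```

The buffer step redistributes units without changing the fleet size, so a station next to a high-density zone is never left empty even when its own predicted volume rounds to zero.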

    RHAS: robust hybrid auto-scaling for web applications in cloud computing
