60 research outputs found

    Energy consumption forecasting: a proposed framework

    Get PDF
    With the development of underdeveloped countries and the digitization of societies, energy consumption is expected to keep growing strongly in the coming decades. While energy generation still relies heavily on fossil fuels, the implementation of energy policies is crucial to drive a gradual shift to renewable sources and the consequent reduction in CO2 emissions. Buildings are currently the sector that consumes the most energy. To contribute to better energy consumption efficiency, a framework is proposed, to be applied to buildings or households, that allows users to monitor their energy consumption and to forecast it. Different time series data analysis techniques were used to give users information about their energy consumption and to validate important data characteristics, namely stationarity and the existence of seasonality, which can affect the forecasting models. To define the forecasting models, a literature review was carried out to identify models in use for energy consumption forecasting, and three models were tested for each type of data, univariate and multivariate. For univariate data the tested models were SARIMA, Holt-Winters and LSTM; for multivariate data, SARIMA with exogenous variables, Support Vector Regression and LSTM. After a first run of each model, hyperparameter tuning was performed to assess the improvement of the results and the robustness of the models for later application in the framework.
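
    As a minimal illustration of the univariate baselines named above, the sketch below fits SARIMA and Holt-Winters to a consumption series with statsmodels. The hourly seasonal period of 24 and the SARIMA orders are illustrative assumptions, not the configurations used in the thesis.

    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    def fit_baselines(consumption: pd.Series, horizon: int = 24):
        """Fit SARIMA and Holt-Winters baselines and return point forecasts."""
        # Placeholder orders; the hyperparameter tuning the abstract
        # describes would search over these.
        sarima = SARIMAX(consumption, order=(1, 1, 1),
                         seasonal_order=(1, 1, 1, 24)).fit(disp=False)
        # Additive Holt-Winters with a daily seasonal cycle.
        hw = ExponentialSmoothing(consumption, trend="add",
                                  seasonal="add", seasonal_periods=24).fit()
        return sarima.forecast(horizon), hw.forecast(horizon)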

    Steganography in IP telephony

    Get PDF
    This master's thesis deals with steganographic techniques and their application in the field of IP telephony. It describes the individual steganographic methods and technologies used within IP telephony. The next part describes and analyses anomaly detection methods based on the Holt-Winters, Holt-Winters:Brutlag and Naive Bayes algorithms. It also covers embedding a steganogram in the SIP protocol header, the ability to detect it with an anomaly detection system, and the use of the SIPp application to generate injected SIP protocol messages. The next part of the thesis describes a steganographic method that uses the RTP protocol data flow to transfer text-based information. The achieved results are then evaluated.
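
    A hedged sketch of the Holt-Winters:Brutlag detection applied to traffic such as injected SIP messages: a point is flagged when it falls outside a confidence band built from exponentially smoothed seasonal deviations around the Holt-Winters fit. The smoothing factor gamma and band width delta below are illustrative assumptions, not the thesis's settings.

    import numpy as np

    def brutlag_bands(y, fitted, season_len, gamma=0.1, delta=2.0):
        """Brutlag confidence bands around a fitted Holt-Winters model."""
        d = np.zeros(len(y))                 # smoothed absolute deviations
        lower, upper = np.zeros(len(y)), np.zeros(len(y))
        for t in range(len(y)):
            prev = d[t - season_len] if t >= season_len else 0.0
            d[t] = gamma * abs(y[t] - fitted[t]) + (1 - gamma) * prev
            width = delta * (d[t - season_len] if t >= season_len else d[t])
            lower[t], upper[t] = fitted[t] - width, fitted[t] + width
        return lower, upper

    # A message rate y[t] outside [lower[t], upper[t]] is reported as anomalous.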

    Adaptive real-time detection of multidimensional anomalies

    Get PDF
    Data volumes are growing rapidly as data emerges from millions of devices. This brings an increasing need for streaming analytics: processing and analysing data in a record-by-record manner. In this work a comprehensive literature review on streaming analytics is presented, focusing on detecting anomalous behaviour. Challenges and approaches for streaming analytics are discussed, different ways of determining and identifying anomalies are shown, and a large number of anomaly detection methods for streaming data are presented, along with existing software platforms and solutions for streaming analytics. Based on the literature survey I chose one method for further investigation, the Lightweight on-line detector of anomalies (LODA). LODA is designed to detect anomalies in real time, even from high-dimensional data, and it is adaptive, updating its model on-line. LODA was tested on both synthetic and real data sets. This work shows how to set the parameters used with LODA. I present several improvement ideas for LODA and show that three of them bring important benefits. First, I show a simple addition to handle special cases so that an anomaly score can be computed for every data point. Second, I show cases where LODA fails due to a lack of data preprocessing. I suggest preprocessing schemes for streaming data and show that using them improves the results significantly while requiring only a small subset of the data for determining preprocessing parameters. Third, since LODA only gives anomaly scores, I suggest thresholding techniques to define anomalies. This work shows that the suggested techniques perform fairly well compared to the theoretical best performance, which makes it possible to use LODA in real streaming analytics situations.
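
    A minimal sketch of LODA, assuming the batch variant for brevity (the full method updates its histograms on-line): k sparse random projections, a one-dimensional histogram per projection, and an anomaly score equal to the mean negative log density. The projection and bin counts are illustrative defaults.

    import numpy as np

    class Loda:
        def __init__(self, n_proj=100, n_bins=20, seed=0):
            self.k, self.bins = n_proj, n_bins
            self.rng = np.random.default_rng(seed)

        def fit(self, X):
            n, d = X.shape
            nz = max(1, int(np.sqrt(d)))     # ~sqrt(d) nonzeros per projection
            self.W = np.zeros((self.k, d))
            for i in range(self.k):
                idx = self.rng.choice(d, nz, replace=False)
                self.W[i, idx] = self.rng.standard_normal(nz)
            Z = X @ self.W.T                 # projected data, shape (n, k)
            self.hists = [np.histogram(Z[:, i], bins=self.bins, density=True)
                          for i in range(self.k)]
            return self

        def score(self, X):
            Z = X @ self.W.T
            logp = np.zeros(Z.shape)
            for i, (dens, edges) in enumerate(self.hists):
                j = np.clip(np.searchsorted(edges, Z[:, i]) - 1, 0, self.bins - 1)
                logp[:, i] = np.log(dens[j] + 1e-12)
            return -logp.mean(axis=1)        # higher score = more anomalous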

    Data Science for Finance: Targeted Learning from (Big) Data to Economic Stability and Financial Risk Management

    Get PDF
    A thesis submitted in partial fulfillment of the requirements for the degree of Doctor in Information Management, specialization in Statistics and Econometrics.

    The modelling, measurement, and management of systemic financial stability remains a critical issue in most countries. Policymakers, regulators, and managers depend on complex models for financial stability and risk management. These models must be robust, realistic, and consistent with all relevant available data, which requires extensive data disclosure meeting the highest quality standards. However, stressed situations, financial crises, and pandemics give rise to many new risks with new requirements, such as new data sources and different models. This dissertation aims to show the data quality challenges of high-risk situations such as pandemics or economic crises and to propose new machine learning models for predictive and longitudinal time series modelling.

    In the first study (Chapter Two) we analysed and compared the quality of official datasets available for COVID-19, as a recent high-risk situation with dramatic effects on financial stability. We used comparative statistical analysis to evaluate the accuracy of data collection by a national organization (Chinese Center for Disease Control and Prevention) and two international ones (World Health Organization; European Centre for Disease Prevention and Control), based on the value of systematic measurement errors. We combined Excel files, text mining techniques, and manual data entry to extract the COVID-19 data from official reports and to generate an accurate profile for comparison. The findings show noticeable and increasing measurement errors in the three datasets as the pandemic outbreak expanded and more countries contributed data to the official repositories, raising data comparability concerns and pointing to the need for better coordination and harmonized statistical methods. The study offers a combined COVID-19 dataset and dashboard with minimal systematic measurement errors, and valuable insights into the potential problems of using databanks without carefully examining the metadata and additional documentation that describe the overall context of the data.

    In the second study (Chapter Three) we address credit risk, the most significant source of risk in banking, one of the most important sectors among financial institutions. We propose a new machine learning approach to online credit scoring that is sufficiently conservative and robust for unstable, high-risk situations. The chapter is aimed at credit scoring in risk management and presents a novel method for default prediction of high-risk branches or customers. The study uses the Kruskal-Wallis non-parametric statistic to form a conservative credit-scoring model and examines its impact on modelling performance for the benefit of the credit provider. The findings show that the new credit scoring methodology achieves a reasonable coefficient of determination and a very low false-negative rate. It is computationally inexpensive and highly accurate, with around 18% improvement in Recall/Sensitivity. Given the recent move towards continued credit/behaviour scoring, our study suggests applying this credit score to non-traditional data sources, allowing online loan providers to study and reveal changes in client behaviour over time and to choose reliable unbanked customers based on their application data. This is the first study to develop an online non-parametric credit scoring system that can automatically reselect effective features for continued credit evaluation and weigh them by their level of contribution, with good diagnostic ability.

    In the third study (Chapter Four) we focus on the financial stability challenges faced by insurance companies and pension schemes when managing systematic (undiversifiable) mortality and longevity risk. For this purpose, we first developed a new ensemble learning strategy for panel time series forecasting and studied its application to tracking respiratory disease excess mortality during the COVID-19 pandemic. The layered learning approach is a form of ensemble learning that addresses a given predictive task with different predictive models when a direct mapping from inputs to outputs is not accurate. We adopt a layered learning approach within an ensemble learning strategy to solve predictive tasks with improved performance, combining multiple learning processes into one ensemble model. In the proposed strategy, an appropriate holdout is specified individually for each model, and the models in the ensemble are selected by a proposed selection approach to be combined dynamically based on their predictive performance. This yields a high-performance ensemble model that automatically copes with the different kinds of time series of each panel member. In the experimental section, we studied more than twelve thousand observations in a portfolio of 61 time series (countries) of reported respiratory disease deaths, sampled monthly, to show the improvement in predictive performance. We then compared each country's forecasts of respiratory disease deaths generated by our model with the corresponding COVID-19 deaths in 2020. The results of this large set of experiments show that the accuracy of the ensemble model improves noticeably when different holdouts are used for the different contributing time series methods, based on the proposed model selection method. These improved time series models provide proper forecasts of respiratory disease deaths for each country, exhibiting a high correlation (0.94) with COVID-19 deaths in 2020.

    In the fourth study (Chapter Five) we used the new ensemble learning approach for time series modelling discussed in the previous chapter, together with K-means clustering, to forecast life tables in COVID-19 times. Stochastic mortality modelling plays a critical role in public pension design, population and public health projections, and the design, pricing, and risk management of life insurance contracts and longevity-linked securities. There is no general method for forecasting mortality rates that is applicable to all situations, especially in unusual years such as the COVID-19 pandemic. In this chapter, we investigate the feasibility of using an ensemble of traditional and machine learning time series methods to empower forecasts of age-specific mortality rates for groups of countries that share common longevity trends. We use Generalized Age-Period-Cohort stochastic mortality models to capture age and period effects, apply K-means clustering to the time series to group countries following common longevity trends, and use ensemble learning to forecast life expectancy and annuity prices by age and sex. To calibrate the models, we use data for 14 European countries from 1960 to 2018. The results show that the ensemble method delivers the most robust results overall, with the lowest RMSE, in the presence of structural changes in the shape of the time series at the time of COVID-19.

    In the dissertation's conclusions (Chapter Six), we provide more detailed insights into its overall contributions to financial stability and risk management through data science, as well as opportunities, limitations, and avenues for future research on the application of data science in finance and the economy.
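
    As a small illustration of the Kruskal-Wallis screening step from Chapter Three, the sketch below keeps the features whose distributions differ significantly between defaulted and non-defaulted applicants. The DataFrame layout, target column name and 0.05 threshold are illustrative assumptions, not the study's exact setup.

    import pandas as pd
    from scipy.stats import kruskal

    def select_features(df: pd.DataFrame, target: str = "default",
                        alpha: float = 0.05) -> list:
        """Keep features where the Kruskal-Wallis H test rejects equality."""
        keep = []
        for col in df.columns.drop(target):
            # One sample of this feature per target class (default vs not).
            groups = [g[col].dropna() for _, g in df.groupby(target)]
            _, p = kruskal(*groups)      # non-parametric test across classes
            if p < alpha:
                keep.append(col)
        return keep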

    Advanced Data Analytics Methodologies for Anomaly Detection in Multivariate Time Series Vehicle Operating Data

    Get PDF
    Early detection of faults in vehicle operating systems is a research domain of high significance, since anomalous behaviors usually cause performance loss for a long time before they are detected as critical failures. In other words, operating systems exhibit degradation as failure begins to occur. Indeed, repeated occurrences of failures in system performance are not only signals of anomalous behavior but also show that taking maintenance actions to preserve system performance is vital. Maintaining systems at nominal performance throughout their lifetime at the lowest maintenance cost is extremely challenging, and it is important to be aware of imminent failure before it arises and to implement the best countermeasures to avoid further losses. In this context, timely anomaly detection in operating system performance is worthy of investigation. Early detection of imminent anomalous behavior is difficult without appropriate modeling, prediction, and analysis of the system's time series records. Data-based technologies provide a strong foundation for developing advanced methods for modeling and predicting time series data streams. In this research, we propose novel methodologies to predict the patterns of multivariate time series operational data of a vehicle and to recognize second-by-second unhealthy states. These approaches support early detection of abnormalities in vehicle behavior based on multiple data channels recorded second by second for different functional working groups in the vehicle's operating systems. Furthermore, a real case study dataset is used to validate the accuracy of the proposed prediction and anomaly detection methodologies.
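
    The abstract does not name a specific model, so the following is only a generic, assumed illustration of prediction-based flagging for second-wise multivariate channels: fit a one-step-ahead predictor and mark the seconds whose residual exceeds a robust threshold. The ridge predictor, lag of 5 and MAD-based cutoff are all placeholders.

    import numpy as np
    from sklearn.linear_model import Ridge

    def flag_unhealthy(X: np.ndarray, lag: int = 5, k: float = 3.0) -> np.ndarray:
        """X: (seconds, channels). Boolean mask for seconds lag..n-1."""
        n, _ = X.shape
        # Lagged design matrix: predict X[t] from the previous `lag` seconds.
        Z = np.hstack([X[i:n - lag + i] for i in range(lag)])
        y = X[lag:]
        resid = np.linalg.norm(y - Ridge().fit(Z, y).predict(Z), axis=1)
        mad = np.median(np.abs(resid - np.median(resid))) + 1e-12
        return resid > np.median(resid) + k * 1.4826 * mad   # robust cutoff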

    Forecasting monthly airline passenger numbers with small datasets using feature engineering and a modified principal component analysis

    Get PDF
    In this study, a machine learning approach based on time series models, different feature engineering, feature extraction, and feature derivation is proposed to improve air passenger forecasting. Different types of datasets were created to extract new features from the core data. An experiment was undertaken with artificial neural networks to test the performance of neurons in the hidden layer, to optimise the dimensions of all layers, and to obtain an optimal choice of connection weights, so that the nonlinear optimisation problem could be solved directly. A method of tuning deep learning models using H2O (a feature-rich, open source machine learning platform known for its R and Spark integration and its ease of use) is also proposed, where the trained network model is built from samples of selected features from the dataset in order to ensure diversity of the samples and to improve training. A successful application of deep learning requires setting numerous parameters to achieve greater model accuracy, and the number of hidden layers and the number of neurons in each layer are key parameters of such a network. Grid search and random hyper-parameter search help in setting these important parameters. Moreover, a new ensemble strategy is suggested that shows potential to optimise parameter settings and hence save computational resources throughout the tuning process of the models. The main objective, besides improving the performance metric, is to obtain a distribution on some hold-out datasets that resembles the original distribution of the training data. Particular attention is given to creating a modified version of Principal Component Analysis (PCA) using a different correlation matrix, obtained from a different correlation coefficient based on kinetic energy, to derive new features. The data were collected from several airline datasets to build a deep prediction model for forecasting airline passenger numbers. Preliminary experiments show that fine-tuning provides an efficient approach for tuning the ultimate number of hidden layers and the number of neurons in each layer when compared with the grid search method. Similarly, the results show that the modified version of PCA is more effective than traditional PCA in data dimension reduction, class separability, and classification accuracy.
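
    The kinetic-energy-based correlation coefficient itself is not given in the abstract, so the sketch below only shows the surrounding mechanics of such a modified PCA: eigendecomposition of a pluggable correlation matrix, with Spearman correlation standing in as a placeholder for the paper's coefficient.

    import numpy as np
    from scipy.stats import spearmanr

    def pca_from_corr(X: np.ndarray, n_components: int, corr_fn=None):
        """Project standardized X onto the top eigenvectors of a correlation matrix."""
        Xs = (X - X.mean(axis=0)) / X.std(axis=0)
        # Any function returning a (d, d) correlation matrix can be swapped in.
        C = corr_fn(Xs) if corr_fn else spearmanr(Xs)[0]
        eigval, eigvec = np.linalg.eigh(C)       # eigenvalues in ascending order
        top = eigvec[:, ::-1][:, :n_components]  # leading principal axes
        return Xs @ top                          # derived features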

    Review of Low Voltage Load Forecasting: Methods, Applications, and Recommendations

    Full text link
    The increased digitalisation and monitoring of the energy system opens up numerous opportunities to decarbonise the energy system. Applications on low voltage, local networks, such as community energy markets and smart storage, will facilitate decarbonisation, but they will require advanced control and management. Reliable forecasting will be a necessary component of many of these systems to anticipate key features and uncertainties. Despite this urgent need, there has not yet been an extensive investigation into the current state of the art of low voltage level forecasts, other than at the smart meter level. This paper aims to provide a comprehensive overview of the landscape, current approaches, core applications, challenges, and recommendations. Another aim of this paper is to facilitate continued improvement and advancement in this area; to this end, the paper also surveys some of the most relevant and promising trends, and it establishes an open, community-driven list of the known low voltage level open datasets to encourage further research and development.
    Comment: 37 pages, 6 figures, 2 tables, review paper

    Machine learning-based algorithms to knowledge extraction from time series data: A review

    Get PDF
    To predict the future behavior of a system, we can exploit the information collected in the past, trying to identify recurring structures in what has happened in order to predict what could happen, provided the same structures repeat themselves in the future. A time series is a temporal sequence of numerical values of a measurable variable observed in the past. The values are sampled at equidistant time intervals, at an appropriate granular frequency such as the day, week, or month, and measured in physical units. In machine learning-based algorithms, the information underlying the knowledge is extracted from the data themselves, which are explored and analyzed in search of recurring patterns or to discover hidden causal associations or relationships. The prediction model extracts knowledge through an inductive process: the input is the data and, possibly, a first example of the expected output, and the machine then learns the procedure to follow to obtain the same result. This paper reviews the most recent work that has used machine learning-based techniques to extract knowledge from time series data.
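
    As a minimal sketch of the inductive setup the paragraph describes, the function below turns an equidistantly sampled series into (input window, expected output) pairs from which a learner can induce the mapping from past values to the next one. The window length of 12 (for example, twelve months) is an illustrative assumption.

    import numpy as np

    def to_supervised(series: np.ndarray, window: int = 12):
        """Return X with shape (n - window, window) and y with shape (n - window,)."""
        X = np.lib.stride_tricks.sliding_window_view(series, window)[:-1]
        y = series[window:]        # the value each window should predict
        return X, y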

    The generalization ability of artificial neural networks in forecasting TCP/IP network traffic trends

    Get PDF
    Artificial Neural Networks (ANNs) have been used in many fields for a variety of applications and have proved to be reliable. They are among the most powerful tools in the domain of forecasting and analysis of time series. The forecasting of TCP/IP network traffic is an important issue receiving growing attention from the computer networks community. By improving on this task, efficient network traffic engineering and anomaly detection tools can be created, resulting in economic gains from better resource management. The use of ANNs requires some critical decisions on the part of the user. These decisions, mainly concerning the components of the network structure and the parameters of the learning algorithm, can significantly affect the ability of the ANN to generalize, i.e. to have its outputs approximate target values for inputs that are not in the training set. This has an impact on the quality of the forecasts produced. Although the literature contains some discussion of the issues that affect network generalization ability, there is no universally accepted standard method for determining the optimum values of these parameters for a particular problem. This research examined the impact of a selection of key design features on the generalization ability of ANNs. We examined how the size and composition of the network architecture, the size of the training samples, the choice of learning algorithm, the training schedule, and the size of the learning rate, both individually and collectively, affect the ability of an ANN to learn the training data and to generalize well to novel data. To investigate this, we empirically conducted several experiments in forecasting a real-world TCP/IP network traffic time series, validating network performance with an independent test set. MATLAB version 7.4.0.287's Neural Network toolbox version 5.0.2 (R2007a) was used for the experiments. The results are promising in terms of ease of design and use of ANNs. Our results indicate that, in contrast to Occam's razor, for a single hidden layer an increase in the number of hidden neurons produces a corresponding increase in generalization ability; nevertheless, larger networks do not always generalize better. Also, contradicting commonly accepted guidelines, networks trained with a larger representation of the data exhibit better generalization than networks trained on smaller representations, even though the larger networks have a significantly greater capacity. Furthermore, the results indicate that the learning rate, momentum, training schedule, and choice of learning algorithm have an equally significant effect on ANN generalization ability. A number of conclusions were drawn from the results and used to generate a comprehensive set of guidelines to facilitate the design and use of ANNs in TCP/IP network traffic forecasting. The main contribution of this research lies in the identification of optimal strategies for the use of ANNs in forecasting TCP/IP network traffic trends. Although the information obtained from these tests is specific to the problem considered, it provides users of back-propagation networks with a valuable guide to the behaviour of networks under a wide range of operating conditions. The guidelines derived from this research are assistive rather than restrictive for potential ANN modellers.
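
    The experiments above were run in MATLAB's Neural Network toolbox; as an assumed stand-in, the scikit-learn sketch below exposes the same design knobs the thesis studies for a single-hidden-layer backpropagation forecaster: hidden layer size, learning rate and momentum, with an internal held-out set to monitor generalization. The window length and default values are illustrative.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def train_forecaster(traffic: np.ndarray, window: int = 24, hidden: int = 10,
                         lr: float = 0.01, momentum: float = 0.9) -> MLPRegressor:
        """Fit a one-hidden-layer net to predict the next traffic sample."""
        X = np.lib.stride_tricks.sliding_window_view(traffic, window)[:-1]
        y = traffic[window:]
        net = MLPRegressor(hidden_layer_sizes=(hidden,), solver="sgd",
                           learning_rate_init=lr, momentum=momentum,
                           max_iter=2000, early_stopping=True)  # validation split
        return net.fit(X, y)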

    Cyber Security

    Get PDF
    This open access book constitutes the refereed proceedings of the 17th International Annual Conference on Cyber Security, CNCERT 2021, held in Beijing, China, in July 2021. The 14 papers presented were carefully reviewed and selected from 51 submissions. The papers are organized according to the following topical sections: data security; privacy protection; anomaly detection; traffic analysis; social network security; vulnerability detection; text classification.