Contributions to time series analysis, modelling and forecasting to increase reliability in industrial environments.
356 p. The integration of the Internet of Things into the industrial sector is key to achieving business intelligence. This study focuses on improving or proposing new approaches to increase the reliability of AI solutions based on time-series data in industry. Three phases are addressed: improving the quality of the data, the models, and the errors. A standard definition of quality metrics is proposed and included in the dqts R package. The steps of time-series modelling are explored, from feature extraction to the choice and application of the most efficient prediction model. The KNPTS method, based on searching for patterns in the historical data, is presented as an R package for estimating future data. In addition, the use of elastic similarity measures for evaluating regression models is suggested, as is the importance of suitable metrics in imbalanced-class problems. The contributions were validated in industrial use cases from different fields: product quality, electricity consumption forecasting, porosity detection, and machine diagnosis.
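The pattern-search idea behind KNPTS (find the historical windows most similar to the latest one and average their successors into a forecast) can be sketched roughly as follows. This is a hypothetical Python illustration, not the KNPTS R implementation; the function name and parameters are invented for the example.

```python
# Hypothetical sketch of pattern-based forecasting in the spirit of KNPTS:
# match the last `window` observations against all historical windows
# (Euclidean distance) and average the successors of the k nearest.

def knn_pattern_forecast(series, window=3, k=2, horizon=1):
    """Forecast `horizon` steps ahead from the k nearest historical patterns."""
    query = series[-window:]
    candidates = []
    for start in range(len(series) - window - horizon + 1):
        hist = series[start:start + window]
        dist = sum((a - b) ** 2 for a, b in zip(hist, query)) ** 0.5
        successor = series[start + window:start + window + horizon]
        candidates.append((dist, successor))
    candidates.sort(key=lambda t: t[0])
    nearest = candidates[:k]
    # Average the successors of the k nearest patterns.
    return [sum(s[i] for _, s in nearest) / k for i in range(horizon)]

# A perfectly periodic series: the matched patterns repeat exactly.
series = [1.0, 2.0, 3.0, 1.0, 2.0, 3.0, 1.0, 2.0]
print(knn_pattern_forecast(series, window=2, k=2, horizon=1))
```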
The impact of macroeconomic leading indicators on inventory management
Forecasting tactical sales is important for long-term decisions such as procurement and for informing lower-level inventory management decisions. Macroeconomic indicators have been shown to improve forecast accuracy at the tactical level, as these indicators can provide early warnings of changing markets, while at the same time tactical sales are sufficiently aggregated to facilitate the identification of useful leading indicators. Past research has shown that we can achieve significant gains by incorporating such information. However, at the lower levels at which inventory decisions are taken, this is often not feasible due to the level of noise in the data. To take advantage of macroeconomic leading indicators at this level, we need to translate the tactical forecasts into operational-level ones. In this research we investigate how best to assimilate top-level forecasts that incorporate such exogenous information with bottom-level (Stock Keeping Unit, SKU) extrapolative forecasts. The aim is to demonstrate whether incorporating these variables has a positive impact on bottom-level planning and eventually on inventory levels. We construct appropriate hierarchies of sales and use that structure to reconcile the forecasts, and in turn the different available information, across levels. We are interested in both the point forecasts and the prediction intervals, as the latter inform safety stock decisions. Therefore, the contribution of this research is twofold: we investigate the usefulness of macroeconomic leading indicators for SKU-level forecasts, and alternative ways to estimate the variance of hierarchically reconciled forecasts. We provide evidence using a real case study.
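One simple way to translate a top-level forecast into coherent SKU-level ones is top-down disaggregation in proportion to the bottom-level forecasts. The sketch below is an illustrative Python toy under that assumption, not the reconciliation method of the study (which considers richer hierarchical schemes); the function name is invented for the example.

```python
# Hedged sketch of one basic reconciliation scheme from the
# hierarchical-forecasting literature: scale SKU-level forecasts so
# they sum to the top-level (tactical) forecast, making the hierarchy
# coherent while preserving the bottom-level proportions.

def top_down_reconcile(top_forecast, sku_forecasts):
    """Scale SKU-level forecasts so they sum to the top-level forecast."""
    total = sum(sku_forecasts)
    if total == 0:
        raise ValueError("bottom-level forecasts sum to zero")
    return [top_forecast * f / total for f in sku_forecasts]

reconciled = top_down_reconcile(120.0, [30.0, 50.0, 20.0])
print(reconciled)       # proportions preserved
print(sum(reconciled))  # coherent with the top level
```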
An Assessment of PIER Electric Grid Research 2003-2014 White Paper
This white paper describes the circumstances in California around the turn of the 21st century that led the California Energy Commission (CEC) to direct additional Public Interest Energy Research funds to address critical electric grid issues, especially those arising from integrating high penetrations of variable renewable generation with the electric grid. It contains an assessment of the beneficial science and technology advances of the resultant portfolio of electric grid research projects administered under the direction of the CEC by a competitively selected contractor, the University of California’s California Institute for Energy and the Environment, from 2003 to 2014.
Forecasting: theory and practice
Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The large number of forecasting applications calls for a diverse set of forecasting methods to tackle real-life challenges. This article provides a non-systematic review of the theory and the practice of forecasting. We provide an overview of a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts.
We do not claim that this review is an exhaustive list of methods and applications. However, we wish that our encyclopedic presentation will offer a point of reference for the rich work that has been undertaken over the last decades, with some key insights for the future of forecasting theory and practice. Given its encyclopedic nature, the intended mode of reading is non-linear. We offer cross-references to allow the readers to navigate through the various topics. We complement the theoretical concepts and applications covered by large lists of free or open-source software implementations and publicly-available databases.
Deep learning driven data analytics for smart grids
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. As advanced metering infrastructure (AMI) and wide-area monitoring systems (WAMSs) are deployed rapidly and widely, the conventional power grid is transitioning towards the smart grid at an increasing speed. Smart metering devices and real-time monitoring systems generate a huge volume of data on a daily basis, and this data can be put to full use to advance the development of the smart grid through big data analytics, especially deep learning. Thus, this thesis focuses on data analysis for smart grids from three different aspects.
Firstly, a real-time data-driven event detection method is presented, which is robust when dealing with corrupted and significantly noisy data from phasor measurement units (PMUs). Specifically, the presented event detection method is based on a novel combination of random matrix theory (RMT) and Kalman filtering. Furthermore, a dynamic Kalman filtering technique, which adjusts the measurement noise covariance matrix, is proposed as the data conditioner of the presented method in order to condition the PMU data. The experimental results show that the presented method is indeed robust in practical situations involving significant levels of noisy or missing PMU data.
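As a rough illustration of the Kalman-filtering building block, the toy below runs a one-dimensional filter with a constant-state model and fixed noise covariances. The thesis's dynamic variant adapts the measurement noise covariance and operates on matrix-valued PMU data, so this is only a conceptual sketch with invented parameter values.

```python
# Minimal one-dimensional Kalman filter sketch (illustrative only; the
# measurement noise covariance r is fixed here, whereas the dynamic
# variant described above adjusts it to condition noisy PMU data).

def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Filter a noisy scalar signal under a constant-state model."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                  # predict: state assumed constant
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the measurement residual
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

noisy = [1.1, 0.9, 1.05, 0.95, 1.0, 1.02]
print(kalman_1d(noisy)[-1])  # settles towards the underlying level
```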
Secondly, a short-term residential load forecasting method is proposed on the basis of deep learning and k-means clustering, which is capable of extracting the similarity of residential loads effectively and of predicting accurately at the individual residential level. Specifically, it makes full use of k-means clustering to extract similarity among residential loads and of deep learning to extract their complex patterns. In addition, to improve the forecasting accuracy, a comprehensive feature expression strategy is utilised to describe the load characteristics of each time step in detail. The experimental results suggest that the proposed method can achieve high forecasting accuracy in terms of both root mean square error (RMSE) and mean absolute error (MAE).
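The clustering step can be pictured with a small pure-Python k-means over toy load profiles (real AMI profiles would have 24, 48, or 96 readings per day). This sketch is illustrative and unrelated to the thesis code.

```python
# Illustrative sketch of the clustering step: group daily load profiles
# with k-means so that similar households can share a forecasting model.
import random

def kmeans(profiles, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(profiles, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in profiles:
            # Assign each profile to its nearest centroid (squared distance).
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # Recompute centroids as cluster means; keep old centroid if empty.
        centroids = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centroids[j]
            for j, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Two obvious groups: low overnight users vs. high evening-peak users.
profiles = [[1, 1, 2, 1], [1, 2, 2, 1], [8, 9, 9, 8], [9, 9, 8, 8]]
centroids, clusters = kmeans(profiles, k=2)
print(sorted(len(c) for c in clusters))  # the two groups are recovered
```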
Thirdly, an online individual residential load forecasting method is developed based on a combination of deep learning and dynamic mirror descent (DMD), which is able to predict residential load in real time and adjust the prediction error over time in order to improve the prediction performance. More specifically, it first employs a long short-term memory (LSTM) network to build a prediction model offline, and then applies it online with DMD correcting the prediction error. To increase the prediction accuracy, a comprehensive feature expression strategy is used to describe the load characteristics at each time step in detail. The experimental results indicate that the developed method can obtain high prediction accuracy in terms of both RMSE and MAE.
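The online-correction idea can be caricatured as a base forecast plus a bias term learned from observed errors. The sketch below substitutes a naive last-value forecaster for the LSTM and a plain gradient-style update for dynamic mirror descent, so it conveys only the flavour of the approach; names and the learning rate are invented.

```python
# Highly simplified stand-in for offline-model-plus-online-correction:
# the base model predicts, and a bias term is updated online from each
# observed error so later forecasts improve.

def online_forecast(stream, lr=0.5):
    """Yield corrected one-step forecasts; learn the bias from errors."""
    bias = 0.0
    forecasts = []
    prev = stream[0]
    for actual in stream[1:]:
        pred = prev + bias             # base forecast (last value) + correction
        forecasts.append(pred)
        bias += lr * (actual - pred)   # gradient-style online update
        prev = actual
    return forecasts

# A steadily trending series: the learned bias approaches the step size,
# so the forecast error shrinks over time.
stream = [10.0, 11.0, 12.0, 13.0, 14.0, 15.0]
print(online_forecast(stream))
```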
To sum up, the proposed real-time event detection method contributes to the monitoring and operation of smart grids, while the proposed residential load forecasting methods contribute to demand-side response in smart grids.
Forecasting: theory and practice
Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The lack of a free-lunch theorem implies the need for a diverse set of forecasting methods to tackle an array of applications. This unique article provides a non-systematic review of the theory and the practice of forecasting. We offer a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts, including operations, economics, finance, energy, environment, and social good. We do not claim that this review is an exhaustive list of methods and applications; the list was compiled based on the expertise and interests of the authors. However, we wish that our encyclopedic presentation will offer a point of reference for the rich work that has been undertaken over the last decades, with some key insights for the future of forecasting theory and practice.
Big data techniques for processing massive data streams in real time
Doctoral Programme in Biotechnology, Engineering and Chemical Technology. Research line: Engineering, Data Science and Bioinformatics. Programme code: DBI; line code: 111. Machine learning techniques have become one of the resources most demanded by companies due to the large volume of data that surrounds us these days. The main objective of these technologies is to solve complex problems in an automated way using data. One of the current perspectives of machine learning is the analysis of continuous flows of data, or data streaming. This approach is increasingly requested by enterprises as a result of the large number of information sources producing time-indexed data at high frequency, such as sensors, Internet of Things devices, social networks, etc. However, research nowadays focuses more on the study of historical data than on data received in streaming. One of the main reasons for this is the enormous challenge that this type of data presents for the modelling of machine learning algorithms.
This Doctoral Thesis is presented as a compendium of publications with a total of 10 scientific contributions in international conferences and in journals with a high impact factor in the Journal Citation Reports (JCR). The research developed during the PhD programme focuses on the study and analysis of real-time or streaming data through the development of new machine learning algorithms. Machine learning algorithms for real-time data require a different type of modelling from the traditional one, in which the model is updated online to provide accurate responses in the shortest possible time. The main objective of this Doctoral Thesis is to contribute research value to the scientific community through three new machine learning algorithms. These algorithms are big data techniques, and two of them work with online or streaming data. In this way, contributions are made to the development of one of the current trends in Artificial Intelligence.
With this purpose, algorithms are developed for descriptive and predictive tasks, i.e., unsupervised and supervised learning, respectively. Their common idea is the discovery of patterns in the data.
The first technique developed during the dissertation is a triclustering algorithm that produces three-dimensional data clusters in offline or batch mode. This big data algorithm is called bigTriGen. Broadly, an evolutionary metaheuristic is used to search for groups of data with similar patterns. The model applies genetic operators such as selection, crossover, mutation, and evaluation at each iteration. The goal of bigTriGen is to optimise the evaluation function to achieve triclusters of the highest possible quality. It serves as the basis for the second technique implemented during the Doctoral Thesis.
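A bare-bones evolutionary loop with the operators named above (selection, crossover, mutation, evaluation) might look like the toy below, which evolves bit strings towards all ones. The real bigTriGen evolves tricluster encodings over three-dimensional data, so everything here, including the fitness function, is illustrative.

```python
# Minimal evolutionary metaheuristic: tournament selection, one-point
# crossover, bit-flip mutation, and a simple evaluation function
# (count of ones), iterated over a fixed number of generations.
import random

def evolve(length=10, pop_size=20, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    fitness = sum  # evaluation operator: number of ones in the string
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # Selection: two size-3 tournaments pick the parents.
            a = max(rng.sample(pop, 3), key=fitness)
            b = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, length)      # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:              # mutation: flip one bit
                i = rng.randrange(length)
                child[i] = 1 - child[i]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = evolve()
print(sum(best))  # should be at or near the optimum of 10
```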
The second algorithm focuses on the creation of groups over three-dimensional data received in real time, or in streaming. It is called STriGen. Streaming modelling starts from an offline or batch model built on historical data. As soon as this model is created, it starts receiving data in real time, and the model is updated in an online or streaming manner to adapt to new streaming patterns. In this way, STriGen is able to detect concept drifts and incorporate them into the model as quickly as possible, thus producing good-quality triclusters in real time.
The last algorithm developed in this dissertation follows a supervised learning approach for time series forecasting in real time. It is called StreamWNN. A model is created from historical data based on the k-nearest neighbours (KNN) algorithm. Once the model is created, data starts to be received in real time. The algorithm provides real-time predictions of future data, keeping the model updated incrementally and incorporating streaming patterns identified as novelties. StreamWNN also identifies anomalous data in real time, allowing this feature to be used as a security measure during its application.
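A minimal sketch of the streaming nearest-neighbour idea, with incremental memory updates and distance-based novelty flagging, might look like this (hypothetical Python, not the actual StreamWNN; the class name and threshold are invented for the example):

```python
# Hedged sketch of a streaming nearest-neighbour forecaster: keep a
# memory of (pattern, next_value) pairs, forecast with the nearest
# stored pattern, flag a pattern as novel/anomalous when it is far
# from everything stored, and grow the memory incrementally.

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class StreamingNN:
    def __init__(self, threshold):
        self.memory = []          # list of (pattern, next_value) pairs
        self.threshold = threshold

    def predict(self, pattern):
        """Return (forecast, is_novel) for the incoming pattern."""
        if not self.memory:
            return None, True
        d, nxt = min((dist(p, pattern), v) for p, v in self.memory)
        return nxt, d > self.threshold

    def update(self, pattern, next_value):
        self.memory.append((pattern, next_value))  # incremental update

model = StreamingNN(threshold=0.5)
model.update([1.0, 2.0], 3.0)
model.update([2.0, 3.0], 1.0)
print(model.predict([1.0, 2.1]))   # close to a stored pattern
print(model.predict([9.0, 9.0]))   # far from everything: flagged as novel
```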
The developed algorithms have been evaluated with real data from devices and sensors. These new techniques have proved very useful, providing meaningful triclusters and accurate predictions in real time. Universidad Pablo de Olavide de Sevilla, Departamento de Deporte e Informática.
Optimising power flow in a volatile electrical grid using a message passing algorithm
Current methods of optimal power flow were not designed to handle the increasing level of volatility in electrical networks; this thesis suggests that a message-passing-based approach could be useful for managing power distribution in electricity networks. It shows the adaptability of message passing algorithms and demonstrates and validates their capabilities in addressing scenarios with inherent fluctuations, in minimising load shedding and generation costs, and in limiting voltages. Results are promising, but more work is needed before this is practical for real networks.