
    Data analysis and machine learning approaches for time series pre- and post-processing pipelines

    In the industrial domain, time series are typically generated continuously by sensors that constantly capture and monitor the operation of machines in real time. It is therefore important that cleaning algorithms support near-real-time operation. Moreover, as the data evolve, the cleaning strategy must adapt incrementally, to avoid having to restart the cleaning process from scratch every time. The goal of this thesis is to assess the feasibility of applying machine learning workflows to the data preprocessing stages. To this end, this work proposes methods capable of selecting optimal preprocessing strategies, trained on the available historical data by minimizing empirical loss functions. Specifically, this thesis studies the processes of time series compression, variable joining, observation imputation, and surrogate model generation. In each of them, the aim is the optimal selection and combination of multiple strategies. This approach is defined in terms of the characteristics of the data and of the system properties and constraints defined by the user.
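A minimal sketch of the strategy-selection idea described above, assuming scikit-learn and synthetic historical data (the candidate imputers, masking rate, and loss are illustrative, not the thesis' actual pipeline): several preprocessing strategies are scored by the empirical loss they incur on artificially hidden historical values, and the one with the lowest loss is kept.

```python
# Select a preprocessing (imputation) strategy by minimizing an empirical loss
# measured on historical data with artificially masked entries.
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
history = rng.normal(size=(500, 4))          # stand-in for historical sensor data

# Hide a random 10% of known entries so the imputation error can be measured.
mask = rng.random(history.shape) < 0.10
masked = history.copy()
masked[mask] = np.nan

candidates = {
    "mean": SimpleImputer(strategy="mean"),
    "median": SimpleImputer(strategy="median"),
    "knn": KNNImputer(n_neighbors=5),
}

def empirical_loss(imputer):
    """MSE between the true hidden values and the imputed ones."""
    filled = imputer.fit_transform(masked)
    return mean_squared_error(history[mask], filled[mask])

best = min(candidates, key=lambda name: empirical_loss(candidates[name]))
print("selected strategy:", best)
```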

    Distributed cloud-edge analytics and machine learning for transportation emissions estimation

    (English) In recent years, IoT and Smart Cities have become a popular computing paradigm based on connected, network-enabled devices that provide different functionalities, from sensor measurements to domotic actions. With this paradigm it is possible to provide stakeholders with near-real-time information from the field, e.g. the current pollution of the city. Along with the mentioned paradigms, Fog Computing enables computation near the sensors where the data are produced, i.e. the Edge nodes. This paradigm provides low latency and fault tolerance given the possible independence of the sensor devices. Moreover, pushing the computation to the Edge yields derived results in a near-real-time fashion. This ability to compute where the data are produced can be beneficial in many situations; however, it also requires including in the Edge the data preparation processes that ensure the fitness for use of the data, as the incoming data can be erroneous. Given this situation, Machine Learning can be useful to correct data and also to produce predictions of future values. Even though there have been studies on the uses of data at the Edge, to our knowledge there is no evaluation of the different modeling situations and the viability of the approach. Therefore, this thesis aims to evaluate the possibility of building a distributed system that ensures the fitness for use of the incoming data through Machine Learning enabled data preparation, estimates the emissions, and predicts the future status of the city in a near-real-time fashion. We evaluate the viability through three contributions. The first contribution focuses on forecasting in a distributed scenario, with a road traffic dataset for evaluation, and provides a robust solution to build a central model. This approach is based on Federated Learning, which allows training models at the Edge nodes and then merging them centrally. This way the models at the Edge can be independent but can also be synchronized. The results show the trade-off between accuracy and training time, and a comparison between low-powered devices and server-class machines. These analyses show that it is viable to use Machine Learning with this paradigm. The second contribution focuses on a particular use case of ship emission estimation. To estimate exhaust emissions the data must be correct, which is not always the case. This contribution explores the different techniques available to correct ship registry data and proposes the usage of simple Machine Learning techniques to impute missing or erroneous values. It analyzes the different variables and their relationships to provide practitioners with guidelines for correction and data treatment. The results show that with classical Machine Learning it is possible to improve on the state-of-the-art results. Moreover, as these algorithms are simple enough, they can be used on an Edge device if required. The third contribution focuses on generating new variables from those available in a ship trace dataset obtained from the Automatic Identification System (AIS). We use a pipeline of two different methods, a neural network and a clustering algorithm, to group movements into movement patterns or behaviors. We test the predictive power of these behaviors to predict ship type, main engine power, and navigational status. The prediction of the main engine power is compared against the standard technique used in ship emission estimation when the ship registry is missing.
Our approach was able to detect 45% of the emissions that would otherwise go undetected if the baseline method were used. As ship navigational status is prone to error, the behaviors found are proposed as an alternative variable based on robust data. These contributions build a framework that can distribute the learning processes and that withstands network failures on low-powered devices.
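The Federated Learning approach described in the abstract above can be illustrated with a hedged sketch, assuming NumPy and synthetic traffic-like data (the node sizes, the linear model, and the FedAvg-style weighted mean are assumptions for illustration, not the thesis implementation): each Edge node fits a local model on its own data, and only the model weights travel to the server, where they are merged.

```python
# FedAvg-style merge of models trained independently on Edge nodes.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0, 0.5])          # shared underlying relationship

def local_fit(n_samples):
    """Train a linear model on one Edge node; return (weights, sample count)."""
    X = rng.normal(size=(n_samples, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w, n_samples

# Three heterogeneous Edge nodes with different amounts of local data.
edge_updates = [local_fit(n) for n in (200, 50, 120)]

# Server-side merge: weight each local model by its number of samples.
weights = np.array([w for w, _ in edge_updates])
counts = np.array([n for _, n in edge_updates], dtype=float)
global_w = (weights * counts[:, None]).sum(axis=0) / counts.sum()
print("merged global model:", global_w)
```

The design point mirrored here is that the raw data never leave the Edge nodes; only compact model parameters are exchanged, which keeps the nodes independent yet synchronizable.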

    Real-time data-driven missing data imputation for short-term sensor data of marine systems. A comparative study

    In the maritime industry, sensors are utilised to implement condition-based maintenance (CBM) to assist decision-making processes for energy efficient operations of marine machinery. However, the employment of sensors presents several challenges, including the imputation of missing values. Data imputation is a crucial pre-processing step, the aim of which is the estimation of identified missing values to avoid under-utilisation of data that can lead to biased results. Although various studies have been developed on this topic, none of the studies so far have considered the option of imputing incomplete values in real time to assist instant data-driven decision-making strategies. Hence, a methodological comparative study has been developed that examines a total of 20 widely implemented machine learning and time series forecasting algorithms. Moreover, a case study on a total of 7 machinery system parameters obtained from sensors installed on a cargo vessel is utilised to highlight the implementation of the proposed methodology. To assess the models’ performance, seven metrics are estimated (execution time, MSE, MSLE, RMSE, MAPE, MedAE, Max Error). In all cases, ARIMA outperforms the remaining models, yielding a MedAE of 0.08 r/min and a Max Error of 2.4 r/min for the main engine rotational speed parameter.
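A hedged sketch of the real-time imputation setting compared in the study, assuming statsmodels and a synthetic rotational-speed-like stream (the ARIMA order and the 30-sample warm-up are illustrative choices, not the study's configuration): whenever a reading arrives missing, a model refitted on the history seen so far forecasts one step ahead, and that forecast is used as the imputed value.

```python
# Real-time, data-driven imputation: forecast a missing reading from the
# history observed up to that point.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
stream = 80 + np.cumsum(rng.normal(scale=0.5, size=300))   # synthetic rpm-like signal
stream[[120, 121, 250]] = np.nan                           # simulate dropped readings

history = []
for value in stream:
    if np.isnan(value) and len(history) > 30:
        # Refit on the data seen so far and impute with a one-step forecast.
        fitted = ARIMA(np.array(history), order=(1, 1, 1)).fit()
        value = float(fitted.forecast(steps=1)[0])
    history.append(value)

print("imputed series length:", len(history))
```

Refitting on every missing sample keeps the sketch simple; in practice the execution-time metric reported above is exactly what constrains how often such a refit is affordable.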

    Graph signal reconstruction techniques for IoT air pollution monitoring platforms

    Air pollution monitoring platforms play a very important role in preventing and mitigating the effects of pollution. Recent advances in the field of graph signal processing have made it possible to describe and analyze air pollution monitoring networks using graphs. One of the main applications is the reconstruction of the measured signal in a graph using a subset of sensors. Reconstructing the signal using information from neighboring sensors is a key technique for maintaining network data quality, with examples including filling in missing data with correlated neighboring nodes, creating virtual sensors, or correcting a drifting sensor with neighboring sensors that are more accurate. This paper proposes a signal reconstruction framework for air pollution monitoring data where a graph signal reconstruction model is superimposed on a graph learned from the data. Different graph signal reconstruction methods are compared on actual air pollution data sets measuring O3, NO2, and PM10. The ability of the methods to reconstruct the signal of a pollutant is shown, as well as the computational cost of this reconstruction. The results indicate the superiority of kernel-based graph signal reconstruction methods, as well as the difficulty these methods have in scaling to an air pollution monitoring network with a large number of low-cost sensors. However, we show that the scalability of the framework can be improved with simple methods, such as partitioning the network using a clustering algorithm. This work is supported by the National Spanish funding PID2019-107910RB-I00, by regional project 2017SGR-990, and with the support of Secretaria d’Universitats i Recerca de la Generalitat de Catalunya i del Fons Social Europeu.
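As an illustration of reconstructing a graph signal from a subset of sensors, here is a minimal sketch using a Laplacian smoothness prior rather than the kernel-based methods evaluated in the paper (the adjacency matrix and readings are made up): missing node values are obtained by solving the first-order condition of minimizing x^T L x with the observed nodes held fixed.

```python
# Smoothness-based graph signal reconstruction: fill unobserved nodes so that
# the completed signal minimizes the Laplacian quadratic form x^T L x.
import numpy as np

# Small hand-made adjacency matrix standing in for a graph learned from data.
W = np.array([
    [0, 1, 0, 1, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [1, 0, 1, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)
L = np.diag(W.sum(axis=1)) - W                    # combinatorial graph Laplacian

signal = np.array([30.0, 32.0, np.nan, 31.0, np.nan])   # e.g. O3 readings, 2 missing
known = ~np.isnan(signal)

# Block-solve L_uu x_u = -L_uk x_k for the unknown nodes.
L_uu = L[np.ix_(~known, ~known)]
L_uk = L[np.ix_(~known, known)]
signal[~known] = np.linalg.solve(L_uu, -L_uk @ signal[known])
print("reconstructed signal:", signal)
```

The scalability remark in the abstract maps onto the linear solve above: its cost grows with the number of unobserved nodes, which is why partitioning the network with a clustering algorithm and reconstructing per cluster helps.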

    Mind the large gap: novel algorithm using seasonal decomposition and elastic net regression to impute large intervals of missing data in air quality data

    Get PDF
    Air quality data sets are widely used in numerous analyses. Missing values are ubiquitous in air quality data sets because the data are collected through sensors. Recovering missing data is a challenging task in the data preprocessing stage, and it becomes more challenging in time series data, where time is an implicit variable that cannot be ignored. Even though existing methods for dealing with missing data in time series perform well when the percentage of missing values is relatively low and the gap size is small, their performance is noticeably lower for large gaps. This paper presents a novel algorithm based on seasonal decomposition and elastic net regression to impute large gaps in time series data when correlated variables exist. The method outperforms several existing univariate approaches, namely Kalman smoothing on ARIMA models, Kalman smoothing on structural time series models, linear interpolation, and mean imputation, in imputing large gaps. However, it is applicable only when one or more variables correlated with the gapped time series are available.
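A minimal sketch of the general idea, assuming scikit-learn and synthetic hourly data (the seasonal features, gap position, and elastic net settings are illustrative, not the paper's exact algorithm): a correlated covariate plus explicit seasonal terms feed an elastic net regression that predicts the target inside a large contiguous gap.

```python
# Fill a large contiguous gap in a time series using a correlated covariate
# and seasonal features with elastic net regression.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(3)
t = np.arange(24 * 60)                                   # hourly index, 60 days
season = 10 * np.sin(2 * np.pi * t / 24)                 # daily cycle
covariate = season + rng.normal(scale=1.0, size=t.size)  # correlated neighbouring sensor
target = 0.8 * covariate + 5 + rng.normal(scale=0.5, size=t.size)

gap = slice(600, 800)                                    # large contiguous gap to impute
observed = np.ones(t.size, dtype=bool)
observed[gap] = False                                    # pretend these values are missing

# Features: the correlated variable plus explicit daily seasonal terms.
X = np.column_stack([covariate,
                     np.sin(2 * np.pi * t / 24),
                     np.cos(2 * np.pi * t / 24)])
model = ElasticNet(alpha=0.1).fit(X[observed], target[observed])

target_imputed = target.copy()
target_imputed[gap] = model.predict(X[gap])
print("imputed gap mean:", target_imputed[gap].mean())
```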