
    Contributions to improve the technologies supporting unmanned aircraft operations

    Unmanned Aerial Vehicles (UAVs), in their smaller versions known as drones, are becoming increasingly important in today's societies. The systems that make them up present a multitude of challenges, of which error can be considered the common denominator. Perception of the environment relies on sensors that have errors, and the models that interpret this information and/or define behaviors are approximations of the world and therefore also carry errors. Accounting for error makes it possible to extend the limits of deterministic models to address real-world problems. The performance of the technologies embedded in drones depends on our ability to understand, model, and control the error of the systems that integrate them, as well as of new technologies that may emerge. Flight controllers integrate various subsystems that generally depend on other systems. One example is the guidance system, which provides the motor controller with the information necessary to accomplish a desired mission. To this end, it comprises a guidance control law that reacts to the information perceived by the perception and navigation systems. The error of any of these subsystems propagates through the controller ecosystem, so studying each of them is essential. Among the strategies for error control are state-space estimators, where the Kalman filter has been a great ally of engineers since its appearance in the 1960s. Kalman filters are at the heart of information fusion systems: they minimize the error covariance of the system, filter the measured states, and estimate them in the absence of observations. State Space Models (SSMs) are built on a set of hypotheses about the world, among them that the models must be linear and Markovian and that their error must be Gaussian. In general, systems are not linear, so linearizations are performed on models that are already approximations of the world. In other cases, the noise to be controlled is not Gaussian, but it is approximated by that distribution in order to deal with it. Moreover, many systems are not Markovian, i.e., their states do not depend only on the previous state, but involve other dependencies that state-space models cannot handle. This thesis presents a collection of studies in which error is formulated and reduced. First, the error in a computer vision-based precision landing system is studied; then, estimation and filtering problems are addressed from a deep learning perspective; finally, classification with deep learning over trajectories is studied. The first case of the collection studies the consequences of error propagation in a machine vision-based precision landing system and proposes a set of strategies to reduce the impact on the guidance system and, ultimately, the error. The next two studies approach the estimation and filtering problem from the deep learning perspective, where error is a function to be minimized by learning. The last case of the collection deals with a trajectory classification problem with real data. This work completes the two main fields in deep learning, regression and classification, where error is treated as a probability of class membership.
    I would like to thank the Ministry of Science and Innovation for granting me the funding with reference PRE2018-086793, associated with the project TEC2017-88048-C2-2-R, which provided me with the opportunity to carry out all my PhD activities, including an international research internship.
    Doctoral degree with International Mention. Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. Thesis committee: Chair: Antonio Berlanga de Jesús; Secretary: Daniel Arias Medina; Member: Alejandro Martínez Cav
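    The state-space estimation described in this abstract can be illustrated with a minimal linear Kalman filter. The sketch below is not taken from the thesis: it assumes a toy one-dimensional constant-velocity model with hand-picked Gaussian noise covariances, purely to show the predict/update recursion that minimizes the error covariance under the linear, Markovian, Gaussian hypotheses mentioned above.

```python
import numpy as np

# Minimal linear Kalman filter on a toy constant-velocity model.
# All matrices are illustrative assumptions; a real guidance/navigation stack
# would use its own dynamics, noise covariances, and measurement model.

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])              # only position is observed
Q = 1e-3 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[0.05]])                  # measurement noise covariance (assumed)

x = np.zeros((2, 1))                    # state estimate
P = np.eye(2)                           # estimate error covariance

def kf_step(x, P, z):
    """One predict/update cycle; z is the scalar position measurement."""
    # Predict: propagate state and covariance through the linear model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: fuse the measurement; the gain minimizes the posterior covariance.
    y = np.array([[z]]) - H @ x_pred          # innovation
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    return x_pred + K @ y, (np.eye(2) - K @ H) @ P_pred

# Filter a noisy ramp: true position grows at 1 m/s, measurements are noisy.
rng = np.random.default_rng(0)
for k in range(100):
    z = k * dt * 1.0 + rng.normal(scale=0.2)
    x, P = kf_step(x, P, z)

print("estimated position and velocity:", x.ravel())
```

    By contrast, the deep-learning studies in the collection treat the estimation error as a loss to be minimized by learning rather than through this closed-form recursion.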

    Machine learning for the sustainable energy transition: a data-driven perspective along the value chain from manufacturing to energy conversion

    According to the IPCC special report Global Warming of 1.5 °C, climate action is not only necessary but more than ever urgent. The world is witnessing rising sea levels, heat waves, flooding, droughts, and desertification, resulting in the loss of lives and damage to livelihoods, especially in countries of the Global South. To mitigate climate change and comply with the Paris Agreement, it is of the utmost importance to reduce greenhouse gas emissions from the highest-emitting sector, namely the energy sector. To this end, large-scale penetration of renewable energy systems into the energy market is crucial for the energy transition toward a sustainable future, replacing fossil fuels and improving access to energy with socio-economic benefits. With the advent of Industry 4.0, Internet of Things technologies have been increasingly applied to the energy sector, introducing the concept of the smart grid or, more generally, the Internet of Energy. These paradigms are steering the energy sector towards more efficient, reliable, flexible, resilient, safe, and sustainable solutions with huge potential environmental and social benefits. To realize these concepts, new information technologies are required, and among the most promising possibilities are Artificial Intelligence and Machine Learning, which in many countries have already revolutionized the energy industry. This thesis presents different Machine Learning algorithms and methods for the implementation of new strategies to make renewable energy systems more efficient and reliable. It presents various learning algorithms, highlighting their advantages and limitations, and evaluating their application to different tasks in the energy context. In addition, different techniques are presented for the preprocessing and cleaning of time series, nowadays collected by sensor networks mounted on every renewable energy system. Since large numbers of sensors can be installed and collect vast amounts of time series, it is vital to detect and remove irrelevant, redundant, or noisy features and alleviate the curse of dimensionality, thus improving the interpretability of predictive models, speeding up their learning process, and enhancing their generalization properties. Therefore, this thesis discusses the importance of dimensionality reduction in sensor networks mounted on renewable energy systems and, to this end, presents two novel unsupervised algorithms. The first approach maps time series into the network domain through visibility graphs and uses a community detection algorithm to identify clusters of similar time series and select representative parameters. This method can group both homogeneous and heterogeneous physical parameters, even when related to different functional areas of a system. The second approach proposes the Combined Predictive Power Score, a multivariate feature-selection method that explores multiple expanding subsets of variables and identifies the combination of features with the highest predictive power over specified target variables. The method includes a selection algorithm for the optimal combination of variables that converges to the smallest set of predictors with the highest predictive power. Once this combination is identified, the most relevant parameters in a sensor network can be selected to perform dimensionality reduction.
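    To make the first dimensionality-reduction idea concrete, the sketch below gives one possible toy reading of it: each sensor's time series is mapped to a natural visibility graph, a fixed-length degree-histogram descriptor is computed per graph, and modularity-based community detection groups sensors with similar descriptors. The descriptor, the cosine-similarity threshold, and the specific community detection routine are illustrative assumptions, not the algorithm developed in the thesis.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def natural_visibility_graph(series):
    """Natural visibility graph of a 1-D series: samples a and b are linked if
    every sample between them lies below the straight line joining them."""
    n = len(series)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for a in range(n):
        for b in range(a + 1, n):
            ya, yb = series[a], series[b]
            if all(series[c] < yb + (ya - yb) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                g.add_edge(a, b)
    return g

def degree_histogram_feature(graph, bins=10):
    """Crude fixed-range descriptor of a visibility graph (an assumption here)."""
    degrees = [d for _, d in graph.degree()]
    hist, _ = np.histogram(degrees, bins=bins, range=(0, 50), density=True)
    return hist

def cluster_sensors(signals, similarity_threshold=0.9):
    """Link sensors whose descriptors are similar, then find communities."""
    feats = [degree_histogram_feature(natural_visibility_graph(s)) for s in signals]
    sim = nx.Graph()
    sim.add_nodes_from(range(len(signals)))
    for i in range(len(signals)):
        for j in range(i + 1, len(signals)):
            cos = float(np.dot(feats[i], feats[j])) / float(
                np.linalg.norm(feats[i]) * np.linalg.norm(feats[j]) + 1e-12)
            if cos >= similarity_threshold:
                sim.add_edge(i, j)
    return [sorted(c) for c in greedy_modularity_communities(sim)]

# Toy usage: three "sensors", two of which measure nearly the same signal.
rng = np.random.default_rng(0)
t = np.linspace(0, 6 * np.pi, 120)
signals = [np.sin(t), np.sin(t) + 0.05 * rng.normal(size=t.size), rng.normal(size=t.size)]
print(cluster_sensors(signals))   # one representative per cluster could then be kept
```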
Data-driven methods open the possibility of supporting strategic decision-making, reducing Operation & Maintenance costs, machine faults, repair stops, and spare-parts inventory size. Therefore, this thesis presents two anomaly-detection approaches in the context of predictive maintenance to improve the lifetime and efficiency of the equipment. The first approach proposes an anomaly detection model based on Principal Component Analysis that is robust to false alarms, can isolate anomalous conditions, and can anticipate equipment failures. The second approach has at its core a neural architecture, namely a Graph Convolutional Autoencoder, which models the sensor network as a dynamical functional graph by simultaneously considering the information content of individual sensor measurements (graph node features) and the nonlinear correlations existing between all pairs of sensors (graph edges). The proposed neural architecture can capture hidden anomalies even when the turbine continues to deliver the power requested by the grid, and can anticipate equipment failures. Since the model is unsupervised and completely data-driven, this approach can be applied to any wind turbine equipped with a SCADA system. When it comes to renewable energies, the uncertainty due to their intermittent, non-dispatchable nature represents an obstacle to the reliability and stability of energy grids, especially at large-scale integration. Nevertheless, these challenges can be alleviated if the natural sources or the power output of renewable energy systems can be forecast accurately, allowing power system operators to plan optimal power management strategies that balance the dispatch of intermittent generation against the load demand. To this end, this thesis proposes a multi-modal spatio-temporal neural network for multi-horizon wind power forecasting. In particular, the model combines high-resolution Numerical Weather Prediction forecast maps with turbine-level SCADA data and explores how meteorological variables on different spatial scales, together with the turbines' internal operating conditions, impact wind power forecasts. The world is undergoing a third energy transition with the main goal of tackling global climate change through decarbonization of energy supply and consumption patterns. This transition is not only possible, thanks to global cooperation and agreements between parties, advances in power generation systems, and Internet of Things and Artificial Intelligence technologies, but also necessary to prevent the severe and irreversible consequences of climate change that threaten life on the planet as we know it. This thesis is intended as a reference for researchers who want to contribute to the sustainable energy transition and are approaching the field of Artificial Intelligence in the context of renewable energy systems.
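    As a rough illustration of the first predictive-maintenance approach, the sketch below implements a generic PCA anomaly detector based on the reconstruction (squared prediction) error with an empirical control limit. The synthetic data, the retained variance, and the 99th-percentile threshold are assumptions; the thesis's false-alarm-robust model and its fault-isolation logic are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Generic PCA-based anomaly detection via the squared prediction error (SPE).
rng = np.random.default_rng(1)
healthy = rng.normal(size=(1000, 8))                         # stand-in for healthy SCADA snapshots
healthy[:, 1] = 0.8 * healthy[:, 0] + 0.2 * healthy[:, 1]    # two correlated channels

scaler = StandardScaler().fit(healthy)
pca = PCA(n_components=0.95).fit(scaler.transform(healthy))  # keep 95% of the variance (assumed)

def spe(x):
    """Squared prediction error of samples against the healthy PCA subspace."""
    z = scaler.transform(x)
    recon = pca.inverse_transform(pca.transform(z))
    return np.sum((z - recon) ** 2, axis=1)

threshold = np.percentile(spe(healthy), 99)                  # empirical control limit (assumed)

new_batch = rng.normal(size=(5, 8))
new_batch[-1, 0] += 6.0                                      # break the learned correlation
print(spe(new_batch) > threshold)                            # the last sample should be flagged
```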

    Demand Response in Smart Grids

    The Special Issue “Demand Response in Smart Grids” includes 11 papers on a variety of topics. The success of this Special Issue demonstrates the relevance of demand response programs and events in the operation of power and energy systems, both at the distribution level and at the wider power system level. This reprint addresses the design, implementation, and operation of demand response programs, with a focus on methods and techniques to achieve an optimized operation as well as on the electricity consumer.

    A Combined Numerical and Experimental Approach for Rolling Bearing Modelling and Prognostics

    Rolling-element bearings are widely employed components that play a major role in the NVH (noise, vibration, and harshness) behaviour of the mechanical systems in which they are mounted. Therefore, it is crucial to thoroughly understand their fundamental properties and accurately quantify their most relevant parameters. Moreover, their inevitable failure due to contact fatigue makes it necessary to correctly describe their dynamic behaviour. Such descriptions enable the development of diagnostic and prognostic schemes, which are in high demand in today's industrial scenario due to their increasingly important role in efficient maintenance strategies. As a result, throughout the years several techniques have been developed by researchers to address different challenges related to the modelling of these components. Within this context, this thesis aims at improving the available methods and at proposing novel approaches to tackle the modelling of rolling-element bearings in both static and dynamic simulations. In particular, the dissertation is divided into three major topics related to this field, i.e. the estimation of bearing radial stiffness through the finite-element method, the lumped-parameter modelling of defective bearings, and the development of physics-based prognostic models. The first part of the thesis deals with the finite-element simulation of rolling-element bearings. In particular, the investigation aims at providing an efficient procedure for the generation of load-dependent meshes. The method is developed with the primary objective of determining the radial stiffness of the examined components. In this regard, the main contributions to the subject are the definition of mesh element dimensions on the basis of analytical formulae and the proposed methodology for the estimation of bearing stiffness. The second part describes a multi-objective optimization technique for the estimation of unknown parameters in lumped-parameter models of defective bearings. In fact, several parameters commonly included in these models are difficult to measure or affected by a high degree of uncertainty. On this basis, an optimization procedure aimed at minimizing the difference between experimental and numerical results is proposed. The novelty of the technique lies in the approach developed to tackle the problem and in its implementation in the context of bearing lumped-parameter models. Lastly, the final part of the dissertation is devoted to the development of physics-based prognostic models. Specifically, two models are detailed, both based on a novel degradation-related parameter, i.e. the Equivalent Damaged Volume (EDV). An algorithm capable of extracting this quantity from experimental data is presented. EDV values are then used as input parameters for two prognostic models. The first one aims at predicting the bearing vibration under operating conditions different from a given reference deterioration history. The objective of the second model is to predict the time until a certain threshold on the equivalent damaged volume is crossed, regardless of the applied load and the shaft rotation speed. Therefore, the original aspect of this latter part is the development of prognostic models based on a novel indicator introduced specifically in this work.
Results obtained from all proposed models are validated through analytical methods from the literature and by comparison with data acquired on a dedicated test bench. To this end, a test rig set up at the Engineering Department of the University of Ferrara was employed to perform two types of tests, i.e. stationary tests on bearings with artificial defects and run-to-failure tests on initially healthy bearings. The characteristics of the acceleration signals acquired during both types of test are discussed extensively.
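    The parameter-estimation idea behind the lumped-parameter study can be illustrated, in much simplified form, by fitting the stiffness and damping of a single-degree-of-freedom oscillator to a measured response. The thesis applies a multi-objective optimization to full defective-bearing models; the sketch below is only a single-objective least-squares stand-in on synthetic data, with all numerical values assumed.

```python
import numpy as np
from scipy.optimize import least_squares

m = 1.0                                  # known mass [kg]
t = np.linspace(0.0, 0.03, 400)          # short free-decay record [s]

def free_response(k, c):
    """Free decay of an underdamped single-DOF system released from 1 mm."""
    wn = np.sqrt(k / m)
    zeta = c / (2.0 * np.sqrt(k * m))
    wd = wn * np.sqrt(1.0 - zeta ** 2)
    return 1e-3 * np.exp(-zeta * wn * t) * np.cos(wd * t)

# Synthetic "experimental" signal generated with known parameters plus noise.
k_true, c_true = 2.0e5, 40.0
rng = np.random.default_rng(2)
measured = free_response(k_true, c_true) + 1e-5 * rng.normal(size=t.size)

def residuals(params):
    """Difference between the numerical model and the measured response."""
    k, c = params
    return free_response(k, c) - measured

# Bounds keep the system underdamped; the initial guess is assumed reasonable.
fit = least_squares(residuals, x0=[1.5e5, 20.0], bounds=([1e4, 1.0], [1e6, 150.0]))
print("identified stiffness and damping:", fit.x)
```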

    A Review of Bayesian Methods in Electronic Design Automation

    The utilization of Bayesian methods has been widely acknowledged as a viable solution for tackling various challenges in electronic integrated circuit (IC) design under stochastic process variation, including circuit performance modeling, yield/failure rate estimation, and circuit optimization. As the post-Moore era brings about new technologies (such as silicon photonics and quantum circuits), many of the associated issues are similar to those encountered in electronic IC design and can be addressed using Bayesian methods. Motivated by this observation, we present a comprehensive review of Bayesian methods in electronic design automation (EDA). By doing so, we hope to equip researchers and designers with the ability to apply Bayesian methods in solving stochastic problems in electronic circuits and beyond. Comment: 24 pages, a draft version. We welcome comments and feedback, which can be sent to [email protected]
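    A toy example of the kind of workflow the review surveys is sketched below: fit a Bayesian surrogate of a circuit performance metric over process variations, then estimate yield by Monte Carlo on the surrogate. The quadratic "circuit", the Bayesian ridge surrogate, and the specification limit are illustrative assumptions, not methods or data from the paper.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(3)

def simulate_performance(v):
    """Pretend circuit simulation: a made-up gain as a function of two process parameters."""
    return 10.0 - 1.5 * v[:, 0] ** 2 - 0.8 * v[:, 1] ** 2 + 0.3 * v[:, 0] * v[:, 1]

# A small number of "simulations" serve as training data for the surrogate.
train_v = rng.normal(size=(50, 2))
train_y = simulate_performance(train_v) + 0.05 * rng.normal(size=50)

poly = PolynomialFeatures(degree=2, include_bias=False)
surrogate = BayesianRidge().fit(poly.fit_transform(train_v), train_y)

# Monte Carlo yield estimation: fraction of process samples meeting the spec.
mc_v = rng.normal(size=(100_000, 2))
pred = surrogate.predict(poly.transform(mc_v))
spec = 8.0                               # assumed minimum acceptable performance
print("estimated yield:", float(np.mean(pred > spec)))
```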

    Advanced Fault Diagnosis and Health Monitoring Techniques for Complex Engineering Systems

    Over the last few decades, the field of fault diagnostics and structural health management has experienced rapid development. The reliability, availability, and safety of engineering systems can be significantly improved by implementing multifaceted strategies of in situ diagnostics and prognostics. With the development of intelligent algorithms, smart sensors, and advanced data collection and modeling techniques, this challenging research area has been receiving ever-increasing attention in both fundamental research and engineering applications. This attention has been strongly supported by extensive applications ranging from the aerospace, automotive, transport, manufacturing, and processing industries to the defense and infrastructure sectors.

    Machine Learning Methods with Noisy, Incomplete or Small Datasets

    In many machine learning applications, the available datasets are sometimes incomplete, noisy, or affected by artifacts. In supervised scenarios, label information may be of low quality, which might include unbalanced training sets, noisy labels, and other problems. Moreover, in practice it is very common that the available data samples are not sufficient to derive useful supervised or unsupervised classifiers. All these issues are commonly referred to as the low-quality data problem. This book collects novel contributions on machine learning methods for low-quality datasets, to help disseminate new ideas for solving this challenging problem and to provide clear examples of their application in real scenarios.

    Energy Data Analytics for Smart Meter Data

    The principal advantage of smart electricity meters is their ability to transfer digitized electricity consumption data to remote processing systems. The data collected by these devices make the realization of many novel use cases possible, providing benefits to electricity providers and customers alike. This book includes 14 research articles that explore and exploit the information content of smart meter data, and provides insights into the realization of new digital solutions and services that support the transition towards a sustainable energy system. This volume has been edited by Andreas Reinhardt, head of the Energy Informatics research group at Technische Universität Clausthal, Germany, and Lucas Pereira, research fellow at Técnico Lisboa, Portugal.