
    Non-parametric modeling in non-intrusive load monitoring

    Non-intrusive Load Monitoring (NILM) is an approach to the increasingly important task of residential energy analytics. Transparency of energy resources and consumption habits presents opportunities and benefits at all ends of the energy supply-chain, including the end-user. At present, there is no feasible infrastructure available to monitor individual appliances at a large scale. The goal of NILM is to provide appliance monitoring using only the available aggregate data, side-stepping the need for expensive and intrusive monitoring equipment. The present work showcases two self-contained, fully unsupervised NILM solutions: the first featuring non-parametric mixture models, and the second featuring non-parametric factorial Hidden Markov Models with explicit duration distributions. The present implementation makes use of traditional and novel constraints during inference, showing marked improvement in disaggregation accuracy with very little effect on computational cost, relative to the motivating work. To constitute a complete unsupervised solution, labels are applied to the inferred components using a ResNet-based deep learning architecture. Although this preliminary approach to labelling proves less than satisfactory, it is well-founded and several opportunities for improvement are discussed. Both methods, along with the labelling network, make use of block-filtered data: a steady-state representation that removes transient behaviour and signal noise. A novel filter that achieves this steady-state representation quickly and reliably is developed and discussed at length. Finally, an approach to monitor the aggregate for novel events during deployment is developed under the framework of Bayesian surprise. The same non-parametric modelling can be leveraged to examine how the predictive and transitional distributions change given new windows of observations. This framework is also shown to have potential elsewhere, such as in regularizing models against over-fitting, which is an important problem in existing supervised NILM.
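    The thesis pairs non-parametric mixture models with block-filtered, steady-state power data. As a rough sketch of the mixture-model ingredient only (not the thesis implementation), a truncated Dirichlet-process Gaussian mixture over steady-state power levels can be fit with scikit-learn; the appliance power values below are invented for illustration.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Hypothetical steady-state (block-filtered) power readings in watts.
power = np.concatenate([
    np.random.normal(60, 3, 400),     # e.g. a refrigerator compressor state
    np.random.normal(1500, 30, 150),  # e.g. a kettle
    np.random.normal(200, 10, 250),   # e.g. a television
]).reshape(-1, 1)

# Truncated Dirichlet-process mixture: the model decides how many of the
# 20 candidate components are actually supported by the data.
dpgmm = BayesianGaussianMixture(
    n_components=20,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.1,
    max_iter=500,
)
dpgmm.fit(power)

# Keep only components with non-negligible posterior weight.
active = dpgmm.weights_ > 0.01
print("inferred power states (W):", np.sort(dpgmm.means_[active].ravel()))
```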

    Universal Non-Intrusive Load Monitoring (UNILM) Using Filter Pipelines, Probabilistic Knapsack, and Labelled Partition Maps

    Being able to track appliances' energy usage without the need for sensors can help occupants reduce their energy consumption, benefiting the environment while saving money. Non-intrusive load monitoring (NILM) tries to do just that. One of the hardest problems NILM faces is the ability to run unsupervised -- discovering appliances without prior knowledge -- and to run independently of the differences in appliance mixes and operational characteristics found in various countries and regions. We propose a solution that does this using an advanced filter pipeline to preprocess the data, a Gaussian appliance model with a probabilistic knapsack algorithm to disaggregate the aggregate smart meter signal, and partition maps to label which appliances were found and how much energy they use, regardless of country or region. Experimental results show that relatively complex appliance signals can be tracked, accounting for 93.7% of the total aggregate energy consumed.
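    The abstract names a Gaussian appliance model combined with a probabilistic knapsack algorithm. The toy sketch below substitutes a brute-force subset search over hypothetical Gaussian appliance models for the paper's actual knapsack procedure, purely to illustrate the idea of explaining an aggregate reading as the most likely combination of appliances.

```python
from itertools import combinations
from math import exp

# Hypothetical Gaussian appliance models: (name, mean watts, std dev watts).
appliances = [("fridge", 120, 10), ("kettle", 1800, 60),
              ("tv", 150, 15), ("heater", 1000, 40)]

def disaggregate(aggregate_w):
    """Pick the appliance subset whose summed means best explain the
    aggregate reading, scoring candidates with a Gaussian likelihood on
    the residual (a brute-force stand-in for a probabilistic knapsack)."""
    best, best_score = None, 0.0
    for r in range(1, len(appliances) + 1):
        for subset in combinations(appliances, r):
            mean = sum(a[1] for a in subset)
            var = sum(a[2] ** 2 for a in subset)
            score = exp(-(aggregate_w - mean) ** 2 / (2 * var))
            if score > best_score:
                best, best_score = subset, score
    return [a[0] for a in best], best_score

print(disaggregate(1950))   # -> (['kettle', 'tv'], ...) for this toy mix
```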

    Hybrid Physics-informed Neural Networks for Dynamical Systems

    Ordinary differential equations can describe many dynamic systems. When the physics is well understood, the time-dependent responses are easily obtained numerically. The particular numerical method used for integration depends on the application. Unfortunately, when the physics is not fully understood, the discrepancies between predictions and observed responses can be large and unacceptable. In this thesis, we show how to directly implement integration of ordinary differential equations through recurrent neural networks using Python. We leveraged modern machine learning frameworks, such as TensorFlow and Keras. Besides offering basic model capabilities (such as multilayer perceptrons and recurrent neural networks) and optimization methods, these frameworks offer powerful automatic differentiation. With that, our approach's main advantage is that one can implement hybrid models combining physics-informed and data-driven kernels, where data-driven kernels are used to reduce the gap between predictions and observations. In order to illustrate our approach, we used two case studies. The first one consisted of performing fatigue crack growth integration through Euler's forward method using a hybrid model combining a data-driven stress intensity range model with a physics-based crack length increment model. The second case study consisted of performing model parameter identification of a dynamic two-degree-of-freedom system through Runge-Kutta integration. Additionally, we performed a numerical experiment for fleet prognosis with hybrid models. The problem consists of predicting fatigue crack length for a fleet of aircraft. The hybrid models are trained using full input observations (far-field loads) and very limited output observations (crack length data for only a portion of the fleet). The results demonstrate that our proposed physics-informed recurrent neural network can model fatigue crack growth even when the observed distribution of crack length does not match the fleet distribution.
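    Since the thesis describes embedding forward-Euler ODE integration inside a recurrent neural network with TensorFlow/Keras, a minimal sketch along those lines is given below. It is not the thesis code: the Paris-law constants C and m, the small stress-intensity sub-network, and the toy load histories are all assumptions made for illustration.

```python
import tensorflow as tf

# Sketch of a recurrent cell that performs forward-Euler integration of
# Paris-law fatigue crack growth, a_{t+1} = a_t + C * (dK_t)^m, where the
# stress intensity range dK comes from a small data-driven sub-network.
class EulerCrackCell(tf.keras.layers.Layer):
    def __init__(self, C=1e-10, m=3.0, **kwargs):
        super().__init__(**kwargs)
        self.C, self.m = C, m
        self.state_size = 1    # crack length a_t
        self.output_size = 1
        # data-driven kernel mapping (far-field load, a_t) -> dK
        self.dk_net = tf.keras.Sequential([
            tf.keras.layers.Dense(8, activation="tanh"),
            tf.keras.layers.Dense(1, activation="softplus"),
        ])

    def call(self, inputs, states):
        a = states[0]                                   # (batch, 1)
        dk = self.dk_net(tf.concat([inputs, a], -1))    # (batch, 1)
        a_next = a + self.C * tf.pow(dk, self.m)        # Euler step
        return a_next, [a_next]

# Integrate whole load histories in one pass. The default zero initial
# state stands in for a measured initial crack length, which a real model
# would pass via `initial_state`.
model = tf.keras.Sequential([tf.keras.layers.RNN(EulerCrackCell(),
                                                 return_sequences=True)])
loads = tf.random.uniform((4, 100, 1))     # 4 toy histories, 100 cycles
crack_lengths = model(loads)               # (4, 100, 1)
```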

    SOLID-SHELL FINITE ELEMENT MODELS FOR EXPLICIT SIMULATIONS OF CRACK PROPAGATION IN THIN STRUCTURES

    Crack propagation in thin shell structures due to cutting is conveniently simulated using explicit finite element approaches, in view of the high nonlinearity of the problem. Solid-shell elements are usually preferred for the discretization in the presence of complex material behavior and degradation phenomena such as delamination, since they allow for a correct representation of the thickness geometry. However, in solid-shell elements the small thickness leads to a very high maximum eigenfrequency, which implies very small stable time-steps. A new selective mass scaling technique is proposed to increase the time-step size without affecting accuracy. New “directional” cohesive interface elements are used in conjunction with selective mass scaling to account for the interaction with a sharp blade in cutting processes of thin ductile shells.
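    For context on why mass scaling enlarges the stable time step, the standard explicit-dynamics bound (a textbook result, not taken from this paper) can be written as follows.

```latex
% Stability bound for explicit time integration: the stable step is limited
% by the highest eigenfrequency, which for a thin solid-shell element is
% governed by the thickness h and the wave speed c.
\[
  \Delta t_{\mathrm{crit}} \le \frac{2}{\omega_{\max}}, \qquad
  \omega_{\max} \sim \frac{c}{h}, \qquad c = \sqrt{E/\rho}.
\]
% Scaling the mass associated with the thickness modes by a factor
% \alpha > 1 lowers \omega_{\max} by \sqrt{\alpha}, so the stable step
% grows by the same factor:
\[
  \omega_{\max}^{\text{scaled}} = \frac{\omega_{\max}}{\sqrt{\alpha}}
  \quad\Longrightarrow\quad
  \Delta t_{\mathrm{crit}}^{\text{scaled}} = \sqrt{\alpha}\,\Delta t_{\mathrm{crit}}.
\]
```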

    Uncertainty quantification for an electric motor inverse problem - tackling the model discrepancy challenge

    In the context of complex applications from the engineering sciences, the solution of identification problems still poses a fundamental challenge. In terms of Uncertainty Quantification (UQ), the identification problem can be stated as a separation task for structural model and parameter uncertainty. This thesis provides new insights and methods to tackle this challenge and demonstrates these developments on an industrial benchmark use case combining simulation and real-world measurement data. While significant progress has been made in the development of methods for model parameter inference, most of those methods still operate under the assumption of a perfect model. For a full, unbiased quantification of uncertainties in inverse problems, it is crucial to consider all uncertainty sources. The present work develops methods for inference of deterministic and aleatoric model parameters from noisy measurement data with explicit consideration of model discrepancy, and additionally quantifies the associated uncertainties using a Bayesian approach. A further important ingredient is surrogate modeling with Polynomial Chaos Expansion (PCE), enabling sampling from Bayesian posterior distributions with complex simulation models. Based on this, a novel identification strategy for the separation of different sources of uncertainty is presented. Discrepancy is approximated by orthogonal functions with iterative determination of optimal model complexity, mitigating the identifiability problems inherent to the task. The model discrepancy quantification is complemented with studies that statistically approximate the numerical approximation error. Additionally, strategies for approximating aleatoric parameter distributions via hierarchical surrogate-based sampling are developed. The proposed method, based on Approximate Bayesian Computation (ABC) with summary statistics, estimates the posterior computationally efficiently, in particular for large data sets. Furthermore, the combination with divergence-based subset selection provides a novel methodology for UQ in stochastic inverse problems, inferring both model discrepancy and aleatoric parameter distributions. Detailed analysis in numerical experiments and successful application to the challenging industrial benchmark problem -- an electric motor test bench -- validates the proposed methods.
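    As a minimal illustration of the ABC-with-summary-statistics ingredient only (the thesis combines it with PCE surrogates and divergence-based subset selection), a generic rejection sampler might look like the following; `simulate` and `summary` are placeholder stand-ins, not the benchmark model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=200):
    # Toy forward model standing in for the (surrogate) simulation.
    return theta + rng.normal(0.0, 0.5, size=n)

def summary(data):
    # Summary statistics used to compare simulated and observed data.
    return np.array([data.mean(), data.std()])

observed = simulate(2.0)            # pretend measurement data
s_obs = summary(observed)

accepted = []
for _ in range(20000):
    theta = rng.uniform(-5.0, 5.0)  # prior draw
    if np.linalg.norm(summary(simulate(theta)) - s_obs) < 0.1:
        accepted.append(theta)      # keep draws whose summaries match

posterior = np.array(accepted)
print(posterior.mean(), posterior.std())   # approximate posterior moments
```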

    Lithium-ion Battery Prognosis with Variational Hybrid Physics-informed Neural Networks

    Lithium-ion batteries are an increasingly popular source of power for many electric applications. Applications range from electric cars, driven by thousands of people every day, to existing and future air vehicles, such as unmanned aerial vehicles (UAVs) and urban air mobility (UAM) drones. Therefore, robust modeling approaches are essential to ensure high reliability levels by monitoring battery state-of-charge (SOC) and forecasting the remaining useful life (RUL). Building principled models is challenging due to the complex electrochemistry that governs battery operation, which would entail computationally expensive models not suited for prognosis and health management applications. Alternatively, reduced-order models can be used and have the advantage of capturing the overall behavior of battery discharge, although they suffer from simplifications and residual discrepancy. We propose a hybrid solution for Li-ion battery discharge and aging prediction that directly embeds first-principles models within modern recurrent neural networks. While reduced-order models describe part of the voltage discharge under constant or variable loading conditions, data-driven kernels reduce the gap between predictions and observations. We developed and validated our approach using the NASA Prognostics Data Repository Battery dataset, which contains experimental discharge data on Li-ion batteries obtained in a controlled environment. Our hybrid model tracks aging parameters connected to the residual capacity of the battery. In addition, we use a Bayesian approach to merge fleet-wide data in the form of priors with battery-specific discharge cycles, where the battery capacity is fully available (complete data) or only partially available (censored data). The model's predictive capability is monitored throughout battery usage. This way, our proposed approach indicates when significant updates to the hybrid model are needed. Our Bayesian implementation of the hybrid variational physics-informed neural network can reliably predict the battery's future residual capacity, even in cases where previous battery usage history is unknown.
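    The merging of fleet-wide priors with battery-specific data is illustrated below with a deliberately simplified stand-in: a conjugate Gaussian update of residual capacity with known noise variance. The numbers and the Gaussian assumption are invented; the thesis instead uses a variational hybrid physics-informed neural network.

```python
import numpy as np

# Fleet-wide prior on residual capacity (Ah), assumed Gaussian.
prior_mean, prior_var = 1.85, 0.05 ** 2
# This battery's capacity estimates from individual discharge cycles (Ah),
# assumed Gaussian with known measurement variance.
meas = np.array([1.78, 1.76, 1.77])
meas_var = 0.03 ** 2

# Conjugate Gaussian update: precision-weighted combination of prior and data.
post_var = 1.0 / (1.0 / prior_var + len(meas) / meas_var)
post_mean = post_var * (prior_mean / prior_var + meas.sum() / meas_var)
print(f"posterior capacity: {post_mean:.3f} +/- {post_var ** 0.5:.3f} Ah")
```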

    Reduced Order Modeling of Geophysical Flows Using Physics-Based and Data-Driven Modeling Techniques

    The growing advancements in computational power, algorithmic innovation, and the availability of data resources have started shaping the way we numerically model physical problems now and for years to come. Many physical phenomena, whether in the natural sciences and engineering disciplines or in the social sciences, are described by a set of ordinary or partial differential equations, which is referred to as the mathematical model of a physical system. High-fidelity numerical simulations provide valuable information about the flow behavior of the physical system by solving these sets of equations using suitable numerical schemes and modeling tools. However, despite the progress in software engineering and processor technologies, the computational burden of high-fidelity simulation is still a limiting factor for many practical problems in different research areas, specifically for large-scale physical problems with high spatio-temporal variability such as atmospheric and geophysical flows. Therefore, the development of efficient and robust algorithms that aim at achieving the maximum attainable quality of numerical simulations at optimal computational cost has become an active research question in the computational fluid dynamics community. As an alternative to existing techniques for computational cost reduction, reduced order modeling (ROM) strategies have proven successful in reducing computational costs significantly with little compromise in physical accuracy. In this thesis, we utilize state-of-the-art physics-based and data-driven modeling tools to develop efficient and improved ROM frameworks for large-scale geophysical flows by addressing the issues associated with conventional ROM approaches. We first develop an improved physics-based ROM framework by considering the analogy between the dynamic eddy viscosity large eddy simulation (LES) model and truncated modal projection; then we present a hybrid modeling approach combining projection-based ROM and an extreme learning machine (ELM) neural network; and finally, we devise a fully data-driven ROM framework utilizing a long short-term memory (LSTM) recurrent neural network architecture. As a representative benchmark test case, we consider a two-dimensional quasi-geostrophic (QG) ocean circulation model which, in general, displays an enormous range of fluctuating spatial and temporal scales. Throughout the thesis, we demonstrate our findings in terms of the time series evolution of field values and mean flow patterns, which suggest that the proposed ROM frameworks are robust and capable of predicting such fluid flows in an extremely efficient way compared to the conventional projection-based ROM framework.
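    All of the ROM frameworks described start from a reduced basis extracted from flow snapshots. A minimal proper orthogonal decomposition (POD) sketch via the snapshot SVD is given below; the random snapshot matrix is a placeholder for quasi-geostrophic vorticity fields, and a data-driven ROM (e.g. an LSTM) would then learn to advance the resulting modal coefficients in time.

```python
import numpy as np

rng = np.random.default_rng(1)
n_grid, n_snapshots, r = 4096, 200, 10

# Columns are flow-field snapshots; random data stands in for QG fields.
snapshots = rng.standard_normal((n_grid, n_snapshots))
mean_field = snapshots.mean(axis=1, keepdims=True)
fluctuations = snapshots - mean_field

# Thin SVD: columns of U are the POD modes, ordered by energy content.
U, S, _ = np.linalg.svd(fluctuations, full_matrices=False)
modes = U[:, :r]

# Temporal coefficients a_k(t) = <u'(t), phi_k>; these are what a reduced
# or data-driven model would evolve instead of the full field.
coeffs = modes.T @ fluctuations                 # shape (r, n_snapshots)
energy = (S[:r] ** 2).sum() / (S ** 2).sum()
print(f"captured energy fraction with {r} modes: {energy:.2f}")
```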