
    An Efficient Model-based Diagnosis Engine for Hybrid Systems Using Structural Model Decomposition

    Complex hybrid systems are present in a wide range of engineering applications, such as mechanical systems, electrical circuits, and embedded computing systems. Their behavior combines continuous and discrete-event dynamics, which makes accurate and timely online fault diagnosis difficult. The Hybrid Diagnosis Engine (HyDE) gives the diagnosis application designer the flexibility to choose the modeling paradigm and the reasoning algorithms, and its architecture supports the use of multiple modeling paradigms at the component and system level. However, HyDE faces performance problems in terms of complexity and time. Our focus in this paper is on developing efficient model-based methodologies for online fault diagnosis in complex hybrid systems. To this end, we propose a diagnosis framework in which structural model decomposition is integrated within HyDE to reduce the computational complexity associated with fault diagnosis of hybrid systems. As a case study, we apply our approach to a diagnostic testbed, the Advanced Diagnostics and Prognostics Testbed (ADAPT), using real data.
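
    The payoff of decomposition can be illustrated in miniature. The sketch below is a hedged Python toy (component and residual names are invented, and this is not HyDE's API): structural decomposition yields small submodels, each producing a residual that depends only on a few components, so the pattern of triggered residuals isolates a fault without ever evaluating the full system model.

```python
# Hypothetical submodels obtained by structural decomposition:
# residual name -> set of fault-prone components that residual depends on.
submodels = {
    "r1": {"pump"},           # e.g., a flow-balance residual
    "r2": {"pump", "valve"},  # e.g., a pressure-drop residual
    "r3": {"valve", "tank"},  # e.g., a tank-level residual
}

def signature(fault):
    """A residual fires iff its submodel involves the faulty component."""
    return tuple(int(fault in comps) for comps in submodels.values())

# Distinct signatures mean the three faults are isolable from r1..r3 alone,
# each residual being computable from a small submodel rather than the
# global hybrid model.
for fault in ("pump", "valve", "tank"):
    print(fault, signature(fault))
```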

    Memory embedded non-intrusive reduced order modeling of non-ergodic flows

    Generating a digital twin of any complex system requires modeling and computational approaches that are efficient, accurate, and modular. Traditional reduced order modeling techniques target only the first two; the novel non-intrusive approach presented in this study attempts to address all three more effectively than its traditional counterparts. Building on dimensionality reduction by proper orthogonal decomposition (POD), we introduce a long short-term memory (LSTM) neural network architecture together with a principal interval decomposition (PID) framework to account for localized modal deformation, a key element in accurate reduced order modeling of convective flows. Our applications to convection-dominated systems governed by the Burgers, Navier-Stokes, and Boussinesq equations demonstrate that the proposed approach yields significantly more accurate predictions than the POD-Galerkin method and could be a key enabler of near real-time predictions of unsteady flows.
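
    The POD-plus-PID part of the pipeline is easy to sketch. Below is a minimal numpy illustration with invented toy data (the paper's LSTM, which propagates the modal coefficients in time, is omitted): splitting the time horizon into intervals and building one local POD basis per interval typically captures a translating structure markedly better than a single global basis of the same rank.

```python
import numpy as np

def pod_basis(S, r):
    """Truncated POD basis (first r left singular vectors) of snapshot matrix S."""
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :r]

def proj_err(S, Phi):
    """Relative error of projecting the snapshots S onto the basis Phi."""
    return np.linalg.norm(S - Phi @ (Phi.T @ S)) / np.linalg.norm(S)

# Toy convective field: a Gaussian pulse translating in time (slow Kolmogorov
# decay, the regime where a single global POD basis struggles).
x = np.linspace(0.0, 1.0, 256)
t = np.linspace(0.0, 1.0, 200)
snaps = np.exp(-200.0 * (x[:, None] - 0.2 - 0.6 * t[None, :]) ** 2)

r = 4
print("global POD error:", proj_err(snaps, pod_basis(snaps, r)))
# Principal interval decomposition: one local basis of the same rank per interval.
for i, chunk in enumerate(np.array_split(snaps, 4, axis=1)):
    print(f"interval {i} error:", proj_err(chunk, pod_basis(chunk, r)))
```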

    Inference of Sparse Networks with Unobserved Variables. Application to Gene Regulatory Networks

    Networks are a unifying framework for modeling complex systems, and network inference problems are frequently encountered in many fields. Here, I develop and apply a generative approach to network inference (RCweb) for the case when the network is sparse and latent (unobserved) variables affect the observed ones. From all possible factor analysis (FA) decompositions explaining the variance in the data, RCweb selects the FA decomposition that is consistent with a sparse underlying network. The sparsity constraint is imposed by a novel method that significantly outperforms, in terms of accuracy, robustness to noise, complexity scaling, and computational efficiency, Bayesian methods and maximum-likelihood methods using l1-norm relaxation such as K-SVD and l1-based sparse principal component analysis (PCA). Results from simulated models demonstrate that RCweb exactly recovers the model structures for sparsity as low (i.e., as non-sparse) as 50% and for ratios of unobserved to observed variables as high as 2. RCweb is robust to noise, with a gradual decrease in the parameter ranges as the noise level increases.
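
    The abstract does not spell out RCweb's algorithm, so the following Python sketch only illustrates the general idea of a sparsity-constrained factor decomposition, not RCweb itself: alternating least-squares fits of latent factors and loadings, with soft-thresholding of the loadings so the inferred network of latent-to-observed connections stays sparse. All names and values are illustrative.

```python
import numpy as np

def sparse_factor_fit(X, k, lam=0.1, n_iter=100, seed=0):
    """Fit X ~ Z @ W.T with an l1 penalty on the loadings W.

    X: (n_samples, n_observed) data; k: number of latent variables.
    Returns W (n_observed, k); its nonzero pattern is the inferred network.
    """
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.standard_normal((X.shape[1], k))
    for _ in range(n_iter):
        Zt, *_ = np.linalg.lstsq(W, X.T, rcond=None)   # factors given loadings
        Wt, *_ = np.linalg.lstsq(Zt.T, X, rcond=None)  # loadings given factors
        W = Wt.T
        W = np.sign(W) * np.maximum(np.abs(W) - lam, 0.0)  # soft-threshold (l1 prox)
    return W

# Toy demo: two latent variables each driving half of ten observed variables.
rng = np.random.default_rng(1)
W_true = np.zeros((10, 2)); W_true[:5, 0] = 1.0; W_true[5:, 1] = 1.0
X = rng.standard_normal((500, 2)) @ W_true.T + 0.05 * rng.standard_normal((500, 10))
print(np.round(sparse_factor_fit(X, k=2), 2))  # block-sparse pattern, up to sign/order
```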

    BlenX-based compositional modeling of complex reaction mechanisms

    Molecular interactions are wired in a fascinating way, resulting in the complex behavior of biological systems. Theoretical modeling provides a useful framework for understanding the dynamics and function of such networks. The complexity of biological networks calls for conceptual tools that manage the combinatorial explosion of the set of possible interactions. A suitable conceptual tool for attacking complexity is compositionality, already used successfully in the process algebra field to model computer systems. We rely on the BlenX programming language, which originated from the beta-binders process calculus, to specify and simulate high-level descriptions of biological circuits. Gillespie's stochastic framework, on which BlenX is based, requires the decomposition of phenomenological functions into elementary reactions. This study shows the systematic unpacking of complex reaction mechanisms into BlenX templates. The estimation and derivation of missing parameters and the challenges emerging from compositional model building in stochastic process algebras are discussed. A biological example, the circadian clock, is presented as a case study of BlenX compositionality.
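
    The unpacking step can be mimicked outside BlenX. The Python sketch below (species names and rate constants are invented) replaces a phenomenological gene-repression term with elementary binding, unbinding, production, and degradation reactions and runs them with Gillespie's direct method, the same stochastic framework BlenX builds on.

```python
import random

random.seed(1)
# State: gene free (Gf) or repressor-bound (Gb), and protein P.
state = {"Gf": 1, "Gb": 0, "P": 5}
# Elementary mass-action steps standing in for a Hill-type repression function:
reactions = [
    (lambda s: 1.0 * s["Gf"] * s["P"], {"Gf": -1, "Gb": +1, "P": -1}),  # P binds gene
    (lambda s: 0.5 * s["Gb"],          {"Gf": +1, "Gb": -1, "P": +1}),  # P unbinds
    (lambda s: 5.0 * s["Gf"],          {"P": +1}),                      # production
    (lambda s: 0.1 * s["P"],           {"P": -1}),                      # degradation
]

t, t_end = 0.0, 50.0
while t < t_end:
    props = [rate(state) for rate, _ in reactions]   # propensities
    total = sum(props)
    if total == 0.0:
        break
    t += random.expovariate(total)      # exponential waiting time to next event
    r = random.uniform(0.0, total)      # choose a reaction proportionally
    for p, (_, update) in zip(props, reactions):
        if r < p:
            for species, delta in update.items():
                state[species] += delta
            break
        r -= p
print(f"t = {t:.1f}, state = {state}")
```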

    Nonintrusive Uncertainty Quantification for automotive crash problems with VPS/Pamcrash

    Uncertainty Quantification (UQ) is a key discipline for the computational modeling of complex systems, enhancing the reliability of engineering simulations. In crashworthiness, an accurate assessment of model uncertainty makes it possible to reduce the number of prototypes and the associated costs. Carrying out UQ in this setting is especially challenging because it requires highly expensive simulations. In this context, surrogate models (metamodels) drastically reduce the computational cost of the Monte Carlo process. Different techniques for describing the metamodel are considered: Ordinary Kriging, Polynomial Response Surfaces, and a novel strategy (based on Proper Generalized Decomposition) denoted the Separated Response Surface (SRS). A large number of uncertain input parameters may jeopardize the efficiency of the metamodels. Thus, before defining a metamodel, kernel Principal Component Analysis (kPCA) is found to be effective in simplifying the description of the model outcome. A benchmark crash test is used to show the efficiency of combining metamodels with kPCA.
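
    The kPCA-plus-metamodel workflow can be sketched with standard tools. Below is a hedged scikit-learn toy: the synthetic response stands in for expensive VPS/Pamcrash outputs, and an off-the-shelf Gaussian process plays the role of Ordinary Kriging. The high-dimensional outcome is compressed with kernel PCA, one surrogate is fit per retained component, and the Monte Carlo loop then runs on the cheap surrogate.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 3))       # 60 expensive runs, 3 uncertain inputs
Y = np.sin(X @ rng.normal(size=(3, 200)))  # stand-in for 200-dim crash outputs

# kPCA simplifies the outcome description before any metamodel is defined.
kpca = KernelPCA(n_components=4, kernel="rbf", fit_inverse_transform=True)
Z = kpca.fit_transform(Y)
# One Kriging-like surrogate per retained kPCA component.
gps = [GaussianProcessRegressor().fit(X, Z[:, j]) for j in range(Z.shape[1])]

# Monte Carlo on the surrogate: thousands of evaluations at negligible cost.
Xmc = rng.uniform(-1, 1, size=(5000, 3))
Zmc = np.column_stack([gp.predict(Xmc) for gp in gps])
Ymc = kpca.inverse_transform(Zmc)          # map back to the output space
print("output mean:", Ymc.mean(axis=0)[:5])
print("output std :", Ymc.std(axis=0)[:5])
```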

    Reduced order modeling of fluid flows: Machine learning, Kolmogorov barrier, closure modeling, and partitioning

    In this paper, we put forth a long short-term memory (LSTM) nudging framework for enhancing reduced order models (ROMs) of fluid flows using noisy measurements. We build on the fact that, in realistic applications, there are uncertainties in initial conditions, boundary conditions, model parameters, and/or field measurements. Moreover, conventional nonlinear ROMs based on Galerkin projection (GROMs) suffer from imperfection and solution instabilities due to modal truncation, especially for advection-dominated flows with slow decay in the Kolmogorov width. In the presented LSTM-Nudge approach, we fuse forecasts from a combination of an imperfect GROM and uncertain state estimates with sparse Eulerian sensor measurements to provide more reliable predictions in a dynamical data assimilation framework. We illustrate the idea with the viscous Burgers problem, a benchmark test bed with quadratic nonlinearity and Laplacian dissipation. We investigate the effects of measurement noise and state estimate uncertainty on the behavior of the LSTM-Nudge approach, and demonstrate that it can handle different levels of temporal and spatial measurement sparsity. This first step in our assessment of the proposed model shows that LSTM nudging could be a viable real-time predictive tool in emerging digital twin systems.
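
    The nudging structure itself is compact. In the toy numpy sketch below (all dynamics, gains, and dimensions are invented), a constant gain G stands in for the trained LSTM so that the data assimilation update x <- M(x) + G(y - Hx) is visible: the forecast of an uncertain state is repeatedly pulled toward sparse, noisy sensor readings.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 2                                    # modal coefficients, sparse sensors
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = 0.99 * Q                                   # stand-in linear ROM dynamics M
H = np.zeros((m, n)); H[0, 0] = H[1, 3] = 1.0  # sparse Eulerian observation map
G = 0.3 * H.T                                  # constant nudging gain (the LSTM's role)

x_true = rng.standard_normal(n)                # "truth"
x = x_true + rng.standard_normal(n)            # uncertain initial state estimate
e0 = np.linalg.norm(x - x_true)
for k in range(200):
    x_true = A @ x_true                        # truth advances
    y = H @ x_true + 0.01 * rng.standard_normal(m)  # noisy sparse measurements
    x = A @ x                                  # model forecast
    x += G @ (y - H @ x)                       # nudge the forecast toward the data
# The rotation A keeps feeding the error into the observed subspace, where the
# nudging term damps it, so the estimate converges toward the noise floor.
print(f"error reduced from {e0:.3f} to {np.linalg.norm(x - x_true):.3f}")
```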