
    Actuator Fault Diagnosis with Application to a Diesel Engine Testbed

    This work addresses actuator fault detection and isolation for diesel engines. We are particularly interested in faults affecting the exhaust gas recirculation (EGR) and variable geometry turbocharger (VGT) actuator valves. A bank of observer-based residuals is designed using a nonlinear mean value model of diesel engines. Each residual in the proposed scheme is based on a nonlinear unknown input observer and is designed to be insensitive to only one fault. With this scheme, each actuator fault can be easily isolated, since only one residual goes to zero while the others do not. A decision algorithm based on multi-CUSUM is used. The performance of the proposed approach is demonstrated through a real application to a Caterpillar 3126b engine.
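The residual-bank isolation logic above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: a plain one-sided CUSUM with illustrative drift and threshold values, applied to simulated residuals where the residual matched to the injected fault stays near zero while the others drift.

```python
import numpy as np

def cusum(residual, drift=0.5, threshold=5.0):
    """One-sided CUSUM: return the first index at which the cumulative
    drift-corrected magnitude of the residual exceeds the threshold,
    or None if no alarm is ever raised."""
    s = 0.0
    for k, r in enumerate(residual):
        s = max(0.0, s + abs(r) - drift)
        if s > threshold:
            return k
    return None

# Simulated residual bank: under fault 0, residual 0 is insensitive
# (near zero) while the other residuals drift away from zero.
rng = np.random.default_rng(0)
noise = 0.1 * rng.standard_normal(200)
residuals = [noise,          # insensitive to the injected fault
             noise + 2.0,    # sensitive: drifts away from zero
             noise + 2.0]    # sensitive: drifts away from zero

alarms = [cusum(r) for r in residuals]
isolated = alarms.index(None)   # the residual with no alarm names the fault
```

The fault index is recovered as the unique residual whose CUSUM statistic never crosses the threshold, mirroring the "only one residual goes to zero" isolation rule.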

    Multiscale and High-Dimensional Problems

    High-dimensional problems appear naturally in various scientific areas. Two primary examples are PDEs describing complex processes in computational chemistry and physics, and stochastic/parameter-dependent PDEs arising in uncertainty quantification and optimal control. Other highly visible examples come from big data analysis, including regression and classification, which typically involve high-dimensional data as input and/or output. High-dimensional problems cannot be solved by traditional numerical techniques because of the so-called curse of dimensionality. Rather, they require the development of novel theoretical and computational approaches to make them tractable and to capture fine resolutions and relevant features. Paradoxically, increasing computational power may even heighten this demand, since the wealth of new computational data itself becomes a major obstruction. Extracting essential information from complex structures and developing rigorous models to quantify the quality of information in a high-dimensional setting constitute challenging tasks from both theoretical and numerical perspectives. The last decade has seen the emergence of several new computational methodologies which address the obstacles to solving high-dimensional problems. These include adaptive methods based on mesh refinement or sparsity, random forests, model reduction, compressed sensing, sparse grid and hyperbolic wavelet approximations, and various new tensor structures. Their common feature is the nonlinearity of the solution method, which prioritizes variables and separates solution characteristics living on different scales. These methods have already drastically advanced the frontiers of computability for certain problem classes.
    This workshop aimed to deepen the understanding of the underlying mathematical concepts that drive this new evolution of computational methods and to promote the exchange of ideas emerging in various disciplines about how to treat multiscale and high-dimensional problems.
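The curse of dimensionality mentioned above is easy to make concrete with a back-of-the-envelope count: a full tensor-product grid with n points per axis needs n^d points in d dimensions, so even a coarse 10-point-per-axis resolution becomes unstorable long before d reaches the dimensions of interest.

```python
def full_grid_points(n, d):
    """Number of nodes in a full tensor-product grid with n points
    per coordinate axis in d dimensions: n**d."""
    return n ** d

# With only 10 points per axis, storage grows exponentially with d:
sizes = [full_grid_points(10, d) for d in (2, 4, 8)]
# d=2 -> 100, d=4 -> 10_000, d=8 -> 100_000_000
```

This exponential blow-up is precisely what sparse grids, tensor formats, and the other adaptive methods listed in the abstract are designed to circumvent.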

    Kernel Methods are Competitive for Operator Learning

    We present a general kernel-based framework for learning operators between Banach spaces, along with an a priori error analysis and comprehensive numerical comparisons with popular neural network (NN) approaches such as the Deep Operator Net (DeepONet) [Lu et al.] and the Fourier Neural Operator (FNO) [Li et al.]. We consider the setting where the input/output spaces of the target operator G† : U → V are reproducing kernel Hilbert spaces (RKHS), the data comes in the form of partial observations ϕ(u_i), φ(v_i) of input/output functions v_i = G†(u_i) (i = 1, …, N), and the measurement operators ϕ : U → R^n and φ : V → R^m are linear. Writing ψ : R^n → U and χ : R^m → V for the optimal recovery maps associated with ϕ and φ, we approximate G† with Ḡ = χ ∘ f̄ ∘ ϕ, where f̄ is an optimal recovery approximation of f† := φ ∘ G† ∘ ψ : R^n → R^m. We show that, even when using vanilla kernels (e.g., linear or Matérn), our approach is competitive in terms of the cost-accuracy trade-off and either matches or beats the performance of NN methods on a majority of benchmarks. Additionally, our framework offers several advantages inherited from kernel methods: simplicity, interpretability, convergence guarantees, a priori error estimates, and Bayesian uncertainty quantification. As such, it can serve as a natural benchmark for operator learning. Comment: 35 pages, 10 figures.
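The core of the construction, learning the finite-dimensional map f† : R^n → R^m between measurement vectors with a kernel method, can be sketched as follows. This is an illustrative stand-in, not the paper's code: it uses plain kernel ridge regression with a Gaussian (RBF) kernel rather than the paper's optimal recovery maps, and a hypothetical toy "operator" acting componentwise on 3-point measurements.

```python
import numpy as np

def rbf(X, Y, ell=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * ell ** 2))

def fit_operator(Phi_u, Phi_v, reg=1e-6):
    """Kernel ridge fit of a map R^n -> R^m from N paired measurement
    vectors Phi_u (N x n) and Phi_v (N x m); returns the fitted map."""
    K = rbf(Phi_u, Phi_u)
    alpha = np.linalg.solve(K + reg * np.eye(len(Phi_u)), Phi_v)
    return lambda x: rbf(np.atleast_2d(np.asarray(x, float)), Phi_u) @ alpha

# Hypothetical toy target: componentwise squaring of 3-point measurements.
rng = np.random.default_rng(1)
U = rng.uniform(-1.0, 1.0, size=(200, 3))   # input measurements phi(u_i)
V = U ** 2                                   # output measurements varphi(v_i)

f_bar = fit_operator(U, V)
err = np.abs(f_bar([0.5, -0.2, 0.1]) - np.array([0.25, 0.04, 0.01])).max()
```

In the full framework this fitted map would be composed with the recovery maps, Ḡ = χ ∘ f̄ ∘ ϕ, to produce an operator between the function spaces; the sketch above only covers the middle, finite-dimensional step.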

    Aerospace medicine and biology: A continuing bibliography with indexes, supplement 204

    This bibliography lists 140 reports, articles, and other documents introduced into the NASA scientific and technical information system in February 1980.

    A Bayesian Approach to Modelling Biological Pattern Formation with Limited Data

    Pattern formation in biological tissues plays an important role in the development of living organisms. Since the classical work of Alan Turing, a pre-eminent way of modelling has been through reaction-diffusion mechanisms. More recently, alternative models have been proposed that link the dynamics of diffusing molecular signals with tissue mechanics. In order to distinguish among different models, they should be compared to experimental observations. However, in many experimental situations only the limiting, stationary regime of the pattern formation process is observable, without knowledge of the transient behaviour or the initial state. The unstable nature of the underlying dynamics in all alternative models seriously complicates model and parameter identification, since small changes in the initial condition lead to distinct stationary patterns. To overcome this problem, the initial state of the model can be randomised. In that case, fixed values of the model parameters correspond to a family of patterns rather than a fixed stationary solution, and standard approaches that compare pattern data directly with model outputs, e.g., in the least squares sense, are not suitable. Instead, statistical characteristics of the patterns should be compared, which is difficult given the typically limited amount of data available in practical applications. To deal with this problem, we extend a recently developed statistical approach for parameter identification using pattern data, the so-called Correlation Integral Likelihood (CIL) method. We suggest modifications that increase the accuracy of the identification process without resizing the data set. The proposed approach is tested using different classes of pattern formation models. For all considered equations, parallel GPU-based implementations of the numerical solvers with efficient time stepping schemes are provided. Comment: More compact version of the text and figures, results unchanged.
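The key point, that patterns from different random initial states should be compared through statistics rather than pointwise, can be illustrated with a toy correlation-integral statistic. This is a simplified sketch, not the CIL method itself: two hypothetical "patterns" with the same wavelength but different random phase disagree badly in the least-squares sense yet share nearly identical correlation integrals.

```python
import numpy as np

def correlation_integral(pattern, radii):
    """Empirical correlation integral of a 1-D pattern: for each radius r,
    the fraction of value pairs whose difference is below r."""
    x = np.asarray(pattern, dtype=float)
    diffs = np.abs(x[:, None] - x[None, :])
    n = len(x)
    return np.array([(diffs < r).sum() / (n * n) for r in radii])

# Two stationary "patterns" with the same statistics but different phase,
# as would arise from two randomised initial conditions.
t = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)
p1 = np.sin(5 * t)
p2 = np.sin(5 * t + 1.3)

radii = np.linspace(0.1, 2.0, 8)
gap_pointwise = np.abs(p1 - p2).max()   # large: direct comparison fails
gap_stat = np.abs(correlation_integral(p1, radii)
                  - correlation_integral(p2, radii)).max()  # small
```

A likelihood built on such summary statistics (as in CIL) assigns both patterns to the same parameter family, whereas a least-squares distance would wrongly treat them as very different.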

    Learning Theory and Approximation

    The main goal of this workshop – the third of this type at the MFO – has been to blend mathematical results from statistical learning theory and approximation theory, to strengthen both disciplines and to use synergistic effects to work on current research questions. Learning theory aims at modeling unknown function relations and data structures from samples in an automatic manner. Approximation theory is naturally suited to advancing learning theory and is closely connected to its further development, in particular for the exploration of new useful algorithms and for the theoretical understanding of existing methods. Conversely, the study of learning theory also gives rise to interesting theoretical problems for approximation theory, such as the approximation and sparse representation of functions or the construction of rich reproducing kernel Hilbert spaces on general metric spaces. This workshop concentrated on the following recent topics: pitchfork bifurcation of dynamical systems arising from mathematical foundations of cell development; regularized kernel-based learning in the Big Data situation; deep learning; convergence rates of learning and online learning algorithms; numerical refinement algorithms for learning; and statistical robustness of regularized kernel-based learning.