    Nonlinear dynamic process monitoring using kernel methods

    The application of kernel methods in process monitoring is well established. However, there is a need to extend existing techniques using novel implementation strategies in order to improve process monitoring performance. For example, process monitoring using kernel principal component analysis (KPCA) has been reported. Nevertheless, the effect of combining kernel density estimation (KDE)-based control limits with KPCA for nonlinear process monitoring has not been adequately investigated and documented. Therefore, process monitoring using KPCA and KDE-based control limits is carried out in this work. A new KPCA-KDE fault identification technique is also proposed. Furthermore, most process systems are complex, and data collected from them have more than one characteristic. Therefore, three techniques are developed in this work to capture more than one process behaviour. These include the linear latent variable canonical variate analysis (LLV-CVA), kernel CVA using QR decomposition (KCVA-QRD) and kernel latent variable CVA (KLV-CVA). LLV-CVA captures both linear and dynamic relations in the process variables. On the other hand, KCVA-QRD and KLV-CVA account for both nonlinearity and process dynamics. The CVA with kernel density estimation (CVA-KDE) technique previously reported does not address the nonlinear problem directly, while the regular kernel CVA approach requires regularisation of the constructed kernel data to avoid computational instability, which compromises process monitoring performance. The results of the work showed that KPCA-KDE is more robust, detecting faults at higher rates and earlier than the KPCA technique based on a Gaussian assumption of the process data. The nonlinear dynamic methods proposed also performed better than the aforementioned existing techniques without employing ridge-type regularisation.
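
    As a rough illustration of the KPCA-KDE idea, the sketch below builds a kernel-PCA monitoring statistic on normal-operation data and replaces the usual Gaussian-assumption control limit with one read off a kernel density estimate. It uses scikit-learn's KernelPCA and SciPy's gaussian_kde; the data, kernel width, and T²-style statistic are placeholder assumptions rather than the thesis's exact formulation.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 10))          # normal-operation data (placeholder)

kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.1)
scores = kpca.fit_transform(X_train)

# Hotelling-type T^2 statistic in the retained kernel principal components.
var = scores.var(axis=0)
t2 = np.sum(scores ** 2 / var, axis=1)

# KDE-based control limit: the 99th percentile of the estimated T^2 density,
# rather than a limit derived from a Gaussian assumption on the data.
kde = gaussian_kde(t2)
grid = np.linspace(t2.min(), 2.0 * t2.max(), 2000)
cdf = np.cumsum(kde(grid))
cdf /= cdf[-1]
limit = grid[np.searchsorted(cdf, 0.99)]

X_new = rng.normal(loc=0.5, size=(100, 10))   # possibly faulty data (placeholder)
t2_new = np.sum(kpca.transform(X_new) ** 2 / var, axis=1)
print("alarms raised:", int(np.sum(t2_new > limit)), "of", len(t2_new))
```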

    The Random Feature Model for Input-Output Maps between Banach Spaces

    Well known to the machine learning community, the random feature model, originally introduced by Rahimi and Recht in 2008, is a parametric approximation to kernel interpolation or regression methods. It is typically used to approximate functions mapping a finite-dimensional input space to the real line. In this paper, we instead propose a methodology for use of the random feature model as a data-driven surrogate for operators that map an input Banach space to an output Banach space. Although the methodology is quite general, we consider operators defined by partial differential equations (PDEs); here, the inputs and outputs are themselves functions, with the input parameters being functions required to specify the problem, such as initial data or coefficients, and the outputs being solutions of the problem. Upon discretization, the model inherits several desirable attributes from this infinite-dimensional, function space viewpoint, including mesh-invariant approximation error with respect to the true PDE solution map and the capability to be trained at one mesh resolution and then deployed at different mesh resolutions. We view the random feature model as a non-intrusive data-driven emulator, provide a mathematical framework for its interpretation, and demonstrate its ability to efficiently and accurately approximate the nonlinear parameter-to-solution maps of two prototypical PDEs arising in physical science and engineering applications: viscous Burgers' equation and a variable coefficient elliptic equation. (To appear in SIAM Journal on Scientific Computing.)
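
    For intuition, the following minimal sketch applies the classical Rahimi-Recht random Fourier feature construction to a discretized function-to-function map: inputs and outputs are vectors representing functions on a grid, and only the linear output coefficients are trained. The toy pointwise-squaring "operator", the feature count, and the ridge parameter are illustrative assumptions; the paper's operator-valued formulation and PDE examples are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_feat, n_train = 64, 256, 200

# Toy "operator": map an input profile u to its pointwise square
# (a stand-in for a PDE parameter-to-solution map).
U = rng.normal(size=(n_train, n_grid))   # discretized input functions
Y = U ** 2                               # discretized output functions

# Random Fourier features: directions W and phases b are drawn once and frozen.
W = rng.normal(scale=1.0 / np.sqrt(n_grid), size=(n_feat, n_grid))
b = rng.uniform(0.0, 2.0 * np.pi, size=n_feat)

def features(U):
    return np.cos(U @ W.T + b)

# Ridge regression from features to outputs: only linear coefficients are trained.
lam = 1e-6
A = features(U)
C = np.linalg.solve(A.T @ A + lam * np.eye(n_feat), A.T @ Y)

U_test = rng.normal(size=(5, n_grid))
pred = features(U_test) @ C
err = np.linalg.norm(pred - U_test ** 2) / np.linalg.norm(U_test ** 2)
print(f"relative test error: {err:.3f}")
```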

    Nonparametric Sparsity and Regularization

    In this work we are interested in the problems of supervised learning and variable selection when the input-output dependence is described by a nonlinear function depending on a few variables. Our goal is to consider a sparse nonparametric model, hence avoiding linear or additive models. The key idea is to measure the importance of each variable in the model by making use of partial derivatives. Based on this intuition we propose and study a new regularizer and a corresponding least squares regularization scheme. Using concepts and results from the theory of reproducing kernel Hilbert spaces and proximal methods, we show that the proposed learning algorithm corresponds to a minimization problem which can be provably solved by an iterative procedure. The consistency properties of the obtained estimator are studied both in terms of prediction and selection performance. An extensive empirical analysis shows that the proposed method performs favorably with respect to the state of the art.
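
    The central idea, that a variable's relevance can be read from the partial derivatives of the learned function, can be illustrated without the paper's proximal machinery. The sketch below fits a kernel ridge regressor with an RBF kernel (whose gradient is available in closed form) and ranks variables by the empirical norm of the estimator's partial derivatives; the data, kernel width, and regularization value are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 300, 6
X = rng.uniform(-1.0, 1.0, size=(n, d))
y = np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2   # depends only on variables 0 and 1

# Kernel ridge regression with an RBF kernel.
gamma, lam = 1.0, 1e-3
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq)
alpha = np.linalg.solve(K + lam * n * np.eye(n), y)

# Closed-form gradient of the estimator at the training points:
# d/dx_a f(x_i) = sum_j alpha_j * K_ij * (-2 * gamma) * (x_ia - x_ja).
diff = X[:, None, :] - X[None, :, :]       # shape (n, n, d)
grads = -2.0 * gamma * np.einsum("j,ij,ija->ia", alpha, K, diff)

# Rank variables by the empirical norm of the partial derivatives.
importance = np.sqrt((grads ** 2).mean(axis=0))
print("variable importance:", np.round(importance, 3))
```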

    Optimization with Sparsity-Inducing Penalties

    Sparse estimation methods are aimed at using or obtaining parsimonious representations of data or models. They were first dedicated to linear variable selection but numerous extensions have now emerged such as structured sparsity or kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate non-smooth norms. The goal of this paper is to present from a general perspective optimization tools and techniques dedicated to such sparsity-inducing penalties. We cover proximal methods, block-coordinate descent, reweighted ℓ2-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provide an extensive set of experiments to compare various algorithms from a computational point of view.
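
    As a concrete instance of the proximal methods the paper surveys, the sketch below implements ISTA for ℓ1-penalized least squares, where the proximal operator of the ℓ1 norm is soft-thresholding; the problem sizes and penalty level are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Proximal gradient (ISTA) for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 50))
x_true = np.zeros(50)
x_true[:3] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.normal(size=100)
print("recovered support:", np.flatnonzero(ista(A, y, lam=5.0)))
```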

    Approximate Kernel Orthogonalization for Antenna Array Processing

    We present a method for kernel antenna array processing using Gaussian kernels as basis functions. The method first identifies the data clusters by using a modified sparse greedy matrix approximation. Then, the algorithm performs model reduction in an attempt to reduce the final size of the beamformer. The method is tested with simulations that include two arrays made of two and seven printed half-wavelength thick dipoles, in scenarios with 4 and 5 users coming from different angles of arrival. The antenna parameters are simulated for all DOAs, and include the dipole radiation pattern and the mutual coupling effects of the array. The method is compared with other state-of-the-art nonlinear processing methods, showing that the presented algorithm has near-optimal capabilities together with a low computational burden. (Funding: Spanish Government under Grant TEC2008-02473; IEEE Antennas and Propagation Society.)
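
    The flavour of the approach, greedy selection of a small set of kernel basis functions followed by a reduced least-squares fit, can be sketched as below using a pivoted (incomplete) Cholesky style selection rule; the array data, kernel width, and budget are placeholders, and the paper's antenna model and modified selection criterion are not reproduced.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))            # snapshots from a 4-element array (placeholder)
y = np.tanh(X @ rng.normal(size=4))      # desired beamformer output (placeholder)

# Greedy center selection via pivoted incomplete Cholesky: repeatedly pick the
# point with the largest remaining diagonal residual of the kernel matrix.
K = rbf(X, X)
diag = K.diagonal().copy()
centers, G = [], np.zeros((len(X), 0))
for _ in range(20):                      # budget of 20 basis functions
    i = int(np.argmax(diag))
    centers.append(i)
    g = (K[:, i] - G @ G[i]) / np.sqrt(diag[i])
    G = np.column_stack([G, g])
    diag = np.maximum(diag - g ** 2, 0.0)

# Reduced-size kernel least-squares beamformer on the selected centers.
Kc = rbf(X, X[centers])
w = np.linalg.lstsq(Kc, y, rcond=None)[0]
print("selected centers:", centers[:5], "... train MSE:", float(((Kc @ w - y) ** 2).mean()))
```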

    Sparse online Gaussian process adaptation for incremental backstepping flight control

    The presence of uncertainties caused by unforeseen malfunctions in actuation or measurement systems, or by changes in aircraft behaviour, could lead to aircraft loss of control during flight. This paper considers sparse online Gaussian process (GP) adaptive augmentation for Incremental Backstepping (IBKS) flight control. IBKS uses angular accelerations and control deflections to reduce the dependency on the aircraft model. However, it requires knowledge of the relationship between the inner and outer loops and of the control effectiveness. The proposed indirect adaptation significantly reduces this model dependency. Global uniform ultimate boundedness is proved for the resulting GP-adaptive IBKS. The research shows that if the input-affine property is violated, e.g., in severe conditions with a combination of multiple failures, the IBKS can lose stability. Meanwhile, the proposed sparse GP-based estimator provides fast online identification, and the resulting controller demonstrates improved stability and tracking performance.
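
    A minimal sketch of the sparse online GP ingredient is given below: incoming points join a bounded dictionary only when an ALD-style novelty test deems them sufficiently independent of the current basis, keeping prediction cost bounded for online use. The kernel, tolerances, and the toy data stream are assumptions; the IBKS control loop and the paper's specific sparsification scheme are not reproduced.

```python
import numpy as np

class SparseOnlineGP:
    """Budgeted online GP regressor with an ALD-style novelty test (sketch)."""

    def __init__(self, gamma=1.0, noise=1e-2, tol=1e-3, budget=50):
        self.gamma, self.noise, self.tol, self.budget = gamma, noise, tol, budget
        self.X, self.y = [], []

    def _k(self, A, B):
        d = ((np.asarray(A)[:, None, :] - np.asarray(B)[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d)

    def update(self, x, y):
        if self.X:
            K = self._k(self.X, self.X) + self.noise * np.eye(len(self.X))
            k = self._k([x], self.X).ravel()
            # ALD-style novelty: residual of projecting k(x, .) onto the dictionary.
            resid = 1.0 - k @ np.linalg.solve(K, k)
            if resid > self.tol and len(self.X) < self.budget:
                self.X.append(x)
                self.y.append(y)
        else:
            self.X, self.y = [x], [y]
        K = self._k(self.X, self.X) + self.noise * np.eye(len(self.X))
        self.alpha = np.linalg.solve(K, np.asarray(self.y))

    def predict(self, x):
        return (self._k([x], self.X) @ self.alpha).item()

rng = np.random.default_rng(0)
gp = SparseOnlineGP()
for t in np.linspace(0.0, 6.0, 200):     # stream of (state, model-error) samples
    gp.update(np.array([t]), np.sin(t) + 0.05 * rng.normal())
print("dictionary size:", len(gp.X), "| prediction at t=3:", round(gp.predict(np.array([3.0])), 2))
```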