828 research outputs found

    Clustering via kernel decomposition

    Get PDF
    Spectral clustering methods have recently been proposed that rely on the eigenvalue decomposition of an affinity matrix. In this letter, the affinity matrix is created from the elements of a nonparametric density estimator and then decomposed to obtain posterior probabilities of class membership. Hyperparameters are selected using standard cross-validation methods.
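
    A minimal sketch of the general flavor of such a method, assuming a Gaussian kernel with a hand-picked bandwidth and a simple row normalization of the leading eigenvectors as a stand-in for the letter's posterior class-membership probabilities (function and parameter names are illustrative, not the authors'):

        import numpy as np

        def kernel_cluster_memberships(X, n_clusters, bandwidth=1.0):
            """Toy kernel/spectral clustering: build a Gaussian affinity matrix,
            eigendecompose it, and turn the leading eigenvectors into
            row-normalized soft memberships."""
            sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
            A = np.exp(-sq_dists / (2.0 * bandwidth ** 2))   # kernel affinity matrix
            d = A.sum(axis=1)
            M = A / np.sqrt(np.outer(d, d))                  # symmetric normalization
            _, vecs = np.linalg.eigh(M)                      # eigenvalue decomposition
            V = np.abs(vecs[:, -n_clusters:])                # leading eigenvectors
            return V / V.sum(axis=1, keepdims=True)          # soft memberships per point

    In the letter, the bandwidth and other hyperparameters would be chosen by cross-validation rather than fixed by hand as above.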

    Applications of nonlinear filters with the linear-in-the-parameter structure

    Get PDF

    Finding Structural Information of RF Power Amplifiers using an Orthogonal Non-Parametric Kernel Smoothing Estimator

    Full text link
    A non-parametric technique for modeling the behavior of power amplifiers is presented. The proposed technique relies on the principles of density estimation using the kernel method and is well suited to power amplifier modeling. The proposed methodology transforms the input domain into an orthogonal memory domain. In this domain, non-parametric static functions are discovered using the kernel estimator. These orthogonal, non-parametric functions can be fitted with any desired mathematical structure, thus facilitating their implementation. Furthermore, due to the orthogonality, the non-parametric functions can be analyzed and discarded individually, which simplifies the pruning of basis functions and provides a tradeoff between complexity and performance. The results show that the methodology can be employed to model power amplifiers, yielding error performance similar to state-of-the-art parametric models. Furthermore, a parameter-efficient model structure with 6 coefficients was derived for a Doherty power amplifier, significantly reducing the computational complexity of its deployment. Finally, the methodology can also be well exploited in digital linearization techniques.
    Comment: Matlab sample code (15 MB): https://dl.dropboxusercontent.com/u/106958743/SampleMatlabKernel.zi
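
    The core building block, a non-parametric kernel (Nadaraya-Watson) estimate of a static function from samples, can be sketched as below. This is only the generic estimator; the orthogonal memory-domain transformation and the PA-specific complex-baseband signal handling described in the paper are not reproduced, and the names are illustrative.

        import numpy as np

        def kernel_smoother(x_train, y_train, x_query, bandwidth=0.1):
            """Nadaraya-Watson kernel regression: a weighted average of observed
            outputs, with Gaussian weights centered on each query point."""
            w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / bandwidth) ** 2)
            return (w @ y_train) / w.sum(axis=1)

        # Example: recover a static nonlinearity from noisy samples.
        x = np.random.uniform(0, 1, 500)
        y = np.tanh(3 * x) + 0.05 * np.random.randn(500)
        x_grid = np.linspace(0, 1, 50)
        y_hat = kernel_smoother(x, y, x_grid)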

    Backstepping PDE Design: A Convex Optimization Approach

    Get PDF
    Backstepping design for boundary-controlled linear PDEs is formulated as a convex optimization problem. Some classes of parabolic PDEs and a first-order hyperbolic PDE are studied, with particular attention to non-strict feedback structures. Based on the compactness of the Volterra and Fredholm-type operators involved, their kernels are approximated by polynomial functions. The resulting kernel-PDEs are optimized using Sum-of-Squares (SOS) decomposition and solved via semidefinite programming, with sufficient precision to guarantee the stability of the system in the L2-norm. This formulation allows optimizing extra degrees of freedom, with the kernel-PDEs included as constraints. Uniqueness and invertibility of the Fredholm-type transformation are proved for polynomial kernels in the space of continuous functions. The effectiveness and limitations of the proposed approach are illustrated by numerical solutions of some kernel-PDEs.
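
    The computational core of the approach, checking a sum-of-squares condition by semidefinite programming, can be illustrated on a toy univariate polynomial. This is only the generic SOS-to-SDP reduction, not the paper's kernel-PDE optimization; the polynomial, solver, and variable names are illustrative.

        import cvxpy as cp

        # Is p(x) = x^4 + 2x^2 + 1 a sum of squares?  Write p(x) = z(x)^T Q z(x)
        # with monomial basis z(x) = [1, x, x^2] and a positive semidefinite Gram matrix Q.
        Q = cp.Variable((3, 3), symmetric=True)
        constraints = [Q >> 0]
        constraints += [
            Q[0, 0] == 1,                 # constant term
            2 * Q[0, 1] == 0,             # x term
            2 * Q[0, 2] + Q[1, 1] == 2,   # x^2 term
            2 * Q[1, 2] == 0,             # x^3 term
            Q[2, 2] == 1,                 # x^4 term
        ]
        prob = cp.Problem(cp.Minimize(0), constraints)
        prob.solve()
        print("SOS certificate found:", prob.status == cp.OPTIMAL)

    In the paper, the same mechanism is applied to polynomial approximations of the backstepping kernels, with the kernel-PDEs entering as constraints of the semidefinite program.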

    Stochastic Physics-Informed Neural Ordinary Differential Equations

    Full text link
    Stochastic differential equations (SDEs) are used to describe a wide variety of complex stochastic dynamical systems. Learning the hidden physics within SDEs is crucial for developing a fundamental understanding of these systems' stochastic and nonlinear behavior. We propose a flexible and scalable framework for training artificial neural networks to learn constitutive equations that represent hidden physics within SDEs. The proposed stochastic physics-informed neural ordinary differential equation framework (SPINODE) propagates stochasticity through the known structure of the SDE (i.e., the known physics) to yield a set of deterministic ODEs that describe the time evolution of statistical moments of the stochastic states. SPINODE then uses ODE solvers to predict moment trajectories. SPINODE learns neural network representations of the hidden physics by matching the predicted moments to those estimated from data. Recent advances in automatic differentiation and mini-batch gradient descent with adjoint sensitivity are leveraged to estimate the unknown parameters of the neural networks. We demonstrate SPINODE on three benchmark in-silico case studies and analyze the framework's numerical robustness and stability. SPINODE provides a promising new direction for systematically unraveling the hidden physics of multivariate stochastic dynamical systems with multiplicative noise.
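
    A minimal numerical sketch of the moment-matching idea, assuming a scalar SDE with multiplicative noise whose moment ODEs close exactly, a single unknown drift parameter in place of a neural network, and a generic optimizer in place of the adjoint-sensitivity gradient descent used by SPINODE:

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import minimize

        # Toy SDE: dx = -theta*x dt + sigma*x dW (multiplicative noise).
        # Closed moment ODEs: d m1/dt = -theta*m1,  d m2/dt = (-2*theta + sigma**2)*m2.
        sigma_true, theta_true = 0.3, 1.5
        t_eval = np.linspace(0.0, 2.0, 21)

        def moment_odes(t, m, theta, sigma):
            m1, m2 = m
            return [-theta * m1, (-2.0 * theta + sigma ** 2) * m2]

        # "Data" moments (here generated from the true parameters; in practice they
        # would be estimated from repeated trajectories of the stochastic system).
        ref = solve_ivp(moment_odes, (0, 2), [1.0, 1.0], t_eval=t_eval,
                        args=(theta_true, sigma_true)).y

        def loss(params):
            pred = solve_ivp(moment_odes, (0, 2), [1.0, 1.0], t_eval=t_eval,
                             args=(params[0], sigma_true)).y
            return np.mean((pred - ref) ** 2)

        fit = minimize(loss, x0=[0.5])
        print("recovered theta:", fit.x[0])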

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Full text link
    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.
    Comment: 232 pages
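
    As one concrete reference point for the TT format discussed above, the standard TT-SVD procedure (sequential truncated SVDs of reshaped unfoldings) can be sketched as follows; the fixed-rank truncation and helper names are illustrative and not taken from the monograph.

        import numpy as np

        def tt_svd(tensor, max_rank):
            """Decompose a d-way array into tensor-train cores via sequential
            truncated SVDs of successive matricizations."""
            dims = tensor.shape
            d = len(dims)
            cores, r_prev = [], 1
            mat = tensor.reshape(dims[0], -1)
            for k in range(d - 1):
                U, S, Vt = np.linalg.svd(mat, full_matrices=False)
                r = min(max_rank, len(S))
                cores.append(U[:, :r].reshape(r_prev, dims[k], r))   # core k
                mat = (S[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
                r_prev = r
            cores.append(mat.reshape(r_prev, dims[-1], 1))           # last core
            return cores

        # Example: compress a random 4-way array into TT cores of rank at most 8.
        X = np.random.rand(4, 5, 6, 3)
        cores = tt_svd(X, max_rank=8)
        print([c.shape for c in cores])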

    Control-Relevant System Identification using Nonlinear Volterra and Volterra-Laguerre Models

    Get PDF
    One of the key impediments to the widespread use of nonlinear control in industry is the availability of suitable nonlinear models. Empirical models, which are obtained from only the process input-output data, present a convenient alternative to the more involved fundamental models. An important advantage of empirical models is that their structure can be chosen so as to facilitate the controller design problem. Many of the widely used empirical model structures are linear, and in some cases this basic model formulation may not be able to adequately capture the nonlinear process dynamics. One of the commonly used nonlinear dynamic empirical model structures is the Volterra model, and this work develops a systematic approach to the identification of third-order Volterra and Volterra-Laguerre models from process input-output data. First, plant-friendly input sequences are designed that exploit the Volterra model structure and use the prediction error variance (PEV) expression as a metric of model fidelity. Second, explicit estimator equations are derived for the linear, nonlinear diagonal, and higher-order sub-diagonal kernels using the tailored input sequences. Improvements in the sequence design are also presented which lead to a significant reduction in the amount of data required for identification. Finally, the third-order off-diagonal kernels are estimated using a cross-correlation approach. As an application of this technique, an isothermal polymerization reactor case study is considered. In order to overcome the noise sensitivity and highly parameterized nature of Volterra models, they are projected onto an orthonormal Laguerre basis. Two important variables that need to be selected for the projection are the Laguerre pole and the number of Laguerre filters. The Akaike Information Criterion (AIC) is used to determine projected model quality. AIC includes contributions from both model size and model quality, with the latter characterized by the sum-squared error between the Volterra and the Volterra-Laguerre model outputs. Reduced Volterra-Laguerre models were also identified, and the control-relevance of the identified Volterra-Laguerre models was evaluated in closed loop using the model predictive control framework. Thus, this work presents a complete treatment of the problem of identifying nonlinear control-relevant Volterra and Volterra-Laguerre models from input-output data.
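
    For orientation, the generic least-squares identification of a truncated Volterra model from input-output data (second-order here, for brevity) can be sketched as below. The thesis itself uses tailored plant-friendly input sequences, explicit estimator equations, and a cross-correlation step for the third-order off-diagonal kernels; none of that is reproduced here, and the function and variable names are illustrative.

        import numpy as np

        def identify_volterra2(u, y, memory):
            """Least-squares fit of a truncated second-order Volterra model:
            y[n] ~ h0 + sum_i h1[i]*u[n-i] + sum_{i<=j} h2[i,j]*u[n-i]*u[n-j]."""
            rows, targets = [], []
            for n in range(memory - 1, len(u)):
                lags = u[n - memory + 1:n + 1][::-1]   # u[n], u[n-1], ..., u[n-memory+1]
                quad = [lags[i] * lags[j] for i in range(memory) for j in range(i, memory)]
                rows.append(np.concatenate(([1.0], lags, quad)))
                targets.append(y[n])
            theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
            return theta   # stacked [h0, linear kernel, upper-triangular quadratic kernel]

        # Example with a synthetic quadratic system and a memory of 3 samples.
        u = np.random.randn(2000)
        y = 0.5 * u + 0.2 * np.roll(u, 1) * u      # toy nonlinear response
        theta = identify_volterra2(u, y, memory=3)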