
    A New Hammerstein Model for Non-Linear System Identification

    In the present work a new type of black-box nonlinear model in the Hammerstein structure is proposed. The model couples a wavelet network with orthonormal basis functions and is capable of modeling a class of nonlinear systems with acceptable accuracy. Wavelet basis functions are localized in both the time and frequency domains, which enables wavelet networks to approximate severe nonlinearities using a small number of parameters. Orthonormal basis functions can approximate any linear time-invariant system given an appropriate choice of basis. The efficacy of the model is demonstrated using numerical examples.
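The Hammerstein structure described above, a static nonlinearity feeding a linear time-invariant block, can be sketched in a few lines. The quadratic nonlinearity, first-order filter, and coefficients below are illustrative stand-ins, not the wavelet-network/orthonormal-basis model of the paper.

```python
# Minimal Hammerstein sketch: static nonlinearity followed by an LTI block.
# All functional forms and coefficients here are hypothetical examples.
import numpy as np

def static_nonlinearity(u):
    """Illustrative static nonlinearity f(u) = u + 0.5*u^2."""
    return u + 0.5 * u**2

def lti_filter(x, a=0.7, b=0.3):
    """First-order LTI block y[k] = a*y[k-1] + b*x[k]."""
    y = np.zeros_like(x)
    for k in range(1, len(x)):
        y[k] = a * y[k - 1] + b * x[k]
    return y

u = np.sin(0.1 * np.arange(200))        # input signal
y = lti_filter(static_nonlinearity(u))  # Hammerstein output
```

In identification, the task is to recover both blocks from input–output data alone, which is what makes the structure a useful black-box template.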

    Hierarchical gradient- and least squares-based iterative algorithms for input nonlinear output-error systems using the key term separation

    This paper considers the parameter identification problems of input nonlinear output-error (IN-OE) systems, that is, Hammerstein output-error systems. In order to overcome the excessive computational cost of the over-parameterization method for IN-OE systems, we apply the hierarchical identification principle, decompose the IN-OE system into three subsystems with a smaller number of parameters, and present the key term separation auxiliary model hierarchical gradient-based iterative algorithm and the key term separation auxiliary model hierarchical least squares-based iterative algorithm, which are called the key term separation auxiliary model three-stage gradient-based iterative algorithm and the key term separation auxiliary model three-stage least squares-based iterative algorithm. A comparison of the computational cost and a simulation analysis indicate that the proposed algorithms are effective. (c) 2021 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.

    Identification of the nonlinear systems based on the kernel functions

    Constructing an appropriate membership function is significant in fuzzy logic control. Based on multi-model control theory, this article constructs a novel kernel function which can implement the fuzzification and defuzzification processes and accurately reflect the dynamics of nonlinear systems. We then focus on identification problems for nonlinear systems based on these kernel functions. Applying the hierarchical identification principle, we present a hierarchical stochastic gradient algorithm for the nonlinear systems. Meanwhile, one-dimensional search methods are proposed to determine the optimal step sizes. In order to improve the parameter estimation accuracy, we propose a hierarchical multi-innovation forgetting factor stochastic gradient algorithm by introducing a forgetting factor and using multi-innovation identification theory. A simulation example is provided to test the proposed algorithms in terms of parameter estimation accuracy and prediction performance.
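The forgetting-factor stochastic gradient idea mentioned above can be illustrated on a simple linear-in-parameters model. The update rule, step normalization, true parameters, and forgetting factor below are a generic sketch with assumed values, not the paper's hierarchical multi-innovation algorithm.

```python
# Generic forgetting-factor stochastic gradient update for y[k] = phi[k]^T theta.
# theta_true, lambda, and the noise level are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([1.5, -0.8])   # hypothetical true parameters
theta = np.zeros(2)                  # initial estimate
r, lam = 1.0, 0.97                   # normalizing term and forgetting factor

for k in range(500):
    phi = rng.standard_normal(2)                   # regressor vector
    y = phi @ theta_true + 0.01 * rng.standard_normal()
    r = lam * r + phi @ phi                        # forgetting-factor normalization
    theta = theta + (phi / r) * (y - phi @ theta)  # innovation-driven update
```

The forgetting factor keeps the normalizing term `r` bounded, so the effective step size does not decay to zero and the estimator can track slowly varying parameters.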

    Convex Identification of Stable Dynamical Systems

    This thesis concerns the scalable application of convex optimization to data-driven modeling of dynamical systems, termed system identification in the control community. Two problems commonly arising in system identification are model instability (e.g. unreliability of long-term, open-loop predictions), and nonconvexity of quality-of-fit criteria, such as simulation error (a.k.a. output error). To address these problems, this thesis presents convex parametrizations of stable dynamical systems, convex quality-of-fit criteria, and efficient algorithms to optimize the latter over the former. In particular, this thesis makes extensive use of Lagrangian relaxation, a technique for generating convex approximations to nonconvex optimization problems. Recently, Lagrangian relaxation has been used to approximate simulation error and guarantee nonlinear model stability via semidefinite programming (SDP); however, the resulting SDPs have large dimension, limiting their practical utility. The first contribution of this thesis is a custom interior point algorithm that exploits structure in the problem to significantly reduce computational complexity. The new algorithm enables empirical comparisons to established methods including Nonlinear ARX, in which superior generalization to new data is demonstrated. Equipped with this algorithmic machinery, the second contribution of this thesis is the incorporation of model stability constraints into the maximum likelihood framework. Specifically, Lagrangian relaxation is combined with the expectation maximization (EM) algorithm to derive tight bounds on the likelihood function that can be optimized over a convex parametrization of all stable linear dynamical systems. Two different formulations are presented, one of which gives higher fidelity bounds when disturbances (a.k.a. process noise) dominate measurement noise, and vice versa. Finally, identification of positive systems is considered. Such systems enjoy substantially simpler stability and performance analysis compared to the general linear time-invariant (LTI) case, and appear frequently in applications where physical constraints imply nonnegativity of the quantities of interest. Lagrangian relaxation is used to derive new convex parametrizations of stable positive systems and quality-of-fit criteria, and substantial improvements in accuracy of the identified models, compared to existing approaches based on weighted equation error, are demonstrated. Furthermore, the convex parametrizations of stable systems based on linear Lyapunov functions are shown to be amenable to distributed optimization, which is useful for identification of large-scale networked dynamical systems.
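The stability certificates underlying such convex parametrizations can be illustrated with a discrete-time Lyapunov equation: a model x[k+1] = A x[k] is stable iff A^T P A - P = -Q has a positive definite solution P for some Q > 0. The matrix A and the vectorized linear solve below are a toy numerical check, not the thesis's SDP machinery.

```python
# Lyapunov certificate for discrete-time stability; A is a made-up example.
import numpy as np

def lyapunov_certificate(A, Q=None):
    """Solve A^T P A - P = -Q by vectorization; returns P."""
    n = A.shape[0]
    Q = np.eye(n) if Q is None else Q
    # vec(A^T P A) = kron(A^T, A^T) vec(P), so (I - kron(A^T, A^T)) vec(P) = vec(Q)
    M = np.eye(n * n) - np.kron(A.T, A.T)
    return np.linalg.solve(M, Q.reshape(-1)).reshape(n, n)

A = np.array([[0.8, 0.2], [0.0, 0.5]])  # spectral radius < 1, hence stable
P = lyapunov_certificate(A)
# Positive definiteness of P certifies stability of A
assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)
```

Convex identification methods turn this existence condition around: they search jointly over (A, P) subject to a convex stability constraint, so every feasible model comes with its certificate built in.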

    Learning Multimodal Structures in Computer Vision

    A phenomenon or event can be observed by various kinds of detectors or under different conditions. Each such acquisition framework is a modality of the phenomenon. Due to the relations between the modalities of multimodal phenomena, a single modality cannot fully describe the event of interest. The fact that several modalities report on the same event introduces new challenges compared to exploiting each modality separately. We are interested in designing new algorithmic tools for applying sensor fusion techniques within the particular signal representation of sparse coding, a favorite methodology in signal processing, machine learning, and statistics for representing data. This coding scheme is based on a machine learning technique and has been demonstrated to be capable of representing many modalities, such as natural images. We consider situations where we are interested not only in a sparse support for the model, but also in reflecting a priori knowledge about the application at hand. Our goal is to extract a discriminative representation of the multimodal data that makes it easy to find its essential characteristics in the subsequent analysis step, e.g., regression and classification. More precisely, sparse coding represents signals as linear combinations of a small number of bases from a dictionary. The idea is to learn a dictionary that encodes intrinsic properties of the multimodal data in a decomposition coefficient vector with maximal discriminatory power. We carefully design a multimodal representation framework that learns discriminative feature representations by fully exploiting both the modality-shared information, which is shared by the various modalities, and the modality-specific information content of each individual modality. In addition, it automatically learns the weights for the various feature components in a data-driven scheme. 
    In other words, the physical interpretation of our learning framework is to fully exploit the correlated characteristics of the available modalities, while at the same time leveraging the modality-specific character of each modality and adjusting their corresponding weights for different parts of the feature in recognition.

    A unified metaheuristic and system-theoretic framework for petroleum reservoir management

    With the phenomenal rise in world population as well as robust economic growth in China, India, and other emerging economies, the global demand for energy continues to grow in monumental proportions. Owing to its wide end-use capabilities, petroleum is without doubt the world’s number one energy resource. The present demand for oil and credible future forecasts, which indicate that demand is expected to increase in the coming decades, make it imperative that the E&P industry devise means to improve the present low recovery factor of hydrocarbon reservoirs. Efficiently tailored model-based optimization, estimation, and control techniques within the ambit of a closed-loop reservoir management framework can play a significant role in achieving this objective. In this thesis, fundamental reservoir engineering problems such as field development planning, production scheduling, and control are formulated as different optimization problems. In this regard, field development optimization identifies the well placements that best maximize hydrocarbon recovery; production optimization identifies reservoir well settings that maximize total oil recovery or asset value; and, finally, a predictive controller algorithm computes corrected well controls that minimize the difference between actual outputs and a simulated (or optimal) reference trajectory. We employ either deterministic or metaheuristic optimization algorithms, with the choice of algorithm based purely on the peculiarity of the underlying optimization problem. Altogether, we present a unified metaheuristic and system-theoretic framework for petroleum reservoir management. 
    The proposed framework is essentially a closed-loop reservoir management approach with four key elements, namely: a new metaheuristic technique for field development optimization, a gradient-based adjoint formulation for well-rate control, an effective predictive control strategy for tracking the gradient-based optimal production trajectory, and an efficient model-updating (or history matching) procedure in which well production data are used to systematically recalibrate reservoir model parameters in order to minimize the mismatch between actual and simulated measurements. Central to all of these problems is the use of white-box reservoir models, which are employed in the well placement and production settings optimization. However, a simple data-driven black-box model, resulting from the linearization of an identified nonlinear model, is employed in the predictive controller algorithm. The benefits and efficiency of the approach are demonstrated through the maximization of the NPV of waterflooded reservoir models subject to production and geological uncertainty. Our procedure provides an improvement in the NPV, and, importantly, the predictive control algorithm ensures that this improved NPV is attainable as nearly as possible in practice.
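The NPV objective at the heart of the production optimization can be sketched as a discounted cash-flow sum; the yearly cash flows and discount rate below are hypothetical numbers, not values from the thesis.

```python
# Net present value as a discounted cash-flow sum (all figures invented).
def npv(cash_flows, rate=0.1):
    """NPV of yearly cash flows; index 0 is received at the end of year 1."""
    return sum(c / (1 + rate) ** (t + 1) for t, c in enumerate(cash_flows))

# e.g. oil revenue minus water-handling cost per year, in $MM (hypothetical)
yearly = [120.0, 95.0, 70.0, 50.0]
value = npv(yearly, rate=0.08)
```

In a waterflooding setting, each year's cash flow is itself a function of the well controls via the reservoir simulator, which is what turns this simple sum into a challenging simulation-based optimization problem.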