
    Meta-State-Space Learning: An Identification Approach for Stochastic Dynamical Systems

    Available methods for identification of stochastic dynamical systems from input-output data generally impose restrictive structural assumptions on either the noise structure in the data-generating system or the possible state probability distributions. In this paper, we introduce a novel identification method for such systems, which results in a dynamical model that is able to produce the time-varying output distribution accurately without imposing restrictive assumptions on the data-generating process. The method is formulated by first deriving a novel and exact representation of a wide class of nonlinear stochastic systems in a so-called meta-state-space form, where the meta-state can be interpreted as a parameter vector of a state probability function space parameterization. As the resulting representation of the meta-state dynamics is deterministic, we can capture the stochastic system with a deterministic model, which is highly attractive for identification. The meta-state-space representation often involves unknown and heavily nonlinear functions; hence, we propose an artificial neural network (ANN)-based identification method capable of efficiently learning nonlinear meta-state-space models. We demonstrate that the proposed identification method can obtain models with a log-likelihood close to the theoretical limit even for highly nonlinear, highly stochastic systems.
    Comment: Submitted to Automatica
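A minimal sketch of the meta-state idea, using my own toy example rather than the paper's algorithm: for a scalar linear stochastic system x[k+1] = a*x[k] + w[k] with Gaussian noise, the state distribution stays Gaussian, so the "meta-state" (mu, var) parameterizing N(mu, var) evolves deterministically even though x itself is random.

```python
def meta_step(mu, var, a=0.9, q=0.04):
    """Deterministic meta-state update for the Gaussian parameterization
    of x[k+1] = a*x[k] + w[k], w ~ N(0, q)."""
    return a * mu, a * a * var + q

# initial state known exactly: the distribution is N(1, 0)
mu, var = 1.0, 0.0
for _ in range(3):
    mu, var = meta_step(mu, var)
# (mu, var) now parameterizes the exact state distribution at k = 3
```

In the paper's general nonlinear setting the meta-state update is an unknown nonlinear function (hence the ANN), but the deterministic structure illustrated here is what makes it identifiable with deterministic-model tools.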

    Global parameter identification and control of nonlinearly parameterized systems

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2002. Includes bibliographical references (leaves 109-114).
    Nonlinearly parameterized (NLP) systems are ubiquitous in nature and in many fields of science and engineering. Despite the wide and diverse range of applications, there exist relatively few results in the control systems literature which exploit the structure of the nonlinear parameterization. The vast majority of presently applicable global control design approaches for NLP systems either make use of feedback linearization or assume linear parameterization, ignoring the specific structure of the nonlinear parameterization. While such approaches may guarantee stability, they introduce three major drawbacks. First, they produce no additional information about the nonlinear parameters. Second, they may require large control authority and actuator bandwidth, which makes them unsuitable for some applications. Third, they may simply result in unacceptably poor performance. All of these inadequacies are amplified further when parametric uncertainties are present. What is needed is a systematic adaptive approach to identification and control of such systems that explicitly accommodates the presence of nonlinear parameters that may not be known precisely. This thesis presents results in both adaptive identification and control of NLP systems. An adaptive controller is presented for NLP systems with a triangular structure. The presence of the triangular structure together with the nonlinear parameterization makes standard methods such as back-stepping and variable structure control inapplicable. A concept of bounding functions is combined with min-max adaptation strategies and a recursive error formulation to produce a globally stabilizing controller. A large class of nonlinear systems, including cascaded LNL (linear-nonlinear-linear) systems, is shown to be controllable using this approach.
In the context of parameter identification, results are derived for two classes of NLP systems. The first concerns systems with convex/concave parameterization, where min-max algorithms are essential for global stability. Stronger conditions of persistent excitation are shown to be necessary to overcome the presence of multiple equilibrium points introduced by the stabilization aspects of the min-max algorithms. These conditions imply that the min-max estimator must periodically employ local gradient information in order to guarantee parameter convergence. The second class of NLP systems considered in this thesis concerns monotonically parameterized systems, of which neural networks are a specific example. It is shown that a simple algorithm based on local gradient information suffices for parameter identification. Conditions on the external input under which the parameter estimates converge to the desired set, starting from arbitrary values, are derived. The proof makes direct use of the monotonicity in the parameters, which allows local gradients to be self-similar and thereby introduces a desirable invariance property. By suitably exploiting this invariance property and defining a sequence of distance metrics, global convergence is proved. Such a proof of global convergence is in contrast to most other existing results in the area of nonlinear parameterization in general, and neural networks in particular.
    by Aleksandar M. Kojić. Ph.D.
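A hypothetical illustration of why monotone parameterizations are benign for gradient-based identification (my own toy model, not the thesis's algorithm): the map y = exp(theta*u) is monotone in theta for u > 0, and a plain local-gradient update on the squared output error converges from an arbitrary initial estimate under a persistently exciting input.

```python
import math
import random

random.seed(0)
theta_true, theta_hat = 0.7, -2.0   # unknown parameter and initial guess
for _ in range(2000):
    u = random.uniform(0.1, 1.0)            # persistently exciting input
    y = math.exp(theta_true * u)            # noise-free measured output
    y_hat = math.exp(theta_hat * u)         # model output
    grad = (y_hat - y) * u * y_hat          # d/dtheta of 0.5*(y_hat - y)**2
    theta_hat -= 0.1 * grad                 # local gradient step
# theta_hat has converged close to theta_true
```

For convex/concave parameterizations the thesis's point is precisely that such a naive gradient scheme is not enough and min-max estimation with periodic gradient phases is needed; the monotone case above is the easy regime.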

    Stochastic control system parameter identifiability

    The parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian white measurement errors is considered. The system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived, and a computational procedure employing interval arithmetic was provided for finding such regions. If the vector of true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one the vector of true parameters is the unique maximal point of the likelihood function in the region of parameter identifiability, and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.
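A rough numerical sketch of local identifiability (my own finite-difference check, not the abstract's interval-arithmetic procedure): a parameter vector is locally identifiable at a point if the sensitivity of the noise-free outputs to the parameters, i.e. the output Jacobian, has full column rank there. The toy system y[k] = theta1 * theta2**k below is hypothetical.

```python
def outputs(theta, n=5):
    """Noise-free output sequence of a hypothetical scalar system."""
    t1, t2 = theta
    return [t1 * t2 ** k for k in range(n)]

def jacobian(theta, eps=1e-6):
    """Forward-difference output sensitivities, one column per parameter."""
    base = outputs(theta)
    cols = []
    for i in range(len(theta)):
        pert = list(theta)
        pert[i] += eps
        cols.append([(a - b) / eps for a, b in zip(outputs(pert), base)])
    return cols

J = jacobian([1.0, 0.5])
# the two columns are linearly independent (positive Gram determinant),
# so (theta1, theta2) is locally identifiable at this point
```

A rank-deficient Jacobian would instead signal a direction in parameter space along which the outputs, and hence the likelihood, are locally flat.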

    Stable Nonlinear Identification From Noisy Repeated Experiments via Convex Optimization

    This paper introduces new techniques for using convex optimization to fit input-output data to a class of stable nonlinear dynamical models. We present an algorithm that guarantees consistent estimates of models in this class when a small set of repeated experiments with suitably independent measurement noise is available. Stability of the estimated models is guaranteed without any assumptions on the input-output data. We first present a convex optimization scheme for identifying stable state-space models from empirical moments. Next, we provide a method for using repeated experiments to remove the effect of noise on these moment and model estimates. The technique is demonstrated on a simple simulated example.
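A sketch of the repeated-experiment idea in its simplest form (my own illustration, not the paper's estimator): with two repeats y1, y2 of the same trajectory x corrupted by independent noise, the cross-moment mean(y1*y2) estimates E[x^2] without the noise-variance bias that the single-experiment moment mean(y1*y1) carries, because the independent noise terms average out of the cross products.

```python
import random

random.seed(1)
n = 200_000
x = [random.gauss(0.0, 1.0) for _ in range(n)]       # true trajectory
y1 = [xi + random.gauss(0.0, 0.5) for xi in x]       # experiment repeat 1
y2 = [xi + random.gauss(0.0, 0.5) for xi in x]       # experiment repeat 2

biased = sum(a * a for a in y1) / n                  # E[x^2] + noise variance
cross = sum(a * b for a, b in zip(y1, y2)) / n       # E[x^2], bias removed
```

Here `biased` overshoots by the noise variance (0.25) while `cross` concentrates around the true second moment of 1.0; the paper applies this cancellation to the empirical moments feeding its convex identification scheme.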

    Parameter Estimation of Sigmoid Superpositions: Dynamical System Approach

    The superposition of sigmoid functions over a finite time interval is shown to be equivalent to a linear combination of the solutions of a linearly parameterized system of logistic differential equations. Due to the linearity with respect to the parameters of the system, it is possible to design an effective procedure for parameter adjustment. Stability properties of this procedure are analyzed. Strategies shown in earlier studies to facilitate learning, such as randomizing the learning sequence and adding specially designed disturbances during the learning phase, are shown to be requirements for guaranteeing convergence in the proposed learning scheme.
    Comment: 30 pages, 7 figures
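The equivalence the abstract rests on can be checked in a few lines: it is a standard fact that the sigmoid s(t) = 1/(1 + exp(-(t - c))) solves the logistic differential equation s' = s*(1 - s), so a superposition of shifted sigmoids is a linear combination of logistic-equation solutions. A quick finite-difference verification:

```python
import math

def sigmoid(t, c=0.0):
    """Shifted sigmoid; solves s' = s*(1 - s) for any shift c."""
    return 1.0 / (1.0 + math.exp(-(t - c)))

# compare the numerical derivative with the logistic right-hand side
t, h = 0.7, 1e-6
deriv = (sigmoid(t + h) - sigmoid(t - h)) / (2 * h)
# deriv matches sigmoid(t) * (1 - sigmoid(t)) to finite-difference accuracy
```

It is this reformulation as an ODE system linear in its parameters that lets the paper bring adaptive-systems tools to bear on fitting the sigmoid weights.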