90 research outputs found

    Linear System Identification - A Survey

    In this paper we give an introductory survey of the theory of identification of (in general MIMO) linear systems from (discrete) time series data. The main parts are: structure theory for linear systems; asymptotic properties of maximum likelihood type estimators; estimation of the dynamic specification by methods based on information criteria; and, finally, extensions and alternative approaches such as identification of unstable systems and errors-in-variables.
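    One of the survey's main parts, estimating the dynamic specification by an information criterion, can be sketched concretely. The following is a minimal, hedged illustration (not the survey's own procedures): a scalar AR model is fitted by least squares at several orders and the order is selected by a BIC-type criterion. All numbers and helper names are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated AR(2) data: y_t = 0.5*y_{t-1} - 0.3*y_{t-2} + e_t  (invented example)
n = 2000
e = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + e[t]

def fit_ar(y, p):
    """Least-squares fit of an AR(p) model; returns coefficients and residual variance."""
    Y = y[p:]
    X = np.column_stack([y[p - k:len(y) - k] for k in range(1, p + 1)])
    coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coeffs, np.var(Y - X @ coeffs)

def bic(y, p):
    """BIC-type information criterion for the fitted AR(p) model."""
    _, s2 = fit_ar(y, p)
    n_eff = len(y) - p
    return n_eff * np.log(s2) + p * np.log(n_eff)

# Estimate the dynamic specification (the order p) by minimizing the criterion.
best_p = min(range(1, 8), key=lambda p: bic(y, p))
print("selected order:", best_p)
```

    With this much data the criterion recovers the true order 2; on short series, information criteria trade off fit against the penalty term and may under- or overestimate the order.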

    Time series econometrics


    On continuity of l-infinite optimal models


    System Identification. Paper Presented on IIASA's 20th Anniversary

    IIASA celebrated its twentieth anniversary on May 12-13 with its fourth general conference, IIASA '92: An International Conference on the Challenges to Systems Analysis in the Nineties and Beyond. The conference focused on the relations between environment and development and on studies that integrate the methods and findings of several disciplines. The role of systems analysis, a method especially suited to taking account of the linkages between phenomena and of the hierarchical organization of the natural and social world, was also assessed, along with the implications this has for IIASA's research approach and activities. This paper is one of six IIASA Collaborative Papers published as part of the report on the conference, an earlier instalment of which was Science and Sustainability, published in 1992.

    The term "identification" came into use by economists in the late 1920s, but the general idea has existed at least as long as the use of mathematics in science, and is applicable to natural as well as social phenomena. Identification is finding the underlying structure that generated the observed data. In fact we never find the structure itself, but at best a model that is uniquely capable of doing what the structure does. Usually identification is sought by solving for the values of parameters in a given set of equations -- often linear equations. Professor Deistler would broaden the search beyond finding the right coefficients in a set of linear equations; his method permits the use of intuition as well as fitting to find the most likely model.

    Group equivariant neural posterior estimation

    Simulation-based inference with conditional neural density estimators is a powerful approach to solving inverse problems in science. However, these methods typically treat the underlying forward model as a black box, with no way to exploit geometric properties such as equivariances. Equivariances are common in scientific models; however, integrating them directly into expressive inference networks (such as normalizing flows) is not straightforward. We here describe an alternative method to incorporate equivariances under joint transformations of parameters and data. Our method -- called group equivariant neural posterior estimation (GNPE) -- is based on self-consistently standardizing the "pose" of the data while estimating the posterior over parameters. It is architecture-independent, and applies both to exact and approximate equivariances. As a real-world application, we use GNPE for amortized inference of astrophysical binary black hole systems from gravitational-wave observations. We show that GNPE achieves state-of-the-art accuracy while reducing inference times by three orders of magnitude.
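    As a hedged toy illustration of the self-consistent pose standardization idea only (the actual method uses normalizing flows on gravitational-wave data), the sketch below treats a translation as the "pose" and replaces the inference network with a simple centroid estimator; every name and number is invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented toy forward model: a Gaussian bump whose location mu is the "pose".
grid = np.linspace(-10, 10, 401)
def simulate(mu):
    return np.exp(-0.5 * (grid - mu) ** 2) + 0.05 * rng.standard_normal(grid.size)

true_mu = 3.7
x_obs = simulate(true_mu)

def centroid(signal):
    # Stand-in for an inference network: locate the bump in standardized data,
    # thresholding out the noise floor first.
    w = np.clip(signal - 0.2, 0.0, None)
    return float((grid * w).sum() / w.sum())

# Self-consistent pose standardization: shift the data by the current pose
# proxy, re-estimate the residual pose on the standardized data, update.
mu_hat = 0.0
for _ in range(10):
    x_std = np.interp(grid, grid - mu_hat, x_obs)  # data shifted by -mu_hat
    mu_hat += centroid(x_std)

print("estimated pose:", round(mu_hat, 2))
```

    After standardization the bump always sits near the origin, so the downstream estimator only ever sees data in a canonical pose, which is the property GNPE exploits for much richer models.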

    Training deep neural density estimators to identify mechanistic models of neural dynamics

    Mechanistic modeling in neuroscience aims to explain observed phenomena in terms of underlying causes. However, determining which model parameters agree with complex and stochastic neural data presents a significant challenge. We address this challenge with a machine learning tool that uses deep neural density estimators -- trained using model simulations -- to carry out Bayesian inference and retrieve the full space of parameters compatible with raw data or selected data features. Our method is scalable in parameters and data features, and can rapidly analyze new data after initial training. We demonstrate the power and flexibility of our approach on receptive fields, ion channels, and Hodgkin-Huxley models. We also characterize the space of circuit configurations giving rise to rhythmic activity in the crustacean stomatogastric ganglion, and use these results to derive hypotheses for underlying compensation mechanisms. Our approach will help close the gap between data-driven and theory-driven models of neural dynamics.
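    The core simulate-and-compare loop of simulation-based inference can be sketched without neural networks; below, plain rejection sampling stands in for the trained deep density estimator, which is far cruder than the paper's method but shows the idea of retrieving parameters compatible with the data. The model, prior, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented mechanistic toy model: exponential decay with unknown rate theta.
t = np.linspace(0.0, 5.0, 50)
def simulate(theta):
    return np.exp(-theta * t) + 0.05 * rng.standard_normal(t.size)

theta_true = 0.8
x_obs = simulate(theta_true)

# Draw parameters from the prior, simulate, and keep the draws whose
# simulations land closest to the observed data (rejection sampling here
# stands in for the trained neural density estimator).
prior_draws = rng.uniform(0.1, 2.0, size=20_000)
dists = np.array([np.linalg.norm(simulate(th) - x_obs) for th in prior_draws])
accepted = prior_draws[dists < np.quantile(dists, 0.01)]

print("posterior mean ~", round(float(accepted.mean()), 2))
```

    The accepted draws approximate the posterior over theta; the paper's contribution is to replace this wasteful accept/reject step with a density estimator that, once trained, evaluates new data rapidly.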

    Identifiability of Structural Singular Vector Autoregressive Models

    We generalize well-known results on structural identifiability of vector autoregressive (VAR) models to the case where the innovation covariance matrix has reduced rank. Singular structural VAR models appear, for example, as solutions of rational expectation models where the number of shocks is usually smaller than the number of endogenous variables, and as an essential building block in dynamic factor models. We show that order conditions for identifiability are misleading in the singular case and we provide a rank condition for identifiability of the noise parameters. Since the Yule-Walker (YW) equations may have multiple solutions, we analyse the effect of restricting system parameters on over- and underidentification in detail and provide easily verifiable conditions.
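    In the regular (nonsingular) case that the paper generalizes, the Yule-Walker step for a VAR(1) has a unique solution via Gamma(1) = A Gamma(0). A hedged sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented bivariate VAR(1): y_t = A y_{t-1} + u_t, nonsingular innovations.
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
n = 5000
u = rng.standard_normal((n, 2))
y = np.zeros((n, 2))
for t in range(1, n):
    y[t] = A @ y[t - 1] + u[t]

# Sample autocovariances Gamma(0) and Gamma(1).
yc = y - y.mean(axis=0)
g0 = yc.T @ yc / n
g1 = yc[1:].T @ yc[:-1] / n

# Yule-Walker: Gamma(1) = A Gamma(0)  =>  A = Gamma(1) Gamma(0)^{-1}.
A_hat = g1 @ np.linalg.inv(g0)
print(np.round(A_hat, 2))
```

    With reduced-rank innovations, Gamma(0) can fail to be invertible and the Yule-Walker equations may have multiple solutions, which is the complication the paper analyses.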