
    Prediction error identification of linear dynamic networks with rank-reduced noise

    Dynamic networks are interconnected dynamic systems with measured node signals and dynamic modules reflecting the links between the nodes. We address the problem of identifying a dynamic network with known topology, on the basis of measured signals, for the situation of additive process noise on the node signals that is spatially correlated and that is allowed to have a singular spectral density. A prediction error approach is followed in which all node signals in the network are jointly predicted. The resulting joint-direct identification method generalizes the classical direct method for closed-loop identification to handle situations of mutually correlated noise on inputs and outputs. When applied to general dynamic networks with rank-reduced noise, the natural identification criterion becomes a weighted least-squares criterion that is subject to a constraint. This constrained criterion is shown to lead to maximum likelihood estimates of the dynamic network and therefore to minimum variance properties, reaching the Cramér-Rao lower bound in the case of Gaussian noise. Comment: 17 pages, 5 figures, revision submitted for publication in Automatica, 4 April 201
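    A minimal numerical sketch of the rank-reduced noise setting described in this abstract (not taken from the paper; the mixing matrix and sample size are made-up): a vector noise process driven by fewer white noise sources than measured channels has a singular covariance, which is what forces the constrained weighted least-squares criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: a 2-dimensional process noise driven by a single
# white noise source, so its spectral density has rank 1 (rank-reduced).
N = 10_000
e = rng.standard_normal(N)          # scalar white noise source
Gamma = np.array([[1.0], [0.7]])    # mixing matrix: 2 channels, 1 source
v = Gamma @ e[None, :]              # shape (2, N): spatially correlated noise

# The 2x2 covariance of v is singular (rank 1), so a standard weighted
# least-squares criterion with an inverse covariance weight breaks down.
cov = np.cov(v)
print(np.linalg.matrix_rank(cov, tol=1e-8))   # -> 1
```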

    Single module identifiability in linear dynamic networks

    A recent development in data-driven modelling addresses the problem of identifying dynamic models of interconnected systems, represented as linear dynamic networks. For these networks the notion of network identifiability has been introduced recently, which reflects the property that different network models can be distinguished from each other. Network identifiability is extended to cover the uniqueness of a single module in the network model. Conditions for single module identifiability are derived and formulated in terms of path-based topological properties of the network models. Comment: 6 pages, 2 figures, submitted to Control Systems Letters (L-CSS) and the 57th IEEE Conference on Decision and Control (CDC)

    Abstractions of linear dynamic networks for input selection in local module identification

    In abstractions of linear dynamic networks, selected node signals are removed from the network, while keeping the remaining node signals invariant. The topology and link dynamics, or modules, of an abstracted network will generally be changed compared to the original network. Abstractions of dynamic networks can be used to select an appropriate set of node signals that are to be measured, on the basis of which a particular local module can be estimated. A method is introduced for network abstraction that generalizes previously introduced algorithms, e.g. immersion and the method of indirect inputs. For this abstraction method it is shown under which conditions on the selected signals a particular module will remain invariant. This leads to sets of conditions on selected measured node variables that allow identification of the target module. Comment: 17 pages, 7 figures. Paper to appear in Automatica, Vol. 117, July 202
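    A static (non-dynamic) sketch of the immersion idea mentioned in this abstract, with made-up numbers: eliminating one node from a network y = G y + r changes the modules among the kept nodes, while the kept node signals themselves stay invariant.

```python
import numpy as np

# Hypothetical 3-node static network y = G @ y + r; remove node 2
# by immersion while keeping nodes 0 and 1 invariant.
G = np.array([[0.0, 0.2, 0.5],
              [0.3, 0.0, 0.0],
              [0.1, 0.4, 0.0]])
r = np.array([1.0, 0.5, 0.2])
y_full = np.linalg.solve(np.eye(3) - G, r)

K, Z = [0, 1], [2]                          # kept nodes / removed node
GKK, GKZ = G[np.ix_(K, K)], G[np.ix_(K, Z)]
GZK, GZZ = G[np.ix_(Z, K)], G[np.ix_(Z, Z)]
S = np.linalg.inv(np.eye(len(Z)) - GZZ)     # (I - G_ZZ)^{-1}

# Abstracted network: the removed node is absorbed into new modules.
G_abs = GKK + GKZ @ S @ GZK
r_abs = r[K] + GKZ @ S @ r[Z]
y_abs = np.linalg.solve(np.eye(len(K)) - G_abs, r_abs)

# Kept node signals are unchanged, but the modules between them are not.
assert np.allclose(y_abs, y_full[K])
assert not np.allclose(G_abs, GKK)
```

This illustrates why abstraction is useful for local module identification: the kept measurements are preserved exactly, but one must check under which conditions a target module (an entry of G) survives into G_abs unchanged.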

    Identifiability and Identification Methods for Dynamic Networks

    Identification of dynamic networks operating in the presence of algebraic loops

    When identifying all modules in a dynamic network it is natural to treat all node variables in a symmetric way, i.e. not having pre-assigned roles of `inputs' and `outputs'. In a prediction error setting this implies that every node signal is predicted on the basis of all other nodes. A usual restriction in direct and joint-io methods for dynamic network and closed-loop identification is the need for a delay to be present in every loop (absence of algebraic loops). It is shown that the classical one-step-ahead predictor that incorporates direct feedthrough terms in models cannot be used in a dynamic network setting. It has to be replaced by a network predictor, for which consistency results are shown when applied in a direct identification method. The result is a one-stage direct/joint-io method that can handle the presence of algebraic loops. It is illustrated that the identified models have improved variance properties over instrumental variable estimation methods.
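    A toy numerical sketch of what an algebraic loop means (hypothetical gains, not from the paper): with delay-free feedthrough in every link of a loop, the current node values cannot be propagated one step ahead and must instead be solved for simultaneously.

```python
import numpy as np

# Hypothetical two-node algebraic loop with direct (delay-free)
# feedthrough gains: y1 = g12*y2 + e1 and y2 = g21*y1 + e2.
g12, g21 = 0.5, 0.4
G0 = np.array([[0.0, g12],
               [g21, 0.0]])
e = np.array([1.0, 2.0])

# Well-posedness of the loop requires I - G0 to be invertible:
assert abs(1.0 - g12 * g21) > 1e-12

# No delay anywhere in the loop, so the node values at the current
# sample are obtained by solving the implicit equation y = G0 y + e.
y = np.linalg.solve(np.eye(2) - G0, e)
print(y)   # -> [2.5 3. ]
```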

    Prediction error identification with rank-reduced output noise

    In data-driven modelling in dynamic networks, it is commonly assumed that all measured node variables in the network are noise-disturbed and that the network (vector) noise process is full rank. However, when the scale of the network increases, this full rank assumption may not be realistic, as noises on different node signals can be strongly correlated. In this paper it is analyzed how a prediction error method can deal with a noise disturbance whose dimension is strictly larger than the number of white noise signals that is required to generate it (rank-reduced noise). Based on maximum likelihood considerations, an appropriate prediction error identification criterion is derived and consistency is shown, while variance results are demonstrated in a simulation example.
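    A sketch of one consequence of rank-reduced noise (illustrative numbers, not from the paper): when a 2-dimensional disturbance is generated by a single white source, there is a direction in which the noise vanishes exactly, so part of the data satisfies an exact relation; such exact relations are what push the likelihood-based criterion toward a weighted least-squares problem subject to a constraint.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical rank-reduced disturbance: v = Gamma @ e with 2 channels
# generated by a single white noise source e.
Gamma = np.array([[1.0], [0.7]])
n = np.array([0.7, -1.0])            # left null vector: n @ Gamma == 0
e = rng.standard_normal(5000)
v = Gamma @ e[None, :]               # shape (2, 5000)

# In the null direction the disturbance cancels exactly, giving a
# noise-free (constraint-like) relation between the channels.
residual = n @ v
print(np.max(np.abs(residual)))      # zero up to round-off
```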

    From closed-loop identification to dynamic networks: Generalization of the direct method

    Identification methods for identifying (modules in) dynamic cyclic networks are typically based on the standard methods that are available for identification of dynamic systems in closed loop. The commonly used direct method for closed-loop prediction error identification is one of the available tools. In this paper we show the consequences of using the direct method under conditions that are more general than the classical closed-loop case. We do so by focusing on a simple two-node (feedback) network to which we add additional disturbances, excitation signals and sensor noise. The direct method loses consistency when correlated disturbances are present on node signals, or when sensor noise is present. A generalization of the direct method, the joint-direct method, is explored, which is based on a vector predictor and includes a conditioning on external excitation signals. It is shown to be able to cope with the above situations and to retain consistency of the module estimates.
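    A static toy simulation of the two-node feedback situation in this abstract (all gains and noise levels are made up): when the disturbances on the two nodes are correlated, a naive direct least-squares fit of one node on the other is badly biased away from the true module gain.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-node feedback network with static modules:
#   y1 = g_true * y2 + v1,   y2 = f_true * y1 + v2,
# where v1 and v2 share a common component (correlated disturbances).
g_true, f_true = 0.5, 0.3
N = 100_000
e = rng.standard_normal(N)
v1 = e + 0.2 * rng.standard_normal(N)
v2 = 0.8 * e                             # strongly correlated with v1

den = 1.0 - g_true * f_true              # solve the loop per sample
y1 = (v1 + g_true * v2) / den
y2 = (f_true * v1 + v2) / den

# "Direct-method style" least squares of y1 on y2: noticeably biased.
g_hat = np.dot(y1, y2) / np.dot(y2, y2)
print(g_hat)   # far from g_true = 0.5 because of the noise correlation
```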