
    On algebraic time-derivative estimation and deadbeat state reconstruction

    This note places the so-called algebraic time-derivative estimation method, recently introduced by Fliess and co-authors, into perspective with standard results from linear state-space theory for control systems. In particular, it is shown that the algebraic method can, in a sense, be seen as a special case of deadbeat state estimation based on the reconstructibility Gramian of the considered system.
    Comment: Maple supplements available at https://www.tu-ilmenau.de/regelungstechnik/mitarbeiter/johann-reger
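
    For context, here is a minimal sketch (our notation, not the paper's) of deadbeat reconstruction via the reconstructibility Gramian for a generic observable LTI system \dot{x}(t) = A x(t) with output y(t) = C x(t). Over a window of length T, define

        N_T = \int_{t-T}^{t} e^{A^\top (\tau - t)} C^\top C \, e^{A (\tau - t)} \, d\tau .

    Since y(\tau) = C e^{A(\tau - t)} x(t), multiplying by e^{A^\top(\tau - t)} C^\top and integrating yields N_T x(t), so whenever N_T is invertible the current state is recovered exactly after the finite time T (hence "deadbeat"):

        x(t) = N_T^{-1} \int_{t-T}^{t} e^{A^\top (\tau - t)} C^\top y(\tau) \, d\tau .

    Time-derivative estimation fits this template by letting x collect a signal and its first few derivatives in an integrator-chain model, so that reconstructing x(t) yields the derivative estimates.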

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimization: Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.
    Comment: 232 pages
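
    As a concrete illustration of the tensor train (TT) format discussed above, the following is a toy TT-SVD sketch in Python/NumPy (our own illustration, not code from the monograph): it factorizes a dense tensor into a chain of third-order cores by sequential truncated SVDs, with the cap max_rank on the TT-ranks controlling the compression.

        import numpy as np

        def tt_svd(tensor, max_rank):
            """Toy TT-SVD: factor a dense tensor into third-order TT cores
            via sequential truncated SVDs."""
            dims = tensor.shape
            cores, rank, mat = [], 1, tensor
            for k in range(len(dims) - 1):
                # Unfold: rows index (incoming rank x current mode).
                mat = mat.reshape(rank * dims[k], -1)
                u, s, vt = np.linalg.svd(mat, full_matrices=False)
                r = min(max_rank, len(s))  # truncate the TT-rank
                cores.append(u[:, :r].reshape(rank, dims[k], r))
                mat = s[:r, None] * vt[:r, :]  # carry the remainder forward
                rank = r
            cores.append(mat.reshape(rank, dims[-1], 1))  # final core
            return cores

        # Usage: compress a random 8x8x8x8 tensor with TT-ranks capped at 4.
        x = np.random.rand(8, 8, 8, 8)
        print([c.shape for c in tt_svd(x, max_rank=4)])
        # -> [(1, 8, 4), (4, 8, 4), (4, 8, 4), (4, 8, 1)]

    Contracting the cores in sequence reproduces an approximation of the original tensor, while storage drops from the product of all dimensions to a sum of small core sizes.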

    Identifiability of large nonlinear biochemical networks

    Dynamic models formulated as a set of ordinary differential equations provide a detailed description of the time-evolution of a system. Such models of (bio)chemical reaction networks have contributed to important advances in biotechnology and biomedical applications, and their impact is foreseen to increase in the near future. Hence, the task of dynamic model building has attracted much attention from scientists working at the intersection of biochemistry, systems theory, mathematics, and computer science, among other disciplines, in an area sometimes called systems biology. Before a model can be effectively used, the values of its unknown parameters have to be estimated from experimental data. A necessary condition for parameter estimation is identifiability, the property that, for a certain output, there exists a unique (or finite) set of parameter values that produces it. Identifiability can be analysed from two complementary points of view: structural (which searches for symmetries in the model equations that may prevent parameters from being uniquely determined) or practical (which focuses on the limitations introduced by the quantity and quality of the data available for parameter estimation). Both types of analyses are often difficult for nonlinear models, and their complexity increases rapidly with the problem size. Hence, assessing the identifiability of realistic dynamic models of biochemical networks remains a challenging task. Although many methods have been developed for this purpose, it is still an open problem and an active area of research. Here we review the theory and tools available for the study of identifiability, and discuss some closely related concepts such as sensitivity to parameter perturbations, observability, distinguishability, and optimal experimental design, among others.
    This work was funded by the Galician government (Xunta de Galiza) through the I2C postdoctoral program (fellowship ED481B2014/133-0), and by the Spanish Ministry of Economy and Competitiveness (grant DPI2013-47100-C2-2-P).
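
    As a minimal illustration of structural non-identifiability (a toy example of ours, not taken from the review), consider

        \dot{x}(t) = -p_1 p_2 \, x(t), \qquad x(0) = x_0, \qquad y(t) = x(t),

    whose output is y(t) = x_0 e^{-p_1 p_2 t}. The scaling symmetry (p_1, p_2) \mapsto (\lambda p_1, p_2/\lambda) leaves y unchanged for every \lambda \neq 0, so neither parameter is identifiable on its own; only the product p_1 p_2 is. Symmetry-searching structural methods detect exactly this kind of degeneracy before any data are collected.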