
    Randomized Dynamic Mode Decomposition

    This paper presents a randomized algorithm for computing the near-optimal low-rank dynamic mode decomposition (DMD). Randomized algorithms are emerging techniques to compute low-rank matrix approximations at a fraction of the cost of deterministic algorithms, easing the computational challenges arising in the area of 'big data'. The idea is to derive a small matrix from the high-dimensional data, which is then used to efficiently compute the dynamic modes and eigenvalues. The algorithm is presented in a modular probabilistic framework, and the approximation quality can be controlled via oversampling and power iterations. The effectiveness of the resulting randomized DMD algorithm is demonstrated on several benchmark examples of increasing complexity, providing an accurate and efficient approach to extract spatiotemporal coherent structures from big data in a framework that scales with the intrinsic rank of the data rather than the ambient measurement dimension. For this work we assume that the dynamics of the problem under consideration evolve on a low-dimensional subspace that is well characterized by a fast-decaying singular value spectrum.
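
    The two-stage procedure sketched above is straightforward to prototype. Below is a minimal numpy sketch, assuming snapshot pairs X and Y with Y holding the time-shifted snapshots; the function name, default parameters, and the un-orthogonalized power iteration are illustrative choices, not the paper's reference implementation.

```python
import numpy as np

def rdmd(X, Y, rank, p=10, q=2, seed=None):
    """Randomized DMD on snapshot pairs (X, Y), Y being the time-shifted
    snapshots. rank: target rank, p: oversampling, q: power iterations."""
    rng = np.random.default_rng(seed)

    # Stage A: randomized range finder for the column space of X.
    Omega = rng.standard_normal((X.shape[1], rank + p))
    Z = X @ Omega
    for _ in range(q):       # power iterations sharpen the basis;
        Z = X @ (X.T @ Z)    # re-orthogonalize between passes in practice
    Q, _ = np.linalg.qr(Z)

    # Stage B: project the snapshots and run deterministic DMD on the sketch.
    Xs, Ys = Q.T @ X, Q.T @ Y
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    Atilde = U.T @ Ys @ Vt.T / s        # small evolution operator
    evals, W = np.linalg.eig(Atilde)

    # Lift the dynamic modes back to the ambient measurement dimension.
    Phi = Q @ (Ys @ Vt.T / s) @ W
    return evals, Phi
```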

    Nonlinear model order reduction via dynamic mode decomposition

    We propose a new technique for obtaining reduced order models for nonlinear dynamical systems. Specifically, we advocate the use of the recently developed dynamic mode decomposition (DMD), an equation-free method, to approximate the nonlinear term. DMD is a spatio-temporal matrix decomposition of a data matrix that correlates spatial features while simultaneously associating the activity with periodic temporal behavior. With this decomposition, one can obtain a fully reduced dimensional surrogate model and avoid the evaluation of the nonlinear term in the online stage. This allows for a reduction in the computational cost and, at the same time, accurate approximations of the problem. We present a suite of numerical tests to illustrate our approach and to show the effectiveness of the method in comparison to existing approaches.
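
    As a hedged illustration of approximating the nonlinear term with DMD, the hypothetical helper below fits plain exact DMD to snapshots of that term and returns a time-stepper, so the online stage never evaluates the nonlinearity itself; it is a sketch of the idea, not the paper's full surrogate construction.

```python
import numpy as np

def dmd_surrogate(N, r):
    """Fit exact DMD to snapshots N = [f(x_0), ..., f(x_m)] of a nonlinear
    term and return a cheap predictor for it at step k."""
    X, Y = N[:, :-1], N[:, 1:]
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]
    Atilde = U.T @ Y @ Vt.T / s
    lam, W = np.linalg.eig(Atilde)          # DMD eigenvalues
    Phi = Y @ Vt.T / s @ W                  # DMD modes of the nonlinear term
    b = np.linalg.lstsq(Phi, N[:, 0], rcond=None)[0]

    def predict(k):
        # Online stage: a mode expansion replaces nonlinear-term evaluations.
        return ((Phi * lam**k) @ b).real
    return predict
```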

    Randomized model order reduction

    The singular value decomposition (SVD) plays a crucial role in model order reduction. It is often utilized in the offline stage to compute basis functions that project the high-dimensional nonlinear problem onto a low-dimensional model, which can then be evaluated cheaply. It constitutes a building block for many techniques, such as the proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD). The aim of this work is to provide an efficient computation of low-rank POD and/or DMD modes via randomized matrix decompositions. This is possible thanks to the randomized singular value decomposition (rSVD), a fast and accurate alternative to the SVD. Although this computation belongs to the offline stage, it may be extremely expensive; therefore, the use of compressed techniques drastically reduces its cost. Numerical examples show the effectiveness of the method for both POD and DMD.
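
    For concreteness, here is a minimal sketch of computing POD modes from a snapshot matrix via an rSVD-style randomized range finder; the oversampling p and power-iteration count q follow the usual randomized-SVD recipe, and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def randomized_pod(S, r, p=10, q=2, seed=None):
    """Return the leading r POD modes and singular values of a snapshot
    matrix S (one snapshot per column) via randomized SVD."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((S.shape[1], r + p))
    Q, _ = np.linalg.qr(S @ Omega)
    for _ in range(q):                  # subspace iterations for accuracy
        Q, _ = np.linalg.qr(S.T @ Q)
        Q, _ = np.linalg.qr(S @ Q)
    B = Q.T @ S                         # small (r + p) x n_snapshots matrix
    Ub, s, _ = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :r], s[:r]       # POD basis, singular values
```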

    Preventing Neurodegenerative Memory Loss in Hopfield Neuronal Networks Using Cerebral Organoids or External Microelectronics

    Developing technologies have made significant progress towards linking the brain with brain-machine interfaces (BMIs), which have the potential to help damaged brains perform their original motor and cognitive functions. We consider the viability of such devices for mitigating the deleterious effects of memory loss induced by neurodegenerative diseases and/or traumatic brain injury (TBI). Our computational study considers the widely used Hopfield network, an autoassociative memory model in which neurons converge to a stable state pattern after receiving an input resembling a given memory. In this study, we connect an auxiliary network of neurons, which models the BMI device, to the original Hopfield network and train it to converge to its own auxiliary memory patterns. Injuries to the original Hopfield memory network, induced through neurodegeneration, for instance, can then be analyzed with the goal of evaluating the ability of the BMI to aid in memory retrieval tasks. Dense connectivity between the auxiliary and Hopfield networks is shown to promote robustness of memory retrieval for both optimal and nonoptimal memory sets. Our computations estimate damage levels and parameter ranges for which full or partial memory recovery is achievable, providing a starting point for novel therapeutic strategies.
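
    A toy numpy sketch of this setup, assuming Hebbian weights, synchronous sign updates, and random synapse deletion as the injury model; network sizes, the coupling scheme, and the damage fraction are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_main, n_aux, n_patterns = 64, 16, 3

# Hebbian training of the main Hopfield network on random +/-1 patterns.
P = rng.choice([-1, 1], size=(n_patterns, n_main))
W = (P.T @ P) / n_main
np.fill_diagonal(W, 0)

# Auxiliary "BMI" network with its own patterns, densely coupled to the main one.
A = rng.choice([-1, 1], size=(n_patterns, n_aux))
Waux = (A.T @ A) / n_aux
np.fill_diagonal(Waux, 0)
Wc = (P.T @ A) / n_main                 # dense main-auxiliary coupling

# Model neurodegeneration by zeroing a fraction of the main network's synapses.
W[rng.random(W.shape) < 0.4] = 0.0

# Memory retrieval from a corrupted cue, with the auxiliary network assisting.
x = P[0].copy()
x[: n_main // 4] *= -1                  # corrupt a quarter of the cue
y = A[0].copy()
for _ in range(20):
    x = np.where(W @ x + Wc @ y >= 0, 1, -1)
    y = np.where(Waux @ y + Wc.T @ x >= 0, 1, -1)
print(f"overlap with stored memory: {(x @ P[0]) / n_main:.2f}")
```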

    Data-driven identification of parametric partial differential equations

    In this work we present a data-driven method for the discovery of parametric partial differential equations (PDEs), allowing one to disambiguate between the underlying evolution equations and their parametric dependencies. Group sparsity is used to ensure parsimonious representations of observed dynamics in the form of a parametric PDE, while still allowing the coefficients to have arbitrary time-series or spatial dependence. This work builds on previous methods for the identification of constant-coefficient PDEs, expanding the field to include a new class of equations that until now have eluded machine-learning-based identification methods. We show that group sequentially thresholded ridge regression outperforms group LASSO in identifying the fewest terms in the PDE along with their parametric dependency. The method is demonstrated on four canonical models, with and without the introduction of noise.
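
    A minimal sketch of group sequentially thresholded ridge regression, under the assumption that each column of the coefficient matrix corresponds to one time or parameter slice and each row (one library term) forms a group; the function name and thresholds are illustrative.

```python
import numpy as np

def group_stridge(Theta, Ut, lam=1e-5, tol=1e-2, iters=10):
    """Theta: (m, d) library of candidate terms; Ut: (m, k) time derivatives,
    one column per time/parameter slice. A term survives only if its whole
    coefficient group stays large across the parametric family."""
    d, k = Theta.shape[1], Ut.shape[1]
    active = np.ones(d, dtype=bool)
    Xi = np.zeros((d, k))
    for _ in range(iters):
        Th = Theta[:, active]
        # Ridge solve restricted to the currently active terms.
        G = Th.T @ Th + lam * np.eye(Th.shape[1])
        Xi_act = np.linalg.solve(G, Th.T @ Ut)
        # Threshold whole groups (rows) by their l2 norm.
        keep = np.linalg.norm(Xi_act, axis=1) > tol
        idx = np.flatnonzero(active)
        active[idx[~keep]] = False
        Xi = np.zeros((d, k))
        Xi[idx[keep]] = Xi_act[keep]
        if keep.all():
            break
    return Xi
```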

    Randomized Matrix Decompositions Using R

    Matrix decompositions are fundamental tools in applied mathematics, statistical computing, and machine learning. In particular, low-rank matrix decompositions are vital and widely used for data analysis, dimensionality reduction, and data compression. Massive datasets, however, pose a computational challenge for traditional algorithms, placing significant constraints on both memory and processing power. Recently, the powerful concept of randomness has been introduced as a strategy to ease the computational load. The essential idea of probabilistic algorithms is to employ some amount of randomness in order to derive a smaller matrix from a high-dimensional data matrix. The smaller matrix is then used to compute the desired low-rank approximation. Such algorithms are shown to be computationally efficient for approximating matrices with low-rank structure. We present the R package rsvd and provide a tutorial introduction to randomized matrix decompositions. Specifically, randomized routines for the singular value decomposition, (robust) principal component analysis, interpolative decomposition, and CUR decomposition are discussed. Several examples demonstrate the routines and show the computational advantage over other methods implemented in R.
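
    For readers working in Python rather than R, scikit-learn ships a comparable routine, sklearn.utils.extmath.randomized_svd, which plays the same role as the package's core rsvd() call (in R, roughly rsvd::rsvd(A, k = ...)); below is a quick sanity check on a synthetic low-rank matrix.

```python
import numpy as np
from sklearn.utils.extmath import randomized_svd

rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 50)) @ rng.standard_normal((50, 500))  # rank 50

# Randomized SVD with oversampling and power iterations, as in the rsvd package.
U, s, Vt = randomized_svd(A, n_components=50, n_oversamples=10,
                          n_iter=4, random_state=0)

# The leading singular values match the deterministic SVD almost exactly.
s_full = np.linalg.svd(A, compute_uv=False)[:50]
print(np.max(np.abs(s - s_full)))
```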