    Machine Learning for Fluid Mechanics

    The field of fluid mechanics is rapidly advancing, driven by unprecedented volumes of data from field measurements, experiments, and large-scale simulations at multiple spatiotemporal scales. Machine learning offers a wealth of techniques for extracting information from data that can be translated into knowledge about the underlying fluid mechanics. Moreover, machine learning algorithms can augment domain knowledge and automate tasks related to flow control and optimization. This article presents an overview of the history, current developments, and emerging opportunities of machine learning for fluid mechanics. It outlines fundamental machine learning methodologies and discusses their uses for understanding, modeling, optimizing, and controlling fluid flows. The strengths and limitations of these methods are addressed from the perspective of scientific inquiry, which considers data as an inherent part of modeling, experimentation, and simulation. Machine learning provides a powerful information-processing framework that can enrich, and possibly even transform, current lines of fluid mechanics research and industrial applications.
    Comment: To appear in the Annual Review of Fluid Mechanics, 2020

    Advances in Spectral Learning with Applications to Text Analysis and Brain Imaging

    Spectral learning algorithms are becoming increasingly popular in data-rich domains, driven in part by recent advances in large-scale randomized SVD and in spectral estimation of Hidden Markov Models. Extensions of these methods lead to statistical estimation algorithms that are not only fast, scalable, and useful on real data sets, but also provably correct. Following this line of research, we make two contributions. First, we propose a set of spectral algorithms for text analysis and natural language processing. In particular, we propose fast and scalable spectral algorithms for learning word embeddings -- low-dimensional real vectors (called Eigenwords) that capture the “meaning” of words from their context. Second, we show how similar spectral methods can be applied to analyzing brain images. State-of-the-art approaches to learning word embeddings are slow to train or lack theoretical grounding; we propose three spectral algorithms that overcome these limitations. All three algorithms harness the multi-view nature of text data, i.e., the left and right context of each word, and share three characteristics: (1) they are fast to train and scalable; (2) they have strong theoretical properties; (3) they can induce context-specific embeddings, i.e., different embeddings for “river bank” and “Bank of America”. They also have lower sample complexity and hence higher statistical power for rare words. We provide theory that establishes relationships between these algorithms and optimality criteria for the estimates they provide. We also perform a thorough qualitative and quantitative evaluation of Eigenwords and demonstrate their superior performance over state-of-the-art approaches. Next, we turn to the task of using spectral learning methods for brain imaging data.
    Methods like Sparse Principal Component Analysis (SPCA), Non-negative Matrix Factorization (NMF), and Independent Component Analysis (ICA) have been used to obtain state-of-the-art accuracies in a variety of problems in machine learning. However, their usage in brain imaging, though increasing, is limited by the fact that they are applied as out-of-the-box techniques and are seldom tailored to the domain-specific constraints and knowledge of medical imaging, which leads to difficulties in interpreting the results. To address these shortcomings, we propose Eigenanatomy (EANAT), a general framework for sparse matrix factorization. Its goal is to statistically learn the boundaries of and connections between brain regions by weighting both the data and prior neuroanatomical knowledge. Although EANAT incorporates some neuroanatomical prior knowledge in the form of connectedness and smoothness constraints, it can still be difficult for clinicians to interpret the results in specific domains where network-specific hypotheses exist. We thus extend EANAT and present a novel framework for prior-constrained sparse decomposition of matrices derived from brain imaging data, called Prior-Based Eigenanatomy (p-Eigen). We formulate our solution as a prior-constrained l1-penalized (sparse) principal component analysis. Experimental evaluation confirms that p-Eigen extracts biologically relevant, patient-specific functional parcels and that it significantly aids classification of Mild Cognitive Impairment compared to state-of-the-art competing approaches.
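    As a rough illustration of the spectral approach (not the authors' actual Eigenwords algorithms, whose weighting and normalization differ; the function name and toy corpus below are illustrative), one can build a word-context co-occurrence matrix and take its truncated SVD:

    ```python
    import numpy as np

    def spectral_embeddings(corpus, dim=2, window=1):
        """Toy spectral embeddings: truncated SVD of a word-context
        co-occurrence matrix built from symmetric context windows."""
        vocab = sorted({w for sent in corpus for w in sent})
        idx = {w: i for i, w in enumerate(vocab)}
        C = np.zeros((len(vocab), len(vocab)))
        for sent in corpus:
            for i, w in enumerate(sent):
                lo, hi = max(0, i - window), min(len(sent), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        C[idx[w], idx[sent[j]]] += 1.0
        # Rows of U scaled by the singular values give low-dimensional embeddings.
        U, S, _ = np.linalg.svd(C, full_matrices=False)
        return {w: U[idx[w], :dim] * S[:dim] for w in vocab}

    corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
    emb = spectral_embeddings(corpus)
    # "cat" and "dog" occur in identical contexts here, so their embeddings coincide.
    ```

    Words with similar left/right contexts land near each other in the embedding space, which is the intuition the multi-view spectral methods formalize.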

    Formal Models of the Network Co-occurrence Underlying Mental Operations

    Systems neuroscience has identified a set of canonical large-scale networks in humans. These have predominantly been characterized by resting-state analyses of the task-unconstrained, mind-wandering brain. Their explicit relationship to defined task performance remains largely unknown and challenging to establish. The present work contributes a multivariate statistical learning approach that can extract the major brain networks and quantify their configuration during various psychological tasks. The method is validated in two extensive datasets (n = 500 and n = 81) by model-based generation of synthetic activity maps from recombination of shared network topographies. To study a use case, we formally revisited the poorly understood difference between neural activity underlying idling versus goal-directed behavior. We demonstrate that task-specific neural activity patterns can be explained by plausible combinations of resting-state networks. The possibility of decomposing a mental task into the relative contributions of major brain networks, the "network co-occurrence architecture" of a given task, opens an alternative access to the neural substrates of human cognition.
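    The core idea, expressing a task activity map as a weighted combination of network topographies, can be sketched as an ordinary least-squares regression (a simplification of the paper's multivariate learning approach; all data below are synthetic):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_voxels, n_networks = 1000, 4

    # Synthetic "network topographies": one spatial map per column.
    networks = rng.random((n_voxels, n_networks))

    # A synthetic task map generated as a known mixture of the networks.
    true_weights = np.array([0.7, 0.0, 0.2, 0.1])
    task_map = networks @ true_weights + 0.01 * rng.standard_normal(n_voxels)

    # Estimate each network's relative contribution to the task map.
    weights, *_ = np.linalg.lstsq(networks, task_map, rcond=None)
    ```

    The recovered weight vector is the "network co-occurrence" profile of the task under this simplified linear model.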

    Extended Dynamic Mode Decomposition with Learned Koopman Eigenfunctions for Prediction and Control

    This paper presents a novel learning framework for constructing Koopman eigenfunctions for unknown, nonlinear dynamics using data gathered from experiments. The framework extracts spectral information from the full nonlinear dynamics by learning the eigenvalues and eigenfunctions of the associated Koopman operator. We then exploit the learned Koopman eigenfunctions to learn a lifted linear state-space model. To the best of our knowledge, our method is the first to utilize Koopman eigenfunctions as lifting functions for EDMD-based methods. We demonstrate the performance of the framework in state prediction and closed-loop trajectory tracking of a simulated cart-pole system. Our method significantly improves controller performance while relying on linear control methods to perform nonlinear control.
    Comment: 2020 American Control Conference
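    For context, plain EDMD with a fixed dictionary of lifting functions looks roughly as follows; the paper's contribution is to replace such a hand-chosen dictionary with learned Koopman eigenfunctions. The toy system and names here are illustrative:

    ```python
    import numpy as np

    def step(x):
        """Toy nonlinear dynamics: x_{k+1} = 0.9 x_k - 0.1 x_k^3."""
        return 0.9 * x - 0.1 * x**3

    def lift(x):
        """Fixed monomial dictionary; EDMD fits a linear map in this lifted space."""
        return np.array([x, x**2, x**3])

    # Collect snapshot pairs (x_k, x_{k+1}) from simulated "experiments".
    rng = np.random.default_rng(1)
    X = rng.uniform(-1.0, 1.0, 200)
    Y = step(X)

    # Least-squares fit of the lifted linear operator K: Phi(x_{k+1}) ~ Phi(x_k) K.
    PhiX = np.array([lift(x) for x in X])
    PhiY = np.array([lift(y) for y in Y])
    K, *_ = np.linalg.lstsq(PhiX, PhiY, rcond=None)

    # One-step prediction: lift, apply K, read off the state coordinate.
    x0 = 0.5
    x1_pred = (lift(x0) @ K)[0]
    ```

    Because the state update lies exactly in the span of this dictionary, the linear model predicts the next state almost exactly; for general dynamics the quality of the lifting functions is the bottleneck, which motivates learning them.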

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection -- that is, to automatically select a simple model among a large collection of candidates. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. The corresponding tools have subsequently been widely adopted by several scientific communities, such as neuroscience, bioinformatics, and computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.
    Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision
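    A minimal sketch of the sparse-coding idea, representing a signal with a few dictionary atoms, using greedy orthogonal matching pursuit on a tiny hand-built dictionary (the monograph covers far more general formulations, including learned dictionaries):

    ```python
    import numpy as np

    def omp(D, y, k):
        """Orthogonal matching pursuit: greedily select k unit-norm atoms,
        refitting the coefficients by least squares at every step."""
        residual, support = y.copy(), []
        for _ in range(k):
            # Atom most correlated with the current residual.
            support.append(int(np.argmax(np.abs(D.T @ residual))))
            coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coef
        x = np.zeros(D.shape[1])
        x[support] = coef
        return x

    # Overcomplete dictionary: 4 canonical atoms plus 2 extra unit-norm atoms.
    D = np.column_stack([np.eye(4),
                         np.array([1.0, 1.0, 1.0, 1.0]) / 2,
                         np.array([1.0, -1.0, 1.0, -1.0]) / 2])
    y = np.array([1.5, 0.0, -2.0, 0.0])   # 2-sparse in this dictionary
    x_hat = omp(D, y, k=2)
    ```

    With only two active atoms, the greedy pursuit recovers the sparse code exactly; dictionary learning additionally adapts the columns of D to the data.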
