1,755 research outputs found

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Full text link
    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. Comment: 232 pages.
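
    To make the TT format concrete: a tensor-train decomposition can be computed by sweeping truncated SVDs over successive unfoldings of the tensor (the TT-SVD scheme). The numpy sketch below is an illustration of the format discussed above, not code from the monograph; the function name and the fixed-rank truncation are ours.

        # Minimal TT-SVD: decompose an N-way tensor into 3-way TT cores by
        # sweeping truncated SVDs over successive unfoldings. Illustrative
        # code, not from the monograph.
        import numpy as np

        def tt_decompose(tensor, max_rank):
            shape = tensor.shape
            cores, rank_prev = [], 1
            mat = tensor.reshape(shape[0], -1)
            for k in range(len(shape) - 1):
                U, S, Vt = np.linalg.svd(mat, full_matrices=False)
                r = min(max_rank, len(S))  # truncate to the chosen TT rank
                cores.append(U[:, :r].reshape(rank_prev, shape[k], r))
                mat = (S[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
                rank_prev = r
            cores.append(mat.reshape(rank_prev, shape[-1], 1))
            return cores

        # Reconstruct by contracting the cores and report the relative error.
        X = np.random.rand(4, 5, 6)
        cores = tt_decompose(X, max_rank=3)
        approx = cores[0]
        for core in cores[1:]:
            approx = np.tensordot(approx, core, axes=([-1], [0]))
        approx = approx.reshape(X.shape)
        print(np.linalg.norm(approx - X) / np.linalg.norm(X))

    Storing the cores requires O(N d r^2) numbers instead of O(d^N) for the full tensor, which is the source of the "super-compression" described above.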

    Machine Learning for Fluid Mechanics

    Full text link
    The field of fluid mechanics is rapidly advancing, driven by unprecedented volumes of data from field measurements, experiments and large-scale simulations at multiple spatiotemporal scales. Machine learning offers a wealth of techniques to extract information from data that could be translated into knowledge about the underlying fluid mechanics. Moreover, machine learning algorithms can augment domain knowledge and automate tasks related to flow control and optimization. This article presents an overview of the history, current developments, and emerging opportunities of machine learning for fluid mechanics. It outlines fundamental machine learning methodologies and discusses their uses for understanding, modeling, optimizing, and controlling fluid flows. The strengths and limitations of these methods are addressed from the perspective of scientific inquiry that considers data as an inherent part of modeling, experimentation, and simulation. Machine learning provides a powerful information processing framework that can enrich, and possibly even transform, current lines of fluid mechanics research and industrial applications. Comment: To appear in the Annual Review of Fluid Mechanics, 2020.
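
    One of the fundamental data-driven methods in this literature, proper orthogonal decomposition (POD), amounts to an SVD of mean-subtracted snapshot data. A minimal sketch with synthetic data; the array shapes and names are illustrative, not from the article.

        # Proper orthogonal decomposition (POD) of flow snapshot data via an SVD;
        # synthetic data stands in for real flow-field measurements.
        import numpy as np

        rng = np.random.default_rng(0)
        snapshots = rng.standard_normal((1000, 200))   # (grid points, time steps)
        fluctuations = snapshots - snapshots.mean(axis=1, keepdims=True)

        # Columns of U are spatial modes; S**2 ranks their energy content.
        U, S, Vt = np.linalg.svd(fluctuations, full_matrices=False)
        energy = S**2 / np.sum(S**2)
        print("energy in first 10 modes:", energy[:10].sum())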

    Quantum computing for finance

    Full text link
    Quantum computers are expected to surpass the computational capabilities of classical computers and have a transformative impact on numerous industry sectors. We present a comprehensive summary of the state of the art of quantum computing for financial applications, with particular emphasis on stochastic modeling, optimization, and machine learning. This Review is aimed at physicists, so it outlines the classical techniques used by the financial industry and discusses the potential advantages and limitations of quantum techniques. Finally, we look at the challenges that physicists could help tackle.
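
    For context, the classical workhorse for the stochastic modeling mentioned above is Monte Carlo simulation, whose estimation error shrinks as 1/sqrt(N) in the number of samples; quantum amplitude estimation is expected to improve this to roughly 1/N. A minimal sketch of classical Monte Carlo pricing of a European call under geometric Brownian motion; all parameters are illustrative, not from the Review.

        # Classical Monte Carlo pricing of a European call under geometric
        # Brownian motion; parameters are illustrative.
        import numpy as np

        rng = np.random.default_rng(42)
        S0, K, r, sigma, T, n = 100.0, 105.0, 0.05, 0.2, 1.0, 100_000

        # Risk-neutral terminal price: S_T = S0 exp((r - sigma^2/2) T + sigma sqrt(T) Z)
        Z = rng.standard_normal(n)
        ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
        payoff = np.maximum(ST - K, 0.0)

        price = np.exp(-r * T) * payoff.mean()
        stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n)
        print(f"call price ~ {price:.3f} +/- {stderr:.3f}")   # error shrinks as 1/sqrt(n)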

    Learning neural ordinary differential equations for optimal control

    Full text link
    This thesis brings together elements of optimization, deep learning and optimal control to study the challenge of learning and planning in continuous-time dynamical systems. Two general approaches are explored. First, a maximum likelihood approach is presented, in which training trajectories are sampled from the true dynamics, and a model is learned to accurately predict the state observations. After training is completed, the learned model is then used for planning, by using the dynamics and cost function to construct a nonlinear program, which can be solved to find a sequence of optimal controls. Second, a fully end-to-end approach is proposed, in which the tasks of model learning and planning are performed simultaneously. This is demonstrated in an imitation learning setting, in which the model is updated by backpropagating the loss signal through the planning algorithm itself. Importantly, because it can be trained in an end-to-end fashion, this technique can be included as a sub-module of a larger neural network, and used to provide an inductive bias towards behaving optimally in a continuous-time dynamical system. Both the maximum likelihood and end-to-end methods are designed to work with parametric and neural ordinary differential equation models. Inspired by relevant real-world applications, a large repository of dynamical systems and trajectory optimizers, named Myriad, is also implemented. The algorithms are tested and compared on a variety of domains within the Myriad suite.
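
    A minimal sketch of the first (maximum-likelihood) approach, with a linear model standing in for a neural ODE: trajectories are sampled from the true dynamics, and the model parameters are fit so that Euler-predicted states match the observations; least squares on finite differences is the maximum-likelihood estimate under Gaussian noise. All names here are illustrative, not taken from the Myriad code.

        # Fit a parametric ODE model x' = A x + B u to sampled trajectories.
        import numpy as np

        rng = np.random.default_rng(0)
        dt = 0.05
        A_true = np.array([[0.0, 1.0], [-1.0, -0.1]])   # damped oscillator
        B_true = np.array([[0.0], [1.0]])

        # Sample training trajectories from the true dynamics under random controls.
        X, U, Xnext = [], [], []
        x = np.array([1.0, 0.0])
        for _ in range(500):
            u = rng.standard_normal(1)
            x_next = x + dt * (A_true @ x + B_true @ u)  # Euler step of x' = Ax + Bu
            X.append(x); U.append(u); Xnext.append(x_next)
            x = x_next
        X, U, Xnext = map(np.array, (X, U, Xnext))

        # Least squares on finite differences: (x_next - x)/dt ~ [A B] [x; u].
        targets = (Xnext - X) / dt
        features = np.hstack([X, U])
        theta, *_ = np.linalg.lstsq(features, targets, rcond=None)
        A_hat, B_hat = theta[:2].T, theta[2:].T
        print(np.linalg.norm(A_hat - A_true), np.linalg.norm(B_hat - B_true))

    The learned model would then supply the dynamics constraints of the nonlinear program used for planning; the end-to-end variant instead updates the model by backpropagating the imitation loss through the planner itself.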

    Quantum Machine Learning in High Energy Physics

    Full text link
    Machine learning has been used in high energy physics for a long time, primarily at the analysis level with supervised classification. Quantum computing was postulated in the early 1980s as a way to perform computations that would not be tractable with a classical computer. With the advent of noisy intermediate-scale quantum computing devices, more quantum algorithms are being developed with the aim of exploiting the capacity of the hardware for machine learning applications. An interesting question is whether there are ways to combine quantum machine learning with high energy physics. This paper reviews the first generation of ideas that apply quantum machine learning to problems in high energy physics and provides an outlook on future applications. Comment: 25 pages, 9 figures, submitted to Machine Learning: Science and Technology, Focus on Machine Learning for Fundamental Physics collection.
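
    To illustrate the kind of circuit these ideas build on, the sketch below simulates a toy one-qubit variational classifier with numpy: a scalar feature is encoded as an RY rotation, a trainable RY follows, and the predicted class is the sign of the Z expectation. This is a generic illustration of a variational quantum classifier, not a method from the paper.

        # Toy one-qubit variational classifier, simulated classically.
        import numpy as np

        def ry(angle):
            c, s = np.cos(angle / 2), np.sin(angle / 2)
            return np.array([[c, -s], [s, c]])   # single-qubit RY rotation

        def z_expectation(x, theta):
            # |0> -> data-encoding RY(x) -> trainable RY(theta) -> measure <Z>
            state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])
            return state[0] ** 2 - state[1] ** 2

        xs = np.array([0.2, 0.4, 2.6, 2.9])     # toy features
        ys = np.array([1.0, 1.0, -1.0, -1.0])   # labels = desired sign of <Z>

        def loss(theta):
            preds = np.array([z_expectation(x, theta) for x in xs])
            return np.mean((preds - ys) ** 2)

        theta, lr, eps = 1.5, 0.2, 1e-4          # deliberately poor starting angle
        for _ in range(200):
            g = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
            theta -= lr * g                      # finite-difference gradient descent
        print([int(np.sign(z_expectation(x, theta))) for x in xs])  # [1, 1, -1, -1]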

    Loss Scaling and Step Size in Deep Learning Optimization

    Get PDF
    Deep learning training consumes ever-increasing time and resources, due to the complexity of the model, the number of updates needed to reach good results, and both the amount and dimensionality of the data. In this dissertation, we focus on making training more efficient by attending to the step size, so as to reduce the number of computations per parameter update. We achieve this in two new ways: we use loss scaling as a proxy for the learning rate, and we use learnable layer-wise optimizers. Although our work is perhaps not the first to point out the equivalence of loss scaling and learning rate in deep learning optimization, it is the first to leverage this relationship towards more efficient training. We use it not only in simple gradient descent, but also extend it to other adaptive algorithms. Finally, we use metalearning to shed light on relevant aspects, including learnable losses and optimizers. In this regard, we develop a novel learnable optimizer and use it to acquire an adaptive rescaling factor and learning rate, resulting in a significant reduction in the memory required during training.
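
    The equivalence at the heart of this approach is easy to verify for plain SGD: multiplying the loss by a factor s multiplies every gradient by s, which produces exactly the same update as multiplying the learning rate by s. A minimal numpy check on a quadratic loss:

        # For plain SGD, scaling the loss by s scales every gradient by s, which
        # is the same update as scaling the learning rate by s. Check on the
        # quadratic loss scale * ||w||^2, whose gradient is scale * 2w.
        import numpy as np

        def grad(w, scale=1.0):
            return scale * 2 * w

        w1 = w2 = np.array([3.0, -2.0])
        lr, s = 0.1, 4.0
        for _ in range(10):
            w1 = w1 - lr * grad(w1, scale=s)   # scaled loss, base learning rate
            w2 = w2 - (lr * s) * grad(w2)      # unscaled loss, scaled learning rate
        print(np.allclose(w1, w2))             # True: the trajectories coincide

    For adaptive methods such as Adam the equivalence no longer holds exactly, because gradients are normalized by their running moments, which is why extending the idea beyond simple gradient descent is a contribution in its own right.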

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    Get PDF
    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. It contains, in particular, the scientific program both in overview and in full detail, as well as information on the social program, the venue, special meetings, and more.

    Strawberry Fields: A Software Platform for Photonic Quantum Computing

    Get PDF
    We introduce Strawberry Fields, an open-source quantum programming architecture for light-based quantum computers, and detail its key features. Built in Python, Strawberry Fields is a full-stack library for design, simulation, optimization, and quantum machine learning of continuous-variable circuits. The platform consists of three main components: (i) an API for quantum programming based on an easy-to-use language named Blackbird; (ii) a suite of three virtual quantum computer backends, built in NumPy and TensorFlow, each targeting specialized uses; and (iii) an engine which can compile Blackbird programs on various backends, including the three built-in simulators, and -- in the near future -- photonic quantum information processors. The library also contains examples of several paradigmatic algorithms, including teleportation, (Gaussian) boson sampling, instantaneous quantum polynomial, Hamiltonian simulation, and variational quantum circuit optimization. Comment: Try the Strawberry Fields Interactive website, located at http://strawberryfields.ai. Source code available at https://github.com/XanaduAI/strawberryfields. Accepted in Quantum.
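
    A minimal Strawberry Fields program, following the library's documented Python API (the interface has evolved across versions, so treat this as a sketch): a Blackbird-style program is built in Python, then compiled and run on the built-in Fock-basis simulator backend.

        # Build and run a small continuous-variable circuit in Strawberry Fields.
        import strawberryfields as sf
        from strawberryfields import ops

        prog = sf.Program(2)                        # two optical modes
        with prog.context as q:
            ops.Sgate(0.5) | q[0]                   # squeeze mode 0
            ops.BSgate(0.4, 0.2) | (q[0], q[1])     # beamsplitter entangles the modes
            ops.MeasureFock() | q                   # photon-number measurement

        eng = sf.Engine("fock", backend_options={"cutoff_dim": 5})
        result = eng.run(prog)
        print(result.samples)                       # measured photon counts per mode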