Quantum Natural Gradient for Variational Bayes
Variational Bayes (VB) is a critical method in machine learning and
statistics, underpinning the recent success of Bayesian deep learning. The
natural gradient is an essential component of efficient VB estimation, but it
is prohibitively computationally expensive in high dimensions. We propose a
hybrid quantum-classical algorithm to improve the scaling properties of natural
gradient computation and make VB a truly computationally efficient method for
Bayesian inference in high-dimensional settings. The algorithm leverages matrix
inversion from the linear systems algorithm by Harrow, Hassidim, and Lloyd
[Phys. Rev. Lett. 103, 150502 (2009)] (HHL). We demonstrate that the matrix to be
inverted is sparse and the classical-quantum-classical handoffs are
sufficiently economical to preserve computational efficiency, making the
problem of natural gradient for VB an ideal application of HHL. We prove that,
under standard conditions, the VB algorithm with quantum natural gradient is
guaranteed to converge. Our regression-based natural gradient formulation is
also highly useful for classical VB.
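As a point of reference for the classical reader, the sketch below shows the step the quantum routine is meant to accelerate: the natural gradient is obtained by solving a linear system in the Fisher information matrix rather than forming its inverse, and it is this solve that HHL would replace when the matrix is sparse. The function names, the dense synthetic Fisher matrix, and the learning rate are illustrative assumptions, not the paper's regression-based formulation.

```python
import numpy as np

def natural_gradient_step(lam, grad, fisher, lr=0.1, jitter=1e-8):
    """One natural-gradient update of variational parameters `lam`.

    The natural gradient is F^{-1} g; rather than inverting F we solve the
    linear system F x = g, which is the operation a quantum linear-systems
    solver (HHL) would accelerate for sparse, well-conditioned F.
    """
    F = fisher + jitter * np.eye(fisher.shape[0])  # keep the system well conditioned
    nat_grad = np.linalg.solve(F, grad)            # classical stand-in for HHL
    return lam + lr * nat_grad

# Illustrative usage with a synthetic Fisher matrix and ELBO gradient.
rng = np.random.default_rng(0)
d = 10
A = rng.standard_normal((d, d))
fisher = A @ A.T / d                  # symmetric positive semi-definite
lam = np.zeros(d)                     # variational parameters
grad = rng.standard_normal(d)         # Euclidean gradient of the ELBO
lam = natural_gradient_step(lam, grad, fisher)
```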
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
Comment: 232 pages
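As a concrete illustration of the tensor train format emphasised above, the sketch below implements the textbook TT-SVD procedure, which factorises a dense tensor into TT cores by sequential truncated SVDs. The tensor, rank cap, and tolerance are arbitrary demonstration choices, and the code is not taken from the monograph.

```python
import numpy as np

def tt_svd(tensor, max_rank=8, tol=1e-10):
    """Decompose a dense tensor into tensor-train (TT) cores via sequential SVDs."""
    dims = tensor.shape
    cores, r_prev = [], 1
    mat = tensor.reshape(r_prev * dims[0], -1)
    for k, n_k in enumerate(dims[:-1]):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, min(max_rank, int(np.sum(S > tol))))   # truncated TT rank
        cores.append(U[:, :r].reshape(r_prev, n_k, r))
        mat = (S[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into a dense tensor (to check the error)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

X = np.random.default_rng(1).standard_normal((4, 5, 6, 7))
cores = tt_svd(X, max_rank=20)
print(np.linalg.norm(tt_to_full(cores) - X) / np.linalg.norm(X))  # near machine precision here
```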
Exotic vortices in superfluids and matrix product states for quantum optimization and machine learning
The interest in vortices and vortex lattices was sparked by the prediction of quantisation of circulation by Onsager in the 1940s. The field has since developed dramatically and attracted a lot of interest across the physics community. In this dissertation we study vortices in two different systems: a rotating, Rabi-coupled, two-component Bose-Einstein condensate (BEC) and a rotating spinor-BEC, in two spatial dimensions.
Vortex molecules can form in a two-component superfluid when a Rabi field drives transitions between the
two components. We study the ground state of an infinite system of vortex molecules in two dimensions, using
a numerical scheme which makes no use of the lowest Landau level approximation.
We find the ground state lattice geometry for different values of intercomponent interactions and strength of the Rabi field. In the limit of large field, when molecules are tightly bound, we develop a complementary analytical description. The energy
governing the alignment of molecules on a triangular lattice is found to correspond to that of an infinite system of
two-dimensional quadrupoles, which may be written in terms of an elliptic function. This allows for a numerical evaluation of the energy, enabling us to find the ground state configuration of the molecules.
In the phase of a two-component BEC in which the spin density is zero, the symmetry of the order parameter allows for the presence of half-quantum vortices (HQVs). We numerically search for this object in the variational ground state of a spinor-BEC and find it in a certain region of the phase diagram. We provide analytical arguments that suggest that this object is energetically favorable in the ground state.
Matrix product state (MPS) based methods are currently regarded as among the most powerful tools to study the low-energy physics of one-dimensional many-body quantum systems. In this work we find a connection between MPS in the left canonical form and the Stiefel manifold. This relation allows us to constrain the optimisation to this subspace of the otherwise larger MPS manifold. We find that our method suffers from two undesirable features. First, the need for a large unit cell to achieve machine precision. Second, because of the presence of the power method in the variational energy expression, it is possible for the convergence process to get stuck in regions of the Stiefel manifold where the modulus of the second-largest eigenvalue of the transfer matrix is very close to one.
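As a small illustration of the Stiefel-manifold structure mentioned above, the sketch below brings a finite MPS into left canonical form with a QR sweep; afterwards each matricised core is an isometry, i.e. a point on the Stiefel manifold. The bond dimensions and the finite-chain setting are illustrative assumptions and do not reproduce the infinite-MPS optimisation studied in the dissertation.

```python
import numpy as np

def left_canonicalize(cores):
    """Sweep an MPS (list of (Dl, d, Dr) arrays) into left canonical form.

    Afterwards each core A, reshaped to a (Dl*d, Dr) matrix, satisfies
    A^dagger A = I, i.e. it is an isometry / a point on the Stiefel manifold.
    """
    new_cores = []
    carry = np.eye(cores[0].shape[0])                 # factor pushed to the right
    for A in cores:
        M = np.tensordot(carry, A, axes=([1], [0]))   # absorb the previous R factor
        Dl, d, Dr = M.shape
        Q, R = np.linalg.qr(M.reshape(Dl * d, Dr))    # thin QR
        new_cores.append(Q.reshape(Dl, d, Q.shape[1]))
        carry = R
    return new_cores, carry                           # `carry` holds the leftover scalar factor

# Check the isometry (Stiefel) condition on a random 4-site MPS.
rng = np.random.default_rng(2)
shapes = [(1, 2, 3), (3, 2, 4), (4, 2, 3), (3, 2, 1)]
mps = [rng.standard_normal(s) for s in shapes]
canonical, _ = left_canonicalize(mps)
for A in canonical:
    Dl, d, Dr = A.shape
    M = A.reshape(Dl * d, Dr)
    assert np.allclose(M.conj().T @ M, np.eye(Dr))
```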
Since the foundation of the field of artificial intelligence (AI) in 1956, at a workshop held at Dartmouth College (New Hampshire, US), the excitement and optimism towards it have oscillated throughout the years. The last AI boom started in 2012, and we live in a time where people from many disciplines, both in industry and academia, are getting involved in machine learning. We contribute to the field with a generative model for raw audio. Our model is based on continuous matrix product states and is formulated in terms of the continuous-time measurement of a quantum system. We test our model on three different synthetic datasets and find its performance promising.
New Directions for Contact Integrators
Contact integrators are a family of geometric numerical schemes which
guarantee the conservation of the contact structure. In this work we review the
construction of both the variational and Hamiltonian versions of these methods.
We illustrate some of the advantages of geometric integration in the
dissipative setting by focusing on models inspired by recent studies in
celestial mechanics and cosmology.
Comment: To appear as Chapter 24 in GSI 2021, Springer LNCS 1282
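As a minimal sketch of the kind of scheme reviewed here, the code below composes the exact contact flows of the three terms of the damped-oscillator contact Hamiltonian H(q, p, s) = p^2/2 + V(q) + alpha*s into a symmetric second-order step. The splitting, composition order, and parameter values are illustrative assumptions and not necessarily the integrators constructed in the chapter.

```python
import numpy as np

# Contact Hamiltonian H(q, p, s) = p**2/2 + V(q) + alpha*s gives the damped
# system q' = p, p' = -V'(q) - alpha*p, s' = p**2/2 - V(q) - alpha*s.
# Each term generates an exactly solvable contact flow; composing these
# exact flows yields a scheme that preserves the contact structure.

def flow_kinetic(q, p, s, tau):                 # H_A = p**2 / 2
    return q + tau * p, p, s + tau * p**2 / 2

def flow_potential(q, p, s, tau, V, dV):        # H_B = V(q)
    return q, p - tau * dV(q), s - tau * V(q)

def flow_dissipative(q, p, s, tau, alpha):      # H_C = alpha * s
    decay = np.exp(-alpha * tau)
    return q, p * decay, s * decay

def contact_step(q, p, s, tau, V, dV, alpha):
    """One symmetric (Strang-like) composition of the three exact contact flows."""
    q, p, s = flow_dissipative(q, p, s, tau / 2, alpha)
    q, p, s = flow_potential(q, p, s, tau / 2, V, dV)
    q, p, s = flow_kinetic(q, p, s, tau)
    q, p, s = flow_potential(q, p, s, tau / 2, V, dV)
    q, p, s = flow_dissipative(q, p, s, tau / 2, alpha)
    return q, p, s

# Damped harmonic oscillator: V(q) = q**2/2, linear damping alpha = 0.1.
V, dV, alpha = (lambda q: q**2 / 2), (lambda q: q), 0.1
q, p, s, tau = 1.0, 0.0, 0.0, 0.01
for _ in range(5000):
    q, p, s = contact_step(q, p, s, tau, V, dV, alpha)
print(q, p)  # amplitude has decayed towards the origin, as expected
```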