    The Toda lattice is super-integrable

    We prove that the classical, non-periodic Toda lattice is super-integrable. In other words, we show that it possesses 2N-1 independent constants of motion, where N is the number of degrees of freedom. The main ingredient of the proof is the use of some special action-angle coordinates introduced by Moser to solve the equations of motion.
    Comment: 8 pages
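    One of the 2N-1 conserved quantities is the Hamiltonian itself. As a quick numerical illustration (a sketch of the standard open Toda lattice, not code from the paper), the following integrates the N=3 lattice with RK4 and checks that the energy stays constant along the flow:

    ```python
    import numpy as np

    # Open (non-periodic) Toda lattice with Hamiltonian
    #   H = sum_i p_i^2 / 2 + sum_{i<N} exp(q_i - q_{i+1})
    # Hamilton's equations: dq_i/dt = p_i, dp_i/dt = -dH/dq_i.

    def toda_rhs(state, N):
        q, p = state[:N], state[N:]
        dq = p.copy()
        dp = np.zeros(N)
        f = np.exp(q[:-1] - q[1:])          # nearest-neighbour interaction
        dp[:-1] -= f                        # force on the left particle
        dp[1:] += f                         # reaction on the right particle
        return np.concatenate([dq, dp])

    def energy(state, N):
        q, p = state[:N], state[N:]
        return 0.5 * np.sum(p**2) + np.sum(np.exp(q[:-1] - q[1:]))

    def rk4_step(state, dt, N):
        k1 = toda_rhs(state, N)
        k2 = toda_rhs(state + dt / 2 * k1, N)
        k3 = toda_rhs(state + dt / 2 * k2, N)
        k4 = toda_rhs(state + dt * k3, N)
        return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    N = 3
    state = np.concatenate([[0.5, 0.0, -0.5], [0.1, -0.2, 0.1]])  # (q, p)
    E0 = energy(state, N)
    for _ in range(1000):                   # integrate to t = 10
        state = rk4_step(state, 0.01, N)
    assert abs(energy(state, N) - E0) < 1e-6   # H is conserved along the flow
    ```

    The remaining constants of motion (e.g. the eigenvalues of the Lax matrix) are conserved in the same way, which is what the super-integrability claim quantifies.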

    Poisson brackets with prescribed Casimirs

    We consider the problem of constructing Poisson brackets on smooth manifolds M with prescribed Casimir functions. If M is of even dimension, we achieve our construction by considering a suitable almost symplectic structure on M, while, in the case where M is of odd dimension, our objective is achieved by using a convenient almost cosymplectic structure. Several examples and applications are presented.
    Comment: 24 pages
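    In the simplest odd-dimensional case, R^3, there is a classical construction that realizes any prescribed Casimir: given a function C, the bracket {f, g} = ∇C · (∇f × ∇g) is a Poisson bracket with C as a Casimir. The sketch below (an illustration of this standard construction, not the paper's method) checks the Casimir property numerically:

    ```python
    import numpy as np

    # On R^3, {f, g} = grad C . (grad f x grad g) defines a Poisson bracket
    # whose Casimir is the prescribed function C, i.e. {C, g} = 0 for all g.

    def num_grad(func, x, h=1e-6):
        """Central-difference gradient of func at x."""
        g = np.zeros(3)
        for i in range(3):
            e = np.zeros(3); e[i] = h
            g[i] = (func(x + e) - func(x - e)) / (2 * h)
        return g

    def bracket(C):
        """Poisson bracket with prescribed Casimir C."""
        def pb(f, g):
            return lambda x: np.dot(num_grad(C, x),
                                    np.cross(num_grad(f, x), num_grad(g, x)))
        return pb

    # Prescribe C = |x|^2 / 2; this recovers the Lie-Poisson bracket of so(3)*.
    C = lambda x: 0.5 * np.dot(x, x)
    pb = bracket(C)

    x0 = np.array([0.3, -1.2, 0.7])
    f = lambda x: x[0] * x[1]
    g = lambda x: x[2]
    assert abs(pb(C, f)(x0)) < 1e-6            # C is a Casimir
    assert abs(pb(f, g)(x0) + pb(g, f)(x0)) < 1e-6   # antisymmetry
    ```

    The paper's point is to do this systematically in arbitrary dimension, using almost symplectic structures in the even case and almost cosymplectic ones in the odd case.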

    From the Toda Lattice to the Volterra lattice and back

    We discuss the relationship between the multiple Hamiltonian structures of the generalized Toda lattices and those of the generalized Volterra lattices. We use a symmetry approach for Poisson structures that generalizes the Poisson involution theorem.
    Comment: 15 pages; final version to appear in Reports on Math. Phys.

    A Gentle (without Chopping) Approach to the Full Kostant-Toda Lattice

    In this paper we propose a new algorithm for obtaining the rational integrals of the full Kostant-Toda lattice. This new approach is based on a reduction of a bi-Hamiltonian system on gl(n,R). This system was obtained by reducing the space of maps from Z_n to GL(n,R) endowed with a pair of Lie algebroid structures.
    Comment: Published in SIGMA (Symmetry, Integrability and Geometry: Methods and Applications) at http://www.emis.de/journals/SIGMA

    Deep Gaussian Processes

    In this paper we introduce deep Gaussian process (GP) models. Deep GPs are a deep belief network based on Gaussian process mappings. The data are modeled as the output of a multivariate GP. The inputs to that Gaussian process are then governed by another GP. A single-layer model is equivalent to a standard GP or the GP latent variable model (GP-LVM). We perform inference in the model by approximate variational marginalization. This results in a strict lower bound on the marginal likelihood of the model, which we use for model selection (number of layers and nodes per layer). Deep belief networks are typically applied to relatively large data sets using stochastic gradient descent for optimization. Our fully Bayesian treatment allows for the application of deep models even when data is scarce. Model selection by our variational bound shows that a five-layer hierarchy is justified even when modelling a digit data set containing only 150 examples.
    Comment: 9 pages, 8 figures. Appearing in Proceedings of the 16th International Conference on Artificial Intelligence and Statistics (AISTATS) 2013
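    The layering described in the abstract, where one GP's output becomes the next GP's input, can be sketched by sampling from a two-layer deep GP prior. This is a toy illustration of the generative structure under simple assumptions (RBF kernels, unit variances), not the paper's variational inference code:

    ```python
    import numpy as np

    def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
        """Squared-exponential kernel between 1-D inputs A and B."""
        d2 = (A[:, None] - B[None, :]) ** 2
        return variance * np.exp(-0.5 * d2 / lengthscale**2)

    rng = np.random.default_rng(0)
    X = np.linspace(-3, 3, 100)

    # Layer 1: hidden function h ~ GP(0, k(X, X)); jitter keeps K positive definite.
    K1 = rbf_kernel(X, X) + 1e-8 * np.eye(len(X))
    h = rng.multivariate_normal(np.zeros(len(X)), K1)

    # Layer 2: output f ~ GP(0, k(h, h)) -- the first layer's output is the
    # second layer's input, yielding a warped, non-stationary function of X.
    K2 = rbf_kernel(h, h, lengthscale=0.5) + 1e-8 * np.eye(len(X))
    f = rng.multivariate_normal(np.zeros(len(X)), K2)

    assert f.shape == (100,)
    ```

    Stacking more layers repeats the second step; the paper's contribution is making Bayesian inference over all the hidden layers tractable via a variational lower bound.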