
    Hyperspherical Variational Auto-Encoders

    The Variational Auto-Encoder (VAE) is one of the most used unsupervised machine learning models. Although the default choice of a Gaussian distribution for both the prior and posterior is mathematically convenient and often leads to competitive results, we show that this parameterization fails to model data with a latent hyperspherical structure. To address this issue we propose using a von Mises-Fisher (vMF) distribution instead, leading to a hyperspherical latent space. Through a series of experiments we show how such a hyperspherical VAE, or $\mathcal{S}$-VAE, is more suitable for capturing data with a hyperspherical latent structure, while outperforming a normal, $\mathcal{N}$-VAE, in low dimensions on other data types. Comment: GitHub repository: http://github.com/nicola-decao/s-vae-tf, Blogpost: https://nicola-decao.github.io/s-va
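    The vMF reparameterization is less standard than the Gaussian one. As a rough illustration only (independent of the s-vae-tf repository linked above), the sketch below draws vMF samples on the unit hypersphere with the Ulrich/Wood rejection scheme that hyperspherical VAEs commonly build on; it shows only the forward sampling pass, not the gradient handling through the acceptance step, and all names are illustrative.

```python
# Minimal NumPy sketch of von Mises-Fisher sampling on the unit hypersphere S^(m-1).
# Illustrative standalone code, not taken from the s-vae-tf repository.
import numpy as np

def sample_vmf(mu, kappa, rng=None):
    """Draw one sample from vMF(mu, kappa); mu is a unit vector in R^m, kappa > 0."""
    rng = rng or np.random.default_rng()
    m = mu.shape[0]
    # 1) Rejection-sample the component w along the mean direction (Wood, 1994).
    b = (-2.0 * kappa + np.sqrt(4.0 * kappa**2 + (m - 1.0) ** 2)) / (m - 1.0)
    x0 = (1.0 - b) / (1.0 + b)
    c = kappa * x0 + (m - 1.0) * np.log(1.0 - x0**2)
    while True:
        z = rng.beta(0.5 * (m - 1.0), 0.5 * (m - 1.0))
        w = (1.0 - (1.0 + b) * z) / (1.0 - (1.0 - b) * z)
        if kappa * w + (m - 1.0) * np.log(1.0 - x0 * w) - c >= np.log(rng.uniform()):
            break
    # 2) Sample a uniform direction v in the subspace orthogonal to the first axis.
    v = rng.normal(size=m - 1)
    v /= np.linalg.norm(v)
    sample = np.concatenate(([w], np.sqrt(1.0 - w**2) * v))
    # 3) Rotate the first axis onto mu with a Householder reflection.
    e1 = np.zeros(m); e1[0] = 1.0
    u = e1 - mu
    norm = np.linalg.norm(u)
    if norm > 1e-12:
        u /= norm
        sample = sample - 2.0 * u * (u @ sample)
    return sample

# Example: a concentrated vMF on S^2 around the z-axis.
print(sample_vmf(np.array([0.0, 0.0, 1.0]), kappa=50.0))
```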

    An AT-barrier mechanically controls DNA reannealing under tension

    Regulation of genomic activity occurs through the manipulation of DNA by competent mechanoenzymes. Force-clamp optical tweezers, which allow the structural dynamics of the DNA molecule to be followed, were used here to investigate the kinetics of mechanically driven strand reannealing. When the force on torsionally unconstrained lambda-phage DNA is decreased stepwise from above to below the overstretching transition, reannealing occurs via discrete shortening steps separated by exponentially distributed time intervals. Kinetic analysis reveals a transition barrier located 0.58 nm along the reaction coordinate and an average reannealing-step size of approximately 750 bp, consistent with the average interval separating segments of more than 10 consecutive AT base pairs. In an AT-rich DNA construct, in which the distance between such segments is reduced to approximately 210 bp, the reannealing-step size decreases accordingly without a change in the position of the transition barrier. Thus, the transition barrier for reannealing is determined by the presence of segments of more than 10 consecutive AT base pairs independently of changes in sequence composition, while the length of the reannealing strand scales with the distance between poly-AT segments at least 10 bp long.
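    The abstract does not spell out the kinetic analysis. In force spectroscopy, a distance along the reaction coordinate is commonly extracted with a Bell-type model, k(F) = k0 exp(-F Δx / kBT), by fitting the force dependence of the mean dwell time between steps; the sketch below illustrates that kind of fit on made-up numbers. It is not the authors' pipeline, and the forces and dwell times are synthetic, so the output will not reproduce the 0.58 nm reported above.

```python
# Hedged illustration (not the paper's analysis): extracting a barrier distance
# Delta_x from force-dependent reannealing rates with a Bell-type model,
#   k(F) = k0 * exp(-F * Delta_x / kBT),
# where the rate at each clamped force is estimated as 1 / <dwell time>.
import numpy as np

kBT = 4.11  # thermal energy at ~25 C, in pN*nm

# Synthetic clamped forces (pN) and mean dwell times between reannealing steps (s)
forces = np.array([55.0, 57.0, 59.0, 61.0, 63.0])
mean_dwell = np.array([0.08, 0.12, 0.19, 0.29, 0.45])

rates = 1.0 / mean_dwell                     # exponential dwell times -> rate = 1/mean
# ln k(F) = ln k0 - F * Delta_x / kBT, so fit ln(rate) linearly against force
slope, intercept = np.polyfit(forces, np.log(rates), 1)
delta_x = -slope * kBT                       # nm, distance to the barrier along the pulling coordinate
k0 = np.exp(intercept)                       # extrapolated zero-force rate (s^-1)

print(f"Delta_x ~ {delta_x:.2f} nm, k0 ~ {k0:.2e} s^-1")
```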

    Reparameterizing Distributions on Lie Groups

    Reparameterizable densities are an important way to learn probability distributions in a deep learning setting. For many distributions it is possible to create low-variance gradient estimators by utilizing a 'reparameterization trick'. Due to the absence of a general reparameterization trick, much research has recently been devoted to extending the number of reparameterizable distributional families. Unfortunately, this research has primarily focused on distributions defined in Euclidean space, ruling out the usage of one of the most influential classes of spaces with non-trivial topologies: Lie groups. In this work we define a general framework to create reparameterizable densities on arbitrary Lie groups, and provide a detailed practitioner's guide to further ease their usage. We demonstrate how to create complex and multimodal distributions on the well-known group of 3D rotations, SO(3), using normalizing flows. Our experiments on applying such distributions in a Bayesian setting for pose estimation on objects with discrete and continuous symmetries showcase their necessity in achieving realistic uncertainty estimates.
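    In the spirit of the framework described above (but not the authors' exact construction), one common way to obtain a reparameterizable distribution on a Lie group is to sample a tangent vector from a reparameterized Gaussian in the Lie algebra and push it through the exponential map, then translate by a learnable group element. The sketch below does this for SO(3) with Rodrigues' formula; it shows only sampling, not the pushforward density computation, and all names are illustrative.

```python
# Rough sketch of a reparameterized sample on SO(3): push a reparameterized Gaussian
# in the Lie algebra so(3) through the exponential map and translate it by a group
# element. Gradients would flow to the location and scale through eps, exactly as in
# the Euclidean reparameterization trick.
import numpy as np

def hat(v):
    """so(3) hat map: R^3 -> 3x3 skew-symmetric matrix."""
    x, y, z = v
    return np.array([[0.0, -z, y],
                     [z, 0.0, -x],
                     [-y, x, 0.0]])

def expmap_so3(v):
    """Exponential map so(3) -> SO(3) via Rodrigues' formula."""
    theta = np.linalg.norm(v)
    K = hat(v)
    if theta < 1e-8:
        return np.eye(3) + K  # first-order approximation near the identity
    return np.eye(3) + np.sin(theta) / theta * K + (1.0 - np.cos(theta)) / theta**2 * (K @ K)

def sample_so3(R_loc, log_sigma, rng=None):
    """Reparameterized sample: R = R_loc @ exp(hat(sigma * eps)), eps ~ N(0, I)."""
    rng = rng or np.random.default_rng()
    eps = rng.normal(size=3)
    return R_loc @ expmap_so3(np.exp(log_sigma) * eps)

# Example: a distribution concentrated around a 90-degree rotation about the z-axis.
R_loc = expmap_so3(np.array([0.0, 0.0, np.pi / 2]))
R = sample_so3(R_loc, log_sigma=np.log(0.1) * np.ones(3))
print(np.round(R, 3), np.allclose(R.T @ R, np.eye(3), atol=1e-6))
```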

    Explorations in Homeomorphic Variational Auto-Encoding

    The manifold hypothesis states that many kinds of high-dimensional data are concentrated near a low-dimensional manifold. If the topology of this data manifold is non-trivial, a continuous encoder network cannot embed it in a one-to-one manner without creating holes of low density in the latent space. This is at odds with the Gaussian prior assumption typically made in Variational Auto-Encoders (VAEs), because the density of a Gaussian concentrates near a blob-like manifold. In this paper we investigate the use of manifold-valued latent variables. Specifically, we focus on the important case of continuously differentiable symmetry groups (Lie groups), such as the group of 3D rotations SO(3). We show how a VAE with SO(3)-valued latent variables can be constructed by extending the reparameterization trick to compact connected Lie groups. Our experiments show that choosing manifold-valued latent variables that match the topology of the latent data manifold is crucial to preserve the topological structure and learn a well-behaved latent space.
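    For an SO(3)-valued latent variable the encoder must output a point that actually lies on the group. One standard way to do this (an assumption here, not necessarily the paper's exact architecture) is to map an unconstrained 6-dimensional encoder output onto SO(3) by Gram-Schmidt orthonormalization of two 3-vectors; combined with exp-map noise in so(3) as in the SO(3) sketch above, this gives a reparameterized group-valued latent.

```python
# Illustrative sketch: map an unconstrained 6-dimensional encoder output onto SO(3)
# by Gram-Schmidt orthonormalization of two 3-vectors, so the latent "mean" lies
# exactly on the group.
import numpy as np

def encoder_output_to_rotation(h):
    """h: unconstrained vector in R^6 -> rotation matrix in SO(3)."""
    a, b = h[:3], h[3:]
    e1 = a / np.linalg.norm(a)
    b_perp = b - (e1 @ b) * e1        # remove the component of b along e1
    e2 = b_perp / np.linalg.norm(b_perp)
    e3 = np.cross(e1, e2)             # completes a right-handed orthonormal frame
    return np.stack([e1, e2, e3], axis=1)

# Example: a generic 6-vector lands exactly on SO(3).
h = np.array([0.3, -1.2, 0.5, 2.0, 0.1, -0.7])
R = encoder_output_to_rotation(h)
print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
```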