1,541 research outputs found

    Dynamical Optimal Transport on Discrete Surfaces

    We propose a technique for interpolating between probability distributions on discrete surfaces, based on the theory of optimal transport. Unlike previous attempts that use linear programming, our method is based on a dynamical formulation of quadratic optimal transport proposed for flat domains by Benamou and Brenier [2000], adapted to discrete surfaces. Our structure-preserving construction yields a Riemannian metric on the (finite-dimensional) space of probability distributions on a discrete surface, which translates the so-called Otto calculus to discrete language. From a practical perspective, our technique provides a smooth interpolation between distributions on discrete surfaces with less diffusion than state-of-the-art algorithms involving entropic regularization. Beyond interpolation, we show how our discrete notion of optimal transport extends to other tasks, such as distribution-valued Dirichlet problems and time integration of gradient flows.
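As background only (not the paper's surface-based construction), a minimal 1-D sketch: for quadratic cost on the real line, the Benamou-Brenier dynamical geodesic coincides with linear interpolation of quantile functions, giving the diffusion-free "displacement" interpolation that the abstract contrasts with entropy-regularized methods. All names and parameters below are illustrative assumptions.

```python
import numpy as np

def displacement_interpolation(mu, nu, x, t):
    """Interpolate two 1-D histograms mu, nu on grid x at time t in [0, 1].

    For quadratic cost on the line, the optimal-transport geodesic
    (the Benamou-Brenier dynamical solution) is obtained by linearly
    interpolating inverse CDFs (quantile functions).
    """
    # Cumulative distributions of the two (normalized) histograms.
    cmu = np.cumsum(mu) / mu.sum()
    cnu = np.cumsum(nu) / nu.sum()
    # Sample both quantile functions at a common set of levels.
    levels = np.linspace(0.0, 1.0, 200)
    q_mu = np.interp(levels, cmu, x)
    q_nu = np.interp(levels, cnu, x)
    # Geodesic: mass moves at constant speed between matched quantiles.
    q_t = (1.0 - t) * q_mu + t * q_nu
    # Re-bin the interpolated quantiles into a histogram on x.
    hist, _ = np.histogram(q_t, bins=np.append(x, x[-1] + (x[1] - x[0])))
    return hist / hist.sum()

x = np.linspace(0.0, 1.0, 100)
mu = np.exp(-((x - 0.2) ** 2) / 0.002)   # bump near 0.2
nu = np.exp(-((x - 0.8) ** 2) / 0.002)   # bump near 0.8
mid = displacement_interpolation(mu, nu, x, 0.5)
print(x[np.argmax(mid)])  # mass travels: the midpoint peak sits near 0.5
```

Note how the bump translates rather than fading out and in, which is the qualitative difference from naive linear blending of densities.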

    A geometric network model of intrinsic grey-matter connectivity of the human brain

    Network science provides a general framework for analysing the large-scale brain networks that naturally arise from modern neuroimaging studies, and a key goal in theoretical neuroscience is to understand the extent to which these neural architectures influence the dynamical processes they sustain. To date, brain network modelling has largely been conducted at the macroscale level (i.e. white-matter tracts), despite growing evidence of the role that local grey matter architecture plays in a variety of brain disorders. Here, we present a new model of intrinsic grey matter connectivity of the human connectome. Importantly, the new model incorporates detailed information on cortical geometry to construct ‘shortcuts’ through the thickness of the cortex, thus enabling spatially distant brain regions, as measured along the cortical surface, to communicate. Our study indicates that structures based on human brain surface information differ significantly, both in terms of their topological network characteristics and activity propagation properties, when compared against a variety of alternative geometries and generative algorithms. In particular, this might help explain histological patterns of grey matter connectivity, highlighting that observed connection distances may have arisen to maximise information processing ability, and that such gains are consistent with (and enhanced by) the presence of shortcut connections.
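The core idea (shortcuts through the cortical thickness shorten paths between regions that are distant along the surface) can be illustrated with a toy graph experiment; this is a sketch of the general principle, not the paper's geometric model, and all sizes and counts below are arbitrary assumptions.

```python
import random
from collections import deque

def avg_shortest_path(adj):
    """Mean shortest-path length over all reachable ordered pairs (BFS per source)."""
    n = len(adj)
    total, pairs = 0, 0
    for src in range(n):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring_lattice(n, k):
    """Each node connects to its k nearest neighbours on each side of a ring."""
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

random.seed(0)
n = 200
surface = ring_lattice(n, 2)           # connectivity "along the surface" only
with_shortcuts = ring_lattice(n, 2)
for _ in range(20):                    # a few shortcuts "through the thickness"
    a, b = random.sample(range(n), 2)
    with_shortcuts[a].add(b)
    with_shortcuts[b].add(a)

print(avg_shortest_path(surface) > avg_shortest_path(with_shortcuts))  # True
```

Even a handful of shortcut edges sharply reduces the mean path length, consistent with the abstract's claim that shortcut connections enhance information processing ability.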

    Geometric deep learning: going beyond Euclidean data

    Many scientific fields study data with an underlying structure that is a non-Euclidean space. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions), and are natural targets for machine learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems in computer vision, natural language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure, and in cases where the invariances of these structures are built into networks used to model them. Geometric deep learning is an umbrella term for emerging techniques attempting to generalize (structured) deep neural models to non-Euclidean domains such as graphs and manifolds. The purpose of this paper is to overview different examples of geometric deep learning problems and present available solutions, key difficulties, applications, and future research directions in this nascent field.
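A canonical example of extending convolution to graphs is the propagation rule of Kipf and Welling-style graph convolutional networks, one of the families surveyed under this umbrella term. A minimal NumPy sketch (toy graph and random weights are assumptions, not from the paper):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph convolution: H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W)."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt     # symmetric normalization
    return np.maximum(A_norm @ X @ W, 0.0)       # ReLU nonlinearity

# Toy 4-node path graph with 2-d node features and a random 2x3 weight matrix.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 2))
W = rng.normal(size=(2, 3))
H = gcn_layer(A, X, W)
print(H.shape)  # (4, 3)
```

Each node's new feature is a weighted average over its neighbourhood, which is the graph analogue of a local filter on a regular grid.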

    An ADM 3+1 formulation for Smooth Lattice General Relativity

    A new hybrid scheme for numerical relativity will be presented. The scheme will employ a 3-dimensional spacelike lattice to record the 3-metric while using the standard 3+1 ADM equations to evolve the lattice. Each time step will involve three basic steps. First, the coordinate quantities such as the Riemann and extrinsic curvatures are extracted from the lattice. Second, the 3+1 ADM equations are used to evolve the coordinate data, and finally, the coordinate data is used to update the scalar data on the lattice (such as the leg lengths). The scheme will be presented only for the case of vacuum spacetime though there is no reason why it could not be extended to non-vacuum spacetimes. The scheme allows any choice for the lapse function and shift vectors. An example for the Kasner $T^3$ cosmology will be presented and it will be shown that the method has, for this simple example, zero discretisation error. Comment: 18 pages, plain TeX, 5 epsf figures, gzipped ps file also available at http://newton.maths.monash.edu.au:8000/preprints/3+1-slgr.ps.g
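For reference, the Kasner $T^3$ cosmology used as the test case is the vacuum Bianchi I solution with the spatial sections compactified to a three-torus; this is standard material recalled here, not taken from the paper:

```latex
ds^2 = -dt^2 + t^{2p_1}\,dx^2 + t^{2p_2}\,dy^2 + t^{2p_3}\,dz^2,
\qquad
\sum_{i=1}^{3} p_i = \sum_{i=1}^{3} p_i^2 = 1 .
```

Because the metric depends only on $t$, the exact evolution is known in closed form, which makes it a natural benchmark for checking the discretisation error of a lattice scheme.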

    Wire mesh design

    We present a computational approach for designing wire meshes, i.e., freeform surfaces composed of woven wires arranged in a regular grid. To facilitate shape exploration, we map material properties of wire meshes to the geometric model of Chebyshev nets. This abstraction is exploited to build an efficient optimization scheme. While the theory of Chebyshev nets suggests a highly constrained design space, we show that allowing controlled deviations from the underlying surface provides a rich shape space for design exploration. Our algorithm balances globally coupled material constraints with aesthetic and geometric design objectives that can be specified by the user in an interactive design session. In addition to sculptural art, wire meshes represent an innovative medium for industrial applications including composite materials and architectural façades. We demonstrate the effectiveness of our approach using a variety of digital and physical prototypes with a level of shape complexity unobtainable using previous methods.
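As a reminder of the geometric model invoked here (standard differential geometry, not a statement of the paper's formulation): a Chebyshev net is a surface parameterization whose coordinate curves are inextensible,

```latex
\mathbf{r}(u, v) : \Omega \to \mathbb{R}^3,
\qquad
\lvert \mathbf{r}_u \rvert = \lvert \mathbf{r}_v \rvert = 1 ,
```

so each wire keeps its length and only the shear angle between the two wire families varies across the surface, which is exactly the behaviour of a woven grid of stiff wires.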

    Generating 3D faces using Convolutional Mesh Autoencoders

    Learned 3D representations of human faces are useful for computer vision problems such as 3D face tracking and reconstruction from images, as well as graphics applications such as character generation and animation. Traditional models learn a latent representation of a face using linear subspaces or higher-order tensor generalizations. Due to this linearity, they cannot capture extreme deformations and non-linear expressions. To address this, we introduce a versatile model that learns a non-linear representation of a face using spectral convolutions on a mesh surface. We introduce mesh sampling operations that enable a hierarchical mesh representation that captures non-linear variations in shape and expression at multiple scales within the model. In a variational setting, our model samples diverse realistic 3D faces from a multivariate Gaussian distribution. Our training data consists of 20,466 meshes of extreme expressions captured over 12 different subjects. Despite limited training data, our trained model outperforms state-of-the-art face models with 50% lower reconstruction error, while using 75% fewer parameters. We also show that replacing the expression space of an existing state-of-the-art face model with our autoencoder achieves a lower reconstruction error. Our data, model and code are available at http://github.com/anuragranj/com
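The spectral convolutions referred to here are typically Chebyshev polynomial filters of the mesh Laplacian in the style of Defferrard et al.; the following is an illustrative sketch on a toy graph, not the authors' implementation, and the filter coefficients and graph are assumptions.

```python
import numpy as np

def chebyshev_filter(L, X, thetas):
    """Spectral mesh/graph convolution via Chebyshev polynomials of the
    rescaled Laplacian: sum_k theta_k T_k(L_tilde) X."""
    n = L.shape[0]
    lam_max = np.linalg.eigvalsh(L).max()
    L_tilde = (2.0 / lam_max) * L - np.eye(n)     # rescale spectrum to [-1, 1]
    T_prev, T_curr = X, L_tilde @ X               # T_0 X and T_1 X
    out = thetas[0] * T_prev + thetas[1] * T_curr
    for theta in thetas[2:]:
        T_next = 2.0 * L_tilde @ T_curr - T_prev  # Chebyshev recurrence
        out += theta * T_next
        T_prev, T_curr = T_curr, T_next
    return out

# Toy graph Laplacian of a 4-node path, 1-d signal, K = 3 filter taps.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
X = np.array([[1.0], [0.0], [0.0], [0.0]])
Y = chebyshev_filter(L, X, [0.5, 0.3, 0.2])
print(Y.shape)  # (4, 1)
```

A degree-K filter only mixes information within K hops on the mesh, so the convolution stays local while avoiding an explicit eigendecomposition at filtering time.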