    Covariant Variational Evolution and Jacobi Brackets: Fields

    The analysis of the covariant brackets on the space of functions on the solutions to a variational problem in the framework of contact geometry, initiated in the companion letter Ref. 19, is extended to the case of the multisymplectic formulation of the free Klein-Gordon theory and of the free Schrödinger equation. Comment: 16 pages
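
    For reference, the two free field equations whose multisymplectic formulations are treated here take their standard forms, shown below.

```latex
% Free Klein-Gordon equation for a scalar field \varphi of mass m:
\left(\Box + m^{2}\right)\varphi = 0, \qquad \Box = \partial^{\mu}\partial_{\mu}
% Free Schr\"{o}dinger equation for a wave function \psi:
i\hbar\,\partial_{t}\psi = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi
```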

    Geometry from divergence functions and complex structures

    Motivated by the geometrical structures of quantum mechanics, we introduce an almost-complex structure J on the product M×M of any parallelizable statistical manifold M. Then, we use J to extract a pre-symplectic form and a metric-like tensor on M×M from a divergence function. These tensors may be pulled back to M, and we compute them in the case of an N-dimensional simplex with respect to the Kullback-Leibler relative entropy, and in the case of (a suitable unfolding space of) the manifold of faithful density operators with respect to the von Neumann-Umegaki relative entropy. Comment: 19 pages, comments are welcome
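
    As background for the construction above, a divergence function classically induces a metric via second derivatives across the diagonal (the Eguchi construction); the J-based extraction in the paper refines this idea. A minimal sketch:

```latex
% Metric induced by a divergence function D : M \times M \to \mathbb{R}:
g_{ij}(p) \;=\; -\,\partial_{x^{i}}\partial_{y^{j}} D(x,y)\Big|_{x=y=p}
% Example: the Kullback-Leibler divergence on the simplex,
% D_{KL}(x\|y) = \sum_{k} x^{k}\log(x^{k}/y^{k}),
% yields the Fisher-Rao metric g_{ij}(p) = \delta_{ij}/p^{i}.
```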

    GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based Histogram Intersection

    Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning. In this paper, we propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features. To this end, we extract the distribution of features in the egonet of each node and compare it against a set of learned label distributions by taking the histogram intersection kernel. The similarity information is then propagated to other nodes in the network, effectively creating a message-passing-like mechanism where the message is determined by the ensemble of the features. We perform an ablation study to evaluate the network's performance under different choices of its hyper-parameters. Finally, we test our model on standard graph classification and regression benchmarks, and we find that it outperforms widely used alternative approaches, including both graph kernels and graph neural networks.
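
    A minimal sketch of the histogram-intersection step described above; the function names and binning scheme are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def egonet_histogram(features, egonet, n_bins=16):
    """Normalized histogram of (scalar) node features over an egonet."""
    hist, _ = np.histogram(features[egonet], bins=n_bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def histogram_intersection(h1, h2):
    """Histogram intersection kernel: sum of bin-wise minima."""
    return np.minimum(h1, h2).sum()

# Toy usage: compare one egonet's feature distribution against a set of
# learned label distributions; the similarity vector plays the role of
# the "message" propagated through the network.
features = np.random.rand(100)                 # scalar feature per node
egonet = np.array([0, 3, 7, 12])               # a node and its neighbours
learned_hists = np.random.dirichlet(np.ones(16), size=4)

h = egonet_histogram(features, egonet)
similarities = np.array([histogram_intersection(h, lh) for lh in learned_hists])
```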

    Lagrangian description of Heisenberg and Landau-von Neumann equations of motion

    An explicit Lagrangian description is given for the Heisenberg equation on the algebra of operators of a quantum system, and for the Landau-von Neumann equation on the manifold of quantum states which are isospectral with respect to a fixed reference quantum state. Comment: 13 pages
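
    For reference, the two equations of motion named above take their standard forms (with Hamiltonian H):

```latex
% Heisenberg equation on the algebra of operators:
\frac{dA}{dt} \;=\; \frac{i}{\hbar}\,[H, A]
% Landau-von Neumann equation for the density operator \rho:
i\hbar\,\frac{d\rho}{dt} \;=\; [H, \rho]
```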

    Learning disentangled representations via product manifold projection

    We propose a novel approach to disentangle the generative factors of variation underlying a given set of observations. Our method builds upon the idea that the (unknown) low-dimensional manifold underlying the data space can be explicitly modeled as a product of submanifolds. This definition of disentanglement gives rise to a novel weakly-supervised algorithm for recovering the unknown explanatory factors behind the data. At training time, our algorithm only requires pairs of non-i.i.d. data samples whose elements share at least one, possibly multidimensional, generative factor of variation. We require no knowledge of the nature of these transformations, and do not make any limiting assumption on the properties of each subspace. Our approach is easy to implement, and can be successfully applied to different kinds of data (from images to 3D surfaces) undergoing arbitrary transformations. In addition to standard synthetic benchmarks, we showcase our method in challenging real-world applications, where we compare favorably with the state of the art. Comment: 15 pages, 10 figures
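
    A toy sketch of the product-of-subspaces idea and the weak pairing signal; the linear split and loss below are illustrative assumptions, far simpler than the paper's submanifold projection:

```python
import numpy as np

def split_latent(z, factor_dims):
    """Split a latent code into per-factor components z = (z_1, ..., z_k)."""
    return np.split(z, np.cumsum(factor_dims)[:-1])

def weak_pair_loss(z_a, z_b, shared_factor, factor_dims):
    """Pairs share at least one generative factor: penalize disagreement on
    the shared component so that factor settles into its own subspace."""
    za = split_latent(z_a, factor_dims)
    zb = split_latent(z_b, factor_dims)
    diff = za[shared_factor] - zb[shared_factor]
    return float(diff @ diff)

# Toy usage: 3 factors of sizes 2, 2, 4 in an 8-dimensional latent space.
factor_dims = [2, 2, 4]
z_a, z_b = np.random.randn(8), np.random.randn(8)
loss = weak_pair_loss(z_a, z_b, shared_factor=1, factor_dims=factor_dims)
```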

    Shape Registration in the Time of Transformers

    In this paper, we propose a transformer-based procedure for the efficient registration of non-rigid 3D point clouds. The proposed approach is data-driven and, for the first time, adopts the transformer architecture for the registration task. Our method is general and applies to different settings. Given a fixed template with some desired properties (e.g. skinning weights or other animation cues), we can register raw acquired data to it, thereby transferring all the template properties to the input geometry. Alternatively, given a pair of shapes, our method can register the first onto the second (or vice versa), obtaining a high-quality dense correspondence between the two. In both contexts, the quality of our results enables us to target real applications such as texture transfer and shape interpolation. Furthermore, we show that including an estimation of the underlying density of the surface eases the learning process. By exploiting the potential of this architecture, we can train our model requiring only a sparse set of ground-truth correspondences (10∼20% of the total points). The proposed model and the analysis that we perform pave the way for future exploration of transformer-based architectures for registration and matching applications. Qualitative and quantitative evaluations demonstrate that our pipeline outperforms state-of-the-art methods for deformable and unordered 3D data registration on different datasets and scenarios.
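
    A minimal sketch of a cross-attention registration skeleton, assuming the source cloud attends to the template and predicts per-point displacements; the layer sizes and class name are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

class RegistrationTransformer(nn.Module):
    def __init__(self, d_model=128, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(3, d_model)        # lift xyz to features
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads,
                                                batch_first=True)
        self.head = nn.Linear(d_model, 3)         # per-point displacement

    def forward(self, source, template):
        # source, template: (B, N, 3) point clouds
        q = self.embed(source)                    # queries from the source
        kv = self.embed(template)                 # keys/values from the template
        attended, _ = self.cross_attn(q, kv, kv)  # source attends to template
        return source + self.head(attended)       # deformed source points

# Toy usage: register a random source cloud onto a template.
model = RegistrationTransformer()
src, tmpl = torch.randn(1, 500, 3), torch.randn(1, 500, 3)
registered = model(src, tmpl)                     # (1, 500, 3)
```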

    Differentiable Graph Module (DGM) for Graph Convolutional Networks

    Graph deep learning has recently emerged as a powerful concept that generalizes successful deep neural architectures to non-Euclidean structured data. Such methods have shown promising results on a broad spectrum of applications ranging from social science, biomedicine, and particle physics to computer vision, graphics, and chemistry. One of the limitations of the majority of current graph neural network architectures is that they are often restricted to the transductive setting and rely on the assumption that the underlying graph is known and fixed. In many settings, such as those arising in medical and healthcare applications, this assumption does not necessarily hold, since the graph may be noisy, partially or even completely unknown, and one is thus interested in inferring it from the data. This is especially important in inductive settings when dealing with nodes not present in the graph at training time. Furthermore, sometimes such a graph itself may convey insights that are even more important than the downstream task. In this paper, we introduce the Differentiable Graph Module (DGM), a learnable function that predicts the edge probabilities of the graph relevant to the task, which can be combined with convolutional graph neural network layers and trained in an end-to-end fashion. We provide an extensive evaluation of applications from the domains of healthcare (disease prediction), brain imaging (gender and age prediction), computer graphics (3D point cloud segmentation), and computer vision (zero-shot learning). We show that our model provides a significant improvement over baselines both in transductive and inductive settings and achieves state-of-the-art results.
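
    A hedged sketch of the core idea: a learnable function mapping node features to edge probabilities, sparsified to the k most probable neighbours. The embedding, temperature parameterization, and top-k choice are illustrative assumptions, not DGM's exact design:

```python
import torch
import torch.nn as nn

class DifferentiableGraphModule(nn.Module):
    def __init__(self, in_dim, embed_dim=32, k=8):
        super().__init__()
        self.embed = nn.Linear(in_dim, embed_dim)   # learned embedding space
        self.log_t = nn.Parameter(torch.zeros(()))  # learnable temperature
        self.k = k

    def forward(self, x):
        # x: (N, in_dim) node features
        z = self.embed(x)                           # (N, embed_dim)
        dist = torch.cdist(z, z)                    # pairwise distances
        prob = torch.exp(-dist * self.log_t.exp())  # edge prob. decays with distance
        topk = prob.topk(self.k, dim=-1)            # keep k most probable neighbours
        return topk.indices, topk.values            # (N, k) ids, differentiable weights

# Toy usage: infer a latent graph over 100 nodes with 16-dim features,
# then feed (indices, weights) to a downstream graph convolution layer.
dgm = DifferentiableGraphModule(in_dim=16)
edge_index, edge_weight = dgm(torch.randn(100, 16))
```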