
    An Attention-based Collaboration Framework for Multi-View Network Representation Learning

    Learning distributed node representations in networks has been attracting increasing attention recently due to its effectiveness in a variety of applications. Existing approaches usually study networks with a single type of proximity between nodes, which defines a single view of a network. In reality, however, there usually exist multiple types of proximity between nodes, yielding networks with multiple views. This paper studies learning node representations for networks with multiple views, aiming to infer robust node representations across the different views. We propose a multi-view representation learning approach, which promotes the collaboration of different views and lets them vote for the robust representations. During the voting process, an attention mechanism is introduced, which enables each node to focus on the most informative views. Experimental results on real-world networks show that the proposed approach outperforms existing state-of-the-art approaches for network representation learning with a single view as well as other competitive approaches with multiple views. Comment: CIKM 2017.
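
    A minimal sketch of the view-voting idea described above, assuming per-view node embeddings and a per-node softmax attention over views; the array shapes and the NumPy aggregation below are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def attention_combine(view_embeddings, attn_scores):
            """Combine per-view node embeddings with node-specific attention over views.

            view_embeddings: (num_views, num_nodes, dim) array of view-specific embeddings.
            attn_scores:     (num_nodes, num_views) array of unnormalized attention scores.
            """
            # Softmax over views for each node, so a node can focus on its most informative views.
            shifted = attn_scores - attn_scores.max(axis=1, keepdims=True)
            alpha = np.exp(shifted)
            alpha /= alpha.sum(axis=1, keepdims=True)
            # Weighted "vote": the robust embedding is the attention-weighted sum over views.
            return np.einsum('nv,vnd->nd', alpha, view_embeddings)

        # Toy usage: 3 views, 5 nodes, 4-dimensional embeddings.
        rng = np.random.default_rng(0)
        robust = attention_combine(rng.normal(size=(3, 5, 4)), rng.normal(size=(5, 3)))
        print(robust.shape)  # (5, 4)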

    Exponential Networks and Representations of Quivers

    We study the geometric description of BPS states in supersymmetric theories with eight supercharges in terms of geodesic networks on suitable spectral curves. We lift and extend several constructions of Gaiotto-Moore-Neitzke from gauge theory to local Calabi-Yau threefolds and related models. The differential is multi-valued on the covering curve and features a new type of logarithmic singularity in order to account for D0-branes and non-compact D4-branes, respectively. We describe local rules for the three-way junctions of BPS trajectories relative to a particular framing of the curve. We reproduce BPS quivers of local geometries and illustrate the wall-crossing of finite-mass bound states in several new examples. We describe first steps toward understanding the spectrum of framed BPS states in terms of such "exponential networks." Comment: 82 pages, 60 figures, typos fixed.
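
    As a hedged illustration of the multi-valued differential mentioned above, the exponential-networks literature typically works with a spectral curve in (C^*)^2 and a logarithmic one-form of the form below; the precise conventions here are an assumption and may differ from the paper's.

        \lambda^{(n)} = \left( \log y + 2\pi i\, n \right) \frac{dx}{x}, \qquad n \in \mathbb{Z},

    so that a BPS trajectory of type (ij, n) at phase \vartheta would satisfy

        \left( \log y_j - \log y_i + 2\pi i\, n \right) \frac{dx}{x} = e^{i\vartheta}\, dt .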

    Probabilistic Meta-Representations Of Neural Networks

    Existing Bayesian treatments of neural networks are typically characterized by weak prior and approximate posterior distributions according to which all the weights are drawn independently. Here, we consider a richer prior distribution in which units in the network are represented by latent variables, and the weights between units are drawn conditionally on the values of the collection of those variables. This allows rich correlations between related weights, and can be seen as realizing a function prior with a Bayesian complexity regularizer that ensures simple solutions. We illustrate the resulting meta-representations and representations, elucidating the power of this prior. Comment: presented at the UAI 2018 Uncertainty in Deep Learning Workshop (UDL, Aug. 2018).
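
    A minimal sketch of the kind of hierarchical weight prior described above, assuming Gaussian per-unit latent codes and weights drawn conditionally on the codes of the two units they connect; the inner-product conditional mean is an illustrative choice, not the paper's model.

        import numpy as np

        def sample_layer_weights(rng, n_in, n_out, latent_dim=8, sigma_w=0.1):
            """Sample a correlated weight matrix from a latent-variable prior sketch.

            Each unit has a latent code z; the weight between input unit i and output
            unit j is drawn around a mean that depends on (z_i, z_j), so related
            weights are correlated rather than independent.
            """
            z_in = rng.normal(size=(n_in, latent_dim))    # latent codes for input units
            z_out = rng.normal(size=(n_out, latent_dim))  # latent codes for output units
            mean = z_in @ z_out.T / np.sqrt(latent_dim)   # conditional mean for each weight
            return rng.normal(loc=mean, scale=sigma_w)    # (n_in, n_out) weight matrix

        rng = np.random.default_rng(0)
        W = sample_layer_weights(rng, n_in=20, n_out=10)
        print(W.shape)  # (20, 10)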

    Inducing Language Networks from Continuous Space Word Representations

    Recent advances in unsupervised feature learning have produced powerful latent representations of words. However, it is still not clear what makes one representation better than another, or how we can learn the ideal representation. Understanding the structure of the latent spaces these methods attain is key to any future advance in unsupervised learning. In this work, we introduce a new view of continuous-space word representations as language networks. We explore two techniques for creating language networks from learned features, inducing them for two popular word representation methods and examining the properties of the resulting networks. We find that the induced networks differ from language networks created by other methods, and that they contain meaningful community structure. Comment: 14 pages.
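
    One way to induce a language network from learned word vectors, sketched below under the assumption of a cosine-similarity k-nearest-neighbour graph; the vocabulary, vectors, and choice of k are placeholders rather than the paper's construction.

        import numpy as np

        def knn_language_network(word_vectors, k=5):
            """Build a k-nearest-neighbour graph over word embeddings.

            word_vectors: dict mapping word -> 1-D numpy array.
            Returns a dict mapping each word to its k most cosine-similar neighbours.
            """
            words = list(word_vectors)
            X = np.stack([word_vectors[w] for w in words])
            X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize rows
            sims = X @ X.T                                    # cosine similarities
            np.fill_diagonal(sims, -np.inf)                   # exclude self-loops
            return {w: [words[j] for j in np.argsort(sims[i])[-k:]]
                    for i, w in enumerate(words)}

        # Toy usage with random vectors standing in for learned word representations.
        rng = np.random.default_rng(0)
        vocab = {w: rng.normal(size=16) for w in ["cat", "dog", "car", "truck", "apple", "pear"]}
        print(knn_language_network(vocab, k=2))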

    Learning flexible representations of stochastic processes on graphs

    Graph convolutional networks adapt the architecture of convolutional neural networks to learn rich representations of data supported on arbitrary graphs by replacing the convolution operations with graph-dependent linear operations. However, these graph-dependent linear operations are developed for scalar functions supported on undirected graphs. We propose a class of linear operations for stochastic (time-varying) processes on directed (or undirected) graphs to be used in graph convolutional networks. We propose a parameterization of such linear operations using functional calculus to achieve arbitrarily low learning complexity. The proposed approach is shown to model richer behaviors and display greater flexibility in learning representations than product graph methods.
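
    A hedged sketch of a graph-dependent linear operation as a polynomial graph filter applied to a time-varying signal on a directed graph; the polynomial parameterization below is a common baseline, not the functional-calculus construction proposed in the paper.

        import numpy as np

        def graph_filter(S, coeffs, x):
            """Apply the polynomial filter sum_k coeffs[k] * S^k to a graph signal x.

            S:      (N, N) graph shift operator, e.g. the adjacency of a directed graph.
            coeffs: filter taps [c_0, c_1, ..., c_K].
            x:      (N,) or (N, T) signal; columns can be time steps of a process on the graph.
            """
            y = np.zeros_like(x, dtype=float)
            Sk_x = np.array(x, dtype=float)   # S^0 x
            for c in coeffs:
                y += c * Sk_x
                Sk_x = S @ Sk_x               # advance to the next power of S
            return y

        # Toy usage: a directed 4-cycle shift operator and a two-step time-varying signal.
        S = np.roll(np.eye(4), 1, axis=1)
        x = np.random.default_rng(0).normal(size=(4, 2))
        print(graph_filter(S, [0.5, 0.3, 0.2], x).shape)  # (4, 2)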