Spectral goodness of fit for network models
We introduce a new statistic, 'spectral goodness of fit' (SGOF), to measure
how well a network model explains the structure of an observed network. SGOF
provides an absolute measure of fit, analogous to the standard R-squared in
linear regression. Additionally, as it takes advantage of the properties of the
spectrum of the graph Laplacian, it is suitable for comparing network models of
diverse functional forms, including both fitted statistical models and
algorithmic generative models of networks. After introducing, defining, and
providing guidance for interpreting SGOF, we illustrate the properties of the
statistic with a number of examples and comparisons to existing techniques. We
show that such a spectral approach to assessing model fit fills gaps left by
earlier methods and can be widely applied.
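To make the general idea concrete, here is a minimal sketch of an R-squared-style spectral fit statistic: it compares the Laplacian spectrum of an observed graph with spectra of graphs sampled from a candidate model, normalized by the error of a reference null model. The function names and the exact error measure are assumptions for illustration, not the authors' SGOF definition.

```python
import numpy as np

def laplacian_spectrum(adj):
    """Sorted eigenvalues of the unnormalized graph Laplacian L = D - A."""
    deg = np.diag(adj.sum(axis=1))
    return np.sort(np.linalg.eigvalsh(deg - adj))

def spectral_fit(observed_adj, model_adjs, null_adjs):
    """R-squared-style fit: 1 - (spectral error of model) / (error of null).

    `model_adjs` and `null_adjs` are lists of adjacency matrices sampled
    from the candidate model and from a reference null model.
    (Illustrative sketch only -- not the authors' exact SGOF statistic.)
    """
    obs = laplacian_spectrum(observed_adj)
    def err(samples):
        return np.mean([np.linalg.norm(obs - laplacian_spectrum(a))
                        for a in samples])
    return 1.0 - err(model_adjs) / err(null_adjs)
```

A model that reproduces the observed spectrum exactly scores 1, while a model no better than the null scores 0, mirroring the R-squared analogy in the abstract.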
Encoding Robust Representation for Graph Generation
Generative networks have made it possible to generate meaningful signals, such
as images and text, from simple noise. Recently, generative methods based on
GAN and VAE were developed for graphs and graph signals. However, the
mathematical properties of these methods are unclear, and training good
generative models is difficult. This work proposes a graph generation model
that uses a recent adaptation of Mallat's scattering transform to graphs. The
proposed model is naturally composed of an encoder and a decoder. The encoder
is a Gaussianized graph scattering transform, which is robust to signal and
graph manipulation. The decoder is a simple fully connected network that is
adapted to specific tasks, such as link prediction, signal generation on graphs
and full graph and signal generation. The training of our proposed system is
efficient since it is only applied to the decoder and the hardware requirements
are moderate. Numerical results demonstrate state-of-the-art performance of the
proposed system for both link prediction and graph and signal generation.
Comment: 9 pages, 7 figures, 6 tables
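The encoder described above builds on a graph scattering transform. As a rough illustration of that family of encoders (not the paper's exact Gaussianized construction), the sketch below forms diffusion wavelets from a lazy random-walk matrix and collects means of modulus-filtered signals as scattering coefficients; the scale choices and function names are assumptions.

```python
import numpy as np

def graph_wavelets(adj, scales=(1, 2, 4)):
    """Diffusion wavelets Psi_s = T^s - T^(2s) built from the lazy
    random-walk matrix T = (I + D^-1 A) / 2.
    (Simplified sketch of a diffusion-scattering construction.)"""
    deg = adj.sum(axis=1, keepdims=True)
    T = 0.5 * (np.eye(len(adj)) + adj / np.maximum(deg, 1))
    return [np.linalg.matrix_power(T, s) - np.linalg.matrix_power(T, 2 * s)
            for s in scales]

def scatter(adj, signal, depth=2):
    """Scattering coefficients up to `depth`: means of cascaded |Psi x|.
    The modulus nonlinearity between layers gives the transform its
    stability to signal and graph perturbations."""
    wavelets = graph_wavelets(adj)
    coeffs, layer = [signal.mean()], [signal]
    for _ in range(depth):
        layer = [np.abs(P @ x) for x in layer for P in wavelets]
        coeffs.extend(x.mean() for x in layer)
    return np.array(coeffs)
```

Because the encoder has no trained parameters, only the decoder needs training, which matches the efficiency claim in the abstract.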
LightLM: A Lightweight Deep and Narrow Language Model for Generative Recommendation
This paper presents LightLM, a lightweight Transformer-based language model
for generative recommendation. While Transformer-based generative modeling has
gained importance in various AI sub-fields such as NLP and vision, generative
recommendation is still in its infancy due to its unique demand on personalized
generative modeling. Existing works on generative recommendation often use
NLP-oriented Transformer architectures such as T5, GPT, LLaMA and M6, which are
heavy-weight and are not specifically designed for recommendation tasks.
LightLM tackles this issue by introducing a lightweight deep and narrow
Transformer architecture, which is specifically tailored for direct generation
of recommendation items. This structure is especially apt for straightforward
generative recommendation and stems from the observation that a language model
does not have to be too wide for this task, as the input predominantly consists
of short tokens that are well-suited for the model's capacity. We also show
that our devised user and item ID indexing methods, i.e., Spectral
Collaborative Indexing (SCI) and Graph Collaborative Indexing (GCI), enable
the deep and narrow Transformer architecture to outperform large-scale language
models for recommendation. In addition, to address the hallucination problem of
generating items as output, we propose a constrained generation process for
generative recommenders. Experiments on real-world datasets show that LightLM
outperforms various competitive baselines in terms of both recommendation
accuracy and efficiency. The code can be found at
https://github.com/dongyuanjushi/LightLM
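A common way to realize the kind of constrained generation mentioned above is trie-constrained decoding: build a prefix trie over all valid item-ID token sequences and mask the model's logits at each step so only valid continuations can be emitted, which rules out hallucinated item IDs by construction. The sketch below is a generic illustration of that idea, not LightLM's implementation; all names are assumptions.

```python
import numpy as np

def build_trie(item_id_sequences):
    """Prefix trie over valid item-ID token sequences (nested dicts)."""
    root = {}
    for seq in item_id_sequences:
        node = root
        for tok in seq:
            node = node.setdefault(tok, {})
    return root

def constrained_step(logits, trie, prefix):
    """Greedy decoding step: mask logits so only tokens that extend a
    valid item-ID prefix survive, then pick the best surviving token.
    (Illustrative sketch of trie-constrained decoding.)"""
    node = trie
    for tok in prefix:
        node = node[tok]           # walk the trie along the prefix so far
    allowed = list(node.keys())
    masked = np.full_like(logits, -np.inf)
    masked[allowed] = logits[allowed]
    return int(np.argmax(masked))
```

For example, if every valid item ID starts with token 1, the first step can only emit token 1 regardless of how much probability mass the raw logits put elsewhere.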
A generative model for protein contact networks
In this paper we present a generative model for protein contact networks. The
soundness of the proposed model is investigated by focusing primarily on
mesoscopic properties elaborated from the spectra of the graph Laplacian. To
complement the analysis, we also study classical topological descriptors, such
as shortest-path statistics and modularity.
Our experiments show that the proposed model yields a considerable improvement
over two suitably chosen generative mechanisms, approximating real protein
contact networks more closely in terms of diffusion properties derived from the
Laplacian spectra. However, like the other models considered, it does not
reproduce the shortest-path structure with sufficient accuracy. To compensate
for this drawback, we designed a second step involving a targeted edge
reconfiguration process. The ensemble of reconfigured networks shows
statistically significant improvements.
As a byproduct of our study, we demonstrate that modularity, a well-known
property of proteins, does not entirely explain the actual network architecture
characterizing protein contact networks. In fact, we conclude that modularity,
intended as a quantification of an underlying community structure, should be
considered as an emergent property of the structural organization of proteins.
Interestingly, such a property is suitably optimized in protein contact
networks together with path efficiency.
Comment: 18 pages, 67 references
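Edge reconfiguration steps of the kind described are often built from degree-preserving double-edge swaps, which rewire a network while leaving every node's degree (and hence much of the spectrum's gross structure) intact. The sketch below shows that generic primitive; it is an assumption-laden illustration, not the authors' targeted procedure, which additionally steers the swaps toward the desired shortest-path statistics.

```python
import random

def double_edge_swap(edges, n_swaps, seed=0):
    """Degree-preserving rewiring: repeatedly pick two edges (a,b), (c,d)
    and replace them with (a,d), (c,b), skipping any swap that would
    create a self-loop or a duplicate edge.
    (Generic sketch of a reconfiguration primitive.)"""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = {frozenset(e) for e in edges}
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:          # would create a self-loop
            continue
        new1, new2 = frozenset((a, d)), frozenset((c, b))
        if new1 in edge_set or new2 in edge_set:  # would duplicate an edge
            continue
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {new1, new2}
        edges[i], edges[j] = (a, d), (c, b)
    return edges
```

A targeted variant would accept a swap only when it moves a chosen statistic (e.g., average shortest-path length) toward that of the reference network.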
Spectral Detection on Sparse Hypergraphs
We consider the problem of the assignment of nodes into communities from a
set of hyperedges, where every hyperedge is a noisy observation of the
community assignment of the adjacent nodes. We focus in particular on the
sparse regime where the number of edges is of the same order as the number of
vertices. We propose a spectral method based on a generalization of the
non-backtracking Hashimoto matrix into hypergraphs. We analyze its performance
on a planted generative model and compare it with other spectral methods and
with Bayesian belief propagation (which was conjectured to be asymptotically
optimal for this model). We conclude that the proposed spectral method detects
communities whenever belief propagation does, while having the important
advantages of being simpler, entirely nonparametric, and able to learn the
rule according to which the hyperedges were generated, without prior
information.
Comment: 8 pages, 5 figures
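For ordinary graphs, the non-backtracking (Hashimoto) matrix that the abstract generalizes to hypergraphs is indexed by directed edges: entry B[(u→v), (v→w)] is 1 whenever w ≠ u, so a walk may continue from an edge but never immediately retrace it. The sketch below builds this matrix densely for a small graph; the hypergraph generalization in the paper is not reproduced here.

```python
import numpy as np

def hashimoto_matrix(edges):
    """Dense non-backtracking (Hashimoto) matrix of an undirected graph.

    Rows/columns are indexed by directed edges; B[(u->v), (v->w)] = 1
    iff w != u. Its leading eigenvectors underlie non-backtracking
    spectral community detection on sparse graphs.
    """
    directed = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    idx = {e: i for i, e in enumerate(directed)}
    B = np.zeros((len(directed), len(directed)))
    for (u, v) in directed:
        for (v2, w) in directed:
            if v2 == v and w != u:
                B[idx[(u, v)], idx[(v2, w)]] = 1.0
    return B
```

On a triangle, every directed edge has exactly one non-backtracking continuation, so every row of B sums to 1.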
Neural 3D Morphable Models: Spiral Convolutional Networks for 3D Shape Representation Learning and Generation
Generative models for 3D geometric data arise in many important applications
in 3D computer vision and graphics. In this paper, we focus on 3D deformable
shapes that share a common topological structure, such as human faces and
bodies. Morphable Models and their variants, despite their linear formulation,
have been widely used for shape representation, while most of the recently
proposed nonlinear approaches resort to intermediate representations, such as
3D voxel grids or 2D views. In this work, we introduce a novel graph
convolutional operator, acting directly on the 3D mesh, that explicitly models
the inductive bias of the fixed underlying graph. This is achieved by enforcing
consistent local orderings of the vertices of the graph, through the spiral
operator, thus breaking the permutation invariance property that is adopted by
all the prior work on Graph Neural Networks. Our operator comes by construction
with desirable properties (anisotropic, topology-aware, lightweight,
easy-to-optimise), and by using it as a building block for traditional deep
generative architectures, we demonstrate state-of-the-art results on a variety
of 3D shape datasets compared to the linear Morphable Model and other graph
convolutional operators.
Comment: to appear at ICCV 2019
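The spiral operator described above can be reduced to a very small core: for each vertex, gather the features of the vertices in its fixed, precomputed spiral ordering, concatenate them, and apply one shared linear map. The sketch below assumes the spiral index table is already computed and of uniform length per vertex; it is a minimal illustration, not the paper's full operator.

```python
import numpy as np

def spiral_conv(features, spirals, weight, bias):
    """Spiral convolution core.

    features: (V, C_in) vertex features.
    spirals:  (V, L) integer table -- for each vertex, the L vertex indices
              of its fixed spiral ordering (vertex itself first).
    weight:   (L * C_in, C_out) shared linear map; bias: (C_out,).

    Because the spiral ordering is consistent across the fixed mesh
    topology, the same weight sees corresponding neighbors in the same
    slots, which is what breaks permutation invariance.
    """
    V, _ = features.shape
    gathered = features[spirals].reshape(V, -1)  # (V, L * C_in)
    return gathered @ weight + bias
```

Stacking such layers with nonlinearities yields the anisotropic, topology-aware building block used in the generative architectures the abstract evaluates.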