GraphMaker: Can Diffusion Models Generate Large Attributed Graphs?
Large-scale graphs with node attributes are fundamental in real-world
scenarios, such as social and financial networks. The generation of synthetic
graphs that emulate real-world ones is pivotal in graph machine learning,
aiding network evolution understanding and data utility preservation when
original data cannot be shared. Traditional models for graph generation suffer
from limited model capacity. Recent developments in diffusion models have shown
promise in merely graph structure generation or the generation of small
molecular graphs with attributes. However, their applicability to large
attributed graphs remains unaddressed due to challenges in capturing intricate
patterns and scalability. This paper introduces GraphMaker, a novel diffusion
model tailored for generating large attributed graphs. We study the diffusion
models that either couple or decouple graph structure and node attribute
generation to address their complex correlation. We also employ node-level
conditioning and adopt a minibatch strategy for scalability. We further propose
a new evaluation pipeline using models trained on generated synthetic graphs
and tested on original graphs to evaluate the quality of synthetic data.
Empirical evaluations on real-world datasets showcase GraphMaker's superiority
in generating realistic and diverse large attributed graphs beneficial for
downstream tasks.
Comment: Code available at https://github.com/Graph-COM/GraphMake
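The evaluation pipeline described above (train a model on generated synthetic graphs, then test it on the original graphs) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy node features and the nearest-centroid classifier are hypothetical stand-ins for a generated graph's node attributes and a downstream node-classification model.

```python
# Sketch of train-on-synthetic / test-on-real evaluation of synthetic data.
# All data and the classifier here are hypothetical illustrations.

def centroids(features, labels):
    """Per-class mean feature vectors (the 'training' step)."""
    sums, counts = {}, {}
    for x, y in zip(features, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(model, x):
    """Assign x to the class with the nearest centroid."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda y: dist(model[y]))

# "Train" on synthetic node features, evaluate on the original graph's nodes.
synthetic_X = [[0.1, 0.0], [0.2, 0.1], [0.9, 1.0], [1.0, 0.8]]
synthetic_y = [0, 0, 1, 1]
real_X = [[0.0, 0.2], [0.8, 0.9]]
real_y = [0, 1]

model = centroids(synthetic_X, synthetic_y)
accuracy = sum(predict(model, x) == y
               for x, y in zip(real_X, real_y)) / len(real_y)
print(accuracy)  # 1.0 on this toy data
```

High test accuracy on the original data suggests the synthetic graphs preserved the attribute/label structure that the downstream task depends on, which is the intuition behind this evaluation protocol.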
The Advantage of Evidential Attributes in Social Networks
Nowadays, there are many approaches designed for the task of detecting
communities in social networks. Among them, some methods only consider the
topological graph structure, while others make use of both the graph structure
and the node attributes. In real-world networks, there are many uncertain and
noisy attributes in the graph. In this paper, we present how we detect
communities in graphs with uncertain attributes. In the first step, numerical,
probabilistic and evidential attributes are generated according to the graph
structure. In the second step, some noise is added to the attributes. We
perform experiments on graphs with different types of attributes and compare
the detection results in terms of the Normalized Mutual Information (NMI)
values. The experimental results show that clustering with evidential
attributes gives better results than clustering with probabilistic or
numerical attributes. This illustrates the advantage of evidential attributes.
Comment: 20th International Conference on Information Fusion, Jul 2017, Xi'an, China
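The NMI measure used for comparison above can be computed directly from two cluster assignments. The following is a minimal sketch (geometric-mean normalization; other normalizations exist), not the evaluation code of the paper:

```python
from collections import Counter
from math import log

def nmi(labels_a, labels_b):
    """Normalized Mutual Information between two clusterings,
    given as per-node label lists. Minimal illustrative version."""
    n = len(labels_a)
    ca, cb = Counter(labels_a), Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))
    # Mutual information I(A;B) from the joint and marginal counts
    mi = sum((c / n) * log((c * n) / (ca[a] * cb[b]))
             for (a, b), c in joint.items())
    # Entropies H(A) and H(B)
    ha = -sum((c / n) * log(c / n) for c in ca.values())
    hb = -sum((c / n) * log(c / n) for c in cb.values())
    if ha == 0 or hb == 0:  # a clustering with a single cluster
        return 1.0 if ha == hb else 0.0
    return mi / ((ha * hb) ** 0.5)  # geometric-mean normalization

print(nmi([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0: same partition, relabeled
print(nmi([0, 0, 1, 1], [0, 1, 0, 1]))  # 0.0: independent partitions
```

NMI is invariant to cluster relabeling, which is why it is a standard choice for comparing detected communities against ground truth.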
Joint Video and Text Parsing for Understanding Events and Answering Queries
We propose a framework for parsing video and text jointly for understanding
events and answering user queries. Our framework produces a parse graph that
represents the compositional structures of spatial information (objects and
scenes), temporal information (actions and events) and causal information
(causalities between events and fluents) in the video and text. The knowledge
representation of our framework is based on a spatial-temporal-causal And-Or
graph (S/T/C-AOG), which jointly models possible hierarchical compositions of
objects, scenes and events as well as their interactions and mutual contexts,
and specifies the prior probabilistic distribution of the parse graphs. We
present a probabilistic generative model for joint parsing that captures the
relations between the input video/text, their corresponding parse graphs and
the joint parse graph. Based on the probabilistic model, we propose a joint
parsing system consisting of three modules: video parsing, text parsing and
joint inference. Video parsing and text parsing produce two parse graphs from
the input video and text respectively. The joint inference module produces a
joint parse graph by performing matching, deduction and revision on the video
and text parse graphs. The proposed framework has the following objectives:
Firstly, we aim at deep semantic parsing of video and text that goes beyond the
traditional bag-of-words approaches; Secondly, we perform parsing and reasoning
across the spatial, temporal and causal dimensions based on the joint S/T/C-AOG
representation; Thirdly, we show that deep joint parsing facilitates subsequent
applications such as generating narrative text descriptions and answering
queries in the forms of who, what, when, where and why. We empirically
evaluated our system based on comparison against ground-truth as well as
accuracy of query answering and obtained satisfactory results.
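The And-Or graph at the heart of the representation above can be illustrated with a toy example: an And-node requires all of its children (a composition), while an Or-node selects exactly one (an alternative). The grammar symbols below are hypothetical and only show the mechanism, not the paper's S/T/C-AOG:

```python
# Toy And-Or graph: enumerate all terminal compositions of a symbol.
# Symbols absent from the grammar are treated as terminals.

def expand(grammar, symbol):
    """Return every sequence of terminals derivable from `symbol`."""
    kind, children = grammar.get(symbol, ("terminal", None))
    if kind == "terminal":
        return [[symbol]]
    if kind == "or":
        # An Or-node contributes the expansions of any one child.
        return [seq for c in children for seq in expand(grammar, c)]
    # An And-node concatenates one expansion from each child, in order.
    results = [[]]
    for c in children:
        results = [r + seq for r in results for seq in expand(grammar, c)]
    return results

grammar = {
    "event":       ("or",  ["fetch_water", "pour_water"]),
    "fetch_water": ("and", ["approach_dispenser", "fill_cup"]),
    "pour_water":  ("and", ["hold_kettle", "fill_cup"]),
}
print(expand(grammar, "event"))
# [['approach_dispenser', 'fill_cup'], ['hold_kettle', 'fill_cup']]
```

A parse graph in this setting corresponds to picking one branch at every Or-node, which is what makes it possible to place a prior probability distribution over parse graphs.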
Auto-Encoding Scene Graphs for Image Captioning
We propose Scene Graph Auto-Encoder (SGAE) that incorporates the language
inductive bias into the encoder-decoder image captioning framework for more
human-like captions. Intuitively, we humans use the inductive bias to compose
collocations and contextual inference in discourse. For example, when we see
the relation `person on bike', it is natural to replace `on' with `ride' and
infer `person riding bike on a road' even if the `road' is not evident.
Therefore, exploiting such bias as a language prior is expected to help
conventional encoder-decoder models be less likely to overfit to the dataset
bias and focus on reasoning. Specifically, we use the scene graph, a directed
graph where an object node is connected by adjective nodes and relationship
nodes, to represent the complex structural layout of both image and sentence.
In the textual domain, we use SGAE to learn a dictionary that helps to
reconstruct sentences, where the dictionary encodes the desired language
prior; in the vision-language domain, we use the shared dictionary to guide
the encoder-decoder. Thanks to the scene graph representation and shared
dictionary, the inductive bias is transferred across domains in principle. We
validate the effectiveness of SGAE on the challenging MS-COCO image captioning
benchmark, e.g., our SGAE-based single model achieves a new state-of-the-art
CIDEr-D score on the Karpathy split, and a competitive CIDEr-D (c40) score on
the official server even compared to ensemble models.
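The shared-dictionary idea can be sketched as re-expressing a feature vector as an attention-weighted sum of learned dictionary entries, so that decoding works from the dictionary's "language prior" rather than from the raw input. The code below is a hypothetical toy version of that mechanism only; the dictionary values, sizes, and scoring function are illustrative assumptions, not SGAE's architecture:

```python
from math import exp

def reencode(x, dictionary):
    """Re-express x as a softmax-weighted sum of dictionary rows.
    Hypothetical sketch of dictionary-based re-encoding."""
    # Dot-product score of x against each dictionary entry
    scores = [sum(a * b for a, b in zip(x, d)) for d in dictionary]
    m = max(scores)  # subtract max for numerical stability
    weights = [exp(s - m) for s in scores]
    z = sum(weights)
    weights = [w / z for w in weights]
    # The weighted combination of dictionary entries replaces x.
    return [sum(w * d[i] for w, d in zip(weights, dictionary))
            for i in range(len(x))]

D = [[1.0, 0.0], [0.0, 1.0]]     # toy "learned" dictionary
out = reencode([5.0, 0.0], D)
print(out)                        # pulled strongly toward the first entry
```

Because the output lives in the span of the dictionary entries, whatever regularities the dictionary captured during sentence reconstruction constrain the re-encoded features, which is one way a language prior can be shared across domains.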
Graph-based discovery of ontology change patterns
Ontologies can support a variety of purposes, ranging from capturing conceptual knowledge to the organisation of digital content and information. However, information systems are always subject to change and ontology change management can pose challenges. We investigate ontology change representation and discovery of change patterns.
Ontology changes are formalised as graph-based change logs. We use attributed graphs, which are typed over a generic graph with node and edge attribution. We analyse ontology change logs, represented as graphs, and identify frequent change sequences. Such sequences are applied as a reference in order to discover reusable, often domain-specific and usage-driven change patterns. We describe the pattern discovery algorithms and measure their performance using experimental results.
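The frequent-change-sequence step above can be sketched with simple contiguous n-gram counting over a change log. The log entries, sequence length, and support threshold below are hypothetical; the paper's algorithms operate on typed attributed graphs rather than flat strings:

```python
from collections import Counter

def frequent_sequences(log, length, min_support):
    """Count contiguous subsequences of `length` operations and keep
    those occurring at least `min_support` times."""
    grams = Counter(tuple(log[i:i + length])
                    for i in range(len(log) - length + 1))
    return {g: c for g, c in grams.items() if c >= min_support}

# Hypothetical ontology change log (operation names are illustrative).
change_log = ["add_class", "add_subclass_axiom", "rename",
              "add_class", "add_subclass_axiom", "delete_class"]
print(frequent_sequences(change_log, 2, 2))
# {('add_class', 'add_subclass_axiom'): 2}
```

A recurring pair like the one found here is the kind of candidate that could then be generalised into a reusable, domain-specific change pattern.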