Encoding Robust Representation for Graph Generation
Generative networks have made it possible to generate meaningful signals such
as images and texts from simple noise. Recently, generative methods based on
GAN and VAE were developed for graphs and graph signals. However, the
mathematical properties of these methods are unclear, and training good
generative models is difficult. This work proposes a graph generation model
that uses a recent adaptation of Mallat's scattering transform to graphs. The
proposed model is naturally composed of an encoder and a decoder. The encoder
is a Gaussianized graph scattering transform, which is robust to signal and
graph manipulation. The decoder is a simple fully connected network that is
adapted to specific tasks, such as link prediction, signal generation on graphs
and full graph and signal generation. The training of our proposed system is
efficient since it is only applied to the decoder and the hardware requirements
are moderate. Numerical results demonstrate state-of-the-art performance of the
proposed system for both link prediction and graph and signal generation.
Comment: 9 pages, 7 figures, 6 tables
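The split the abstract describes, a fixed (untrained) scattering-style encoder feeding a small trainable decoder, can be sketched as follows. This is a hypothetical simplification: the diffusion-and-moments encoder below only illustrates the idea of a training-free, permutation-invariant graph embedding, not the paper's actual Gaussianized graph scattering transform, and `MLPDecoder` is a stand-in for the task-specific fully connected decoder.

```python
import numpy as np

def scattering_encoder(adj, signal, depth=2):
    """Fixed scattering-style encoder (no trainable parameters):
    repeatedly diffuse the graph signal with a row-normalized adjacency,
    apply an absolute-value nonlinearity, and collect summary moments.
    A toy stand-in for the paper's graph scattering transform."""
    deg = adj.sum(axis=1, keepdims=True)
    p = adj / np.maximum(deg, 1.0)         # row-normalized diffusion operator
    feats = []
    x = signal
    for _ in range(depth):
        x = np.abs(p @ x)                  # diffuse, then nonlinearity
        feats.extend([x.mean(), x.std()])  # permutation-invariant moments
    return np.array(feats)

class MLPDecoder:
    """Tiny fully connected decoder; in the paper's setup only this
    component would be trained, which keeps training cheap."""
    def __init__(self, in_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=(in_dim, out_dim))
        self.b = np.zeros(out_dim)

    def __call__(self, z):
        return z @ self.w + self.b

# A 3-node path graph with a scalar signal on each node.
adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
sig = np.array([[1.], [0.], [2.]])
code = scattering_encoder(adj, sig)        # fixed embedding, no gradients
decoder = MLPDecoder(code.shape[0], 3)
out = decoder(code)                        # e.g. a reconstructed node signal
```

Because the encoder has no parameters, gradients only flow through the decoder, which is why the abstract can claim efficient training and moderate hardware requirements.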
Learning Multimodal Graph-to-Graph Translation for Molecular Optimization
We view molecular optimization as a graph-to-graph translation problem. The
goal is to learn to map from one molecular graph to another with better
properties based on an available corpus of paired molecules. Since molecules
can be optimized in different ways, there are multiple viable translations for
each input graph. A key challenge is therefore to model diverse translation
outputs. Our primary contributions include a junction tree encoder-decoder for
learning diverse graph translations along with a novel adversarial training
method for aligning distributions of molecules. Diverse output distributions in
our model are explicitly realized by low-dimensional latent vectors that
modulate the translation process. We evaluate our model on multiple molecular
optimization tasks and show that our model outperforms previous
state-of-the-art baselines.
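The one-to-many translation the abstract describes can be sketched with a toy encoder-decoder in which a sampled latent vector modulates the decoder. All names and dimensions here are illustrative assumptions: a fixed-size feature vector stands in for the junction-tree encoding of a molecular graph, and the point is only that different latent samples `z` yield different outputs for the same input.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(graph_vec, w_enc):
    """Deterministic encoding of the input graph (a feature vector
    stands in for the paper's junction-tree encoding)."""
    return np.tanh(graph_vec @ w_enc)

def translate(graph_vec, w_enc, w_dec, z):
    """Decode conditioned on a latent vector z: distinct z values give
    distinct translated graphs for the same input, which is how the
    model represents multiple viable optimizations."""
    h = encode(graph_vec, w_enc)
    return np.tanh(np.concatenate([h, z]) @ w_dec)

d_in, d_h, d_z, d_out = 8, 4, 2, 8       # illustrative sizes
w_enc = rng.normal(size=(d_in, d_h))
w_dec = rng.normal(size=(d_h + d_z, d_out))
x = rng.normal(size=d_in)                 # one input "molecule"

# Sampling several latents produces a set of candidate translations.
outs = [translate(x, w_enc, w_dec, rng.normal(size=d_z)) for _ in range(3)]
```

In the actual model, the distribution of these latent-conditioned outputs is additionally aligned with the distribution of real optimized molecules via the adversarial training the abstract mentions; that step is omitted here.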