Learning Multimodal Graph-to-Graph Translation for Molecular Optimization
We view molecular optimization as a graph-to-graph translation problem. The
goal is to learn to map from one molecular graph to another with better
properties based on an available corpus of paired molecules. Since molecules
can be optimized in different ways, there are multiple viable translations for
each input graph. A key challenge is therefore to model diverse translation
outputs. Our primary contributions include a junction tree encoder-decoder for
learning diverse graph translations along with a novel adversarial training
method for aligning distributions of molecules. Diverse output distributions in
our model are explicitly realized by low-dimensional latent vectors that
modulate the translation process. We evaluate our model on multiple molecular
optimization tasks and show that our model outperforms previous
state-of-the-art baselines.
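The core idea of the abstract — diverse translations realized by a low-dimensional latent vector that modulates decoding — can be sketched as follows. All names, dimensions, and the linear encoder/decoder are illustrative stand-ins, not the paper's junction tree architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the real model uses a junction tree encoder-decoder.
ENC_DIM, LATENT_DIM, OUT_DIM = 8, 2, 8

W_dec = rng.standard_normal((OUT_DIM, ENC_DIM + LATENT_DIM))

def encode(graph_features):
    # Stand-in for the graph encoder: a fixed nonlinearity here.
    return np.tanh(graph_features)

def translate(graph_features, n_samples=3):
    """Decode several candidate translations of one input graph.

    Diversity is explicit: each sample draws a low-dimensional latent
    vector z, which modulates the (here, linear) decoder, so one input
    yields multiple distinct outputs."""
    h = encode(graph_features)
    outputs = []
    for _ in range(n_samples):
        z = rng.standard_normal(LATENT_DIM)  # low-dim latent vector
        outputs.append(np.tanh(W_dec @ np.concatenate([h, z])))
    return outputs

candidates = translate(rng.standard_normal(ENC_DIM))
```

Because only `z` varies between samples, the spread of the candidates directly reflects the latent dimension's role in generating diverse translations of a single input.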
Hierarchically Structured Reinforcement Learning for Topically Coherent Visual Story Generation
We propose a hierarchically structured reinforcement learning approach to
address the challenges of planning for generating coherent multi-sentence
stories for the visual storytelling task. Within our framework, the task of
generating a story given a sequence of images is divided across a two-level
hierarchical decoder. The high-level decoder constructs a plan by generating a
semantic concept (i.e., topic) for each image in sequence. The low-level
decoder generates a sentence for each image using a semantic compositional
network, which effectively grounds the sentence generation conditioned on the
topic. The two decoders are jointly trained end-to-end using reinforcement
learning. We evaluate our model on the visual storytelling (VIST) dataset.
Empirical results from both automatic and human evaluations demonstrate that
the proposed hierarchically structured reinforced training achieves
significantly better performance compared to a strong flat deep reinforcement
learning baseline.

Comment: Accepted to AAAI 201
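The two-level decoding scheme described above — a high-level planner that emits a topic per image, and a low-level generator that writes a sentence grounded in that topic — can be sketched as a toy pipeline. Every name, the topic lookup, and the template-based sentence generator are hypothetical simplifications of the paper's learned components:

```python
# Hypothetical topic vocabulary; the real model learns semantic concepts.
TOPICS = {"beach": ["sand", "waves"], "party": ["cake", "friends"]}

def high_level_plan(images):
    # High-level decoder: produce one topic per image in sequence.
    # Here a trivial lookup stands in for the learned planner.
    return [img["topic"] for img in images]

def low_level_sentence(image, topic):
    # Low-level decoder: generate a sentence conditioned on the topic,
    # mimicking how a semantic compositional network grounds generation.
    grounding = " and ".join(TOPICS[topic])
    return f"A photo of {image['object']} with {grounding}."

def generate_story(images):
    # The two stages run jointly: plan first, then realize each sentence.
    plan = high_level_plan(images)
    return [low_level_sentence(img, t) for img, t in zip(images, plan)]

story = generate_story([
    {"topic": "beach", "object": "a dog"},
    {"topic": "party", "object": "balloons"},
])
```

The split mirrors the hierarchy in the abstract: the plan fixes what each sentence is about, so the low-level generator only has to realize a sentence consistent with its assigned topic.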