Semantic Image Synthesis via Adversarial Learning
In this paper, we propose a way of synthesizing realistic images directly from a natural language description, which has many useful applications, e.g. intelligent image manipulation. We attempt to accomplish such synthesis: given a source image and a target text description, our model synthesizes images that meet two requirements: 1) being realistic while matching the target text description; 2) maintaining other image features that are irrelevant to the text description. The model should be able to disentangle the semantic information from the two modalities (image and text) and generate new images from the combined semantics. To achieve this, we propose an end-to-end neural architecture that leverages adversarial learning to automatically learn implicit loss functions, which are optimized to fulfill the two requirements above. We have evaluated our model on the Caltech-200 bird dataset and the Oxford-102 flower dataset, and have demonstrated that it is capable of synthesizing realistic images that match the given descriptions while still maintaining the other features of the original images.
Comment: Accepted to ICCV 2017
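
The adversarial setup sketched in this abstract can be pictured with a small code example: a generator that fuses an encoded source image with a sentence embedding, and a discriminator that scores (image, text) pairs so the generator is pushed toward both requirements at once (realism plus matching the target text). This is a hedged sketch in PyTorch, not the authors' architecture; all module shapes and names below are our own assumptions.

import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encodes a 64x64 source image, fuses it with a text embedding, decodes a new image."""
    def __init__(self, text_dim=128, img_channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(img_channels, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128 + text_dim, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, img_channels, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, image, text_emb):
        feat = self.encoder(image)                        # (B, 128, 16, 16) for 64x64 input
        t = text_emb[:, :, None, None].expand(-1, -1, feat.size(2), feat.size(3))
        return self.decoder(torch.cat([feat, t], dim=1))  # synthesized image

class Discriminator(nn.Module):
    """Scores how real an image looks AND how well it matches the text embedding."""
    def __init__(self, text_dim=128, img_channels=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(img_channels, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
        )
        self.head = nn.Conv2d(128 + text_dim, 1, 1)

    def forward(self, image, text_emb):
        feat = self.conv(image)
        t = text_emb[:, :, None, None].expand(-1, -1, feat.size(2), feat.size(3))
        return self.head(torch.cat([feat, t], dim=1)).mean(dim=[1, 2, 3])

Training such a pair with an adversarial loss over matched, mismatched, and generated (image, text) pairs is one common way to realize the "implicit loss functions" the abstract refers to.
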
Test of Nuclear Wave Functions for Pseudospin Symmetry
Using the fact that pseudospin is an approximate symmetry of the Dirac Hamiltonian with realistic scalar and vector mean fields, we derive the wave functions of the pseudospin partners of eigenstates of a realistic Dirac Hamiltonian and compare these wave functions with the wave functions of the Dirac eigenstates.
Comment: 11 pages, 4 figures, minor changes in text and figures to conform with PRL requirements
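
For readers unfamiliar with the terminology, the symmetry being tested can be stated explicitly. The notation below is the standard relativistic mean-field form and is our own summary, not an excerpt from the paper:

H = \boldsymbol{\alpha}\cdot\mathbf{p} + \beta\,\bigl(M + S(\mathbf{r})\bigr) + V(\mathbf{r}),
\qquad \Sigma(\mathbf{r}) \equiv V(\mathbf{r}) + S(\mathbf{r}).

Pseudospin symmetry becomes exact in the limit that \Sigma(\mathbf{r}) is a constant; pseudospin doublets such as (n_r, \ell, j = \ell + \tfrac{1}{2}) and (n_r - 1, \ell + 2, j = \ell + \tfrac{3}{2}) are then degenerate and share the pseudo-orbital angular momentum \tilde{\ell} = \ell + 1. In realistic nuclei \Sigma is small compared with V and S individually but not constant, which is why the symmetry, and hence the relation between partner wave functions, is only approximate.
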
A realistic assessment of methods for extracting gene/protein interactions from free text
Background: The automated extraction of gene and/or protein interactions from the literature is one of the most important targets of biomedical text mining research. In this paper we present a realistic evaluation of gene/protein interaction mining relevant to potential non-specialist users. Hence we have specifically avoided methods that are complex to install or require reimplementation, and we have coupled our chosen extraction methods with a state-of-the-art biomedical named entity tagger.

Results: Our results show that performance across different evaluation corpora is extremely variable; that the use of tagged (as opposed to gold standard) gene and protein names has a significant impact on performance, with a drop in F-score of over 20 percentage points being commonplace; and that a simple keyword-based benchmark algorithm, when coupled with a named entity tagger, outperforms two of the tools most widely used to extract gene/protein interactions.

Conclusion: In terms of availability, ease of use and performance, the potential non-specialist user community interested in automatically extracting gene and/or protein interactions from free text is poorly served by current tools and systems. The public release of extraction tools that are easy to install and use, and that achieve state-of-the-art levels of performance, should be treated as a high priority by the biomedical text mining community.
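
The keyword-based benchmark algorithm mentioned above is simple enough to sketch. The following is a rough illustration of the idea under our own assumptions (it is not the benchmark or tagger evaluated in the paper): pair up tagged gene/protein names that co-occur in a sentence containing an interaction keyword.

import itertools
import re

# Hypothetical keyword list; a real benchmark would use a curated lexicon.
INTERACTION_KEYWORDS = {"interacts", "binds", "phosphorylates", "activates", "inhibits"}

def extract_interactions(tagged_sentences):
    """tagged_sentences: list of (sentence_text, [tagged entity names]) pairs,
    e.g. the output of a biomedical named entity tagger."""
    candidates = []
    for text, entities in tagged_sentences:
        words = set(re.findall(r"[a-z]+", text.lower()))
        if words & INTERACTION_KEYWORDS and len(entities) >= 2:
            # Every unordered pair of distinct tagged names is a candidate interaction.
            candidates.extend(itertools.combinations(sorted(set(entities)), 2))
    return candidates

# Hypothetical tagger output:
sents = [("Raf1 phosphorylates MEK1 in vitro.", ["Raf1", "MEK1"])]
print(extract_interactions(sents))   # [('MEK1', 'Raf1')]

As the abstract notes, a baseline of this flavor, once coupled with a good named entity tagger, can outperform widely used extraction tools, and swapping gold-standard names for tagger output is where much of the F-score is lost.
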
Generative Adversarial Text to Image Synthesis
Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.
Comment: ICML 2016
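
A minimal picture of the conditioning mechanism in this family of models: the generator consumes a noise vector concatenated with a compressed sentence embedding and deconvolves it into pixels. The sketch below is our own illustration with assumed dimensions, not the exact network from the paper:

import torch
import torch.nn as nn

class TextConditionalGenerator(nn.Module):
    """Maps (noise z, text embedding) to a 32x32 image, 'from characters to pixels'."""
    def __init__(self, z_dim=100, text_dim=1024, cond_dim=128, img_channels=3):
        super().__init__()
        # Compress the raw text embedding before conditioning on it.
        self.compress = nn.Sequential(nn.Linear(text_dim, cond_dim), nn.LeakyReLU(0.2))
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim + cond_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, img_channels, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z, text_emb):
        cond = self.compress(text_emb)                      # (B, cond_dim)
        x = torch.cat([z, cond], dim=1)[:, :, None, None]   # (B, z_dim + cond_dim, 1, 1)
        return self.net(x)                                  # (B, 3, 32, 32)

# Usage with random tensors standing in for a learned text encoder:
g = TextConditionalGenerator()
images = g(torch.randn(4, 100), torch.randn(4, 1024))
print(images.shape)   # torch.Size([4, 3, 32, 32])

The discriminator in this line of work is typically also shown the text embedding, so that it can penalize images that look real but do not match the description.
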
