118 research outputs found
Bootstrapping Disjoint Datasets for Multilingual Multimodal Representation Learning
Recent work has highlighted the advantage of jointly learning grounded
sentence representations from multiple languages. However, the data used in
these studies has been limited to an aligned scenario: the same images
annotated with sentences in multiple languages. We focus on the more realistic
disjoint scenario in which there is no overlap between the images in
multilingual image-caption datasets. We confirm that training with aligned
data results in better grounded sentence representations than training with
disjoint data, as measured by image-sentence retrieval performance. In order
to close this gap in performance, we propose a pseudopairing method to generate
synthetically aligned English-German-image triplets from the disjoint sets.
The method works by first training a model on the disjoint data, and then
creating new triplets across datasets using sentence similarity under the
learned model. Experiments show that pseudopairs improve image-sentence
retrieval performance compared to disjoint training, despite requiring no
external data or models. However, we do find that using an external machine
translation model to generate the synthetic data sets results in better
performance. Comment: 10 pages
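The pseudopairing idea described above can be sketched as a nearest-neighbour match under the learned sentence encoder. This is a minimal illustration, not the authors' implementation: the toy random arrays stand in for sentence embeddings that the trained model would produce.

```python
import numpy as np

def pseudopair(emb_a, emb_b):
    """Match each sentence in set A to its most similar sentence in set B
    under cosine similarity, yielding synthetic cross-dataset pairs."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sim = a @ b.T                      # (len(A), len(B)) cosine similarities
    nearest = sim.argmax(axis=1)       # best match in B for each item in A
    return [(i, int(j), float(sim[i, j])) for i, j in enumerate(nearest)]

# Toy embeddings standing in for the model's sentence representations
rng = np.random.default_rng(0)
english = rng.normal(size=(4, 8))   # 4 English captions, 8-dim embeddings
german = rng.normal(size=(6, 8))    # 6 German captions from a disjoint set
pairs = pseudopair(english, german)
```

Each returned triple pairs an English caption (and, by extension, its image) with the closest German caption, which is how synthetic English-German-image triplets could be assembled.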
PanoGen: Text-Conditioned Panoramic Environment Generation for Vision-and-Language Navigation
Vision-and-Language Navigation (VLN) requires the agent to follow language
instructions to navigate through 3D environments. One main challenge in VLN is
the limited availability of photorealistic training environments, which makes
it hard to generalize to new and unseen environments. To address this problem,
we propose PanoGen, a generation method that can potentially create an infinite
number of diverse panoramic environments conditioned on text. Specifically, we
collect room descriptions by captioning the room images in existing
Matterport3D environments, and leverage a state-of-the-art text-to-image
diffusion model to generate the new panoramic environments. We use recursive
outpainting over the generated images to create consistent 360-degree panorama
views. Our new panoramic environments share similar semantic information with
the original environments by conditioning on text descriptions, which ensures
the co-occurrence of objects in the panorama follows human intuition, and
creates enough diversity in room appearance and layout with image outpainting.
Lastly, we explore two ways of utilizing PanoGen in VLN pre-training and
fine-tuning. We generate instructions for paths in our PanoGen environments
with a speaker built on a pre-trained vision-and-language model for VLN
pre-training, and augment the visual observation with our panoramic
environments during agents' fine-tuning to avoid overfitting to seen
environments. Empirically, learning with our PanoGen environments achieves the
new state-of-the-art on the Room-to-Room, Room-for-Room, and CVDN datasets.
Pre-training with our PanoGen speaker data is especially effective for CVDN,
which has under-specified instructions and needs commonsense knowledge. Lastly,
we show that the agent can benefit from training with more generated panoramic
environments, suggesting promising results for scaling up the PanoGen
environments. Comment: Project Webpage: https://pano-gen.github.io
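The recursive outpainting loop above can be sketched schematically. The `outpaint_step` function below is a placeholder for a text-conditioned diffusion outpainting call (the real system would invoke a diffusion model); the point of the sketch is only the recursion, where each new strip is conditioned on the rightmost columns of the growing canvas.

```python
import numpy as np

def outpaint_step(context_strip, width, rng):
    """Placeholder for a text-conditioned diffusion outpainting call:
    returns a new image strip extending the given context rightward."""
    h = context_strip.shape[0]
    return rng.random((h, width, 3))

def build_panorama(seed_image, target_width, strip=64, overlap=16, rng=None):
    """Grow a panorama by recursive outpainting: each step conditions only
    on the rightmost `overlap` columns of the current canvas."""
    if rng is None:
        rng = np.random.default_rng(0)
    canvas = seed_image
    while canvas.shape[1] < target_width:
        context = canvas[:, -overlap:]           # condition on the right edge
        new = outpaint_step(context, strip, rng)
        canvas = np.concatenate([canvas, new], axis=1)
    return canvas[:, :target_width]

pano = build_panorama(np.zeros((128, 128, 3)), target_width=512)
```

In the real pipeline the overlap region is what keeps adjacent strips visually consistent, so the completed 360-degree panorama has no seams between generations.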
MultiFusion: Fusing Pre-Trained Models for Multi-Lingual, Multi-Modal Image Generation
The recent popularity of text-to-image diffusion models (DMs) can largely be
attributed to the intuitive interface they provide to users. The intended
generation can be expressed in natural language, with the model producing
faithful interpretations of text prompts. However, expressing complex or
nuanced ideas in text alone can be difficult. To ease image generation, we
propose MultiFusion, which allows one to express complex and nuanced concepts
with arbitrarily interleaved inputs of multiple modalities and languages.
MultiFusion leverages pre-trained models and aligns them for integration into a
cohesive system, thereby avoiding the need for extensive training from scratch.
Our experimental results demonstrate the efficient transfer of capabilities
from individual modules to the downstream model. Specifically, the fusion of
all independent components allows the image generation module to utilize
multilingual, interleaved multimodal inputs despite being trained solely on
monomodal data in a single language.
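The "arbitrarily interleaved inputs" idea can be sketched as follows. This is a toy illustration under stated assumptions, not the MultiFusion architecture: the fixed random matrices stand in for frozen modality-specific encoders, and the only point shown is that segments of different modalities are projected into one shared space and concatenated in their original order.

```python
import numpy as np

DIM = 16  # assumed shared embedding width of the downstream generator

rng = np.random.default_rng(1)
# Hypothetical frozen encoders, stood in for by fixed random projections
text_proj = rng.normal(size=(8, DIM))    # maps 8-dim text features to DIM
image_proj = rng.normal(size=(12, DIM))  # maps 12-dim image features to DIM

def embed(segment):
    """Project one modality segment into the shared space, keeping order."""
    kind, feats = segment
    proj = text_proj if kind == "text" else image_proj
    return feats @ proj

# An interleaved prompt: text, then an image, then more text
prompt = [
    ("text", rng.normal(size=(3, 8))),    # 3 text tokens
    ("image", rng.normal(size=(4, 12))),  # 4 image patches
    ("text", rng.normal(size=(2, 8))),    # 2 text tokens
]
sequence = np.concatenate([embed(s) for s in prompt], axis=0)
```

The downstream image generator then consumes `sequence` as a single conditioning sequence, regardless of which modality or language each segment came from.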
Adversarial Propagation and Zero-Shot Cross-Lingual Transfer of Word Vector Specialization
Semantic specialization is the process of fine-tuning pre-trained
distributional word vectors using external lexical knowledge (e.g., WordNet) to
accentuate a particular semantic relation in the specialized vector space.
While post-processing specialization methods are applicable to arbitrary
distributional vectors, they are limited to updating only the vectors of words
occurring in external lexicons (i.e., seen words), leaving the vectors of all
other words unchanged. We propose a novel approach to specializing the full
distributional vocabulary. Our adversarial post-specialization method
propagates the external lexical knowledge to the full distributional space. We
exploit words seen in the resources as training examples for learning a global
specialization function. This function is learned by combining a standard
L2-distance loss with an adversarial loss: the adversarial component produces
more realistic output vectors. We show the effectiveness and robustness of the
proposed method across three languages and on three tasks: word similarity,
dialog state tracking, and lexical simplification. We report consistent
improvements over distributional word vectors and vectors specialized by other
state-of-the-art specialization frameworks. Finally, we also propose a
cross-lingual transfer method for zero-shot specialization which successfully
specializes a full target distributional space without any lexical knowledge in
the target language and without any bilingual data. Comment: Accepted at EMNLP 201
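The combined objective described in the abstract — an L2 distance term plus an adversarial term — can be written down in a few lines. This is a sketch of the loss only, with a hypothetical mixing weight `lam`; the generator and discriminator networks themselves are omitted, and the discriminator's outputs are represented by probabilities.

```python
import numpy as np

def l2_loss(pred, target):
    """Distance term pulling predicted vectors toward their lexically
    specialized targets."""
    return float(np.mean(np.sum((pred - target) ** 2, axis=1)))

def adversarial_loss(disc_scores):
    """Generator-side adversarial term: the specialization function is
    rewarded when the discriminator scores its outputs as 'real'
    specialized vectors (scores are probabilities in (0, 1))."""
    return float(-np.mean(np.log(disc_scores)))

def post_specialization_loss(pred, target, disc_scores, lam=1.0):
    """Combined objective: L2 distance plus a weighted adversarial
    component (lam is a hypothetical mixing weight)."""
    return l2_loss(pred, target) + lam * adversarial_loss(disc_scores)

rng = np.random.default_rng(0)
pred = rng.normal(size=(5, 4))       # specialized vectors for 5 seen words
target = rng.normal(size=(5, 4))     # gold specialized vectors
scores = rng.uniform(0.1, 0.9, 5)    # discriminator outputs on pred
loss = post_specialization_loss(pred, target, scores)
```

Minimizing this over the seen-word pairs trains the global specialization function, which can then be applied to every word in the distributional vocabulary, seen or unseen.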
Semantic Systems. The Power of AI and Knowledge Graphs
This open access book constitutes the refereed proceedings of the 15th International Conference on Semantic Systems, SEMANTiCS 2019, held in Karlsruhe, Germany, in September 2019. The 20 full papers and 8 short papers presented in this volume were carefully reviewed and selected from 88 submissions. They cover topics such as: web semantics and linked (open) data; machine learning and deep learning techniques; semantic information management and knowledge integration; terminology, thesaurus and ontology management; data mining and knowledge discovery; semantics in blockchain and distributed ledger technologies.
Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models
Pretrained machine learning models are known to perpetuate and even amplify
existing biases in data, which can result in unfair outcomes that ultimately
impact user experience. Therefore, it is crucial to understand the mechanisms
behind those prejudicial biases to ensure that model performance does not
result in discriminatory behaviour toward certain groups or populations. In
this work, we define gender bias as our case study. We quantify bias
amplification in pretraining and after fine-tuning on three families of
vision-and-language models. We investigate the connection, if any, between the
two learning stages, and evaluate how bias amplification reflects on model
performance. Overall, we find that bias amplification in pretraining and after
fine-tuning are independent. We then examine the effect of continued
pretraining on gender-neutral data, finding that this reduces group
disparities, i.e., promotes fairness, on VQAv2 and retrieval tasks without
significantly compromising task performance. Comment: To appear in EMNLP 202
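One simplified way to quantify the bias amplification that this abstract studies is to compare per-group prediction rates against the rates in the data. The metric below is an illustrative simplification inspired by standard bias-amplification measures, not the paper's exact definition: positive values mean the model exaggerates a skew already present in the data.

```python
import numpy as np

def bias_amplification(data_rates, model_rates):
    """Simplified bias-amplification score. Inputs are rates
    P(concept | group) as (n_groups=2, n_concepts) arrays; the score
    measures how much the model's per-group skew exceeds the data's."""
    data_rates = np.asarray(data_rates, float)
    model_rates = np.asarray(model_rates, float)
    data_skew = data_rates[0] - data_rates[1]    # data imbalance per concept
    model_skew = model_rates[0] - model_rates[1] # model imbalance per concept
    # Positive when the model pushes further in the data's skew direction
    return float(np.mean(np.sign(data_skew) * (model_skew - data_skew)))

# Toy example: two concepts, two gender groups
data = [[0.6, 0.3],   # group 0's rate for concepts A, B
        [0.4, 0.7]]   # group 1's rate for concepts A, B
model = [[0.8, 0.2],  # the model exaggerates both skews
         [0.2, 0.8]]
amp = bias_amplification(data, model)  # positive: bias is amplified
```

Under this kind of metric, the abstract's finding reads naturally: continued pretraining on gender-neutral data shrinks the model-side skew, reducing group disparities without hurting task accuracy.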
RS5M: A Large Scale Vision-Language Dataset for Remote Sensing Vision-Language Foundation Model
Pre-trained Vision-Language Foundation Models utilizing extensive image-text
paired data have demonstrated unprecedented image-text association
capabilities, achieving remarkable results across various downstream tasks. A
critical challenge is how to make use of existing large-scale pre-trained VLMs,
which are trained on common objects, to perform the domain-specific transfer
for accomplishing domain-related downstream tasks. In this paper, we propose a
new framework that includes the Domain Foundation Model (DFM), bridging the gap
between the General Foundation Model (GFM) and domain-specific downstream
tasks. Moreover, we present an image-text paired dataset in the field of remote
sensing (RS), RS5M, which has 5 million RS images with English descriptions.
The dataset was obtained by filtering publicly available image-text paired
datasets and by captioning label-only RS datasets with a pre-trained VLM,
making it the first large-scale RS image-text paired dataset. Additionally, we
tried several Parameter-Efficient Fine-Tuning methods on RS5M to implement the
DFM. Experimental results show that our proposed dataset is highly effective
for various tasks, improving upon the baseline in
zero-shot classification tasks, and obtaining good results in both
Vision-Language Retrieval and Semantic Localization tasks.
\url{https://github.com/om-ai-lab/RS5M}
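A common Parameter-Efficient Fine-Tuning method of the kind the abstract mentions is a LoRA-style low-rank adapter. The sketch below is a schematic of the general technique, not the paper's implementation: the frozen pre-trained weight is augmented with a trainable low-rank update, so only a small fraction of the parameters are tuned for the domain.

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA-style adapter: the frozen weight W is augmented with a
    low-rank update B @ A, so only r * (d_in + d_out) parameters train."""

    def __init__(self, W, r=4, alpha=8, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        d_out, d_in = W.shape
        self.W = W                               # frozen pre-trained weight
        self.A = rng.normal(0, 0.01, (r, d_in))  # trainable down-projection
        self.B = np.zeros((d_out, r))            # trainable up-projection
        self.scale = alpha / r

    def __call__(self, x):
        # Base path plus scaled low-rank correction; B starts at zero,
        # so the adapter initially leaves the pre-trained output unchanged
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

rng = np.random.default_rng(1)
layer = LoRALinear(rng.normal(size=(64, 128)))   # 8192 frozen parameters
out = layer(rng.normal(size=(2, 128)))           # only 768 are trainable
```

For a 64x128 layer, the adapter trains 768 parameters against 8192 frozen ones; at the scale of a full VLM this gap is what makes domain transfer to a Domain Foundation Model cheap.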