Synthesizing Coherent Story with Auto-Regressive Latent Diffusion Models
Conditioned diffusion models have demonstrated state-of-the-art text-to-image
synthesis capacity. Recently, most works have focused on synthesizing independent
images, while for real-world applications it is common and necessary to
generate a series of coherent images for storytelling. In this work, we
mainly focus on story visualization and continuation tasks and propose AR-LDM,
a latent diffusion model auto-regressively conditioned on history captions and
generated images. Moreover, AR-LDM can generalize to new characters through
adaptation. To the best of our knowledge, this is the first work to successfully
leverage diffusion models for coherent visual story synthesis.
Quantitative results show that AR-LDM achieves SoTA FID scores on PororoSV,
FlintstonesSV, and the newly introduced challenging dataset VIST containing
natural images. Large-scale human evaluations show that AR-LDM has superior
performance in terms of quality, relevance, and consistency.
Comment: Technical Report
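The auto-regressive conditioning described above can be sketched in a few lines. Here, `encode` and `diffuse` are hypothetical stand-ins for the caption/image encoder and the latent diffusion sampler, not AR-LDM's actual components; the point is only that each frame is generated conditioned on all previous captions and generated frames:

```python
# Minimal sketch of auto-regressive story generation, assuming toy
# stand-ins for the encoder and the diffusion sampler.

def encode(caption, image=None):
    # Toy encoder: bundles a caption (and optionally an image) into a "feature".
    return (caption, image)

def diffuse(condition):
    # Toy sampler: "generates" a frame identified by how much history it saw.
    return f"frame_{len(condition)}"

def generate_story(captions):
    """Generate one frame per caption, conditioning each step on the
    history of captions AND previously generated frames."""
    history, frames = [], []
    for caption in captions:
        condition = history + [encode(caption)]   # current caption, no image yet
        frame = diffuse(condition)
        frames.append(frame)
        history.append(encode(caption, frame))    # the new frame joins the history
    return frames
```

The key design point is the last line of the loop: the generated frame is fed back into the conditioning history, which is what lets later frames stay visually consistent with earlier ones.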
Prompt Switch: Efficient CLIP Adaptation for Text-Video Retrieval
In text-video retrieval, recent works have benefited from the powerful
learning capabilities of pre-trained text-image foundation models (e.g., CLIP)
by adapting them to the video domain. A critical problem for them is how to
effectively capture the rich semantics inside the video using the image encoder
of CLIP. To tackle this, state-of-the-art methods adopt complex cross-modal
modeling techniques to fuse the text information into video frame
representations, which, however, incurs severe efficiency issues in large-scale
retrieval systems as the video representations must be recomputed online for
every text query. In this paper, we discard this problematic cross-modal fusion
process and aim to learn semantically-enhanced representations purely from the
video, so that the video representations can be computed offline and reused for
different texts. Concretely, we first introduce a spatial-temporal "Prompt
Cube" into the CLIP image encoder and iteratively switch it within the encoder
layers to efficiently incorporate the global video semantics into frame
representations. We then propose to apply an auxiliary video captioning
objective to train the frame representations, which facilitates the learning of
detailed video semantics by providing fine-grained guidance in the semantic
space. With a naive temporal fusion strategy (i.e., mean-pooling) on the
enhanced frame representations, we obtain state-of-the-art performances on
three benchmark datasets, i.e., MSR-VTT, MSVD, and LSMDC.
Comment: to appear in ICCV202
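The offline-reuse idea and the naive temporal fusion can be illustrated with a minimal stdlib sketch. The helper names and the toy cosine scorer below are assumptions for illustration, not the paper's code:

```python
import math

def mean_pool(frame_feats):
    """Naive temporal fusion: average the (semantically enhanced) frame
    features into a single video representation."""
    n = len(frame_feats)
    dim = len(frame_feats[0])
    return [sum(f[d] for f in frame_feats) / n for d in range(dim)]

def cosine(a, b):
    """Similarity between a text feature and a precomputed video feature."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)
```

Because `mean_pool` takes no text input, every video's representation can be computed once offline; at query time only `cosine` runs against the stored vectors, which is the efficiency gain over per-query cross-modal fusion.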
Providing Definitive Learning Direction for Relation Classification System
Deep neural networks have demonstrated their superiority in solving various tasks in Natural Language Processing, especially relation classification. However, unlike traditional feature-engineering methods that extract well-designed features for a specific task, the diversity of input formats for deep learning is limited; a word sequence is the most frequently used input. The input of a neural network therefore, to some extent, lacks pertinence. For relation classification, it is not uncommon that, without a specific entity pair, a sentence can express various relation types; the entity pair thus indicates where the crucial information for recognizing a specific relation lies in the input sentence. Motivated by this characteristic, this paper proposes several strategies to integrate entity pair information into deep learning models for relation classification, providing a definitive learning direction for the neural network. Experimental results on the SemEval-2010 Task 8 dataset show that our method outperforms most state-of-the-art models without external linguistic features.
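One common way to give a network this entity-pair "direction" is to attach, to every token, its relative position to the two entities. This is a minimal sketch of that general idea; the function name is hypothetical, and the paper's exact integration strategies may differ:

```python
def position_features(tokens, e1_idx, e2_idx):
    """For each token, its signed distance to each entity mention.
    These per-token offsets are typically concatenated with word
    embeddings so the model knows where the entity pair sits."""
    return [(i - e1_idx, i - e2_idx) for i in range(len(tokens))]
```

For example, in "Bill founded Microsoft" with entities at positions 0 and 2, the verb "founded" receives offsets (+1, -1), marking it as lying between the pair.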
Multi-Task Self-Supervised Learning for Disfluency Detection
Most existing approaches to disfluency detection heavily rely on
human-annotated data, which is expensive to obtain in practice. To tackle the
training data bottleneck, we investigate methods for combining multiple
self-supervised tasks, i.e., supervised tasks where data can be collected
without manual labeling. First, we construct large-scale pseudo training data
by randomly adding or deleting words from unlabeled news data, and propose two
self-supervised pre-training tasks: (i) a tagging task to detect the added
noisy words, and (ii) a sentence classification task to distinguish original
sentences from grammatically incorrect ones. We then combine these two tasks to jointly
train a network. The pre-trained network is then fine-tuned using
human-annotated disfluency detection training data. Experimental results on the
commonly used English Switchboard test set show that our approach can achieve
competitive performance compared to the previous systems (trained using the
full dataset) by using less than 1% (1000 sentences) of the training data. Our
method trained on the full dataset significantly outperforms previous methods,
reducing the error by 21% on English Switchboard.
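The pseudo-data construction (randomly adding words to clean news text and tagging them as noise) can be sketched as follows. `make_pseudo_example` is an illustrative name, and only the word-addition case is shown; the deletion-based variant would follow the same pattern:

```python
import random

def make_pseudo_example(sentence, rng, p_add=0.15):
    """Build a pseudo training example for the tagging task: randomly
    inject duplicated words into a clean sentence and label each output
    token 1 if it was added (noise) or 0 if it is original."""
    tokens, labels = [], []
    for tok in sentence.split():
        if rng.random() < p_add:
            tokens.append(tok)   # injected duplicate acts as disfluent noise
            labels.append(1)
        tokens.append(tok)       # the original word is always kept
        labels.append(0)
    return tokens, labels
```

A tagger trained on such pairs learns to spot inserted words without any human annotation; filtering out the tokens labeled 1 recovers the clean sentence.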
Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs
Large-scale knowledge graphs (KGs) have become increasingly important in
current information systems. To expand the coverage of KGs, previous studies on
knowledge graph completion need to collect adequate training instances for
newly-added relations. In this paper, we consider a novel formulation,
zero-shot learning, to free this cumbersome curation. For newly-added
relations, we attempt to learn their semantic features from their text
descriptions and hence recognize the facts of unseen relations with no examples
being seen. For this purpose, we leverage Generative Adversarial Networks
(GANs) to establish the connection between text and knowledge graph domain: The
generator learns to generate reasonable relation embeddings merely from
noisy text descriptions. Under this setting, zero-shot learning is naturally
converted to a traditional supervised classification task. Empirically, our
method is model-agnostic and could potentially be applied to any KG
embedding model, and it consistently yields performance improvements on the
NELL and Wiki datasets.
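A toy sketch of the zero-shot pipeline: a "generator" maps a text-description vector into the relation-embedding space, and an unseen relation is then recognized by nearest-embedding lookup. The linear map below is a deliberate simplification of the adversarially trained generator, and all names are illustrative:

```python
def generate_relation_embedding(text_vec, W):
    """Toy 'generator': a linear map from a text-description vector into
    the relation-embedding space (the real generator is an adversarially
    trained network conditioned on noisy text)."""
    return [sum(w * x for w, x in zip(row, text_vec)) for row in W]

def classify(fact_vec, candidate_embeddings):
    """Zero-shot recognition as supervised classification: assign the
    fact to the relation whose generated embedding is closest."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(candidate_embeddings,
               key=lambda r: sq_dist(candidate_embeddings[r], fact_vec))
```

The important property is that `candidate_embeddings` can include relations never seen during training, since their embeddings are generated from text descriptions rather than from training triples.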