Table Search Using a Deep Contextualized Language Model
Pretrained contextualized language models such as BERT have achieved
impressive results on various natural language processing benchmarks.
Benefiting from multiple pretraining tasks and large-scale training corpora,
pretrained models can capture complex syntactic relations between words. In this paper,
we use the deep contextualized language model BERT for the task of ad hoc table
retrieval. We investigate how to encode table content considering the table
structure and input length limit of BERT. We also propose an approach that
incorporates features from prior literature on table retrieval and jointly
trains them with BERT. In experiments on public datasets, we show that our best
approach outperforms the previous state-of-the-art method and BERT baselines
by a large margin under different evaluation metrics.
Comment: Accepted at SIGIR 2020 (Long
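The encoding step the abstract alludes to, flattening a table's caption, headers, and rows into a single sequence under BERT's input-length limit, can be sketched as follows. This is a minimal illustration, not the paper's exact scheme: the field order, the `[CLS]`/`[SEP]` markers, and the 512-token budget are assumptions.

```python
def linearize_table(caption, headers, rows, max_tokens=512):
    """Flatten a table into one token sequence for a BERT-style
    encoder, dropping trailing rows that would exceed the budget.
    Whitespace tokenization stands in for a real subword tokenizer."""
    tokens = ["[CLS]"] + caption.split() + ["[SEP]"] + list(headers)
    for row in rows:
        cell_tokens = [str(cell) for cell in row] + ["[SEP]"]
        # Reserve one slot for the final [SEP]; stop before overflowing.
        if len(tokens) + len(cell_tokens) > max_tokens - 1:
            break
        tokens += cell_tokens
    tokens.append("[SEP]")
    return tokens


toks = linearize_table("city populations",
                       ["city", "pop"],
                       [["Oslo", 700000], ["Bergen", 290000]])
```

Truncating whole rows (rather than individual cells) is one simple way to respect the table structure the abstract mentions; alternatives include sampling salient rows or encoding each row separately.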
Hierarchical Photo-Scene Encoder for Album Storytelling
In this paper, we propose a novel model with a hierarchical photo-scene
encoder and a reconstructor for the task of album storytelling. The photo-scene
encoder contains two sub-encoders, namely the photo and scene encoders, which
are stacked together and behave hierarchically to fully exploit the structural
information of the photos within an album. Specifically, the photo encoder
generates a semantic representation for each photo while exploiting temporal
relationships among them. The scene encoder, relying on the obtained photo
representations, is responsible for detecting the scene changes and generating
scene representations. Subsequently, the decoder dynamically and attentively
summarizes the encoded photo and scene representations to generate a sequence
of album representations, based on which a story consisting of multiple
coherent sentences is generated. In order to fully extract the useful semantic
information from an album, a reconstructor is employed to reproduce the
summarized album representations based on the hidden states of the decoder. The
proposed model can be trained in an end-to-end manner, which results in
improved performance over state-of-the-art methods on the public visual
storytelling (VIST) dataset. Ablation studies further demonstrate the
effectiveness of the proposed hierarchical photo-scene encoder and
reconstructor.
Comment: 8 pages, 4 figures
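The scene encoder's role, detecting scene changes over a sequence of photo representations, can be illustrated with a simple similarity-threshold heuristic. This is an assumption-laden sketch, not the paper's learned mechanism: the cosine measure and the 0.5 threshold are illustrative choices.

```python
import math


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


def segment_scenes(photo_reprs, threshold=0.5):
    """Group consecutive photo representations into scenes:
    start a new scene whenever similarity to the previous
    photo drops below the threshold (a proxy for a scene change)."""
    if not photo_reprs:
        return []
    scenes = [[0]]
    for i in range(1, len(photo_reprs)):
        if cosine(photo_reprs[i - 1], photo_reprs[i]) < threshold:
            scenes.append([i])      # scene change detected
        else:
            scenes[-1].append(i)    # same scene continues
    return scenes
```

Each resulting group of photo indices would then be pooled into a scene representation for the decoder; in the paper this boundary detection is learned jointly rather than thresholded.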