Text Retrieval with Multi-Stage Re-Ranking Models
Text retrieval is the task of retrieving documents similar to a search
query, and it is important to improve retrieval accuracy while maintaining a
certain level of retrieval speed. Existing studies have reported accuracy
improvements using language models, but many of these do not take into account
the reduction in search speed that comes with increased performance. In this
study, we propose a three-stage re-ranking model that uses model ensembles or
larger language models to improve search accuracy while minimizing search
delay. We first rank documents with BM25 and a language model, and then
re-rank only the documents with high similarity to the query using a model
ensemble or a larger language model. In our experiments, we train the MiniLM
language model on the MS-MARCO dataset and evaluate it in a zero-shot setting.
Our proposed method achieves higher retrieval accuracy while reducing the
decay in retrieval speed.
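The pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the three scoring functions (`lexical_score`, `small_model_score`, `ensemble_score`) are hypothetical stand-ins for BM25, a MiniLM-style re-ranker, and the ensemble or larger model, and the cutoffs `k` and `m` are assumed parameters.

```python
# Hypothetical sketch of a three-stage retrieval pipeline:
# stage 1: cheap lexical scoring (stand-in for BM25),
# stage 2: re-rank the top-k with a small language-model score,
# stage 3: re-rank only the top-m candidates with a costlier ensemble,
# so the expensive model sees few documents and latency stays bounded.

def lexical_score(query, doc):
    # Stand-in for BM25: count of query-term occurrences in the document.
    q = set(query.lower().split())
    return sum(1 for w in doc.lower().split() if w in q)

def small_model_score(query, doc):
    # Stand-in for a MiniLM-style relevance score: Jaccard word overlap.
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / (len(q | d) or 1)

def ensemble_score(query, doc):
    # Stand-in for a model ensemble / larger LM: average of the two scores.
    return 0.5 * (lexical_score(query, doc) + small_model_score(query, doc))

def three_stage_rank(query, docs, k=4, m=2):
    # Stage 1: rank all documents by the cheap score, keep the top-k.
    stage1 = sorted(docs, key=lambda d: lexical_score(query, d), reverse=True)[:k]
    # Stage 2: re-rank the top-k with the small model.
    stage2 = sorted(stage1, key=lambda d: small_model_score(query, d), reverse=True)
    # Stage 3: re-rank only the m most similar documents with the costly model;
    # the remaining documents keep their stage-2 order.
    head = sorted(stage2[:m], key=lambda d: ensemble_score(query, d), reverse=True)
    return head + stage2[m:]
```

The key design point is that each stage narrows the candidate set before the next, more expensive, model runs.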
Controlling keywords and their positions in text generation
One of the challenges in text generation is to control generation as intended
by a user. Previous studies have proposed specifying keywords that should
be included in the generated text. However, this is insufficient to generate
text that reflects the user's intent. For example, placing an important
keyword at the beginning of the text helps attract the reader's attention, but
existing methods do not enable such flexible control. In this paper, we tackle
the novel task of controlling not only the keywords but also the position of
each keyword in text generation. To this end, we show that a method using
special tokens can control the relative position of keywords. Experimental
results on summarization and story generation tasks show that the proposed
method can control keywords and their positions. We also demonstrate that
controlling the keyword positions can generate summary texts that are closer
to the user's intent than the baseline. We release our code.
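The special-token idea above can be sketched as follows. The exact token vocabulary is an assumption for illustration: here each keyword is prefixed with a bucketed relative-position token (`<pos_0>` … `<pos_9>`), and a small helper checks whether generated text places a keyword near its requested position. The function names and the number of position bins are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of encoding keyword positions with special tokens.
# Each keyword is tagged with a token that buckets its requested *relative*
# position in the output; the tagged sequence is prepended to the model input.

def build_control_prefix(keywords_with_pos, n_bins=10):
    # keywords_with_pos: list of (keyword, relative position in [0, 1]).
    # A position of 0.0 asks for the keyword near the start of the text,
    # 1.0 near the end; positions are bucketed into n_bins special tokens.
    parts = []
    for kw, rel in keywords_with_pos:
        bin_id = min(int(rel * n_bins), n_bins - 1)
        parts.append(f"<pos_{bin_id}> {kw}")
    return " ".join(parts) + " <sep> "

def keyword_position_ok(text, keyword, target_rel, tol=0.2):
    # Evaluation helper: does the keyword appear within `tol` of its
    # requested relative position in the generated text?
    words = text.split()
    for i, w in enumerate(words):
        if w == keyword:
            rel = i / max(len(words) - 1, 1)
            if abs(rel - target_rel) <= tol:
                return True
    return False
```

A conditional generation model trained on pairs of such control prefixes and target texts can then learn to place keywords at the bucketed positions.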