WizardLM: Empowering Large Language Models to Follow Complex Instructions
Training large language models (LLMs) with open-domain instruction-following
data has brought colossal success. However, manually creating such instruction
data is time-consuming and labor-intensive, and humans may struggle to
produce high-complexity instructions. In this paper, we show an avenue for
creating large amounts of instruction data with varying levels of complexity
using LLMs instead of humans. Starting with an initial set of instructions, we
use our proposed Evol-Instruct method to rewrite them step by step into more
complex instructions. Then, we mix all the generated instruction data to
fine-tune LLaMA.
We call the resulting model WizardLM. Human evaluations on a
complexity-balanced test bed show that instructions from Evol-Instruct are
superior to human-created ones. By analyzing the human evaluation results on
the high-complexity portion, we demonstrate that outputs from our WizardLM model
are preferred to outputs from OpenAI ChatGPT. Even though WizardLM still lags
behind ChatGPT in some aspects, our findings suggest that fine-tuning with
AI-evolved instructions is a promising direction for enhancing large language
models. Our code and generated data are publicly available at
https://github.com/nlpxucan/WizardLM
Comment: large language model, instruction fine-tuning
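For intuition, the loop below sketches the evolve-and-mix procedure the abstract describes. The `complete` callable stands in for any LLM text-completion call, and `EVOLVE_PROMPT` is a hypothetical rewriting prompt; both are assumptions for illustration, not the authors' released pipeline.

```python
from typing import Callable, List

# Hypothetical rewriting prompt; the actual Evol-Instruct prompts differ.
EVOLVE_PROMPT = (
    "Rewrite the following instruction into a more complex one, e.g. by "
    "adding constraints, deepening the required reasoning, or concretizing "
    "vague terms. Keep it answerable.\n\n"
    "Instruction: {instruction}\n"
    "Rewritten instruction:"
)

def evol_instruct(
    seeds: List[str],
    complete: Callable[[str], str],  # assumed LLM completion interface
    rounds: int = 4,
) -> List[str]:
    """Evolve each seed step by step and mix every generation into one pool."""
    pool = list(seeds)
    frontier = list(seeds)
    for _ in range(rounds):
        # Each round rewrites the previous generation into harder instructions.
        frontier = [complete(EVOLVE_PROMPT.format(instruction=i)) for i in frontier]
        pool.extend(frontier)
    return pool  # fine-tuning data of mixed complexity levels
```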
Synergistic Interplay between Search and Large Language Models for Information Retrieval
Information retrieval (IR) plays a crucial role in locating relevant
resources from vast amounts of data, and its applications have evolved from
traditional knowledge bases to modern retrieval models (RMs). The emergence of
large language models (LLMs) has further revolutionized the IR field by
enabling users to interact with search systems in natural languages. In this
paper, we explore the advantages and disadvantages of LLMs and RMs,
highlighting their respective strengths in understanding user-issued queries
and retrieving up-to-date information. To leverage the benefits of both
paradigms while circumventing their limitations, we propose InteR, a novel
framework that facilitates information refinement through synergy between RMs
and LLMs. InteR allows RMs to expand knowledge in queries using LLM-generated
knowledge collections and enables LLMs to enhance prompt formulation using
retrieved documents. This iterative refinement process augments the inputs of
RMs and LLMs, leading to more accurate retrieval. Experiments on large-scale
retrieval benchmarks involving web search and low-resource retrieval tasks
demonstrate that InteR achieves overall superior zero-shot retrieval
performance compared to state-of-the-art methods, even those that use relevance
judgments. Source code is available at https://github.com/Cyril-JZ/InteR
Comment: Pre-print. Work in progress.
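As a rough illustration of the synergy the abstract describes, the sketch below alternates an assumed `generate(prompt)` LLM call with an assumed `retrieve(query, k)` retrieval-model call; the function names and prompt wording are hypothetical, not the InteR codebase.

```python
from typing import Callable, List

def inter_refine(
    query: str,
    retrieve: Callable[[str, int], List[str]],  # retrieval model (RM), assumed API
    generate: Callable[[str], str],             # large language model (LLM), assumed API
    iterations: int = 2,
    k: int = 5,
) -> List[str]:
    """Iteratively refine: the LLM expands the query, the RM grounds the LLM."""
    docs: List[str] = []
    for _ in range(iterations):
        # LLM side: formulate its prompt with whatever the RM retrieved so far.
        prompt = (
            f"Question: {query}\n"
            "Retrieved context:\n" + "\n".join(docs) + "\n"
            "Write a short passage of background knowledge for the question:"
        )
        knowledge = generate(prompt)
        # RM side: expand the query with the LLM-generated knowledge.
        docs = retrieve(f"{query} {knowledge}", k)
    return docs
```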
LexMAE: Lexicon-Bottlenecked Pretraining for Large-Scale Retrieval
In large-scale retrieval, the lexicon-weighting paradigm, which learns weighted
sparse representations in vocabulary space, has shown promising results with
high quality and low latency. Although this paradigm deeply exploits the
lexicon-representing capability of pre-trained language models, a crucial gap
remains between language modeling and lexicon-weighting retrieval: the former
prefers certain, low-entropy words, whereas the latter favors pivot,
high-entropy words. This gap is the main barrier to lexicon-weighting
performance in large-scale retrieval. To bridge it, we propose a
brand-new pre-training framework, lexicon-bottlenecked masked autoencoder
(LexMAE), to learn importance-aware lexicon representations. Essentially, we
present a lexicon-bottlenecked module between a normal language modeling
encoder and a weakened decoder, where a continuous bag-of-words bottleneck is
constructed to learn a lexicon-importance distribution in an unsupervised
fashion. The pre-trained LexMAE transfers readily to lexicon-weighting
retrieval via fine-tuning. On the ad-hoc retrieval benchmark MS MARCO, it
achieves 42.6% MRR@10 at 45.8 QPS on the passage dataset and 44.4% MRR@100
at 134.8 QPS on the document dataset, on a CPU machine. LexMAE also shows
state-of-the-art zero-shot transfer capability on the BEIR benchmark with 12
datasets.
Comment: Appeared at ICLR 2023
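To make the bottleneck concrete, here is a minimal PyTorch sketch of the idea as described in the abstract: the language-modeling encoder's vocabulary logits are pooled into a lexicon-importance distribution, which weights word embeddings into a continuous bag-of-words vector for the weakened decoder. Module names and dimensions are assumptions, not the released LexMAE code.

```python
import torch
import torch.nn as nn

class LexiconBottleneck(nn.Module):
    """Continuous bag-of-words bottleneck between encoder and weak decoder."""

    def __init__(self, hidden: int = 768, vocab_size: int = 30522):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, hidden)
        self.to_vocab = nn.Linear(hidden, vocab_size)  # MLM-style vocabulary head

    def forward(self, enc_states: torch.Tensor) -> torch.Tensor:
        # enc_states: (batch, seq_len, hidden) from the language-model encoder.
        logits = self.to_vocab(enc_states).max(dim=1).values  # (batch, vocab)
        importance = torch.softmax(logits, dim=-1)  # lexicon-importance distribution
        # Importance-weighted sum of word embeddings: the bag-of-words vector
        # the weakened decoder must reconstruct the input text from.
        return importance @ self.word_emb.weight  # (batch, hidden)

# Usage: z = LexiconBottleneck()(torch.randn(2, 128, 768))  # z.shape == (2, 768)
```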