How Well Do Large Language Models Truly Ground?
Reliance on the inherent knowledge of Large Language Models (LLMs) can cause
issues such as hallucinations, lack of control, and difficulties in integrating
variable knowledge. To mitigate this, LLMs can be prompted to generate responses
grounded in external context provided as input (knowledge-augmented models).
Yet, previous research is often confined to a narrow view of the term
"grounding", focusing only on whether the response contains the correct answer,
which does not ensure the reliability of the entire response. To
address this limitation, we introduce a strict definition of grounding: a model
is considered truly grounded when its responses (1) fully utilize necessary
knowledge from the provided context, and (2) don't exceed the knowledge within
the contexts. We introduce a new dataset and a grounding metric to assess this
new definition and perform experiments across 13 LLMs of different sizes and
training methods to provide insights into the factors that influence grounding
performance. Our findings contribute to a better understanding of how to
improve grounding capabilities and suggest areas of improvement toward more
reliable and controllable LLM applications.
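
As an illustration of this stricter notion of grounding, here is a minimal Python sketch. It is not the paper's dataset or metric; it simply treats "knowledge" as a set of atomic facts (plain strings used as hypothetical placeholders) and checks the two conditions: the response must cover every necessary fact from the context and must not state any fact absent from it.

```python
# Illustrative sketch of the strict grounding definition -- NOT the authors'
# actual metric. "Knowledge" is modeled as a set of atomic facts (strings).

def is_strictly_grounded(response_facts: set[str],
                         necessary_facts: set[str],
                         context_facts: set[str]) -> bool:
    """Return True only if the response fully uses the necessary knowledge
    and does not go beyond the knowledge present in the context."""
    uses_all_necessary = necessary_facts <= response_facts    # condition (1)
    stays_within_context = response_facts <= context_facts    # condition (2)
    return uses_all_necessary and stays_within_context


# Toy usage: the second response adds a fact not present in the context.
context = {"Paris is the capital of France", "France is in Europe"}
necessary = {"Paris is the capital of France"}
print(is_strictly_grounded({"Paris is the capital of France"},
                           necessary, context))   # True
print(is_strictly_grounded({"Paris is the capital of France",
                            "Paris has 3M inhabitants"},
                           necessary, context))   # False
```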
Semiparametric Token-Sequence Co-Supervision
In this work, we introduce a semiparametric token-sequence co-supervision
training method. It trains a language model by simultaneously leveraging
supervision from the traditional next-token prediction loss, computed over the
parametric token embedding space, and a next-sequence prediction loss, computed
over a nonparametric sequence embedding space. The nonparametric sequence
embedding space is constructed by a separate language model tasked with
condensing an input text into a single representative embedding.
Our experiments demonstrate that a model trained via both supervisions
consistently surpasses models trained via each supervision independently.
Analysis suggests that this co-supervision encourages broader generalization
across the model. In particular, the robustness of the parametric token space,
established during pretraining, tends to enhance the stability of the
nonparametric sequence embedding space, a new space established by another
language model.
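
For intuition, below is a minimal PyTorch-style sketch of combining the two supervisions. The model interfaces (`lm`, `seq_encoder`) and tensor shapes are assumptions made for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def co_supervision_loss(lm, seq_encoder, input_ids, labels,
                        candidate_seq_ids, alpha=1.0):
    """Combine next-token (parametric) and next-sequence (nonparametric) losses.

    Assumed (hypothetical) interfaces:
      lm(input_ids) -> (logits, hidden), with
          logits: (batch, seq_len, vocab), hidden: (batch, seq_len, dim)
      seq_encoder(candidate_seq_ids) -> (num_candidates, dim),
          one embedding per candidate sequence; index 0 is the gold next sequence.
    """
    # Parametric supervision: standard next-token prediction (cross-entropy).
    logits, hidden = lm(input_ids)
    token_loss = F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        labels[:, 1:].reshape(-1),
        ignore_index=-100,
    )

    # Nonparametric supervision: score candidate sequence embeddings produced
    # by the separate encoder against the generator's last hidden state, and
    # train the gold next sequence (index 0) to score highest.
    query = hidden[:, -1]                        # (batch, dim)
    cand_emb = seq_encoder(candidate_seq_ids)    # (num_candidates, dim)
    scores = query @ cand_emb.T                  # (batch, num_candidates)
    seq_targets = torch.zeros(scores.size(0), dtype=torch.long,
                              device=scores.device)
    seq_loss = F.cross_entropy(scores, seq_targets)

    # Both losses update the generator; the sequence loss also updates the encoder.
    return token_loss + alpha * seq_loss
```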