Blinded by Generated Contexts: How Language Models Merge Generated and Retrieved Contexts When Knowledge Conflicts?

CAS Key Laboratory of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences
ACL 2024 Main


Abstract

While auxiliary information has become a key to enhancing Large Language Models (LLMs), relatively little is known about how LLMs merge these contexts, specifically contexts generated by LLMs and those retrieved from external sources. To investigate this, we formulate a systematic framework to identify whether LLMs' responses are attributed to either generated or retrieved contexts. To easily trace the origin of the response, we construct datasets with conflicting contexts, i.e., each question is paired with both generated and retrieved contexts, yet only one of them contains the correct answer. Our experiments reveal a significant bias in several LLMs (GPT-4/3.5 and Llama2) to favor generated contexts, even when they provide incorrect information. We further identify two key factors contributing to this bias: i) contexts generated by LLMs typically show greater similarity to the questions, increasing their likelihood of being selected; ii) the segmentation process used in retrieved contexts disrupts their completeness, thereby hindering their full utilization in LLMs. Our analysis enhances the understanding of how LLMs merge diverse contexts, offers valuable insights for advancing current LLM augmentation methods, and highlights the risk of generated misinformation for retrieval-augmented LLMs.

Context-Conflicting Datasets

We construct context-conflicting datasets in which only one of the generated and retrieved contexts contains the correct answer to the question. This yields two subsets, AIG and AIR; a construction sketch follows the list below.

  • AIG: Answer In Generated Contexts
  • AIR: Answer In Retrieved Contexts
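As a rough illustration of the construction step, the sketch below keeps only the samples in which exactly one of the two contexts contains a gold answer. It assumes simple case-insensitive exact-match answer checking and hypothetical field names; the actual pipeline in the paper (context generation, retrieval, and filtering) involves more steps.

# A minimal sketch of the AIG/AIR split (not the paper's actual pipeline).
# Assumes exact-match answer checking; field names are illustrative.
from dataclasses import dataclass

@dataclass
class Sample:
    question: str
    gold_answers: list[str]    # acceptable gold answers for the question
    generated_context: str     # context produced by a generator LLM
    retrieved_context: str     # context returned by a retriever

def contains_answer(context: str, answers: list[str]) -> bool:
    """Case-insensitive check: does the context mention any gold answer?"""
    context = context.lower()
    return any(ans.lower() in context for ans in answers)

def split_conflicting(samples: list[Sample]) -> tuple[list[Sample], list[Sample]]:
    """Keep only samples where exactly one of the two contexts contains the answer."""
    aig, air = [], []  # AIG: Answer In Generated, AIR: Answer In Retrieved
    for s in samples:
        in_gen = contains_answer(s.generated_context, s.gold_answers)
        in_ret = contains_answer(s.retrieved_context, s.gold_answers)
        if in_gen and not in_ret:
            aig.append(s)
        elif in_ret and not in_gen:
            air.append(s)
        # Samples where both or neither context contains the answer are discarded.
    return aig, air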


Figure 1: The framework for constructing context-conflicting datasets.

LLMs Prefer Generated Contexts

LLMs prefer generated contexts even when those contexts are incorrect. This bias is consistent across various generator, reader, and retriever models.
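To quantify this preference, each response can be traced back to the context whose answer it repeats, and the preference rate is then the fraction of traceable responses that follow the generated context. The sketch below assumes the answer supported by each context is already known from dataset construction; the function and field names are illustrative rather than the paper's code.

# A minimal sketch of response attribution by answer matching (illustrative only).
def attribute_response(response: str, gen_answer: str, ret_answer: str) -> str:
    """Attribute a response to the context whose answer it repeats."""
    response = response.lower()
    in_gen = gen_answer.lower() in response
    in_ret = ret_answer.lower() in response
    if in_gen and not in_ret:
        return "generated"
    if in_ret and not in_gen:
        return "retrieved"
    return "undetermined"

def generated_preference_rate(records: list[dict]) -> float:
    """Fraction of attributable responses that follow the generated context."""
    labels = [attribute_response(r["response"], r["gen_answer"], r["ret_answer"])
              for r in records]
    determined = [lab for lab in labels if lab != "undetermined"]
    return sum(lab == "generated" for lab in determined) / max(len(determined), 1)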


Figure 2: The extent of LLMs' bias towards generated contexts on the NQ-AIR dataset, where the generated contexts are incorrect.

Why Do LLMs Prefer Generated Contexts?

  • Confirmation bias is not a key factor: LLMs maintain a significant preference for generated contexts even when those contexts contain information inconsistent with the LLMs' parametric knowledge.
  • Text similarity is a significant factor: compared to retrieved contexts, generated contexts typically exhibit higher similarity to the questions, even when they contain incorrect information. Samples with a larger similarity gap between the generated and retrieved contexts show a more pronounced bias (see the sketch after this list).
  • Semantic completeness matters: LLMs tend to favor contexts with semantic integrity. The segmentation applied to retrieved contexts can disrupt their completeness, hindering their full utilization by LLMs.
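One way to make the similarity factor concrete is to compute, per sample, the gap between the question's similarity to the generated context and to the retrieved context. The sketch below uses lexical cosine similarity over term-frequency vectors as the metric; this choice is an assumption for illustration, not necessarily the measure used in the paper.

# A minimal sketch of the question-context similarity gap.
# Lexical cosine over term-frequency vectors is an illustrative choice of metric.
import re
from collections import Counter
from math import sqrt

def term_freqs(text: str) -> Counter:
    """Lowercased unigram term frequencies."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between the term-frequency vectors of two strings."""
    ta, tb = term_freqs(a), term_freqs(b)
    dot = sum(ta[w] * tb[w] for w in ta)
    norm = sqrt(sum(v * v for v in ta.values())) * sqrt(sum(v * v for v in tb.values()))
    return dot / norm if norm else 0.0

def similarity_gap(question: str, generated_ctx: str, retrieved_ctx: str) -> float:
    """Positive gap means the generated context is lexically closer to the question."""
    return (cosine_similarity(question, generated_ctx)
            - cosine_similarity(question, retrieved_ctx))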

Figure 3: Generated contexts typically exhibit a higher degree of similarity to the questions.

BibTeX

@inproceedings{tan2024blinded,
  title={Blinded by Generated Contexts: How Language Models Merge Generated and Retrieved Contexts When Knowledge Conflicts?},
  author={Tan, Hexiang and Sun, Fei and Yang, Wanli and Wang, Yuanzhuo and Cao, Qi and Cheng, Xueqi},
  booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  pages={6207--6227},
  year={2024}
}