
Chain-of-Verification Reduces Hallucination in Large Language Models
https://arxiv.org/abs/2309.11495

From the abstract: "Generation of plausible yet incorrect factual information, termed hallucination, is an unsolved issue in large language models. We study the ability of language models to deliberate on the responses they give in order to correct their mistakes. We develop …"

https://aclanthology.org/2024.fi..
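For intuition, here is a minimal sketch of the Chain-of-Verification (CoVe) loop the paper describes: draft an answer, plan verification questions, answer them independently of the draft, then revise. The `ask` helper is a hypothetical stand-in for any LLM completion call, and the prompt wording is illustrative, not the paper's exact prompts.

```python
# Minimal CoVe-style sketch. `ask` is a hypothetical placeholder
# for an LLM completion call; plug in a real client to run it.

def ask(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def chain_of_verification(question: str) -> str:
    # (1) Draft an initial baseline response.
    draft = ask(f"Answer the question:\n{question}")

    # (2) Plan verification questions that fact-check the draft.
    plan = ask(
        "List short fact-checking questions, one per line, "
        f"for this answer:\nQ: {question}\nA: {draft}"
    )
    checks = [line.strip() for line in plan.splitlines() if line.strip()]

    # (3) Answer each verification question independently,
    # without showing the draft, so its errors are not repeated.
    findings = [(q, ask(q)) for q in checks]

    # (4) Produce the final response revised against the findings.
    evidence = "\n".join(f"{q} -> {a}" for q, a in findings)
    return ask(
        f"Question: {question}\nDraft answer: {draft}\n"
        f"Verification Q&A:\n{evidence}\n"
        "Write a corrected final answer."
    )
```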