https://arxiv.org/abs/2402.19103

Whispers that Shake Foundations: Analyzing and Mitigating False Premise Hallucinations in Large Language Models

Large Language Models (LLMs) have shown impressive capabilities but still suffer from the issue of hallucinations. A significant type of this issue is the false premise hallucination, which we define as the phenomenon when LLMs generate hallucinated text…