
2025/03/25

DORA: Dynamic Optimization Prompt for Continuous Reflection of LLM-based Agent - Paper Review

https://aclanthology.org/2025.coling-main.504/ (Kun Li, Tingzhang Zhao, Wei Zhou, Songlin Hu. Proceedings of the 31st International Conference on Computational Linguistics, 2025.) Existing Reflection did raise performance, but the gains slowed down as the number of iterations increased. As the graph above shows, an Early Stop Reflection problem emerged, and DORA (Dynamic and Optimized..
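As context for the "reflection slows down" observation, here is a minimal sketch of an iterative self-reflection loop with a patience-based early stop. This is not DORA's actual algorithm; `generate`, `reflect`, and `score` are hypothetical stand-ins for LLM calls and an evaluator.

```python
# Minimal sketch of an iterative self-reflection loop with a patience-based
# early stop. Not DORA's algorithm; generate/reflect/score are hypothetical
# stand-ins for LLM calls and an evaluator.
from typing import Callable

def reflect_loop(task: str,
                 generate: Callable[[str, str], str],
                 reflect: Callable[[str, str], str],
                 score: Callable[[str], float],
                 max_iters: int = 8,
                 patience: int = 2) -> str:
    feedback = ""
    best_answer, best_score = "", float("-inf")
    stale = 0  # consecutive iterations without improvement
    for _ in range(max_iters):
        answer = generate(task, feedback)   # attempt conditioned on prior feedback
        s = score(answer)
        if s > best_score:
            best_answer, best_score, stale = answer, s, 0
        else:
            stale += 1                      # gains are flattening out
            if stale >= patience:           # reflection stops early here
                break
        feedback = reflect(task, answer)    # self-critique used in the next round
    return best_answer
```

The `patience` cutoff is what makes later iterations contribute nothing once improvement stalls, which is the behavior the post's graph illustrates.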

Towards Mitigating Hallucination in Large Language Models via Self-Reflection - Paper Review

https://arxiv.org/abs/2310.06271 Large language models (LLMs) have shown promise for generative and knowledge-intensive tasks including question-answering (QA) tasks. However, the practical deployment still faces challenges, notably the issue of "hallucination", where models generate plau.. It sounds plausible, but is not true or is absurd..

Hallucination-related Paper Reviews: Detecting Hallucinations in Large Language Model Generation: A Token Probability Approach, A Mathematical Investigation of Hallucination and Creativity in GPT Models, Survey of hallucination in natural language generation

https://arxiv.org/abs/2405.19648 Concerns regarding the propensity of Large Language Models (LLMs) to produce inaccurate outputs, also known as hallucinations, have escalated. Detecting them is vital for ensuring the reliability of applications relying on LLM-generated content. Current me.. Hallucinat..
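To make the "token probability" idea concrete, here is a minimal sketch that scores a text by its mean token log-probability under a language model, treating low scores as a weak hallucination signal. GPT-2 is an assumed stand-in scorer, and this is only the general idea, not the paper's exact features or classifier.

```python
# Minimal sketch: score a generation by its average token log-probability.
# A low score flags the text as a candidate for verification.
# GPT-2 is an assumed stand-in, not the models used in the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def mean_logprob(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids the model returns the mean next-token
        # cross-entropy over the sequence; negate it to get a log-probability.
        loss = model(ids, labels=ids).loss
    return -loss.item()  # higher (closer to 0) = more "expected" text

claim = "The Eiffel Tower is located in Berlin."
print(mean_logprob(claim))  # lower scores flag candidates for verification
```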

RetroLLM: Empowering Large Language Models to Retrieve Fine-grained Evidence within Generation - Paper Review

https://arxiv.org/abs/2412.11919 Large language models (LLMs) exhibit remarkable generative capabilities but often suffer from hallucinations. Retrieval-augmented generation (RAG) offers an effective solution by incorporating external knowledge, but existing methods still face several lim.. LLMs..
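For reference, here is a minimal sketch of the standard retrieve-then-generate RAG pipeline that RetroLLM contrasts itself with (RetroLLM instead retrieves fine-grained evidence inside generation). `embed` and `llm` are hypothetical stand-ins for real models, and the prompt format is an assumption.

```python
# Minimal sketch of plain retrieve-then-generate RAG, not RetroLLM's method.
# `embed` and `llm` are hypothetical callables standing in for real models.
import numpy as np

def rag_answer(question, corpus, embed, llm, k=3):
    doc_vecs = np.stack([embed(d) for d in corpus])       # offline index in a real system
    q = embed(question)
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    top_docs = [corpus[i] for i in np.argsort(-sims)[:k]]  # top-k evidence passages
    prompt = "Answer using the context.\n\n" + "\n".join(top_docs) + f"\n\nQ: {question}\nA:"
    return llm(prompt)                                      # generation grounded in retrieved text
```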

GeAR: Generation Augmented Retrieval - Paper Review

https://arxiv.org/abs/2501.02772 Document retrieval techniques form the foundation for the development of large-scale information systems. The prevailing methodology is to construct a bi-encoder and compute the semantic similarity. However, such scalar similarity is difficult to reflect e.. Similarity computed with the existing Bi-Encoder approach struggles to fully reflect the information and is also hard to interpret. Moreover, ..
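To show the scalar similarity being criticized here, below is a minimal bi-encoder retrieval sketch: query and documents are encoded independently and compared with a single cosine score. The sentence-transformers library and the MiniLM checkpoint are assumed choices for illustration, not the paper's setup.

```python
# Minimal sketch of bi-encoder retrieval: each text becomes one vector, and
# relevance collapses into a single cosine score -- the scalar similarity the
# post says is hard to interpret. Library and checkpoint are assumed choices.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

query = "What mitigates hallucination in LLMs?"
docs = [
    "Retrieval-augmented generation grounds answers in external documents.",
    "The Eiffel Tower is 330 metres tall.",
]

q_emb = encoder.encode(query, convert_to_tensor=True)
d_emb = encoder.encode(docs, convert_to_tensor=True)
scores = util.cos_sim(q_emb, d_emb)  # one opaque number per document
print(scores)  # no indication of *which* part of the document matched
```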
