
2025/01/06

ARAGOG: Advanced RAG Output Grading - 논문 리뷰

https://arxiv.org/abs/2404.01037

"Retrieval-Augmented Generation (RAG) is essential for integrating external knowledge into Large Language Model (LLM) outputs. While the literature on RAG is growing, it primarily focuses on systematic reviews and comparisons of new state-of-the-art (SoTA) …"

This paper systematically compares RAG techniques, using the clear metrics of retrieval precision and answer similarity to evaluate RAG systems' …

DRAGIN: Dynamic Retrieval Augmented Generation based on the Information Needs of Large Language Models - 논문 리뷰

https://arxiv.org/abs/2403.10081

"Dynamic retrieval augmented generation (RAG) paradigm actively decides when and what to retrieve during the text generation process of Large Language Models (LLMs). There are two key elements of this paradigm: identifying the optimal moment to activate the …"

Financial Report Chunking for Effective Retrieval Augmented Generation - 논문 리뷰

https://arxiv.org/abs/2402.05131

"Chunking information is a key step in Retrieval Augmented Generation (RAG). Current research primarily centers on paragraph-level chunking. This approach treats all texts as equal and neglects the information contained in the structure of documents. We pro…"

This is one of the papers I had been looking for! Chunking …

LumberChunker: Long-Form Narrative Document Segmentation - 논문 리뷰

https://arxiv.org/abs/2406.17526

"Modern NLP tasks increasingly rely on dense retrieval methods to access up-to-date and relevant contextual information. We are motivated by the premise that retrieval benefits from segments that can vary in size such that a content's semantic independence …"

This paper splits documents into chunks using an LLM. But done that way, the resource cost seems far too …

DR-RAG: Applying Dynamic Document Relevance to Retrieval-Augmented Generation for Question-Answering - 논문 리뷰

https://arxiv.org/abs/2406.07348

"Retrieval-Augmented Generation (RAG) has recently demonstrated the performance of Large Language Models (LLMs) in the knowledge-intensive tasks such as Question-Answering (QA). RAG expands the query context by incorporating external knowledge bases to enha…"
