Self-Refine: Iterative Refinement with Self-Feedback
https://arxiv.org/abs/2303.17651

"Like humans, large language models (LLMs) do not always generate the best output on their first try. Motivated by how humans refine their written text, we introduce Self-Refine, an approach for improving initial outputs from LLMs through iterative feedback." (arxiv.org)

The idea is remarkably simple: text produced by an LLM is fed back into the same LLM, which gives feedback…
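The loop described above, generate, self-critique, revise with the same model, can be sketched as follows. This is a minimal illustration, not the paper's exact prompts: `call_llm` is a hypothetical stand-in for any LLM call, and the prompt strings and `stop_token` convention are assumptions for the sketch.

```python
def self_refine(call_llm, task_prompt, max_iters=3, stop_token="DONE"):
    """Iteratively refine an LLM output using the model's own feedback.

    call_llm: a hypothetical function (prompt: str) -> str wrapping any LLM.
    """
    # Step 1: get an initial draft from the model.
    output = call_llm(task_prompt)
    for _ in range(max_iters):
        # Step 2: the SAME model critiques its own draft.
        feedback = call_llm(
            f"Give feedback on this answer to '{task_prompt}':\n{output}"
        )
        # Stop when the model signals the draft is good enough.
        if stop_token in feedback:
            break
        # Step 3: the same model revises the draft using that feedback.
        output = call_llm(
            f"Task: {task_prompt}\nDraft: {output}\n"
            f"Feedback: {feedback}\nRevise the draft."
        )
    return output
```

With a real model behind `call_llm`, the same weights play all three roles (drafter, critic, reviser), which is exactly what makes the approach cheap to adopt: no extra training, just prompting.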