
Tree of Thoughts: Deliberate Problem Solving with Large Language Models - Paper Review

https://arxiv.org/abs/2305.10601
From the abstract: language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role.
The review traces the lineage Few-Shot -> CoT -> SC-CoT -> ...
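The core move in ToT, sketched minimally below: instead of committing to one left-to-right chain, expand several candidate "thoughts" per step, score them, and keep only the most promising states (a breadth-first search over partial solutions). The `generate` helper and the scoring prompt are assumptions for illustration, not the paper's actual code.

```python
# Minimal breadth-first Tree-of-Thoughts sketch.
# `generate(prompt, n)` is a hypothetical stand-in for any LLM call
# that returns n sampled completions.

def generate(prompt: str, n: int) -> list[str]:
    raise NotImplementedError  # plug in your LLM client here

def score(state: str) -> float:
    # Value each partial solution, e.g. by asking the LLM to rate it
    # and averaging the parsed scores (an assumed evaluation prompt).
    votes = generate(f"Rate 0-10 how promising this is:\n{state}\nScore:", n=3)
    return sum(float(v.strip().split()[0]) for v in votes) / len(votes)

def tree_of_thoughts(question: str, steps: int = 3, breadth: int = 2, k: int = 4) -> str:
    frontier = [question]  # partial solutions kept so far
    for _ in range(steps):
        candidates = []
        for state in frontier:
            # Expand each state with k candidate next "thoughts".
            for thought in generate(f"{state}\nNext step:", n=k):
                candidates.append(f"{state}\n{thought}")
        # Keep only the `breadth` highest-scoring states (BFS pruning).
        frontier = sorted(candidates, key=score, reverse=True)[:breadth]
    return frontier[0]
```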

Self-Consistency Improves Chain of Thought Reasoning in Language Models

https://arxiv.org/abs/2203.11171
From the abstract: chain-of-thought prompting combined with pre-trained large language models has achieved encouraging results on complex reasoning tasks. In this paper, we propose a new decoding strategy, self-consistency, to replace the naive greedy decoding used in chain-of-thought prompting.
The review starts from the point that conventional LLM-based CoT uses greedy decoding ...
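Self-consistency is simple to sketch: sample several diverse reasoning chains at a non-zero temperature and return the majority final answer. The `sample_chain` helper below is a hypothetical stand-in for one sampled CoT completion plus answer parsing, not an API from the paper.

```python
from collections import Counter

# Self-consistency sketch: marginalize over sampled reasoning paths
# by majority vote on the final answer.

def sample_chain(question: str, temperature: float = 0.7) -> str:
    # Return the parsed final answer of one temperature-sampled CoT chain.
    raise NotImplementedError  # plug in your LLM client here

def self_consistency(question: str, n: int = 20) -> str:
    answers = [sample_chain(question) for _ in range(n)]
    # The most frequent final answer wins, regardless of which
    # reasoning path produced it.
    return Counter(answers).most_common(1)[0][0]
```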

AutoReason: Automatic Few-Shot Reasoning Decomposition - Paper Review

https://arxiv.org/abs/2412.06975
From the abstract: Chain of Thought (CoT) was introduced in recent research as a method for improving step-by-step reasoning in Large Language Models. However, CoT has limited applications such as its need for hand-crafted few-shot exemplar prompts and no capability to adjust itself to different queries.
Crafting appropriate few-shot exemplars for CoT has always been a problem; however, scalability, ...
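The gist of AutoReason, as a rough two-stage sketch: a first LLM call decomposes the query into explicit reasoning steps, and a second call answers conditioned on those steps, removing the need for hand-written exemplars. The prompts and the `generate` helper are assumptions, not the paper's exact templates.

```python
# Two-stage AutoReason-style sketch: auto-generate the rationale
# instead of hand-crafting few-shot CoT exemplars.

def generate(prompt: str) -> str:
    raise NotImplementedError  # plug in your LLM client here

def autoreason(question: str) -> str:
    # Stage 1: decompose the question into explicit reasoning steps.
    rationale = generate(
        "Decompose the question into numbered reasoning steps.\n"
        f"Question: {question}\nSteps:"
    )
    # Stage 2: answer the question, conditioned on the generated steps.
    return generate(
        f"Question: {question}\nReasoning steps:\n{rationale}\nAnswer:"
    )
```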
