
prompt (3)

CoD - Chain of Draft: Thinking Faster by Writing Less - Paper Review

https://arxiv.org/abs/2502.18600 Chain of Draft: Thinking Faster by Writing Less (arxiv.org): "Large Language Models (LLMs) have demonstrated remarkable performance in solving complex reasoning tasks through mechanisms like Chain-of-Thought (CoT) prompting, which emphasizes verbose, step-by-step reasoning. However, humans typically employ a more eff..." This paper treats efficiency as extremely important. The token count differs enormously from CoT's, but..
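
To make the CoT-vs-CoD token gap concrete, here is a minimal sketch. The instruction wording paraphrases the paper's idea rather than quoting it, and the sample transcripts and whitespace token counter are illustrative assumptions, not the paper's exact prompts or tokenizer.

```python
# Minimal sketch of CoT vs. CoD prompting (illustrative; the exact
# instruction wording is paraphrased, not copied from the paper).

COT_SYSTEM = (
    "Think step by step to answer the question. "
    "Return the final answer after '####'."
)

# Chain of Draft: same stepwise reasoning, but each step is capped
# at a few words, which is where the token savings come from.
COD_SYSTEM = (
    "Think step by step, but keep only a minimal draft of each step, "
    "five words at most. Return the final answer after '####'."
)

def count_tokens(text: str) -> int:
    """Crude whitespace token count, just to compare transcript sizes."""
    return len(text.split())

# Hypothetical transcripts for the same GSM8K-style question,
# mimicking the verbosity gap the review points at.
cot_answer = (
    "Jason started with 20 lollipops. After giving some to Denny he has "
    "12 left. The number given away is the difference: 20 - 12 = 8. "
    "#### 8"
)
cod_answer = "20 - x = 12; x = 8. #### 8"

print(count_tokens(cot_answer), "tokens (CoT) vs",
      count_tokens(cod_answer), "tokens (CoD)")
```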

Tree of Thoughts: Deliberate Problem Solving with Large Language Models - Paper Review

https://arxiv.org/abs/2305.10601 Tree of Thoughts: Deliberate Problem Solving with Large Language Models (arxiv.org): "Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require..." Few-Shot -> CoT -> SC-CoT -> ..
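
The deliberate search that ToT layers on top of left-to-right decoding is easy to sketch. In the paper, proposing and scoring thoughts are both LLM calls; the toy stand-ins below are assumptions that only preserve the control flow (a breadth-first variant with a small beam).

```python
# Minimal sketch of Tree-of-Thoughts-style breadth-first search.
# propose() and score() would be LLM calls in the paper; here they
# are toy stand-ins so the control flow is runnable.

def propose(thought: str) -> list[str]:
    """Stand-in for the LLM thought generator: extend a partial solution."""
    return [thought + step for step in ("A", "B", "C")]

def score(thought: str) -> float:
    """Stand-in for the LLM state evaluator: rate a partial solution."""
    return thought.count("A") - 0.5 * thought.count("C")

def tot_bfs(root: str, depth: int = 3, beam: int = 2) -> str:
    """Keep the `beam` best partial thoughts at each level."""
    frontier = [root]
    for _ in range(depth):
        candidates = [c for t in frontier for c in propose(t)]
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return max(frontier, key=score)

print(tot_bfs(""))  # deliberately explores, unlike left-to-right decoding
```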

Progressive Prompts: Continual Learning for Language Models - Paper Review

https://arxiv.org/abs/2301.12314 Progressive Prompts: Continual Learning for Language Models (arxiv.org): "We introduce Progressive Prompts - a simple and efficient approach for continual learning in language models. Our method allows forward transfer and resists catastrophic forgetting, without relying on data replay or a large number of task-specific paramete..." I can't quite see what is distinctive about this paper. In the end it is soft prompt tuning..
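
Since the entry notes it boils down to soft prompt tuning, here is a minimal sketch of the progressive part: learn one soft prompt per task, freeze the earlier ones, and prepend them all to the input embeddings. The shapes, initialization, and the omitted training loop are assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the Progressive Prompts idea: one soft prompt per
# task, with earlier prompts frozen and prepended to the new one.
import torch

EMB_DIM, PROMPT_LEN = 768, 10  # assumed sizes for illustration

class ProgressivePrompts(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.prompts = torch.nn.ParameterList()  # one entry per task

    def new_task(self):
        # Freeze all previously learned prompts, then add a fresh one.
        for p in self.prompts:
            p.requires_grad_(False)
        self.prompts.append(
            torch.nn.Parameter(torch.randn(PROMPT_LEN, EMB_DIM) * 0.02)
        )

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # Concatenate [P_1; ...; P_k; input] along the sequence axis,
        # so old prompts enable forward transfer without being updated.
        stacked = torch.cat(list(self.prompts), dim=0)
        prefix = stacked.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return torch.cat([prefix, input_embeds], dim=1)

model = ProgressivePrompts()
model.new_task(); model.new_task()   # two tasks learned so far
x = torch.randn(4, 32, EMB_DIM)      # batch of input embeddings
print(model(x).shape)                # torch.Size([4, 52, 768])
```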
