https://arxiv.org/abs/2502.18600
Chain of Draft: Thinking Faster by Writing Less

From the abstract: "Large Language Models (LLMs) have demonstrated remarkable performance in solving complex reasoning tasks through mechanisms like Chain-of-Thought (CoT) prompting, which emphasizes verbose, step-by-step reasoning. However, humans typically employ a more eff..."

This paper places a strong emphasis on efficiency. Compared to CoT, the token count is dramatically different, though...
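
To make the efficiency point concrete, here is a minimal sketch in Python contrasting a standard CoT prompt with a Chain-of-Draft style prompt. The CoD instruction wording is paraphrased from the paper (it limits each reasoning step to a short draft of about five words), `call_llm` is a hypothetical stand-in for whatever model API is used, and the whitespace-based token count is only a rough proxy for a real tokenizer.

```python
# Minimal sketch: Chain-of-Thought vs. Chain-of-Draft prompting.
# `call_llm` is a hypothetical placeholder, not a real API; the whitespace
# word count below is only a crude approximation of tokenizer token counts.

COT_PROMPT = (
    "Think step by step to answer the following question. "
    "Return the answer at the end of the response after a separator ####."
)

# Chain of Draft: keep only a minimal draft per reasoning step
# (paraphrased from the paper's prompt, roughly 5 words per step).
COD_PROMPT = (
    "Think step by step, but only keep a minimum draft for each thinking "
    "step, with 5 words at most. "
    "Return the answer at the end of the response after a separator ####."
)


def build_messages(system_prompt: str, question: str) -> list[dict]:
    """Assemble a chat-style message list for either prompting strategy."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]


def rough_token_count(text: str) -> int:
    """Whitespace word count as a rough stand-in for tokenizer tokens."""
    return len(text.split())


if __name__ == "__main__":
    question = (
        "Jason had 20 lollipops. He gave Denny some lollipops. "
        "Now Jason has 12 lollipops. How many did he give to Denny?"
    )
    for name, prompt in [("CoT", COT_PROMPT), ("CoD", COD_PROMPT)]:
        messages = build_messages(prompt, question)
        # response = call_llm(messages)          # hypothetical model call
        # print(name, rough_token_count(response))  # compare output lengths
        print(name, rough_token_count(messages[0]["content"]), "prompt words")
```

With a real model call in place of `call_llm`, the comparison of `rough_token_count(response)` across the two prompts is where the large CoT-vs-CoD token gap the paper reports would show up.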