
2025/02/10

Chain of Thought with Explicit Evidence Reasoning for Few-shot Relation Extraction - 논문 리뷰

https://aclanthology.org/2023.findings-emnlp.153/
Chain of Thought with Explicit Evidence Reasoning for Few-shot Relation Extraction. Xilai Ma, Jing Li, Min Zhang. Findings of the Association for Computational Linguistics: EMNLP 2023.
FSRE (Few-shot Relation Extraction) is the problem of predicting the relation between two entities from a limited number of training samples. Because relations must be learned and predicted from only a handful of annotated samples, approaches such as meta-learning, neural graphs, and In-Con..
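As a concrete illustration of the task setup (not the paper's method), a few-shot RE episode can be framed as a prompt with a handful of labeled support examples followed by a query pair. The sentences, entity pairs, and relation labels below are made up for the sketch.

```python
# A minimal sketch of prompting for few-shot relation extraction:
# K labeled support examples, then a query whose relation is left blank.
# All data here is illustrative, not from the paper.

SUPPORT = [
    ("Paris is the capital of France.", ("Paris", "France"), "capital_of"),
    ("Einstein was born in Ulm.", ("Einstein", "Ulm"), "born_in"),
]

def build_fsre_prompt(support, query_sentence, query_pair):
    """Format support examples plus one unanswered query."""
    blocks = [
        f"Sentence: {sent}\nEntities: {head}, {tail}\nRelation: {rel}"
        for sent, (head, tail), rel in support
    ]
    blocks.append(
        f"Sentence: {query_sentence}\n"
        f"Entities: {query_pair[0]}, {query_pair[1]}\nRelation:"
    )
    return "\n\n".join(blocks)

print(build_fsre_prompt(SUPPORT, "Rome is the capital of Italy.", ("Rome", "Italy")))
```

The model (or a meta-learned classifier) would then fill in the final `Relation:` slot for the query pair.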

Calibrate Before Use: Improving Few-Shot Performance of Language Models - 논문

https://arxiv.org/abs/2102.09690
Calibrate Before Use: Improving Few-Shot Performance of Language Models.
When we use few-shot prompting, the language model ..
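A minimal sketch of the paper's contextual-calibration idea: measure the model's bias by querying it with a content-free input (e.g. "N/A"), then rescale the few-shot label probabilities by the inverse of that bias. The probability values below are illustrative, not taken from the paper.

```python
# Contextual calibration sketch: rescale label probabilities by
# W = diag(p_content_free)^-1, then renormalize to sum to 1.

def calibrate(p_label, p_content_free):
    q = [p / b for p, b in zip(p_label, p_content_free)]
    total = sum(q)
    return [x / total for x in q]

# Suppose the model leans toward "positive" even on a content-free input:
p_cf = [0.7, 0.3]    # P(positive), P(negative) on "N/A"
p_raw = [0.6, 0.4]   # raw few-shot prediction for a real input
print(calibrate(p_raw, p_cf))  # calibrated winner flips to "negative"
```

The same correction counteracts the prompt-order and format instabilities the paper describes, since those show up as a consistent bias toward certain labels.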

What Makes Chain-of-Thought Prompting Effective? A Counterfactual Study - 논문 리뷰

https://aclanthology.org/2023.findings-emnlp.101/
What Makes Chain-of-Thought Prompting Effective? A Counterfactual Study. Aman Madaan, Katherine Hermann, Amir Yazdanbakhsh. Findings of the Association for Computational Linguistics: EMNLP 2023.
CoT is a method used to improve the performance of LLMs. However, the reason CoT improves performance has not yet been established, so the study tests it under various conditions by manipulating patterns, symbols, and incorrect information. Judging from these results, CoT ... Few..
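One counterfactual probe of this kind can be sketched as follows: keep the surface pattern of a CoT exemplar but corrupt its numbers, to test whether the model depends on valid arithmetic or merely on the step-by-step form. This is an illustrative perturbation, not the paper's exact code.

```python
import re
import random

def corrupt_numbers(cot: str, rng: random.Random) -> str:
    """Replace every number in a CoT rationale with a random 0-99 value,
    leaving the wording and step structure untouched."""
    return re.sub(r"\d+", lambda m: str(rng.randint(0, 99)), cot)

cot = "There are 3 cars and 2 more arrive, so 3 + 2 = 5 cars in total."
print(corrupt_numbers(cot, random.Random(0)))
```

Comparing model accuracy with intact versus corrupted rationales isolates how much of CoT's benefit comes from the correctness of the intermediate content.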

Automatic Chain of Thought Prompting in Large Language Models - 논문 리뷰

https://arxiv.org/abs/2210.03493
Automatic Chain of Thought Prompting in Large Language Models.
Although CoT's performance has been demonstrated, writing CoT exemplars by hand is quite tedious..
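The Auto-CoT recipe addresses exactly that tedium: instead of hand-writing exemplars, sample diverse questions and let the model generate each rationale itself via the Zero-Shot-CoT trigger "Let's think step by step." The sketch below assumes a hypothetical `call_llm` stand-in for any text-completion API and omits the paper's diversity-based question clustering.

```python
# Auto-CoT sketch: build CoT demonstrations automatically by having the
# model generate rationales with the Zero-Shot-CoT trigger.
# `call_llm` is a hypothetical placeholder, not a real API.

def zero_shot_cot_prompt(question: str) -> str:
    return f"Q: {question}\nA: Let's think step by step."

def build_demonstrations(questions, call_llm):
    demos = []
    for q in questions:
        rationale = call_llm(zero_shot_cot_prompt(q))
        demos.append(f"Q: {q}\nA: Let's think step by step. {rationale}")
    return "\n\n".join(demos)

# Usage with a dummy model in place of a real LLM call:
fake_llm = lambda prompt: "First, ... So the answer is 42."
print(build_demonstrations(["What is 6 x 7?"], fake_llm))
```

The generated demonstrations are then prepended to new queries, replacing the manually authored exemplars that standard few-shot CoT requires.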
