
Artificial Intelligence (760)

KnowAgent: Knowledge-Augmented Planning for LLM-Based Agents - Paper Review

https://arxiv.org/abs/2403.03101
"Large Language Models (LLMs) have demonstrated great potential in complex reasoning tasks, yet they fall short when tackling more sophisticated challenges, especially when interacting with environments through generating executable actions. This inadequacy…"
Project page: https://zjunlp.github.io/project/KnowAg..

Understanding the planning of LLM agents: A survey - Paper Review

https://arxiv.org/abs/2402.02716
"As Large Language Models (LLMs) have shown significant intelligence, the progress to leverage LLMs as planning modules of autonomous agents has attracted more attention. This survey provides the first systematic view of LLM-based agents planning, covering…"
Reportedly the first survey paper on agent planning. Task Decomposition follows a divide-and-conquer…

Dynamic Planning for LLM-based Graphical User Interface Automation - Paper Review

https://arxiv.org/abs/2410.00467
"The advent of large language models (LLMs) has spurred considerable interest in advancing autonomous LLMs-based agents, particularly in intriguing applications within smartphone graphical user interfaces (GUIs). When presented with a task goal, these agent…"
The existing ReAct approach grows too long, so for GUI agents and real-world…

LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models - Paper Review

https://arxiv.org/abs/2212.04088
"This study focuses on using large language models (LLMs) as a planner for embodied agents that can follow natural language instructions to complete complex tasks in a visually-perceived environment. The high data cost and poor sample efficiency of existing…"
https://dki-la..

PoT, RoT, SoT, CoCoT, Active Prompt - Paper Review

These papers are each a bit too thin to cover on their own, or contain points I disagree with, so I bundled them into a single post.
https://arxiv.org/abs/2211.12588 Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks
"Recently, there has been significant progress in teaching language models to perform step-by-step reasoning to solve complex numerical reasoning tasks. Chain-of-thoughts prompting (CoT) is by far the state-…"
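As a rough illustration of the PoT idea (the model writes a program and an interpreter, not the model, does the arithmetic), here is a minimal sketch. The `call_llm` helper and its prompt are hypothetical stand-ins, not the paper's code.

```python
# Minimal Program-of-Thoughts sketch: the LLM writes Python that computes
# the answer, and the Python interpreter (not the model) runs the arithmetic.
# `call_llm` is a hypothetical stand-in for any text-completion API.

def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call an LLM API.
    # For this sketch, pretend the model returned the program below.
    return "cost_per_kg = 5\nkilograms = 12\nans = cost_per_kg * kilograms"

def program_of_thoughts(question: str) -> float:
    prompt = (
        "Write Python code that computes the answer to the question.\n"
        "Store the final answer in a variable named `ans`.\n"
        f"Question: {question}\n# Python:\n"
    )
    program = call_llm(prompt)
    namespace: dict = {}
    exec(program, namespace)  # the interpreter performs the computation
    return namespace["ans"]   # read back the final answer

print(program_of_thoughts("Apples cost $5/kg. How much do 12 kg cost?"))  # 60
```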

Graph of Thoughts: Solving Elaborate Problems with Large Language Models - Paper Review

https://arxiv.org/abs/2308.09687
"We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ab…"
ToT has now evolved into GoT. 2025.02…
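GoT's key addition over ToT is that thoughts form an arbitrary graph rather than a tree, so several independent thoughts can be aggregated into one. A minimal sketch under that reading; `merge_thoughts` is a hypothetical stand-in for an LLM aggregation call, not the paper's API.

```python
# Minimal Graph-of-Thoughts sketch: unlike a tree, a thought may have
# several parents, so independent partial results can be aggregated.
# `merge_thoughts` is a hypothetical stand-in for an LLM aggregation call.

from typing import List

def merge_thoughts(thoughts: List[str]) -> str:
    # Placeholder: an LLM would combine partial solutions into one thought.
    return " + ".join(thoughts)

# Solve subproblems independently (e.g., sort two halves of a list)...
branch_a = "sorted first half"
branch_b = "sorted second half"

# ...then an aggregation node with two parents merges them (a DAG, not a tree).
merged = merge_thoughts([branch_a, branch_b])
print(merged)  # "sorted first half + sorted second half"
```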

Tree of Thoughts: Deliberate Problem Solving with Large Language Models - Paper Review

https://arxiv.org/abs/2305.10601
"Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require…"
Few-Shot -> CoT -> SC-CoT -> …
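A rough sketch of the ToT search pattern (breadth-first search over intermediate "thoughts", keeping the top-scoring candidates at each depth). `propose_thoughts` and `score_thought` are hypothetical stand-ins for LLM calls, not the paper's code.

```python
# Minimal Tree-of-Thoughts sketch: BFS over partial solutions ("thoughts"),
# expanding each state with LLM-proposed next steps and keeping the best b.
# `propose_thoughts` and `score_thought` are hypothetical LLM-call stand-ins.

from typing import List

def propose_thoughts(state: str, k: int = 3) -> List[str]:
    # Placeholder: an LLM would propose k candidate next steps for `state`.
    return [f"{state} -> step{i}" for i in range(k)]

def score_thought(state: str) -> float:
    # Placeholder: an LLM would rate how promising this partial solution is.
    return -len(state)  # toy heuristic: prefer shorter paths

def tree_of_thoughts(problem: str, depth: int = 3, breadth: int = 2) -> str:
    frontier = [problem]
    for _ in range(depth):
        candidates = [t for s in frontier for t in propose_thoughts(s)]
        # Keep only the `breadth` most promising partial solutions.
        frontier = sorted(candidates, key=score_thought, reverse=True)[:breadth]
    return frontier[0]  # best reasoning path found

print(tree_of_thoughts("24 game: 4 9 10 13"))
```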

Self-Consistency Improves Chain of Thought Reasoning in Language Models

https://arxiv.org/abs/2203.11171
"Chain-of-thought prompting combined with pre-trained large language models has achieved encouraging results on complex reasoning tasks. In this paper, we propose a new decoding strategy, self-consistency, to replace the naive greedy decoding used in chain-…"
In conventional CoT with LLMs, greedy dec…
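A minimal sketch of the self-consistency idea: sample several reasoning paths instead of one greedy decode, then majority-vote on the final answers. `sample_cot_answer` is a hypothetical stand-in for a temperature-sampled LLM call.

```python
# Minimal self-consistency sketch: sample n diverse chain-of-thought
# completions (temperature > 0 instead of greedy decoding) and return the
# most common final answer. `sample_cot_answer` is a hypothetical LLM call.

import random
from collections import Counter

def sample_cot_answer(question: str) -> str:
    # Placeholder: an LLM would generate a reasoning chain and final answer.
    return random.choice(["8", "8", "8", "7"])  # toy answer distribution

def self_consistency(question: str, n_samples: int = 10) -> str:
    answers = [sample_cot_answer(question) for _ in range(n_samples)]
    # Marginalize over reasoning paths by majority vote on final answers.
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("If I have 3 apples and buy 5 more, how many?"))  # "8"
```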

AutoReason: Automatic Few-Shot Reasoning Decomposition - Paper Review

https://arxiv.org/abs/2412.06975
"Chain of Thought (CoT) was introduced in recent research as a method for improving step-by-step reasoning in Large Language Models. However, CoT has limited applications such as its need for hand-crafted few-shot exemplar prompts and no capability to adjus…"
Crafting good few-shot exemplars for CoT has always been a problem. However, scalability, …
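As I read the abstract, the idea is to generate the reasoning decomposition automatically instead of hand-crafting exemplars. A minimal two-stage sketch under that assumption; `call_llm` and both prompts are hypothetical, not the paper's prompts.

```python
# Minimal sketch of automatically building CoT exemplars: a first LLM call
# decomposes the question into explicit reasoning steps, and the generated
# rationale is then used as a tailored exemplar for the answering call.
# `call_llm` and both prompts are hypothetical stand-ins.

def call_llm(prompt: str) -> str:
    # Placeholder for an actual LLM API call.
    return "Step 1: ...\nStep 2: ..."

def auto_reason(question: str) -> str:
    # Stage 1: turn the implicit query into explicit intermediate steps.
    rationale = call_llm(f"Decompose into reasoning steps:\n{question}")
    # Stage 2: answer with the generated rationale as a tailored exemplar.
    return call_llm(f"{rationale}\nUsing these steps, answer: {question}")

print(auto_reason("How many legs do 3 spiders and 2 hens have?"))
```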

Chain of Thought with Explicit Evidence Reasoning for Few-shot Relation Extraction - Paper Review

https://aclanthology.org/2023.findings-emnlp.153/
Xilai Ma, Jing Li, Min Zhang. Findings of the Association for Computational Linguistics: EMNLP 2023.
Conventional few-shot relation extraction (FSRE) tackles predicting the relation between two entities from a limited number of training samples. Since relations must be learned and predicted from only a few annotated samples, approaches such as meta-learning, neural graphs, and in-con…
