
multi agent 3

Improving Factuality and Reasoning in Language Models through Multiagent Debate - Paper Review

https://arxiv.org/abs/2305.14325
"Large language models (LLMs) have demonstrated remarkable capabilities in language generation, understanding, and few-shot learning in recent years. An extensive body of work has explored how their performance may be further improved through the tools of p.."
This is an Agent paper! Among them, ..
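The debate setup the abstract hints at can be summarized in a few lines of Python. This is only a minimal sketch, not the paper's actual prompts or models: `ask_model`, the prompt wording, and the agent/round counts are hypothetical stand-ins for real LLM API calls.

```python
from typing import Callable, List

def debate(question: str,
           ask_model: Callable[[str], str],
           num_agents: int = 3,
           num_rounds: int = 2) -> List[str]:
    """Each agent answers, then revises after reading the other agents' answers."""
    # Round 0: every agent answers independently.
    answers = [ask_model(f"Question: {question}\nGive your answer and reasoning.")
               for _ in range(num_agents)]

    # Debate rounds: each agent reads the others' answers and updates its own.
    for _ in range(num_rounds):
        new_answers = []
        for i in range(num_agents):
            others = "\n".join(a for j, a in enumerate(answers) if j != i)
            prompt = (f"Question: {question}\n"
                      f"Other agents answered:\n{others}\n"
                      f"Your previous answer: {answers[i]}\n"
                      f"Considering these, give an updated answer.")
            new_answers.append(ask_model(prompt))
        answers = new_answers
    return answers
```

After the final round the answers tend to converge, so a simple majority over the returned list can serve as the final prediction.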

ChatDev: Communicative Agents for Software Development - Paper Review

https://arxiv.org/abs/2307.07924
"Software development is a complex task that necessitates cooperation among multiple members with diverse skills. Numerous studies used deep learning to improve specific phases in a waterfall model, such as design, coding, and testing. However, the deep lea.."
Like the Minecraft Agent covered previously, this paper also ... Long term, S..

ReConcile: Round-Table Conference Improves Reasoning via Consensus among Diverse LLMs - Paper Review

https://arxiv.org/abs/2309.13007
"Large Language Models (LLMs) still struggle with natural language reasoning tasks. Motivated by the society of minds (Minsky, 1988), we propose ReConcile, a multi-model multi-agent framework designed as a round table conference among diverse LLM agents. Re.."
Existing LLMs ... new ideas ..
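As a rough illustration of the round-table idea, here is a minimal sketch: several different model callables answer, discuss over a couple of rounds, and a simple majority vote stands in for the paper's confidence-weighted consensus. The `models` dict and the prompt text are assumptions, not the paper's implementation.

```python
from collections import Counter
from typing import Callable, Dict

def round_table(question: str,
                models: Dict[str, Callable[[str], str]],
                rounds: int = 2) -> str:
    """Diverse models answer, see each other's answers, then a vote picks the consensus."""
    # Initial answers from each (different) model.
    answers = {name: ask(question) for name, ask in models.items()}

    # Discussion rounds: every model sees the whole table before answering again.
    for _ in range(rounds):
        summary = "\n".join(f"{n}: {a}" for n, a in answers.items())
        answers = {name: ask(f"{question}\nAll current answers:\n{summary}\n"
                             f"Reconsider and give your final answer.")
                   for name, ask in models.items()}

    # Consensus by simple majority over the final answers
    # (the paper weights votes by each model's stated confidence).
    return Counter(answers.values()).most_common(1)[0][0]
```

Using heterogeneous models (rather than copies of one model, as in the debate paper above) is what the "diverse LLMs" in the title refers to.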
