
AI / Paper Reviews (in progress)

Editing Large Language Models: Problems, Methods, and Opportunities - Paper Review

https://arxiv.org/abs/2305.13172
This paper addresses the drawback that LLMs fail to reflect new information...

FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation - Paper Review

https://arxiv.org/abs/2310.03214
Tackles LLMs' lack of up-to-date knowledge by augmenting them with search...
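
For a feel of what search-engine augmentation looks like in practice, here is a minimal sketch of FreshPrompt-style prompt assembly. `web_search` and `llm_generate` are hypothetical stand-ins for a real search API and LLM client; only the evidence ordering and prompt layout are illustrated, not the paper's exact template.

```python
# Minimal sketch of search-augmented prompting in the spirit of FreshPrompt.
# `web_search` and `llm_generate` are hypothetical stand-ins for a real search
# API and a real LLM client; only the prompt-assembly logic is shown.
from typing import Dict, List

def build_fresh_prompt(question: str, results: List[Dict]) -> str:
    """Order retrieved evidence old-to-new so the most recent facts sit
    closest to the question, then ask for an answer grounded in them."""
    evidence = sorted(results, key=lambda r: r["date"])  # oldest first
    lines = [f"[{r['date']}] {r['source']}: {r['snippet']}" for r in evidence]
    context = "\n".join(lines)
    return (
        "Answer the question using the search results below. "
        "Prefer the most recent evidence.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

# Usage with the hypothetical clients:
# results = web_search("Who is the current CEO of Twitter/X?")
# print(llm_generate(build_fresh_prompt("Who is the current CEO of Twitter/X?", results)))
```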

A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity - Paper Review

https://arxiv.org/abs/2302.04023
Proposes a framework for quantitatively evaluating interactive LLMs such as ChatGPT on 23 public datasets covering 8 common NLP tasks...

Language Models of Code are Few-Shot Commonsense Learners

https://arxiv.org/abs/2210.07128
This model doesn't come across very clearly to me...? COCOGEN targets structured commonsense reasoning...
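
The core trick, as I understand it, is to serialize the target reasoning graph as code so that a code-pretrained LLM can complete it. The sketch below is an illustrative guess at that format, not the paper's actual schema; the `Plan` class layout and the code-LM client are my own assumptions.

```python
# Minimal sketch of the COCOGEN idea: serialize a reasoning graph as Python code
# so a code LLM can complete it. The Plan class layout here is an assumption for
# illustration, not the paper's exact schema.

FEW_SHOT = '''class Plan:
    goal = "bake a cake"
    steps = ["preheat oven", "mix batter", "pour into pan", "bake 30 min"]
    edges = [(0, 1), (1, 2), (2, 3)]  # step i must precede step j
'''

def make_code_prompt(goal: str) -> str:
    """Few-shot prompt: a completed Plan class followed by a partial one,
    leaving the code LM to fill in the steps and ordering edges."""
    return (
        FEW_SHOT
        + "\n\nclass Plan:\n"
        + f'    goal = "{goal}"\n'
        + "    steps = ["
    )

# A code model (hypothetical client) would be asked to continue this prompt;
# the completed class is then parsed back into a graph.
print(make_code_prompt("plant a tree"))
```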

How Can We Know What Language Models Know? - Paper Review

https://arxiv.org/abs/1911.12543
Probes the knowledge stored in language models by having them fill in the blanks of prompts such as "Obama is a _ by profession", noting that manually written prompts may be sub-optimal...
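
A rough sketch of the kind of cloze-style probing the paper studies, using the Hugging Face fill-mask pipeline: query the same fact with several paraphrased templates and aggregate the candidate scores. The templates and the simple score-summing rule are illustrative, not the paper's mined prompts or its ensembling method.

```python
# Minimal sketch of cloze-style knowledge probing with paraphrased prompts:
# ask the same fact with different templates and aggregate the scores.
from collections import defaultdict
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

templates = [
    "Obama is a [MASK] by profession.",
    "Obama works as a [MASK].",
    "By profession, Obama is a [MASK].",
]

scores = defaultdict(float)
for t in templates:
    for cand in fill(t, top_k=5):          # each cand: {"token_str", "score", ...}
        scores[cand["token_str"].strip()] += cand["score"]

# Ensembling over templates is more robust than any single hand-written prompt.
print(max(scores.items(), key=lambda kv: kv[1]))
```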

Eliciting Latent Predictions from Transformers with the Tuned Lens - Paper Review

https://arxiv.org/abs/2303.08112
The existing Logit Lens approach reads predictions straight from the Transformer's intermediate outputs...
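
A minimal sketch of the difference on GPT-2: the logit lens pushes an intermediate hidden state straight through the final LayerNorm and unembedding, while a tuned-lens-style probe first passes it through a per-layer affine translator. The translator below is randomly initialized for illustration only; in the paper it is trained with a KL objective so the decoded distribution matches the model's final prediction.

```python
# Minimal sketch contrasting the logit lens with a tuned-lens-style probe on GPT-2.
# The affine "translator" below is untrained; the real probe is fit per layer.
import torch
from torch import nn
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

inputs = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

d_model = model.config.n_embd
layer = 6                                   # intermediate block to inspect
h = out.hidden_states[layer][0, -1]         # last-token hidden state at that layer

# Logit lens: raw hidden state straight through the final norm + unembedding.
logit_lens = model.lm_head(model.transformer.ln_f(h))

# Tuned lens: first map the hidden state through a per-layer affine translator
# (parameterized here as identity plus a learned residual, untrained).
translator = nn.Linear(d_model, d_model)
tuned_lens = model.lm_head(model.transformer.ln_f(h + translator(h)))

print(tok.decode(logit_lens.argmax().item()), tok.decode(tuned_lens.argmax().item()))
```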

Patchscopes: A Unifying Framework for Inspecting Hidden Representations of Language Models - Paper Review

https://arxiv.org/abs/2401.06102
This approach uses the model's output...
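
A minimal sketch of the patching idea on GPT-2, making no claim about the paper's exact prompts: grab a hidden state from a source prompt, overwrite a placeholder position in an inspection prompt with it via a forward hook, and let the model verbalize what that representation encodes. The layer indices and the target prompt are arbitrary choices for illustration.

```python
# Minimal sketch of a Patchscopes-style inspection on GPT-2: copy a hidden state
# from a source prompt into a placeholder position of an inspection prompt and
# let the model verbalize it. Layer choices and the target prompt are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

SRC_LAYER, TGT_LAYER = 8, 2

# 1) Source pass: grab the hidden state of the last source token.
src = tok("Albert Einstein", return_tensors="pt")
with torch.no_grad():
    src_h = model(**src, output_hidden_states=True).hidden_states[SRC_LAYER][0, -1]

# 2) Target pass: an inspection prompt whose final "x" token gets patched.
tgt = tok("Syria: country. Paris: city. x", return_tensors="pt")
patch_pos = tgt["input_ids"].shape[1] - 1

def patch_hook(module, inputs, output):
    hidden = output[0]
    if hidden.shape[1] > patch_pos:          # only on the prefill pass, not cached decoding
        hidden = hidden.clone()
        hidden[0, patch_pos] = src_h         # overwrite the placeholder position
        return (hidden,) + output[1:]
    return output

handle = model.transformer.h[TGT_LAYER].register_forward_hook(patch_hook)
with torch.no_grad():
    gen = model.generate(**tgt, max_new_tokens=5, do_sample=False)
handle.remove()

print(tok.decode(gen[0, tgt["input_ids"].shape[1]:]))
```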

Emergent Linguistic Structure in Artificial Neural Networks Trained by Self-Supervision - Paper Review

https://www.pnas.org/doi/10.1073/pnas.1907367117
This paper asks whether BERT, trained with self-supervised learning, actually picks up structural meaning as it learns. Using attention probes and structural probes, the authors evaluate BERT's ability to learn grammatical relations and hierarchical structure, and find that BERT captures these effectively. The limitations are that it is hard to fully explain how the linguistic structure BERT learns is organized, and that it differs somewhat from how humans acquire language. In short, the paper deals with "emergent linguistic structure in artificial neural networks trained by self-supervision"...
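
To make the "structural probe" part concrete, here is a minimal sketch in the spirit of Hewitt and Manning's probe: a learned linear map under which squared distances between contextual word vectors are trained to match parse-tree distances. The toy tensors stand in for real BERT hidden states and gold tree distances.

```python
# Minimal sketch of a structural probe: a learned linear map B under which
# squared distances between word vectors approximate parse-tree distances.
import torch
from torch import nn

class StructuralProbe(nn.Module):
    def __init__(self, hidden_dim: int, probe_rank: int = 64):
        super().__init__()
        self.B = nn.Linear(hidden_dim, probe_rank, bias=False)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        """h: (seq_len, hidden_dim) -> predicted pairwise tree distances (seq_len, seq_len)."""
        z = self.B(h)                              # project into probe space
        diff = z.unsqueeze(0) - z.unsqueeze(1)     # pairwise differences
        return (diff ** 2).sum(-1)                 # squared L2 distances

# Toy usage: BERT-sized vectors and a fake gold distance matrix.
h = torch.randn(5, 768)
gold = torch.randint(1, 4, (5, 5)).float()
probe = StructuralProbe(768)
loss = (probe(h) - gold).abs().mean()              # L1 loss, as in the original probe
loss.backward()
```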

Visualizing and Measuring the Geometry of BERT - Paper Review

https://arxiv.org/abs/1906.02715
This paper analyzes BERT's internal embedding space as a way of interpreting large language models (LLMs)...

MemGPT: Towards LLMs as Operating Systems - Paper Review

https://arxiv.org/abs/2310.08560
To work around large language models' limited context windows, MemGPT borrows the idea of virtual memory from operating systems...
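
A toy sketch of the paging idea, with the budget and the recall rule being my own simplifications: a small main context that actually fits in the prompt, an unbounded archival store, and page-in/page-out operations that the LLM would trigger through function calls (a real system would use embedding search rather than keyword matching).

```python
# Toy sketch of MemGPT-style context paging: a bounded "main context" plus an
# unbounded archival store, with page-in/page-out operations the LLM would
# invoke via function calls. Budgets and the recall rule are illustrative.
from collections import deque

class VirtualContext:
    def __init__(self, main_budget: int = 4):
        self.main = deque()                 # what actually goes into the prompt
        self.archive = []                   # external storage, effectively unbounded
        self.main_budget = main_budget

    def add(self, message: str) -> None:
        """Append to main context; evict the oldest entries to the archive when full."""
        self.main.append(message)
        while len(self.main) > self.main_budget:
            self.archive.append(self.main.popleft())

    def recall(self, query: str, k: int = 2) -> list:
        """Page archived messages back in by naive keyword match."""
        hits = [m for m in self.archive if query.lower() in m.lower()][:k]
        for m in hits:
            self.add(f"[recalled] {m}")
        return hits

ctx = VirtualContext()
for i in range(8):
    ctx.add(f"turn {i}: user mentioned project Falcon" if i == 1 else f"turn {i}: small talk")
ctx.recall("Falcon")                        # pages the relevant old turn back in
print(list(ctx.main))
```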
