
Artificial Intelligence / Paper Reviews or In Progress (133)

How Can We Know What Language Models Know? - Paper Review

https://arxiv.org/abs/1911.12543
Recent work examines the knowledge contained in language models (LMs) by having the LM fill in the blanks of cloze prompts such as "Obama is a _ by profession"; these prompts are usually manually created and quite possibly sub-optimal. https://github.com/WooooDyy/LLM-Agent-Paper-List?tab=readm..
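
The probing setup the paper builds on is easy to sketch. Below is a minimal, illustrative example of cloze-style knowledge probing with a masked LM, assuming the Hugging Face transformers library and bert-base-uncased (the paper's actual prompt sets and models differ):

```python
# Minimal sketch of cloze-style knowledge probing, assuming the Hugging Face
# `transformers` library and bert-base-uncased; the prompt wording here is
# illustrative, not the paper's curated prompt set.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# "Obama is a [MASK] by profession." -> the LM's top guesses indicate whether
# the fact is (retrievably) stored in its parameters.
for pred in fill("Obama is a [MASK] by profession.", top_k=5):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```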

Eliciting Latent Predictions from Transformers with the Tuned Lens - Paper Review

https://arxiv.org/abs/2303.08112
The paper analyzes transformers from the perspective of iterative inference, asking how predictions are refined layer by layer; an affine probe is trained for each block of a frozen pretrained model so that every intermediate representation can be decoded into a prediction. The existing Logit Lens approach decodes the Transformer's output…
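
To make the difference from the logit lens concrete, here is a small sketch of the tuned-lens idea under stated assumptions: one learned affine "translator" per block, followed by the frozen unembedding. Shapes, names, and the omitted training loop are illustrative, not the authors' implementation:

```python
# Minimal sketch of the tuned-lens idea: a learned affine map per layer that
# adjusts an intermediate hidden state before applying the (frozen) unembedding.
import torch
import torch.nn as nn

d_model, vocab = 768, 50257

class TunedLensProbe(nn.Module):
    def __init__(self, d_model: int, unembed: nn.Linear):
        super().__init__()
        # Affine "translator" for one transformer block, initialized near identity.
        self.translator = nn.Linear(d_model, d_model)
        nn.init.eye_(self.translator.weight)
        nn.init.zeros_(self.translator.bias)
        self.unembed = unembed                      # frozen LM head
        for p in self.unembed.parameters():
            p.requires_grad_(False)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Logit lens would be self.unembed(hidden); the tuned lens first
        # applies the learned affine correction. In the paper each probe is
        # trained to match the model's final-layer logits (training loop omitted).
        return self.unembed(self.translator(hidden))

unembed = nn.Linear(d_model, vocab, bias=False)
probe = TunedLensProbe(d_model, unembed)
logits = probe(torch.randn(1, 10, d_model))         # (batch, seq, vocab)
```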

Patchscopes: A Unifying Framework for Inspecting Hidden Representations of Language Models - Paper Review

https://arxiv.org/abs/2401.06102
Understanding the internal representations of large language models (LLMs) can help explain model behavior and verify alignment with human values; the paper proposes using the model's own ability to generate human-understandable text to inspect its hidden representations. This method takes the output…
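
A rough sketch of the underlying patching operation, assuming a Hugging Face GPT-2 model; the layer index, prompts, and positions are illustrative choices rather than the paper's protocol:

```python
# Rough sketch of representation patching: take a hidden state from a source
# prompt and substitute it into an "inspection" prompt at the same block.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

LAYER, SRC_POS, TGT_POS = 6, -1, -1                   # illustrative choices

# 1) Source pass: grab the hidden state after block LAYER at the last token.
src = tok("Alexander the Great", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**src, output_hidden_states=True).hidden_states
patch_vec = hidden_states[LAYER + 1][0, SRC_POS]      # output of block LAYER

# 2) Target pass: overwrite one position's hidden state at the same block,
#    then read off the next-token prediction.
def patch_hook(module, inputs, output):
    output[0][0, TGT_POS] = patch_vec                 # in-place patch
    return output

prompt = "Syria: Syria. France: France. x:"           # illustrative identity-style prompt
tgt = tok(prompt, return_tensors="pt")
handle = model.transformer.h[LAYER].register_forward_hook(patch_hook)
with torch.no_grad():
    logits = model(**tgt).logits
handle.remove()
print(tok.decode(logits[0, -1].argmax().item()))
```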

Emergent Linguistic Structure in Artificial Neural Networks Trained by Self-Supervision - Paper Review

https://www.pnas.org/doi/10.1073/pnas.1907367117
This paper asks whether BERT, trained through self-supervised learning, actually captures structural meaning as it learns. Using attention probes and structural probes, the authors evaluate BERT's ability to learn grammatical relations and hierarchical structure, and find that BERT does capture these effectively. The limitations are that it is hard to fully explain how the linguistic structure BERT learns is organized, and that it differs somewhat from how humans acquire language. The paper covers "emergent linguistic structure in artificial neural networks trained by self-supervision"…
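
As a concrete reference point for the structural-probe idea, here is a minimal sketch in the spirit of Hewitt and Manning's probe: a learned linear map under which squared distances between transformed word vectors approximate parse-tree distances. Dimensions and the (omitted) training loop are illustrative assumptions:

```python
# Minimal structural-probe sketch: distances in a learned linear subspace of
# BERT's representation space are compared against gold parse-tree distances.
import torch
import torch.nn as nn

d_model, rank = 768, 128
B = nn.Parameter(torch.randn(d_model, rank) * 0.01)   # the probe's only parameters

def probe_distances(word_vectors: torch.Tensor) -> torch.Tensor:
    """word_vectors: (seq, d_model) BERT states for one sentence.
    Returns (seq, seq) predicted squared tree distances."""
    transformed = word_vectors @ B                     # (seq, rank)
    diff = transformed[:, None, :] - transformed[None, :, :]
    return (diff ** 2).sum(-1)

# Training would minimize |predicted - gold_tree_distance| averaged over word
# pairs; here we only show the forward computation on random vectors.
pred = probe_distances(torch.randn(12, d_model))
```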

Visualizing and Measuring the Geometry of BERT - Paper Review

https://arxiv.org/abs/1906.02715
Since a single pretrained Transformer can be fine-tuned to perform well on many different tasks, it appears to extract generally useful linguistic features; this paper analyzes BERT's internal embedding space as a way of interpreting large language models (LLMs)…
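
One simple, hedged way to start looking at that embedding geometry is to project contextual token embeddings into 2-D; the sketch below assumes bert-base-uncased and uses plain PCA, which is far cruder than the paper's own analyses:

```python
# Illustrative sketch: project BERT contextual token embeddings for a few
# sentences into 2-D with PCA. Model choice and sentences are arbitrary.
import torch
from sklearn.decomposition import PCA
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

sentences = ["The bank raised interest rates.", "She sat on the river bank."]
points, labels = [], []
for s in sentences:
    enc = tok(s, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]     # (seq, 768)
    points.append(hidden)
    labels.extend(tok.convert_ids_to_tokens(enc["input_ids"][0].tolist()))

xy = PCA(n_components=2).fit_transform(torch.cat(points).numpy())
for (x, y), t in zip(xy, labels):
    print(f"{t:>10}  ({x:+.2f}, {y:+.2f})")
```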

MemGPT: Towards LLMs as Operating Systems - Paper Review

https://arxiv.org/abs/2310.08560
LLMs are constrained by limited context windows, which hinders tasks like extended conversations and document analysis. To work around this, MemGPT borrows the virtual-memory idea from operating systems…
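
The virtual-memory analogy can be sketched as a tiny paging layer over an external archive. The class and method names below are illustrative, not the MemGPT codebase's API, and the token counting and retrieval are deliberately naive:

```python
# Toy sketch of MemGPT's "virtual memory" framing: a small fixed-size main
# context ("RAM") plus an external archive ("disk") with explicit paging.
from collections import deque

class PagedContext:
    def __init__(self, max_main_tokens: int = 2000):
        self.max_main_tokens = max_main_tokens
        self.main_context = deque()          # what would be sent to the LLM
        self.archive = []                    # evicted older messages

    def _tokens(self, text: str) -> int:
        return len(text.split())             # crude stand-in for a tokenizer

    def add(self, message: str) -> None:
        self.main_context.append(message)
        # Page out: evict oldest messages to the archive when over budget.
        while sum(self._tokens(m) for m in self.main_context) > self.max_main_tokens:
            self.archive.append(self.main_context.popleft())

    def recall(self, query: str, k: int = 3) -> list[str]:
        # Page in: naive keyword search over archived messages; MemGPT lets
        # the LLM itself issue such retrieval calls as functions.
        hits = [m for m in self.archive if query.lower() in m.lower()]
        return hits[:k]
```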

Interpreting the Second-Order Effects of Neurons in CLIP - Paper Review

https://arxiv.org/abs/2406.04341
The paper interprets the function of individual neurons in CLIP by automatically describing them with text; analyzing only the direct effects (the flow from a neuron through the residual stream to the output) or the overall indirect effects fails, and checking only a neuron's first-order activation level cannot properly characterize the neuron…
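
For contrast, here is what that insufficient "first-order" view looks like in code: recording one neuron's activation with a forward hook on a CLIP vision-MLP layer. The checkpoint, layer, and neuron indices are illustrative assumptions; the paper's second-order analysis goes well beyond this:

```python
# Sketch of the first-order view: record a single neuron's activations with a
# forward hook on one CLIP vision-MLP layer (illustrative indices and checkpoint).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

LAYER, NEURON = 9, 42
acts = {}

def record(module, inputs, output):
    acts["value"] = output[..., NEURON].detach()       # one unit, all patches

layer = model.vision_model.encoder.layers[LAYER].mlp.fc1
handle = layer.register_forward_hook(record)

image = Image.new("RGB", (224, 224), "white")          # placeholder image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    model.get_image_features(**inputs)
handle.remove()
print(acts["value"].shape)                             # per-patch activations
```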

How Much Knowledge Can You Pack Into the Parameters of a Language Model? - Paper Review

https://arxiv.org/abs/2002.08910
Neural language models trained on unstructured text can implicitly store and retrieve knowledge through natural language queries; this short paper measures the practical utility of that approach. It examines how well a large language model (T5) can answer questions without relying on external knowledge…
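
The closed-book setup being measured can be sketched in a few lines, assuming the Hugging Face transformers T5 checkpoint; the task prefix is illustrative, and the paper fine-tunes on QA pairs before evaluating:

```python
# Minimal sketch of closed-book QA: the model must answer from its parameters
# alone, with no retrieved context. The plain pretrained t5-base is used here
# only for illustration; the paper fine-tunes before evaluation.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base").eval()

question = "trivia question: who wrote the origin of species?"   # illustrative prefix
ids = tok(question, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=16)
print(tok.decode(out[0], skip_special_tokens=True))
```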

A Multimodal Automated Interpretability Agent

https://arxiv.org/abs/2404.14394
MAIA (Multimodal Automated Interpretability Agent) uses neural models to automate model-understanding tasks such as feature interpretation and failure-mode discovery, equipping a pretrained vision-language model with a set of tools. Through many automated experiments it finds features that appear only in particular images, and it also generates images…

Natural Language Processing (almost) from Scratch

https://arxiv.org/abs/1103.0398
The paper proposes a unified neural network architecture and learning algorithm applicable to various NLP tasks, including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. It removes the task-specific feature engineering of earlier systems and instead leverages large amounts of unlabeled data to learn end-to-end…
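
The unified architecture's window approach is simple enough to sketch; the snippet below is an illustrative PyTorch rendition of a shared-embedding window tagger, with vocabulary, tag set, and window width chosen arbitrarily:

```python
# Sketch of a window-approach tagger in the spirit of Collobert et al.: shared
# word embeddings (learnable, or initialized from unsupervised pretraining)
# feed a small MLP that scores tags for the center word of each window.
import torch
import torch.nn as nn

class WindowTagger(nn.Module):
    def __init__(self, vocab_size=50_000, emb_dim=50, window=5,
                 hidden=300, num_tags=45):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # shared across tasks
        self.mlp = nn.Sequential(
            nn.Linear(window * emb_dim, hidden),
            nn.Tanh(),                                   # HardTanh in the paper
            nn.Linear(hidden, num_tags),
        )

    def forward(self, window_ids: torch.Tensor) -> torch.Tensor:
        # window_ids: (batch, window) word indices around the target word.
        e = self.embed(window_ids)                       # (batch, window, emb)
        return self.mlp(e.flatten(1))                    # (batch, num_tags)

tagger = WindowTagger()
scores = tagger(torch.randint(0, 50_000, (8, 5)))        # e.g. POS/NER/chunk scores
```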
