https://arxiv.org/abs/2401.06102
Patchscopes: A Unifying Framework for Inspecting Hidden Representations of Language Models

From the abstract: "Understanding the internal representations of large language models (LLMs) can help explain models' behavior and verify their alignment with human values. Given the capabilities of LLMs in generating human-understandable text, we propose leveraging the model itself to explain its internal representations in natural language."

This approach ... the output ...
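The note above is cut off, but the paper's core move can be sketched: take a hidden representation from a "source" prompt, patch it into a separate "inspection" prompt, and let the model itself verbalize what that representation encodes. Below is a minimal sketch assuming a Hugging Face GPT-2 model; the layer indices, the few-shot identity inspection prompt, and using the same layer on both sides are illustrative assumptions, not the paper's exact configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small model for illustration (assumption)
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# 1) Source pass: record the hidden state of the last token after block `layer`.
source_prompt = "The Eiffel Tower is located in the city of"
src_ids = tok(source_prompt, return_tensors="pt").input_ids
with torch.no_grad():
    src_out = model(src_ids, output_hidden_states=True)
layer = 8                                    # source layer (assumption)
h = src_out.hidden_states[layer + 1][0, -1]  # output of block `layer`, last token

# 2) Inspection prompt: a few-shot identity pattern ending in a placeholder token.
target_prompt = "cat -> cat; 135 -> 135; hello -> hello; x"
tgt_ids = tok(target_prompt, return_tensors="pt").input_ids
patch_pos = tgt_ids.shape[1] - 1             # overwrite the final token's state

def patch_hook(module, inputs, output):
    hidden = output[0]
    # Patch only on the full-prompt pass; later cached decoding steps see one token.
    if hidden.shape[1] > patch_pos:
        hidden = hidden.clone()
        hidden[0, patch_pos] = h
        return (hidden,) + output[1:]
    return output

# 3) Target pass: hook the same block index (an assumption) and generate freely.
handle = model.transformer.h[layer].register_forward_hook(patch_hook)
with torch.no_grad():
    gen = model.generate(tgt_ids, max_new_tokens=8, do_sample=False,
                         pad_token_id=tok.eos_token_id)
handle.remove()

print(tok.decode(gen[0, tgt_ids.shape[1]:]))  # the model's readout of h
```

If the patched vector still encodes the relevant information, the continuation after the placeholder should spell it out in natural language; this is the sense in which the model is used to explain its own hidden representations.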