

Dynamics Week 9 Summary

The last topic covered before the exam was impact, and the resulting equations were as shown above. From here on, collisions happen at an angle, and you only need to compute along the line of impact. In this problem the size of the bodies can be ignored. Since the impact occurs only along the x-axis, the y-velocity is unchanged: substitute into m₁v₁ + m₂v₂ = m₁v₁′ + m₂v₂′, then evaluate e = (v₂′ − v₁′) / (v₁ − v₂), and you are done. Next comes angular momentum. Momentum is a physical quantity describing an object's state of motion, and it comes in two main kinds: linear momentum (Linear Momentum) and angular momentum (Angular Momentum). In detail: 1. Linear Momentum. Definition: linear momentum is, when an object with mass moves at a constant velocity, ..
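A minimal sketch of the computation described above, assuming a head-on impact along the x-axis. The masses, velocities, and restitution coefficient below are made-up example values, not from the original post; the closed-form solution follows directly from the two equations quoted in the excerpt.

```python
# Solve a 1-D (line-of-impact) collision from conservation of momentum
# and the coefficient of restitution:
#   m1*v1 + m2*v2 = m1*v1p + m2*v2p
#   e = (v2p - v1p) / (v1 - v2)
# Two linear equations in two unknowns give v1p and v2p in closed form.

def solve_impact(m1, v1, m2, v2, e):
    """Return post-impact velocities (v1p, v2p) along the line of impact."""
    p = m1 * v1 + m2 * v2              # total momentum, conserved
    rel = v1 - v2                      # approach speed along the line of impact
    v1p = (p - m2 * e * rel) / (m1 + m2)
    v2p = (p + m1 * e * rel) / (m1 + m2)
    return v1p, v2p

# Hypothetical example: 2 kg at +3 m/s hits 1 kg at -1 m/s, e = 0.8.
# Any y-components would simply carry over unchanged.
v1p, v2p = solve_impact(2.0, 3.0, 1.0, -1.0, 0.8)
print(v1p, v2p)  # 0.6, 3.8; check: v2p - v1p == e * (v1 - v2) == 3.2
```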

Misc 2024.11.17

An X-Ray Is Worth 15 Features: Sparse Autoencoders for Interpretable Radiology Report Generation

https://arxiv.org/abs/2410.03334 An X-Ray Is Worth 15 Features: Sparse Autoencoders for Interpretable Radiology Report Generation. Radiological services are experiencing unprecedented demand, leading to increased interest in automating radiology report generation. Existing Vision-Language Models (VLMs) suffer from hallucinations, lack interpretability, and require expensive fine-tunin.. The image..

The Geometry of Concepts: Sparse Autoencoder Feature Structure - Paper Review

https://arxiv.org/abs/2410.19750 The Geometry of Concepts: Sparse Autoencoder Feature Structure. Sparse autoencoders have recently produced dictionaries of high-dimensional vectors corresponding to the universe of concepts represented by large language models. We find that this concept universe has interesting structure at three levels: 1) The "atomic.. Atomic level: word relations (e.g., "Austria:Vienna::Switz..
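As a rough illustration of the "atomic-level" parallelogram structure the excerpt refers to (Austria:Vienna :: Switzerland:Bern), here is a sketch with made-up stand-in vectors; real experiments would use feature vectors from a trained sparse autoencoder's decoder dictionary, not these synthetic ones.

```python
import numpy as np

# Hypothetical stand-in vectors: a shared "capital-of" direction plus
# small noise, mimicking the parallelogram structure the paper reports.
rng = np.random.default_rng(0)
capital_of = rng.normal(size=8)
austria = rng.normal(size=8)
switzerland = rng.normal(size=8)
vienna = austria + capital_of + 0.05 * rng.normal(size=8)
bern = switzerland + capital_of + 0.05 * rng.normal(size=8)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Parallelogram test: the two pair-difference vectors should align.
print(cosine(vienna - austria, bern - switzerland))       # close to 1.0
# Equivalently, analogy completion: vienna - austria + switzerland ≈ bern
print(cosine(vienna - austria + switzerland, bern))       # close to 1.0
```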

Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models

https://arxiv.org/abs/2403.19647 Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models. We introduce methods for discovering and applying sparse feature circuits. These are causally implicated subnetworks of human-interpretable features for explaining language model behaviors. Circuits identified in prior work consist of polysemantic and diff.. This paper..

HOB Log Archive - HOB Shell Node Connection, Up to Wall Connection + Modal

The end of HOB is finally in sight. The CAE program I started building last year under the banner "let's make money with C++" is slowly approaching its finish. It began with attaching Beam and Shell elements to walls, then moved on to Node connections, and now visualization of Modal results is done as well. I could only build it because my professor resolved my questions whenever I had them, but the unpaid wages were quite disappointing... Still, it is nearly complete, so it should just be a matter of catching the last few bugs. Let me check with some pictures first. This picture is a vehicle Modal analysis. Since I can't show the whole thing, I'm just leaving a record that the analysis ran and that I also tried running a vehicle analysis. Now only AI remains, and I'm almost done with vehicles and autonomous driving... This is a subframe Modal analysis. Professional kn..

FEM/HOB 2024.11.16

Whispers that Shake Foundations: Analyzing and Mitigating False Premise Hallucinations in Large Language Models - Paper Review

https://arxiv.org/abs/2402.19103 Whispers that Shake Foundations: Analyzing and Mitigating False Premise Hallucinations in Large Language Models. Large Language Models (LLMs) have shown impressive capabilities but still suffer from the issue of hallucinations. A significant type of this issue is the false premise hallucination, which we define as the phenomenon when LLMs generate hallucinated text..

One Agent To Rule Them All: Towards Multi-agent Conversational AI - Paper Review

https://arxiv.org/abs/2203.07665 One Agent To Rule Them All: Towards Multi-agent Conversational AI. The increasing volume of commercially available conversational agents (CAs) on the market has resulted in users being burdened with learning and adopting multiple agents to accomplish their tasks. Though prior work has explored supporting a multitude of do.. This is an old paper. So, like GPT-3 and 4, all..

Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings - Paper Review

https://arxiv.org/abs/1607.06520 Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural lan.. Word embeddings reflect the gender ste.. in the data..

Gender Bias in Neural Natural Language Processing - Paper Review

https://arxiv.org/abs/1807.11714 Gender Bias in Neural Natural Language Processing. We examine whether neural natural language processing (NLP) systems reflect historical biases in training data. We define a general benchmark to quantify gender bias in a variety of neural NLP tasks. Our empirical evaluation with state-of-the-art neural co.. Here, bias was checked by swapping words and inspecting the embedding space and attention scores..
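A rough sketch of that word-swap probe, under stated assumptions: we embed two sentences that differ only in a gendered word and compare the contextual vector of a target occupation word. The model choice (bert-base-uncased) and the example sentences are illustrative stand-ins, not the paper's actual experimental setup.

```python
# Toy gender-swap probe: if the contextual representation of "doctor"
# shifts noticeably when only the gendered pronoun changes, that shift
# is a potential bias signal in the representation space.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Contextual embedding of `word`'s first occurrence in `sentence`."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]        # (seq_len, dim)
    word_id = tok.convert_tokens_to_ids(word)
    pos = (enc.input_ids[0] == word_id).nonzero()[0, 0]   # first match
    return hidden[pos]

v_he = word_vector("he worked as a doctor at the hospital", "doctor")
v_she = word_vector("she worked as a doctor at the hospital", "doctor")
sim = torch.cosine_similarity(v_he, v_she, dim=0)
print(f"cosine similarity after gender swap: {sim.item():.4f}")
# A similarity well below 1.0 means the gendered context moved the
# occupation word's representation, analogous to the paper's swap test.
```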
