반응형

인공지능/논문 리뷰 or 진행 248

Few-shot 관련 논문

https://arxiv.org/abs/2408.04392 Open-domain Implicit Format Control for Large Language Model GenerationControlling the format of outputs generated by large language models (LLMs) is a critical functionality in various applications. Current methods typically employ constrained decoding with rule-based automata or fine-tuning with manually crafted format instarxiv.orgLLM이 다양한 사용자의 출력 형식 요구를 충족하지 ..

데이터 기반 질환 예측 논문 정리 - 3

https://advanced.onlinelibrary.wiley.com/doi/10.1002/advs.202412775여기서도 Transformer 기반으로 멀티 오믹스(Multi-omics) 데이터를 활용하여 만성 질환을 조기 예측합니다. 혈액 검사 데이터는 클러스터링하고, Multi-omics 데이터는 Transformer 기반으로 모델 학습하네요 연구 목적혈액검사와 multi-omics 데이터를 통합하여 저비용 고정밀 만성질환 조기 예측 시스템 개발대상 데이터- 고산 거주자 160명: 혈액·소변 → 전사체, 단백질체, 대사체 수집- 일반 임상 환자 314만 명의 20년 혈액 검사 및 진단 정보모델명Omicsformer – Transformer 기반 multi-omics 통합 딥러닝 모델방법론 핵심..

데이터 기반 질환 예측 논문 정리 - 2

https://arxiv.org/abs/2410.11910 Explainable AI Methods for Multi-Omics Analysis: A SurveyAdvancements in high-throughput technologies have led to a shift from traditional hypothesis-driven methodologies to data-driven approaches. Multi-omics refers to the integrative analysis of data derived from multiple 'omes', such as genomics, proteomics,arxiv.orgexplainable는 딱히 필요 없어서... 연구 배경- Multi-Omics..

데이터 기반 질환 예측 논문 정리 - 1

어 음갑자기 하게 되어서...일단... https://mhealth.jmir.org/2021/5/e22591 Acute Exacerbation of a Chronic Obstructive Pulmonary Disease Prediction System Using Wearable Device Data, Machine Learning, anWith rapid progress of medicine, many treatments and medications have been developed, and relationships between lifestyle and disease have been elucidated. Precision medicine involves determining the best trea..

MAQA: Evaluating Uncertainty Quantification in LLMs Regarding Data Uncertainty

https://arxiv.org/abs/2408.06816 MAQA: Evaluating Uncertainty Quantification in LLMs Regarding Data UncertaintyDespite the massive advancements in large language models (LLMs), they still suffer from producing plausible but incorrect responses. To improve the reliability of LLMs, recent research has focused on uncertainty quantification to predict whether a responsarxiv.org기존 연구들은 응답이 맞는지만 확인, 명..

Adversarial Attacks in NLP 관련 논문 정리 - 6

https://arxiv.org/abs/2503.11517 Prompt Injection Detection and Mitigation via AI Multi-Agent NLP FrameworksPrompt injection constitutes a significant challenge for generative AI systems by inducing unintended outputs. We introduce a multi-agent NLP framework specifically designed to address prompt injection vulnerabilities through layered detection and enforcemarxiv.org이 것도 Agent 구조인데...결국 많은 필..

Adversarial Attacks in NLP 관련 논문 정리 - 5

https://aclanthology.org/2025.findings-naacl.123/ Attention Tracker: Detecting Prompt Injection Attacks in LLMsKuo-Han Hung, Ching-Yun Ko, Ambrish Rawat, I-Hsin Chung, Winston H. Hsu, Pin-Yu Chen. Findings of the Association for Computational Linguistics: NAACL 2025. 2025.aclanthology.org이 논문은 Attention 패턴 관점에서 prompt injection 공격 메커니즘을 분석합니다.black box 모델에선 불 가능한 조건이 되는 거죠...원래는 Instruction에 높은 ..

Adversarial Attacks in NLP 관련 논문 정리 - 4

https://arxiv.org/abs/2401.15897 Red-Teaming for Generative AI: Silver Bullet or Security Theater?In response to rising concerns surrounding the safety, security, and trustworthiness of Generative AI (GenAI) models, practitioners and regulators alike have pointed to AI red-teaming as a key component of their strategies for identifying and mitigating tharxiv.orgSurvey 논문 이네요 미 백악관에서 행정명령으로 발표한 AI..

Adversarial Attacks in NLP 관련 논문 정리 - 3

https://www.semanticscholar.org/paper/A-Survey-of-Adversarial-Defenses-and-Robustness-in-Goyal-Doddapaneni/83cebf919635504786fc220d569284842b0f0a09 https://www.semanticscholar.org/paper/A-Survey-of-Adversarial-Defenses-and-Robustness-in-Goyal-Doddapaneni/83cebf919635504786fc220d569284842b0f0a09 www.semanticscholar.org 서베이 논문은 너무 길어서 적당히 보고 넘어 가는 것으로...방어 방법에 대한 논문이었습니다학습 - 데이터 증강, 정규화, GAN, VAT,..

Adversarial Attacks in NLP 관련 논문 정리 - 2

https://arxiv.org/abs/2004.14174 Reevaluating Adversarial Examples in Natural LanguageState-of-the-art attacks on NLP models lack a shared definition of a what constitutes a successful attack. We distill ideas from past work into a unified framework: a successful natural language adversarial example is a perturbation that fools the model anarxiv.org여기선 문장의 의미, 문법, 가시적인지를 확인하며 공격을 진행합니다. 단어 유사도, ..

728x90
728x90