Conference proceeding
Secure Retrieval-Augmented Generation Against Poisoning Attacks
IEEE International Conference on Big Data, pp 1799-1806
08 Dec 2025
Abstract
Large language models (LLMs) have transformed natural language processing (NLP), enabling applications from content generation to decision support. Retrieval-Augmented Generation (RAG) improves LLMs by incorporating external knowledge but also introduces security risks, particularly data poisoning, in which an attacker injects poisoned texts into the knowledge database to manipulate system outputs. While various defenses have been proposed, they often struggle against advanced attacks. To address this, we introduce RAGuard, a detection framework designed to identify poisoned texts. RAGuard first expands the retrieval scope to increase the proportion of clean texts, reducing the likelihood of retrieving poisoned content. It then applies chunk-wise perplexity filtering to detect abnormal variations and text similarity filtering to flag highly similar texts. This non-parametric approach enhances RAG security, and experiments on large-scale datasets demonstrate its effectiveness in detecting and mitigating poisoning attacks, including strong adaptive attacks.
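The filtering pipeline described in the abstract can be illustrated with a minimal sketch. The paper does not specify its scoring model or thresholds; here an add-one-smoothed unigram language model stands in for the perplexity scorer, bag-of-words cosine similarity stands in for the text similarity measure, and the function names (`filter_candidates`, `unigram_perplexity`) and threshold values are illustrative assumptions, not the authors' implementation.

```python
import math
from collections import Counter

def unigram_stats(corpus):
    """Token counts over a clean reference corpus (stand-in for a real LM)."""
    counts = Counter()
    for doc in corpus:
        counts.update(doc.lower().split())
    return counts, sum(counts.values()), len(counts)

def unigram_perplexity(text, counts, total, vocab):
    """Add-one-smoothed unigram perplexity; high values suggest abnormal text."""
    words = text.lower().split()
    if not words:
        return float("inf")
    logp = sum(math.log((counts.get(w, 0) + 1) / (total + vocab)) for w in words)
    return math.exp(-logp / len(words))

def cosine_sim(a, b):
    """Bag-of-words cosine similarity between two texts."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    num = sum(ca[w] * cb[w] for w in set(ca) & set(cb))
    den = math.sqrt(sum(v * v for v in ca.values())) * \
          math.sqrt(sum(v * v for v in cb.values()))
    return num / den if den else 0.0

def filter_candidates(candidates, counts, total, vocab,
                      ppl_thresh=15.0, sim_thresh=0.9):
    """Apply the two filters to an (already expanded) retrieved candidate set:
    1) drop texts whose perplexity is abnormally high,
    2) drop texts nearly identical to an already-kept text."""
    kept = [t for t in candidates
            if unigram_perplexity(t, counts, total, vocab) <= ppl_thresh]
    result = []
    for t in kept:
        if all(cosine_sim(t, r) < sim_thresh for r in result):
            result.append(t)
    return result
```

In this sketch, expanding the retrieval scope corresponds to passing a larger candidate list into `filter_candidates`, so that clean texts dominate before the two filters prune out-of-distribution and near-duplicate entries.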
Details
- Title
- Secure Retrieval-Augmented Generation Against Poisoning Attacks
- Creators
- Zirui Cheng - National University of Singapore
- Jikai Sun - National University of Singapore
- Anjun Gao - University of Louisville Hospital
- Yueyang Quan - University of North Texas
- Zhuqing Liu - University of North Texas
- Xiaohua Hu - Drexel University
- Minghong Fang - University of Louisville Hospital
- Publication Details
- IEEE International Conference on Big Data, pp 1799-1806
- Conference
- 2025 IEEE International Conference on Big Data (BigData) (Macau, China)
- Publisher
- IEEE
- Resource Type
- Conference proceeding
- Language
- English
- Academic Unit
- Information Science
- Other Identifier
- 991022166391704721