Document detail
ID

oai:arXiv.org:2403.13355

Topic
Computer Science - Cryptography an... Computer Science - Artificial Inte...
Author
Li, Yanzhou; Li, Tianlin; Chen, Kangjie; Zhang, Jian; Liu, Shangqing; Wang, Wenhan; Zhang, Tianwei; Liu, Yang
Category

Computer Science

Year

2024

listing date

3/27/2024

Keywords
editing, model, injection, performance
Metrics

Abstract

Mainstream backdoor attack methods typically demand substantial tuning data for poisoning, limiting their practicality and potentially degrading the overall performance when applied to Large Language Models (LLMs).

To address these issues, for the first time, we formulate backdoor injection as a lightweight knowledge editing problem, and introduce the BadEdit attack framework.

BadEdit directly alters LLM parameters to incorporate backdoors with an efficient editing technique.

It surpasses existing backdoor injection techniques in several respects: (1) Practicality: BadEdit requires only a minimal dataset for injection (15 samples).

(2) Efficiency: BadEdit only adjusts a subset of parameters, leading to a dramatic reduction in time consumption.

(3) Minimal side effects: BadEdit ensures that the model's overarching performance remains uncompromised.

(4) Robustness: the backdoor persists even after subsequent fine-tuning or instruction-tuning.

Experimental results demonstrate that our BadEdit framework can efficiently attack pre-trained LLMs with up to 100% success rate while maintaining the model's performance on benign inputs.
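The abstract describes injecting a backdoor by directly editing a small subset of model parameters, in the style of lightweight knowledge-editing methods. A minimal sketch of the underlying idea, a rank-one weight update that rewires a single "trigger" direction while leaving orthogonal inputs untouched, is shown below; all names and the toy linear layer are illustrative assumptions, not the paper's actual code.

```python
# Toy sketch of a rank-one weight edit (as used in locate-then-edit
# knowledge-editing methods), illustrating the kind of lightweight
# parameter change a backdoor-by-editing attack relies on.
import numpy as np

rng = np.random.default_rng(0)
d = 16
W = rng.normal(size=(d, d))      # a linear layer acting as a key -> value memory

k = rng.normal(size=d)           # "trigger" key direction
k /= np.linalg.norm(k)
v_target = rng.normal(size=d)    # desired (backdoored) output for the trigger

# Rank-one update: W' = W + (v_target - W k) k^T / (k^T k).
# After the edit, W' k equals v_target exactly.
W_edited = W + np.outer(v_target - W @ k, k) / (k @ k)
assert np.allclose(W_edited @ k, v_target)

# Inputs orthogonal to the trigger direction are unaffected,
# which is why such edits have minimal side effects on benign inputs.
x = rng.normal(size=d)
x -= (x @ k) * k                 # remove the trigger component
assert np.allclose(W_edited @ x, W @ x)
```

The update touches only one rank-one component of a single weight matrix, which mirrors the abstract's claims of efficiency (few parameters adjusted) and minimal side effects (benign behavior preserved).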

Comment: ICLR 2024

Li, Yanzhou; Li, Tianlin; Chen, Kangjie; Zhang, Jian; Liu, Shangqing; Wang, Wenhan; Zhang, Tianwei; Liu, Yang, 2024, BadEdit: Backdooring large language models by model editing
