oai:arXiv.org:2406.11087
Computer Science
2024
21-08-2024
Large language models have repeatedly shown outstanding performance across diverse applications.
However, deploying these models can inadvertently risk user privacy.
Moreover, fine-tuning these models demands substantial memory, placing a heavy load on hardware resources and raising considerable practical concerns.
In this paper, we introduce DP-MemArc, a novel training framework aimed at reducing the memory costs of fine-tuning large language models while protecting user data privacy.
DP-MemArc incorporates side-network and reversible-network designs to support a variety of differentially private, memory-efficient fine-tuning schemes.
Our approach not only optimizes memory usage but also ensures robust privacy protection, keeping user data secure and confidential.
Extensive experiments demonstrate that DP-MemArc provides effective, differentially private, memory-efficient fine-tuning across different task scenarios.
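To make the abstract's recipe concrete, the following is a rough Python sketch (not the paper's actual implementation) of the general pattern it alludes to: freeze the backbone so no optimizer state or backbone gradients are stored, train only a small side network, and update it with DP-SGD-style per-example gradient clipping plus Gaussian noise. The names SideNetwork and dp_sgd_step, and all hyperparameters, are hypothetical stand-ins.

```python
# Minimal sketch: DP-SGD fine-tuning of a small side network next to a
# frozen backbone. Illustrative only; not DP-MemArc's actual code.
import torch
import torch.nn as nn

class SideNetwork(nn.Module):
    """Hypothetical lightweight side branch; only its parameters are trained."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.proj = nn.Linear(hidden, hidden)

    def forward(self, frozen_features: torch.Tensor) -> torch.Tensor:
        # Combine frozen backbone features with the trainable side path.
        return frozen_features + self.proj(frozen_features)

def dp_sgd_step(side, frozen, batch_x, batch_y, loss_fn,
                lr=0.1, clip_norm=1.0, noise_mult=1.0):
    """One DP-SGD step: clip each example's gradient, sum, add noise, average."""
    params = [p for p in side.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    for x, y in zip(batch_x, batch_y):        # naive per-example gradient loop
        with torch.no_grad():
            feats = frozen(x.unsqueeze(0))    # backbone runs without grad state
        loss = loss_fn(side(feats), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (norm + 1e-6)).clamp(max=1.0)  # bound sensitivity
        for s, g in zip(summed, grads):
            s += g * scale

    n = len(batch_x)
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(p) * noise_mult * clip_norm
            p -= lr * (s + noise) / n         # noisy averaged gradient update

# Usage with a stand-in frozen backbone (a single linear layer here):
frozen = nn.Linear(64, 64).requires_grad_(False)
side = SideNetwork(64)
x, y = torch.randn(8, 64), torch.randn(8, 64)
dp_sgd_step(side, frozen, x, y, nn.MSELoss())
```

Because only the side network's parameters receive gradients and optimizer state, the memory footprint of fine-tuning scales with the side network rather than the full model, which is the kind of saving the abstract describes.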
Comment: 9 pages, second version
Liu, Yanming; Peng, Xinyue; Zhang, Yuwei; Ke, Xiaolan; Deng, Songhang; Cao, Jiannan; Ma, Chen; Fu, Mengchen; Zhang, Xuhong; Cheng, Sheng; Wang, Xun; Yin, Jianwei; Du, Tianyu (2024). DP-MemArc: Differential Privacy Transfer Learning for Memory Efficient Language Models.