Document detail
ID

oai:arXiv.org:2406.11087

Topic
Computer Science - Cryptography and Security; Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Machine Learning
Author
Liu, Yanming; Peng, Xinyue; Zhang, Yuwei; Ke, Xiaolan; Deng, Songhang; Cao, Jiannan; Ma, Chen; Fu, Mengchen; Du, Tianyu; Cheng, Sheng; Wang, Xun; Yin, Jianwei; Zhang, Xuhong
Category

Computer Science

Year

2024

Listing date

2/26/2025

Keywords
user privacy, memory
Abstract

Large language models have repeatedly shown outstanding performance across diverse applications, but deploying them can inadvertently put user privacy at risk, and their substantial memory demands during training place a heavy load on computational resources, raising considerable practical concerns. In this paper, we introduce DP-MemArc, a novel training framework that reduces the memory costs of large language models while emphasizing the protection of user data privacy. DP-MemArc incorporates side-network and reversible-network designs to support a variety of differentially private, memory-efficient fine-tuning schemes. Our approach achieves roughly a 2.5× reduction in memory usage while ensuring robust privacy protection, keeping user data secure and confidential. Extensive experiments demonstrate that DP-MemArc provides effective differentially private, memory-efficient fine-tuning across different task scenarios.
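The abstract does not spell out the privacy mechanism, but differentially private fine-tuning schemes of this kind typically build on DP-SGD-style per-example gradient clipping followed by Gaussian noise on the averaged gradient. A minimal sketch of one such update step, not the paper's actual algorithm, with all names and values hypothetical:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0,
                lr=0.1, rng=None):
    """One DP-SGD-style update (illustrative, not DP-MemArc's exact method).

    Each example's gradient is clipped to L2 norm <= clip_norm, the clipped
    gradients are averaged, and Gaussian noise scaled by noise_multiplier
    is added before the (negated, lr-scaled) update is returned.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the clip threshold.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise standard deviation follows the usual DP-SGD recipe:
    # noise_multiplier * clip_norm / batch_size.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=avg.shape)
    return -lr * (avg + noise)
```

In a side-network design, only the small side network's parameters would receive such updates, which is where the memory savings come from: activations and optimizer state for the frozen backbone need not be kept for backpropagation.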

Comment: Fix metadata error

Liu, Yanming; Peng, Xinyue; Zhang, Yuwei; Ke, Xiaolan; Deng, Songhang; Cao, Jiannan; Ma, Chen; Fu, Mengchen; Du, Tianyu; Cheng, Sheng; Wang, Xun; Yin, Jianwei; Zhang, Xuhong (2024). DP-MemArc: Differential Privacy Transfer Learning for Memory Efficient Language Models
