Document detail
ID

oai:arXiv.org:2405.03097

Topic
Computer Science - Machine Learning; Computer Science - Artificial Intelligence; Computer Science - Computation and Language
Author
Barbulescu, George-Octavian; Triantafillou, Peter
Category

Computer Science

Year

2024

Listing date

5/8/2024

Keywords
textual
Abstract

LLMs have been found to memorize textual sequences from their training data and to regurgitate those sequences verbatim at text generation time.

This is a known cause of privacy and related (e.g., copyright) problems.

Unlearning in LLMs then takes the form of devising new algorithms that properly deal with these side effects of memorized data without hurting the model's utility.

We offer a fresh perspective on this goal: each textual sequence to be forgotten should be treated differently during unlearning, according to its degree of memorization within the LLM.

We contribute a new metric for measuring unlearning quality, an adversarial attack showing that SOTA algorithms lacking this perspective fail to protect privacy, and two new unlearning methods based on Gradient Ascent and Task Arithmetic, respectively.
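For readers unfamiliar with the two techniques named above, the sketch below illustrates the generic ideas of gradient-ascent unlearning and task-arithmetic forgetting in PyTorch. It is a minimal sketch under assumptions of our own, not the paper's memorization-aware algorithms: the names (gradient_ascent_unlearn, task_arithmetic_negate, forget_loader, alpha) and hyperparameters are illustrative, and a Hugging Face-style causal LM interface is assumed.

import torch

def gradient_ascent_unlearn(model, forget_loader, lr=1e-5, num_steps=100):
    # Gradient *ascent* on the forget set: maximizing the language-modeling
    # loss on memorized sequences pushes the model away from reproducing
    # them verbatim. Implemented by negating the loss before backward().
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for step, batch in enumerate(forget_loader):
        if step >= num_steps:
            break
        outputs = model(input_ids=batch["input_ids"],
                        attention_mask=batch["attention_mask"],
                        labels=batch["input_ids"])  # standard causal-LM loss
        (-outputs.loss).backward()  # negate so optimizer.step() ascends
        optimizer.step()
        optimizer.zero_grad()
    return model

def task_arithmetic_negate(base_state, finetuned_state, alpha=1.0):
    # Task-arithmetic forgetting (Ilharco et al., 2023): fine-tune a copy of
    # the model on the forget set, take the "task vector" (fine-tuned weights
    # minus base weights), and subtract it from the base weights.
    return {name: base_state[name] - alpha * (finetuned_state[name] - base_state[name])
            for name in base_state}

Note that both routines, as sketched, treat every forgotten sequence identically; the paper's contribution is precisely to modulate such updates by each sequence's degree of memorization.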

A comprehensive performance evaluation across an extensive suite of NLP tasks then maps the solution space, identifying the best solutions under different scales of model capacity and forget-set size, and quantifies the gains of the new approaches.

Comment: Published as a conference paper at ICML 2024

Barbulescu, George-Octavian; Triantafillou, Peter. 2024. To Each (Textual Sequence) Its Own: Improving Memorized-Data Unlearning in Large Language Models.
