Document detail
ID

oai:arXiv.org:2410.13761

Topic
Computer Science - Machine Learning
Author
Zhang, Guibin; Dong, Haonan; Zhang, Yuchen; Li, Zhixun; Chen, Dingshuo; Wang, Kai; Chen, Tianlong; Liang, Yuxuan; Cheng, Dawei; Wang, Kun
Category

Computer Science

Year

2024

Listing date

10/23/2024

Keywords
GDeR, data pruning, training
Abstract

Training high-quality deep models necessitates vast amounts of data, resulting in overwhelming computational and memory demands.

Recently, data pruning, distillation, and coreset selection have been developed to streamline data volume by retaining, synthesizing, or selecting a small yet informative subset from the full set.
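
For orientation, the "selecting" branch can be illustrated by a one-shot, loss-based scoring pass. The sketch below is a generic static-pruning baseline, not the method introduced in this paper; names such as prune_by_loss and keep_ratio are our own, and it assumes an ordinary labeled classification dataset rather than graph data.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Subset

@torch.no_grad()
def prune_by_loss(model, dataset, keep_ratio=0.7, batch_size=256):
    """Score each sample once by its current loss and retain the
    highest-loss (presumably most informative) fraction."""
    model.eval()
    scores = []
    for x, y in DataLoader(dataset, batch_size=batch_size):
        # reduction="none" yields one loss value per sample.
        scores.append(F.cross_entropy(model(x), y, reduction="none"))
    scores = torch.cat(scores)
    keep = scores.topk(int(keep_ratio * len(dataset))).indices
    return Subset(dataset, keep.tolist())
```

A selection of this kind is fixed once and never revisited, which is precisely the rigidity that the dynamic approach described below is meant to avoid.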

Among these methods, data pruning incurs the least additional training cost and offers the most practical acceleration benefits.

However, it is also the most vulnerable, often suffering significant performance degradation under imbalanced or biased data schemas, which raises concerns about its accuracy and reliability in on-device deployment.

Therefore, there is a looming need for a new data pruning paradigm that maintains the efficiency of previous practices while ensuring balance and robustness.

Unlike computer vision and natural language processing, where mature solutions to these issues have been developed, graph neural networks (GNNs) continue to struggle with increasingly large-scale, imbalanced, and noisy datasets, and still lack a unified dataset pruning solution.

To this end, we introduce GDeR, a novel dynamic soft-pruning method that uses trainable prototypes to update the training "basket" over the course of training.

GDeR first constructs a well-modeled graph embedding hypersphere and then samples representative, balanced, and unbiased subsets from this embedding space, achieving what we call Graph Training Debugging.
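
To give a rough sense of how trainable prototypes on a hypersphere can drive such subset selection, here is a hypothetical sketch; PrototypeSampler, num_prototypes, and the per-prototype quota rule are our own illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

class PrototypeSampler(torch.nn.Module):
    """Illustrative only: trainable prototypes on a unit hypersphere
    used to pick a balanced, representative subset of embeddings."""

    def __init__(self, embed_dim, num_prototypes):
        super().__init__()
        # nn.Parameter makes the prototypes trainable alongside the model
        # (e.g., via a clustering-style loss on the similarities below).
        self.prototypes = torch.nn.Parameter(torch.randn(num_prototypes, embed_dim))

    def select(self, embeddings, keep_ratio=0.5):
        # Project embeddings and prototypes onto the unit hypersphere.
        z = F.normalize(embeddings, dim=-1)
        p = F.normalize(self.prototypes, dim=-1)
        sim = z @ p.t()              # cosine similarities, shape (N, K)
        assign = sim.argmax(dim=-1)  # nearest prototype per sample
        quota = max(1, int(keep_ratio * len(z)) // p.size(0))
        keep = []
        for k in range(p.size(0)):
            idx = (assign == k).nonzero(as_tuple=True)[0]
            if idx.numel() == 0:
                continue
            # Within each prototype's region, keep the most typical samples.
            top = sim[idx, k].topk(min(quota, idx.numel())).indices
            keep.append(idx[top])
        return torch.cat(keep)       # indices of retained samples
```

Drawing an equal quota from every prototype's region is one simple way to keep the retained subset balanced across the embedding space, rather than letting a dominant class or region monopolize the training basket.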

Extensive experiments on five datasets across three GNN backbones demonstrate that GDeR (I) matches or surpasses full-dataset performance with 30%–50% fewer training samples, (II) attains up to a 2.81× lossless training speedup, and (III) outperforms state-of-the-art pruning methods in imbalanced and noisy training scenarios by 0.3%–4.3% and 3.6%–7.8%, respectively.

Comment: NeurIPS 2024

Zhang, Guibin; Dong, Haonan; Zhang, Yuchen; Li, Zhixun; Chen, Dingshuo; Wang, Kai; Chen, Tianlong; Liang, Yuxuan; Cheng, Dawei; Wang, Kun (2024). GDeR: Safeguarding Efficiency, Balancing, and Robustness via Prototypical Graph Pruning.
