Document detail
ID

oai:arXiv.org:2404.09247

Topic
Computer Science - Machine Learning; Statistics - Machine Learning
Author
Yang, Yifan; Payani, Ali; Naghizadeh, Parinaz
Category

Computer Science

Year

2024

Listing date

7/31/2024

Keywords
learning data
Abstract

Generalization error bounds from learning theory provide statistical guarantees on how well an algorithm will perform on previously unseen data.

In this paper, we characterize the impacts of data non-IIDness due to censored feedback (a.k.a. selective labeling bias) on such bounds.

We first derive an extension of the well-known Dvoretzky-Kiefer-Wolfowitz (DKW) inequality, which characterizes the gap between empirical and theoretical CDFs given IID data, to problems with non-IID data due to censored feedback.
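For context, the classical IID DKW inequality states that the empirical CDF built from n IID samples satisfies sup_x |F_n(x) − F(x)| ≤ sqrt(ln(2/α) / (2n)) with probability at least 1 − α. A minimal numerical sketch of this classical bound (not the paper's censored-feedback extension; the function names here are illustrative, not from the paper):

```python
import numpy as np

def dkw_epsilon(n, alpha):
    """Deviation eps such that sup_x |F_n(x) - F(x)| <= eps holds
    with probability >= 1 - alpha, for n IID samples (classical DKW)."""
    return np.sqrt(np.log(2.0 / alpha) / (2.0 * n))

def empirical_cdf_sup_gap(samples, true_cdf):
    """Kolmogorov-Smirnov statistic: sup_x |F_n(x) - F(x)|."""
    x = np.sort(samples)
    n = len(x)
    F = true_cdf(x)
    # The empirical CDF jumps from (i-1)/n to i/n at each sorted sample,
    # so the supremum gap is attained at one of these jump points.
    upper = (np.arange(1, n + 1) / n - F).max()
    lower = (F - np.arange(0, n) / n).max()
    return max(upper, lower)

rng = np.random.default_rng(0)
n, alpha = 2000, 0.05
samples = rng.uniform(size=n)  # Uniform(0,1): true CDF is the identity
gap = empirical_cdf_sup_gap(samples, lambda t: t)
eps = dkw_epsilon(n, alpha)    # DKW bound holding with prob. >= 0.95
```

Under censored feedback, the samples are no longer IID (labels are observed only for accepted instances), which is why the paper derives an extension of this bound rather than applying it directly.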

We then use this CDF error bound to provide a bound on the generalization error guarantees of a classifier trained on such non-IID data.

We show that existing generalization error bounds (which do not account for censored feedback) fail to correctly capture the model's generalization guarantees, verifying the need for our bounds.

We further analyze the effectiveness of (pure and bounded) exploration techniques, proposed by recent literature as a way to alleviate censored feedback, on improving our error bounds.

Together, our findings illustrate how a decision maker should account for the trade-off between strengthening the generalization guarantees of an algorithm and the costs incurred in data collection when future data availability is limited by censored feedback.

Yang, Yifan; Payani, Ali; Naghizadeh, Parinaz, 2024, Generalization Error Bounds for Learning under Censored Feedback
