oai:arXiv.org:2406.04534
Computer Science
2024
6/12/2024
Offline reinforcement learning (RL) is a compelling paradigm to extend RL's practical utility by leveraging pre-collected, static datasets, thereby avoiding the limitations associated with collecting online interactions.
The major difficulty in offline RL is mitigating the impact of approximation errors when encountering out-of-distribution (OOD) actions; failing to do so effectively yields policies that prefer OOD actions, which can produce unexpected and potentially catastrophic results.
Although a variety of methods have been proposed to address this issue, they tend to excessively suppress the value function in and around OOD regions, resulting in overly pessimistic value estimates.
In this paper, we propose a novel framework called Strategically Conservative Q-Learning (SCQ) that distinguishes between OOD data that is easy and hard to estimate, ultimately resulting in less conservative value estimates.
Our approach exploits the inherent strength of neural networks at interpolation, while carefully navigating their limitations in extrapolation, to obtain pessimistic yet still properly calibrated value estimates.
Theoretical analysis also shows that the value function learned by SCQ is still conservative, but potentially much less so than that of Conservative Q-learning (CQL).
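To make the contrast with CQL concrete, the sketch below recalls the general form of CQL's value-lowering regularizer and a hypothetical masked variant in the spirit of this abstract; the mask $m(s,a)$ and weight $\alpha$ are illustrative placeholders, not the actual SCQ objective.
% General CQL regularizer: push Q down on actions drawn from a broad
% proposal distribution \mu and up on dataset actions, alongside the
% standard Bellman error (alpha > 0 trades off conservatism):
\[
\min_{Q}\; \alpha \Big( \mathbb{E}_{s \sim \mathcal{D},\, a \sim \mu(\cdot \mid s)}\big[ Q(s,a) \big]
  - \mathbb{E}_{(s,a) \sim \mathcal{D}}\big[ Q(s,a) \big] \Big)
  + \tfrac{1}{2}\, \mathbb{E}_{(s,a,s') \sim \mathcal{D}}\Big[ \big( Q(s,a) - \mathcal{B}^{\pi} \hat{Q}(s,a) \big)^{2} \Big].
\]
% A strategically conservative variant would penalize only actions flagged as
% hard to extrapolate, e.g. via a hypothetical mask m(s,a) in {0, 1} that is
% zero wherever interpolation is reliable:
\[
\alpha \Big( \mathbb{E}_{s \sim \mathcal{D},\, a \sim \mu(\cdot \mid s)}\big[ m(s,a)\, Q(s,a) \big]
  - \mathbb{E}_{(s,a) \sim \mathcal{D}}\big[ Q(s,a) \big] \Big).
\]
Setting $m \equiv 1$ recovers the CQL-style penalty, while shrinking the support of $m$ relaxes the pessimism, consistent with the claim that SCQ's value estimates remain conservative but potentially much less so than CQL's.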
Finally, extensive evaluation on the D4RL benchmark tasks shows that our proposed method outperforms state-of-the-art approaches.
Our code is available through \url{https://github.com/purewater0901/SCQ}.
Shimizu, Yutaka; Hong, Joey; Levine, Sergey; Tomizuka, Masayoshi, 2024, Strategically Conservative Q-Learning