Document detail
ID

oai:arXiv.org:2408.17286

Topic
Computer Science - Machine Learning; Computer Science - Artificial Intelligence
Author
Su, Xihong; Grand-Clément, Julien; Petrik, Marek
Category

Computer Science

Year

2024

Listing date

12/25/2024

Keywords
risk; risk-averse

Abstract

Optimizing risk-averse objectives in discounted MDPs is challenging because most models do not admit direct dynamic programming equations and require complex history-dependent policies.

In this paper, we show that the risk-averse total reward criterion, under the Entropic Risk Measure (ERM) and Entropic Value at Risk (EVaR) risk measures, can be optimized by a stationary policy, making it simple to analyze, interpret, and deploy.
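
For context, the standard reward-based definitions of these two risk measures (not stated in this record, and possibly differing in sign convention from the paper) are, for a risk level $\beta > 0$ and confidence level $\alpha \in (0, 1]$:

\[
  \mathrm{ERM}_\beta[X] = -\tfrac{1}{\beta}\,\log \mathbb{E}\!\left[e^{-\beta X}\right],
  \qquad
  \mathrm{EVaR}_\alpha[X] = \sup_{\beta > 0}\Bigl\{\mathrm{ERM}_\beta[X] + \tfrac{\log \alpha}{\beta}\Bigr\}.
\]

ERM interpolates between the expected value (as $\beta \to 0$) and the worst-case outcome (as $\beta \to \infty$), while EVaR takes the best ERM value after a penalty that depends on the confidence level $\alpha$.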

We propose exponential value iteration, policy iteration, and linear programming to compute optimal policies.

Compared with prior work, our results only require the relatively mild condition of transient MDPs and allow for both positive and negative rewards.

Our results indicate that the total reward criterion may be preferable to the discounted criterion in a broad range of risk-averse reinforcement learning domains.
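
As an illustration of the exponential value iteration mentioned in the abstract, the sketch below applies the standard exponential-utility transformation to a small tabular transient MDP with the ERM objective. The array layout, the handling of termination mass, the stopping rule, and the function name erm_exponential_value_iteration are illustrative assumptions, not the paper's implementation.

import numpy as np

def erm_exponential_value_iteration(P, r, beta, tol=1e-10, max_iter=10_000):
    """Sketch of exponential value iteration for the ERM total-reward objective.

    P: transitions, shape (A, S, S); rows may sum to less than 1 in a transient
       MDP, with the missing mass interpreted as termination (zero reward afterwards).
    r: rewards, shape (A, S); reward for taking action a in state s.
    beta: risk-aversion level (> 0). Maximizing ERM_beta of the total reward is
          equivalent to minimizing the exponential disutility E[exp(-beta * return)].
    """
    u = np.ones(P.shape[1])                  # u(s) tracks E[exp(-beta * return from s)]
    for _ in range(max_iter):
        term_mass = 1.0 - P.sum(axis=2)      # probability of terminating immediately
        q = np.exp(-beta * r) * (P @ u + term_mass)
        u_new = q.min(axis=0)                # minimize disutility = maximize ERM
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    policy = q.argmin(axis=0)                # greedy stationary policy
    value = -np.log(u) / beta                # map disutility back to the ERM value
    return value, policy

Working with the exponential disutility u rather than the ERM value itself keeps the update a simple expectation in u, which is what makes a greedy stationary policy natural; for the EVaR objective, the sup over beta in its definition suggests solving the ERM problem for several values of beta, but the paper's exact procedure is not described in this record.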

Su, Xihong; Grand-Clément, Julien; Petrik, Marek, 2024, Risk-averse Total-reward MDPs with ERM and EVaR
