Online Regret Bounds for Satisficing in Markov Decision Processes

Hossein Hajiabolhassan, Ronald Ortner

Research output: Contribution to journal › Article › peer-review

Abstract

We consider general reinforcement learning under the average reward criterion in Markov decision processes (MDPs), when the learner's goal is not to learn an optimal policy but to accept any policy whose average reward is above a given satisfaction level σ. We show that with this more modest objective, it is possible to give algorithms that have only constant regret with respect to the level σ, provided that there is a policy above this level. This generalizes known results from the bandit setting to MDPs. Further, we present a more general algorithm that achieves the best of both worlds: if the optimal policy has average reward above σ, this algorithm has bounded regret with respect to σ; on the other hand, if all policies are below σ, then the expected regret with respect to the optimal policy is bounded as for the UCRL2 algorithm.
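To make the distinction concrete, here is a minimal sketch (not taken from the paper) contrasting regret measured against a satisfaction level σ with classic regret measured against the optimal average reward ρ*. The reward trace and the functions `satisficing_regret` and `optimal_regret` are hypothetical illustrations, assuming regret after T steps is defined as T times the benchmark value minus the total collected reward:

```python
def satisficing_regret(rewards, sigma):
    """Hypothetical: regret w.r.t. satisfaction level sigma after T steps,
    i.e. T * sigma minus the total collected reward."""
    return len(rewards) * sigma - sum(rewards)

def optimal_regret(rewards, rho_star):
    """Hypothetical: classic regret w.r.t. the optimal average reward rho_star."""
    return len(rewards) * rho_star - sum(rewards)

# Toy trace: a short exploration phase with low reward, after which the
# learner settles on a policy whose average reward meets sigma = 0.7.
rewards = [0.5] * 10 + [0.7] * 990

# Satisficing regret stops growing once a sigma-satisfying policy is found:
# only the 10 exploration steps contribute, so it stays bounded in T.
print(satisficing_regret(rewards, sigma=0.7))

# Regret w.r.t. an optimal average reward of 0.9 keeps growing linearly,
# since the learner never switches to the optimal policy.
print(optimal_regret(rewards, rho_star=0.9))
```

This mirrors the abstract's point: settling for any policy above σ allows constant regret with respect to σ, whereas regret against the optimum can still grow with the horizon.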
Original language: English
Journal: Mathematics of Operations Research
Volume: ??? (as of 7 July 2025)
Issue number: ??? (as of 7 July 2025)
Publication status: Published - 9 May 2025
