TY - JOUR
T1 - Value function discovery in Markov decision processes with evolutionary algorithms
AU - Onderwater, M.
AU - Bhulai, S.
AU - van der Mei, R.D.
PY - 2016
Y1 - 2016
AB - In this paper, we introduce a novel method for the discovery of value functions for Markov decision processes (MDPs). This method, which we call value function discovery (VFD), is based on ideas from the field of evolutionary algorithms. VFD's key feature is that it discovers descriptions of value functions that are algebraic in nature. This feature is unique because the descriptions include the model parameters of the MDP. The algebraic expression of the value function discovered by VFD can be used in several scenarios, e.g., conversion to a policy (with one-step policy improvement) or control of systems with time-varying parameters. The work in this paper is a first step toward exploring potential usage scenarios of discovered value functions. We give a detailed description of VFD and illustrate its application on an example MDP. For this MDP, we let VFD discover an algebraic description of a value function that closely resembles the optimal value function. The discovered value function is then used to obtain a policy, which we compare numerically to the optimal policy of the MDP. The resulting policy shows near-optimal performance on a wide range of model parameters. Finally, we identify and discuss future application scenarios of discovered value functions.
DO - 10.1109/TSMC.2015.2475716
M3 - Article
SN - 2168-2216
VL - 46
SP - 1190
EP - 1201
JO - IEEE Transactions on Systems, Man, and Cybernetics: Systems
JF - IEEE Transactions on Systems, Man, and Cybernetics: Systems
IS - 9
ER -