Low-Variance Policy Gradient Estimation with World Models

Michal Nauman, Floris Den Hengst

Research output: Contribution to Journal › Article › Academic


Abstract

In this paper, we propose World Model Policy Gradient (WMPG), an approach to reduce the variance of policy gradient estimates using learned world models (WMs). In WMPG, a WM is trained online and used to imagine trajectories. The imagined trajectories are used in two ways: first, to calculate a without-replacement estimator of the policy gradient; second, the return of the imagined trajectories serves as an informed baseline. We compare the proposed approach with AC and MAC on a set of environments of increasing complexity (CartPole, LunarLander and Pong) and find that WMPG has better sample efficiency. Based on these results, we conclude that WMPG can yield increased sample efficiency in cases where a robust latent representation of the environment can be learned.
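
To illustrate the second use of imagined trajectories described in the abstract, the sketch below shows a generic REINFORCE-style update in which the baseline is the mean return of rollouts imagined with a learned world model. It is a minimal sketch under assumed interfaces (the names WorldModel.step, Policy.sample, and the hyperparameters are hypothetical), not the paper's implementation.

    import torch

    def imagined_baseline(world_model, policy, state,
                          num_rollouts=8, horizon=20, gamma=0.99):
        """Mean discounted return of trajectories imagined from `state`.

        `world_model.step` and `policy.sample` are assumed, placeholder
        interfaces: the WM returns a next latent state and a predicted reward.
        """
        returns = []
        with torch.no_grad():
            for _ in range(num_rollouts):
                s, ret, discount = state, 0.0, 1.0
                for _ in range(horizon):
                    a = policy.sample(s)           # action from the current policy
                    s, r = world_model.step(s, a)  # learned dynamics + reward model
                    ret += discount * r
                    discount *= gamma
                returns.append(ret)
        return sum(returns) / len(returns)

    def policy_gradient_step(policy, optimizer, log_probs, real_return, baseline):
        """REINFORCE-style update with an informed (imagined) baseline."""
        advantage = real_return - baseline                   # variance-reduced signal
        loss = -(torch.stack(log_probs).sum() * advantage)   # ascend the policy gradient
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

Subtracting a state-dependent baseline leaves the policy gradient unbiased while reducing its variance; the paper's contribution is to obtain that baseline (and a without-replacement gradient estimator) from trajectories imagined by an online-trained world model rather than from a learned value function alone.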
Original language: English
Journal: arXiv.org
Publication status: Published - 29 Oct 2020

Keywords

  • stat.ML
  • cs.AI
  • cs.LG

