TY - GEN
T1 - First Go, then Post-Explore
T2 - 15th International Conference on Agents and Artificial Intelligence, ICAART 2023
AU - Yang, Zhao
AU - Moerland, Thomas M.
AU - Preuss, Mike
AU - Plaat, Aske
PY - 2023
Y1 - 2023
N2 - Go-Explore achieved breakthrough performance on challenging reinforcement learning (RL) tasks with sparse rewards. The key insight of Go-Explore was that successful exploration requires an agent to first return to an interesting state (‘Go’), and only then explore into unknown terrain (‘Explore’). We refer to such exploration after a goal is reached as ‘post-exploration’. In this paper, we present a clear ablation study of post-exploration in a general intrinsically motivated goal exploration process (IMGEP) framework, which the Go-Explore paper did not provide. We study the isolated potential of post-exploration by turning it on and off within the same algorithm, under both tabular and deep RL settings, on both discrete navigation and continuous control tasks. Experiments on a range of MiniGrid and MuJoCo environments show that post-exploration indeed helps IMGEP agents reach more diverse states and boosts their performance. In short, our work suggests that RL researchers should consider using post-exploration in IMGEP when possible, since it is effective, method-agnostic, and easy to implement.
UR - https://www.scopus.com/pages/publications/85182557025
UR - https://www.scitepress.org/ProceedingsDetails.aspx?ID=gVvv3d8E4ME=&t=1
UR - https://www.scopus.com/pages/publications/85182557025#tab=citedBy
U2 - 10.5220/0011612800003393
DO - 10.5220/0011612800003393
M3 - Conference contribution
SN - 9789897586231
VL - 2
T3 - International Conference on Agents and Artificial Intelligence
SP - 27
EP - 34
BT - ICAART 2023 - Proceedings of the 15th International Conference on Agents and Artificial Intelligence
A2 - Rocha, A.
A2 - Steels, L.
A2 - van den Herik, J.
PB - SciTePress
Y2 - 22 February 2023 through 24 February 2023
ER -