Abstract
Computational agents often need to learn policies that involve many control variables; a robot, for example, needs to control several joints simultaneously. Learning a policy with a high number of parameters, however, usually requires a large number of training samples. We introduce a reinforcement learning method for sample-efficient policy search that exploits correlations between control variables. Such correlations are particularly frequent in motor skill learning tasks. The introduced method uses Variational Inference to estimate policy parameters while simultaneously uncovering a low-dimensional latent space of controls. Prior knowledge about the task and the structure of the learning agent can be provided by specifying groups of potentially correlated parameters. This information is then used to impose sparsity constraints on the mapping between the high-dimensional space of controls and a lower-dimensional latent space. In experiments with a simulated bi-manual manipulator, the new approach effectively identifies synergies between joints, performs efficient low-dimensional policy search, and outperforms state-of-the-art policy search methods.
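The following is a minimal toy sketch, not the authors' algorithm, of the core idea described above: searching for a policy in a low-dimensional latent space that maps linearly onto many correlated control variables. The dimensions, the fixed sparse mapping `W`, the toy reward, and the simple reward-weighted update are all illustrative assumptions; in the paper the mapping itself is inferred with group-sparsity priors via Variational Inference.

```python
# Toy sketch (assumptions throughout, not the paper's method):
# episodic policy search in a low-dimensional latent space z that is
# mapped linearly to a high-dimensional control-parameter vector u = W @ z.
import numpy as np

rng = np.random.default_rng(0)

n_controls = 14      # e.g. joint parameters of a bi-manual manipulator (assumed)
n_latent = 3         # dimensionality of the latent control space (assumed)

# Fixed sparse latent-to-control map for illustration only; the paper
# instead learns this mapping with group-sparsity constraints.
W = np.zeros((n_controls, n_latent))
W[:7, 0] = 1.0       # "left arm" group driven by latent dimension 0 (assumed grouping)
W[7:, 1] = 1.0       # "right arm" group driven by latent dimension 1 (assumed grouping)
W[:, 2] = 0.1        # weak coupling of both groups through latent dimension 2

u_target = rng.normal(size=n_controls)   # hypothetical optimal controls

def reward(u):
    """Toy episodic reward: higher when controls are close to u_target."""
    return -np.sum((u - u_target) ** 2)

# Search over the latent space with a Gaussian search distribution,
# updated by reward-weighted averaging (a simple stand-in for the
# variational-inference-based update used in the paper).
mu, sigma = np.zeros(n_latent), np.ones(n_latent)
for it in range(50):
    z_samples = mu + sigma * rng.normal(size=(20, n_latent))
    rewards = np.array([reward(W @ z) for z in z_samples])
    weights = np.exp((rewards - rewards.max()) / 10.0)   # soft weighting by reward
    weights /= weights.sum()
    mu = weights @ z_samples
    sigma = np.sqrt(weights @ (z_samples - mu) ** 2) + 1e-3

print("final reward:", reward(W @ mu))
```

The point of the sketch is the sample-efficiency argument: the search distribution lives in 3 latent dimensions rather than 14 control dimensions, so far fewer rollouts are needed to find good controls when the true joint correlations match the grouped structure.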
Original language | English |
---|---|
Title of host publication | 30th AAAI Conference on Artificial Intelligence, AAAI 2016 |
Publisher | AAAI Press |
Pages | 1911-1918 |
ISBN (Electronic) | 9781577357605 |
Publication status | Published - 2016 |
Externally published | Yes |
Event | 30th AAAI Conference on Artificial Intelligence, AAAI 2016 - Phoenix, United States |
Duration | 12 Feb 2016 → 17 Feb 2016 |