Fast Efficient Hyperparameter Tuning for Policy Gradient Methods

Authors: Supratik Paul, Vitaly Kurin, Shimon Whiteson

Published: 2019 (Conference Paper)

Source: Advances in Neural Information Processing Systems

Algorithm: HOOF

arXiv: 1902.06583

Summary

Presents HOOF (Hyperparameter Optimisation on the Fly), a one-run hyperparameter tuning method for policy-gradient reinforcement learning. The method uses trajectories already collected by the learner to rank candidate policy updates via importance-weighted one-step improvement estimates, reducing the extra sampling burden of grid search or population-based tuning.
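The ranking step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes per-trajectory log-probabilities are available for the current policy and for each candidate update, and uses weighted (self-normalised) importance sampling to estimate each candidate's expected return from the already-collected trajectories.

```python
import numpy as np

def hoof_select(returns, logp_old, logp_candidates):
    """Rank candidate policy updates by importance-weighted return estimates.

    returns:         (N,) return of each sampled trajectory
    logp_old:        (N,) log-probability of each trajectory under the
                     current (behaviour) policy
    logp_candidates: (K, N) log-probability of each trajectory under each
                     of the K candidate updated policies
    Returns the index of the candidate with the highest estimated return.
    """
    best_idx, best_est = 0, -np.inf
    for k, logp_new in enumerate(logp_candidates):
        # Importance weights, normalised to sum to 1 (weighted IS estimator,
        # which keeps the estimate bounded when weights are extreme)
        w = np.exp(logp_new - logp_old)
        w = w / w.sum()
        est = float(np.dot(w, returns))
        if est > best_est:
            best_idx, best_est = k, est
    return best_idx
```

In practice each candidate corresponds to a different hyperparameter setting (e.g. learning rate) applied to the same gradient, so no new environment samples are needed to compare them.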

Abstract

The performance of policy gradient methods is sensitive to hyperparameter settings that must be tuned for any new application. Widely used grid search methods for tuning hyperparameters are sample inefficient and computationally expensive. More advanced methods like Population Based Training that learn optimal schedules for hyperparameters instead of fixed settings can yield better results, but are also sample inefficient and computationally expensive. In this paper, we propose Hyperparameter Optimisation on the Fly (HOOF), a gradient-free algorithm that requires no more than one training run to automatically adapt the hyperparameters that affect the policy update directly through the gradient. The main idea is to use existing trajectories sampled by the policy gradient method to optimise a one-step improvement objective, yielding a sample and computationally efficient algorithm that is easy to implement. Our experimental results across multiple domains and algorithms show that using HOOF to learn these hyperparameter schedules leads to faster learning with improved performance.

Tags

  • HOOF

  • Hyperparameter optimization

  • Policy gradients

  • Reinforcement learning

  • Sample efficiency

  • Meta-learning

  • Importance sampling

  • Automatic tuning