Trust Region Policy Optimization

Authors: John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel

Published: 2015 (Conference Paper)

Source: International Conference on Machine Learning (ICML)

Algorithm: TRPO

arXiv: 1502.05477

DOI: 10.5555/3045118.3045319

Summary

Introduces TRPO, a policy optimization algorithm derived from a theoretical procedure with guaranteed monotonic improvement. Each update maximizes a surrogate objective subject to a KL-divergence trust-region constraint, enabling stable training of large neural-network policies on complex continuous control tasks without manual step-size tuning.
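
Concretely, each update solves the constrained problem stated in the paper, where pi_theta_old is the current policy, A is its advantage function, and delta is the trust-region radius:

```latex
\max_{\theta} \;
\mathbb{E}_{s,\,a \sim \pi_{\theta_{\mathrm{old}}}}
\left[
  \frac{\pi_{\theta}(a \mid s)}{\pi_{\theta_{\mathrm{old}}}(a \mid s)}\,
  A_{\pi_{\theta_{\mathrm{old}}}}(s, a)
\right]
\quad \text{subject to} \quad
\mathbb{E}_{s}\!\left[
  D_{\mathrm{KL}}\!\left(
    \pi_{\theta_{\mathrm{old}}}(\cdot \mid s) \,\middle\|\, \pi_{\theta}(\cdot \mid s)
  \right)
\right] \le \delta
```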

Abstract

We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
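
As a rough illustration of the trust-region step (not the paper's reference implementation), the sketch below applies a TRPO-style natural-gradient update to a toy linear-Gaussian policy. The dimensions, sample data, and delta = 0.01 are illustrative assumptions, and the conjugate-gradient plus line-search machinery of the full algorithm is replaced by a direct linear solve, which is only feasible because the toy problem is tiny.

```python
# Minimal sketch of a TRPO-style update on a toy linear-Gaussian policy.
# All names and values (dimensions, sample data, delta) are illustrative
# assumptions, not the paper's reference implementation.

import numpy as np

rng = np.random.default_rng(0)

# Toy policy: pi(a|s) = N(theta^T s, sigma^2) with fixed sigma.
d = 4                      # state dimension (assumed)
sigma = 0.5                # fixed action noise (assumed)
theta_old = rng.normal(size=d)

# Fake batch of (state, action, advantage) samples; in practice these come
# from rollouts of the current policy with estimated advantages.
N = 256
S = rng.normal(size=(N, d))
A = S @ theta_old + sigma * rng.normal(size=N)   # actions from old policy
adv = rng.normal(size=N)                          # placeholder advantages

def surrogate_grad(theta):
    """Gradient of the importance-sampled surrogate at theta = theta_old.

    For this Gaussian policy, d/dtheta log pi(a|s) = (a - theta^T s) s / sigma^2,
    and at theta_old the likelihood ratios equal 1, so the gradient is the
    advantage-weighted score function averaged over the batch.
    """
    score = (A - S @ theta)[:, None] * S / sigma**2
    return (adv[:, None] * score).mean(axis=0)

# Fisher information matrix of the policy: E[s s^T] / sigma^2. TRPO avoids
# forming F explicitly (it uses Fisher-vector products inside conjugate
# gradient); we build it here only because d = 4.
F = (S.T @ S) / (N * sigma**2)

g = surrogate_grad(theta_old)
step_dir = np.linalg.solve(F, g)                  # natural gradient direction

# Scale the step so the quadratic KL approximation hits the trust-region
# radius: 0.5 * beta^2 * step_dir^T F step_dir = delta.
delta = 0.01                                      # KL radius (assumed)
beta = np.sqrt(2 * delta / (step_dir @ F @ step_dir))
theta_new = theta_old + beta * step_dir

# Exact mean KL between old and new policies on the sampled states. The full
# algorithm backtracks on beta until this constraint holds and the surrogate
# objective improves.
kl = ((S @ (theta_new - theta_old))**2).mean() / (2 * sigma**2)
print(f"step size beta = {beta:.4f}, mean KL = {kl:.5f} (target {delta})")
```

On this quadratic toy problem the scaled step lands almost exactly on the KL boundary; with a nonlinear neural-network policy the quadratic approximation is less faithful, which is why the paper pairs it with a backtracking line search.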

Tags

  • Reinforcement learning

  • Policy optimization

  • Trust region methods

  • Continuous control