Simple Policy Optimization

Authors: Zhengpeng Xie, Qiang Zhang, Fan Yang, Marco Hutter, Renjing Xu

Published: 2024 (arXiv preprint)

Source: International Conference on Machine Learning (ICML)

Algorithm: SPO

arXiv: 2401.16025

DOI: 10.5555/3780338.3783085

Summary

Proposes a slight modification of PPO's policy loss that better constrains the probability ratio within the trust region, outperforming PPO while remaining a simple, unconstrained first-order method.

Abstract

Model-free reinforcement learning algorithms have seen remarkable progress, but key challenges remain. Trust Region Policy Optimization (TRPO) is known for ensuring monotonic policy improvement through conservative updates within a trust region, backed by strong theoretical guarantees. However, its reliance on complex second-order optimization limits its practical efficiency. Proximal Policy Optimization (PPO) addresses this by simplifying TRPO's approach using ratio clipping, improving efficiency but sacrificing some theoretical robustness. This raises a natural question: Can we combine the strengths of both methods? In this paper, we introduce Simple Policy Optimization (SPO), a novel unconstrained first-order algorithm. By slightly modifying the policy loss used in PPO, SPO can achieve the best of both worlds. Our new objective improves upon ratio clipping, offering stronger theoretical properties and better constraining the probability ratio within the trust region. Empirical results demonstrate that SPO outperforms PPO with a simple implementation, particularly for training large, complex network architectures end-to-end.
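Background: PPO's clipped objective

The abstract describes SPO as a slight modification of the policy loss used in PPO, but does not reproduce either objective. For context, below is a minimal sketch of the standard PPO clipped surrogate loss that SPO builds on, written in PyTorch. The function name and argument layout are illustrative; the SPO-specific objective itself is not shown, since this page does not state its exact form.

```python
import torch

def ppo_clip_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    # Probability ratio r_t = pi_theta(a|s) / pi_theta_old(a|s),
    # computed in log space for numerical stability.
    ratio = torch.exp(new_log_probs - old_log_probs)
    unclipped = ratio * advantages
    # Clipping freezes the gradient once the ratio leaves [1-eps, 1+eps];
    # per the abstract, SPO's modified loss instead keeps the ratio
    # constrained within this trust region (exact form not given here).
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Pessimistic surrogate: take the elementwise minimum of the two,
    # negated because optimizers minimize.
    return -torch.min(unclipped, clipped).mean()
```

In a typical training loop this loss is computed per minibatch alongside value and entropy terms; the abstract's claim is that replacing the clipping mechanism with SPO's objective yields stronger theoretical properties at essentially the same implementation cost.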

Tags

  • Reinforcement learning

  • Policy optimization

  • Proximal policy optimization

  • PPO

  • Simple policy optimization

  • SPO