On Linear Convergence of Policy Gradient Methods for Finite MDPs

Authors: Jalaj Bhandari, Daniel Russo

Published: 2020

Algorithm: Policy Gradient

arXiv: 2007.11120

Abstract

We revisit the finite-time analysis of policy gradient methods in one of the simplest settings: finite state and action MDPs with a policy class consisting of all stochastic policies and with exact gradient evaluations. Some recent work views this setting as an instance of smooth non-linear optimization and establishes sub-linear convergence rates with small step-sizes. Here, we take a different perspective based on connections with policy iteration and show that many variants of policy gradient methods succeed with large step-sizes and attain a linear rate of convergence.
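To make the setting concrete, the following is a minimal sketch (not the authors' code) of exact projected policy gradient ascent on a small tabular MDP with the direct parameterization over all stochastic policies. The MDP, step-size, and iteration count are illustrative assumptions; the intended takeaway is the abstract's point that with a large step-size the projected update concentrates mass on greedy actions, so the method behaves like policy iteration and converges quickly.

```python
import numpy as np

def policy_eval(pi, P, r, gamma):
    """Exact V^pi by solving the Bellman linear system (I - gamma P_pi) V = r_pi."""
    P_pi = np.einsum('sa,sat->st', pi, P)   # state-to-state kernel under pi
    r_pi = np.einsum('sa,sa->s', pi, r)     # expected one-step reward under pi
    return np.linalg.solve(np.eye(len(r_pi)) - gamma * P_pi, r_pi)

def q_values(pi, P, r, gamma):
    """Exact Q^pi(s, a) = r(s, a) + gamma * E[V^pi(s')]."""
    V = policy_eval(pi, P, r, gamma)
    return r + gamma * np.einsum('sat,t->sa', P, V)

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def projected_pg(P, r, gamma, rho0, eta=10.0, iters=100):
    """Projected policy gradient with exact gradients, direct parameterization."""
    nS, nA = r.shape
    pi = np.full((nS, nA), 1.0 / nA)  # start from the uniform policy
    for _ in range(iters):
        Q = q_values(pi, P, r, gamma)
        P_pi = np.einsum('sa,sat->st', pi, P)
        # Discounted state-occupancy measure d_rho^pi (solves the adjoint system).
        d = (1 - gamma) * np.linalg.solve((np.eye(nS) - gamma * P_pi).T, rho0)
        # Policy gradient for the direct parameterization: d(s) * Q(s, a) / (1 - gamma).
        grad = d[:, None] * Q / (1 - gamma)
        pi = np.stack([project_simplex(pi[s] + eta * grad[s]) for s in range(nS)])
    return pi

# Toy 2-state, 2-action MDP (hypothetical numbers, for illustration only).
P = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.8, 0.2], [0.2, 0.8]]])   # P[s, a, s']
r = np.array([[0.0, 1.0],
              [2.0, 0.0]])                 # r[s, a]
gamma, rho0 = 0.9, np.array([0.5, 0.5])

pi_pg = projected_pg(P, r, gamma, rho0)
V_pg = policy_eval(pi_pg, P, r, gamma)
```

With a large `eta`, the projected step per state lands at (or very near) a vertex of the simplex selecting `argmax_a Q(s, a)`, which is exactly the policy-iteration greedy update; this is the mechanism behind the linear-rate perspective in the abstract.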