Perturbed Gradient Descent Algorithms Are Small-Disturbance Input-to-State Stable¶
Authors: Leilei Cui, Zhong-Ping Jiang, Eduardo D. Sontag, Richard D. Braatz
Published: 2025 (Preprint)
Source: arXiv
Algorithm: Perturbed gradient descent (input-to-state stability analysis)
arXiv: 2507.02131
Summary¶
Establishes small-disturbance input-to-state stability (ISS) for perturbed gradient descent under a generalized nonlinear Polyak–Łojasiewicz condition. Shows that LQR policy gradient and natural policy gradient algorithms inherit this robustness, connecting systems-theory stability notions to modern RL convergence analysis.
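For context, the conventional (linear) PL condition lower-bounds the squared gradient norm by the optimality gap; the paper's generalization replaces this linear bound with a nonlinear comparison function. The sketch below conveys the idea only and is not the paper's exact statement:

```latex
% Conventional (linear) PL condition with constant \mu > 0:
%   \tfrac{1}{2}\|\nabla f(x)\|^2 \;\ge\; \mu\,\bigl(f(x) - f^\star\bigr)
%
% Generalized nonlinear form (illustrative): the linear map
% s \mapsto 2\mu s is replaced by a nonlinear comparison function
% \kappa (increasing, \kappa(0)=0):
%   \|\nabla f(x)\|^2 \;\ge\; \kappa\bigl(f(x) - f^\star\bigr)
```

Under such a condition, a Lyapunov argument on the optimality gap yields the ISS-style guarantee: the gap decays until it is dominated by a term depending on the perturbation magnitude.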
Abstract¶
This article investigates the robustness of gradient descent algorithms under perturbations. The concept of small-disturbance input-to-state stability (ISS) for discrete-time nonlinear dynamical systems is introduced, along with its Lyapunov characterization. The conventional linear Polyak–Łojasiewicz (PL) condition is then extended to a nonlinear version, and it is shown that the gradient descent algorithm is small-disturbance ISS provided the objective function satisfies the generalized nonlinear PL condition. This small-disturbance ISS property guarantees that the gradient descent algorithm converges to a small neighborhood of the optimum under sufficiently small perturbations. As a direct application of the developed framework, we demonstrate that the LQR cost satisfies the generalized nonlinear PL condition, thereby establishing that the policy gradient algorithm for LQR is small-disturbance ISS. Additionally, other popular policy gradient algorithms, including the natural policy gradient and Gauss–Newton methods, are also proven to be small-disturbance ISS.
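The ISS behavior described in the abstract is easy to observe numerically. The following sketch (not code from the paper) runs gradient descent with bounded additive gradient disturbances on a strongly convex quadratic, which satisfies the classical PL condition; the iterates settle into a neighborhood of the optimum whose size scales with the disturbance bound:

```python
import numpy as np

def perturbed_gd(x0, eta, delta, steps, rng):
    """Iterate x_{k+1} = x_k - eta * (grad f(x_k) + w_k) with ||w_k|| <= delta,
    for f(x) = 0.5 * ||x||^2 (so grad f(x) = x, minimizer x* = 0)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        grad = x  # gradient of 0.5 * ||x||^2
        w = rng.uniform(-1.0, 1.0, size=x.shape)
        w *= delta / max(np.linalg.norm(w), 1e-12)  # rescale disturbance to norm delta
        x = x - eta * (grad + w)
    return x

rng = np.random.default_rng(0)
# Unperturbed run: converges to the optimum.
x_clean = perturbed_gd([5.0, -3.0], eta=0.1, delta=0.0, steps=500, rng=rng)
# Perturbed run: converges only to a small neighborhood of the optimum.
x_noisy = perturbed_gd([5.0, -3.0], eta=0.1, delta=0.05, steps=500, rng=rng)

print(np.linalg.norm(x_clean))  # essentially zero
print(np.linalg.norm(x_noisy))  # small, bounded in terms of delta
```

Shrinking `delta` shrinks the residual neighborhood, which is exactly the small-disturbance ISS property: the asymptotic error is bounded by a gain function of the perturbation size.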
Links¶
- arXiv abstract: https://arxiv.org/abs/2507.02131
Tags¶
- Stochastic gradient descent
- Input-to-state stability
- Nonlinear optimization
- Policy gradient
- LQR
- Robustness