Disciplined Nonlinear Programming

Authors: Daniel Cederberg, William Zhang, Parth Nobel, Stephen Boyd

Published: 2025 (Preprint)

Source: Preprint

Algorithm: DNLP

Summary

Extends the disciplined-programming idea beyond convexity: it defines grammar and canonicalization rules for nonlinear programs that freely mix smooth atoms with nonsmooth convex and concave ones. The practical payoff is NLP modeling that is parser-friendly and solver-ready, implemented as an extension of CVXPY.

Abstract

We introduce disciplined nonlinear programming (DNLP), a syntax for specifying nonlinear programming problems. DNLP is inspired by disciplined convex programming (DCP) and allows smooth functions to be freely mixed with nonsmooth convex and concave functions, with rules governing how the nonsmooth functions can be used. Problems expressed in DNLP form can be automatically canonicalized to a standard nonlinear programming (NLP) form and passed to a suitable NLP solver. As in DCP, the canonicalization relaxes nonsmooth convex and concave functions in a lossless way, allowing them to be handled by NLP solvers that require smooth functions. In addition to extending NLP to include useful nondifferentiable convex and concave functions, transforming the original problem to an equivalent NLP form offers several advantages, including simpler problem initialization. We describe the language and our open-source implementation of DNLP as an extension of CVXPY, a parser for DCP.
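The "lossless relaxation" mentioned in the abstract can be illustrated by hand. The sketch below is not the paper's implementation; it shows the underlying epigraph idea on a toy problem, using SciPy's SLSQP solver (which requires smooth functions) as the stand-in NLP solver. To minimize the nonsmooth f(x) = |x - 1| + x², we introduce an epigraph variable t with t ≥ x - 1 and t ≥ 1 - x, yielding an equivalent smooth NLP:

```python
# Hand-written epigraph relaxation of a nonsmooth convex atom, in the
# spirit of DNLP/DCP canonicalization (not the paper's code).
# Original problem:  minimize |x - 1| + x^2   (nonsmooth objective)
# Relaxed problem:   minimize t + x^2  s.t.  t >= x - 1,  t >= 1 - x
# The relaxation is lossless: at the optimum, t = |x - 1|.
import numpy as np
from scipy.optimize import minimize

def objective(z):
    x, t = z
    return t + x**2  # |x - 1| replaced by its epigraph variable t

constraints = [
    {"type": "ineq", "fun": lambda z: z[1] - (z[0] - 1.0)},  # t >= x - 1
    {"type": "ineq", "fun": lambda z: z[1] - (1.0 - z[0])},  # t >= 1 - x
]

res = minimize(objective, x0=np.array([0.0, 2.0]), method="SLSQP",
               constraints=constraints)
# Analytically, the minimizer is x = 0.5 with objective value 0.75.
```

A smooth solver like SLSQP handles the relaxed problem without issue, even though the original objective is nondifferentiable at x = 1.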

Tags

  • Disciplined nonlinear programming

  • Nonlinear programming

  • CVXPY

  • Canonicalization

  • Nonsmooth optimization

  • Optimization modeling