Abstract:
In this article, we consider convergence properties of the normalized subgradient method that employs a stepsize rule based on a priori knowledge of the optimal value of the cost function. We show that the normalized subgradients carry additional information about the problem under consideration, which can be used to improve on the convergence rates derived from the usual subgradient properties. We also present several convergence results for inexact versions of the method.
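To illustrate the kind of method the abstract refers to, here is a minimal sketch of a normalized subgradient iteration with a Polyak-type stepsize that uses a known optimal value. The function names, stopping rule, and test problem are illustrative assumptions, not the authors' algorithm or analysis.

```python
import numpy as np

def normalized_subgradient(f, subgrad, x0, f_opt, max_iter=1000, tol=1e-8):
    """Sketch of a normalized subgradient method with a Polyak-type stepsize.

    Update: x_{k+1} = x_k - lam_k * g_k / ||g_k||,  lam_k = (f(x_k) - f_opt) / ||g_k||,
    where f_opt is the (assumed known) optimal value of f.
    """
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for _ in range(max_iter):
        g = subgrad(x)
        g_norm = np.linalg.norm(g)
        gap = f(x) - f_opt
        if g_norm == 0 or gap <= tol:      # zero subgradient or near-optimal value
            break
        step = gap / g_norm                # Polyak-type stepsize
        x = x - step * g / g_norm          # move along the normalized subgradient
        fx = f(x)
        if fx < best_f:                    # track the best iterate (f may not decrease monotonically)
            best_x, best_f = x.copy(), fx
    return best_x, best_f

# Toy example: minimize the nonsmooth function f(x) = ||x||_1, whose optimal value is 0.
if __name__ == "__main__":
    f = lambda x: np.sum(np.abs(x))
    subgrad = lambda x: np.sign(x)         # a valid subgradient of the l1-norm
    x_star, f_star = normalized_subgradient(f, subgrad, x0=np.array([3.0, -2.0]), f_opt=0.0)
    print(x_star, f_star)
```

On the l1-norm example above, this combination of normalization and Polyak stepsize reaches the minimizer in a few iterations; the inexact variants mentioned in the abstract would replace `f_opt` and `subgrad` with approximate values.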