dc.contributor.author |
Konnov I. |
|
dc.date.accessioned |
2018-09-17T21:03:20Z |
|
dc.date.available |
2018-09-17T21:03:20Z |
|
dc.date.issued |
2003 |
|
dc.identifier.issn |
1055-6788 |
|
dc.identifier.uri |
https://dspace.kpfu.ru/xmlui/handle/net/134459 |
|
dc.description.abstract |
In this article, we consider convergence properties of the normalized subgradient method, which employs a stepsize rule based on a priori knowledge of the optimal value of the cost function. We show that normalized subgradients carry additional information about the problem under consideration, which can be used to obtain convergence rates better than those derived from the usual subgradient properties. We also present several convergence results for inexact versions of the method.
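
The record itself contains no code. As a rough, illustrative sketch only (not the paper's exact algorithm or analysis), the Python snippet below shows a subgradient iteration with a Polyak-type stepsize that exploits a known optimal value, in the spirit of the stepsize rule described in the abstract. All names here (normalized_subgradient_method, f, subgrad, x0, f_star) are hypothetical.

import numpy as np

def normalized_subgradient_method(f, subgrad, x0, f_star, max_iter=1000, tol=1e-8):
    """Illustrative sketch: subgradient steps with a Polyak-type stepsize
    that uses the known optimal value f_star of the cost function f."""
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for _ in range(max_iter):
        g = np.asarray(subgrad(x), dtype=float)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break  # (near-)zero subgradient: stop
        gap = f(x) - f_star
        if gap <= tol:
            break  # objective is within tolerance of the known optimal value
        # Step of length gap/||g|| along the normalized subgradient direction,
        # i.e. x_{k+1} = x_k - (f(x_k) - f*) g_k / ||g_k||^2 (Polyak-type rule).
        x = x - (gap / gnorm) * (g / gnorm)
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
    return best_x, best_f

if __name__ == "__main__":
    # Toy example: minimize the l1 norm, whose optimal value is known to be 0.
    f = lambda x: np.abs(x).sum()
    subgrad = np.sign
    x_best, f_best = normalized_subgradient_method(f, subgrad, x0=[3.0, -2.0], f_star=0.0)
    print(x_best, f_best)

In this generic sketch, knowledge of the optimal value lets the step shrink automatically with the remaining optimality gap, which is what makes convergence-rate statements of the kind discussed in the article possible.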
|
dc.relation.ispartofseries |
Optimization Methods and Software |
|
dc.subject |
Convergence rates |
|
dc.subject |
Quasiconvex minimization |
|
dc.subject |
Subgradient method |
|
dc.title |
On convergence properties of a subgradient method |
|
dc.type |
Article |
|
dc.relation.ispartofseries-issue |
1 |
|
dc.relation.ispartofseries-volume |
18 |
|
dc.collection |
Publications of KFU staff |
|
dc.relation.startpage |
53 |
|
dc.source.id |
SCOPUS10556788-2003-18-1-SID0038021190 |
|