A universal gradient-type method for convex minimization problems with relative error is studied, and new results on the linear convergence rate of specific versions of the subgradient method are obtained for problems with a sharp minimum and some generalizations of convexity. A new approach to the choice of parameters and to the stopping rule of an adaptive version of the method of similar triangles is proposed for a class of minimization problems with convex positively homogeneous functionals. As a consequence, an analog of the universal gradient method for problems with relative error is obtained, and an optimal estimate of its convergence rate on the chosen class of problems is proved. Results of numerical experiments are presented, illustrating that the introduced adaptive stopping rule can yield an approximate solution of higher quality than the theoretical estimate guarantees. A version of the subgradient method for minimization problems with weakly $\beta$-quasiconvex Lipschitz-continuous functionals is considered in the case of a sharp minimum, and its linear convergence rate is proved. Finally, a version of the subgradient method is proposed that converges at a linear rate on a class of problems of minimizing quasiconvex Hölder-continuous functionals with a sharp minimum. The applicability of this result to functionals with a weak sharp minimum (Hölderian growth) is shown, and a corollary for problems with relative error is formulated.
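To illustrate the sharp-minimum setting behind these linear-rate results, the following minimal Python sketch runs the classical subgradient method with the Polyak step size (which requires the optimal value $f^*$ to be known) on a synthetic problem $f(x) = \|Ax - b\|_\infty$ with $b = Ax^*$; this is not the adaptive scheme of the paper, and the matrix A, the point x_star, and the iteration budget are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative sketch only: classical subgradient method with the Polyak step
# on a synthetic sharp-minimum problem f(x) = ||A x - b||_inf, b = A x_star,
# so f* = 0 and f(x) >= alpha * ||x - x_star|| for some alpha > 0
# (A with full column rank).  Data and iteration count are arbitrary demo choices.

rng = np.random.default_rng(0)
m, n = 60, 20
A = rng.standard_normal((m, n))
x_star = rng.standard_normal(n)
b = A @ x_star
f_star = 0.0                               # optimal value known by construction

def f(x):
    return np.max(np.abs(A @ x - b))

def subgrad(x):
    r = A @ x - b
    i = np.argmax(np.abs(r))               # index attaining the maximum
    return np.sign(r[i]) * A[i]            # a subgradient of max_i |a_i^T x - b_i|

x = np.zeros(n)
for k in range(200):
    g = subgrad(x)
    step = (f(x) - f_star) / (g @ g + 1e-16)   # Polyak step, uses f* (here f* = 0)
    x = x - step * g

print("distance to minimizer after 200 steps:", np.linalg.norm(x - x_star))
```

Under a sharp minimum, this step-size rule is known (Polyak [6]) to make the distance to the minimizer decrease at a linear rate; the paper establishes guarantees of this type beyond standard convexity (weakly $\beta$-quasiconvex and quasiconvex Hölder-continuous objectives) and with an adaptive stopping rule in the relative-error setting.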
Keywords: relative error, convex positively homogeneous functional, weakly $\beta$-quasiconvex functional, subgradient method, Lipschitz-continuous functional, quasiconvex functional
Received July 19, 2021
Revised September 1, 2021
Accepted September 6, 2021
Funding Agency: The research in Sects. 2 and 3 was supported by the Russian President’s Grant for Young Russian Scientists no. MK-15.2020.1. Stonyakin’s results in Sect. 4 were supported by the Russian Science Foundation grant (project no. 27-31-30005).
Fedor Sergeevich Stonyakin, Cand. Sci. (Phys.-Math.), V.I. Vernadsky Crimean Federal University, Simferopol, Republic of Crimea, 295007 Russia, e-mail: fedyor@mail.ru
Seydamet Serverovich Ablaev, doctoral student, V.I. Vernadsky Crimean Federal University, Simferopol, Republic of Crimea, 295007 Russia, e-mail: seydamet.ablaev@yandex.ru
Inna Viktorovna Baran, Cand. Sci. (Phys.-Math.), V.I. Vernadsky Crimean Federal University, Simferopol, Republic of Crimea, 295007 Russia, e-mail: matemain@mail.ru
REFERENCES
1. Gasnikov A.V. Sovremennye chislennye metody optimizatsii. Metod universal'nogo gradientnogo spuska [Modern numerical optimization methods. The universal gradient descent method]. Moscow: MCCME Publ., 2021, 272 p. ISBN: 978-5-4439-1614-9.
2. Nemirovsky A.S., Yudin D.B. Problem complexity and method efficiency in optimization. Chichester; NY: Wiley, 1983, 388 p. ISBN: 0471103454 . Original Russian text published in Nemirovsky A.S., Yudin D.B. Slozhnost’ zadach i effektivnost’ metodov optimizatsii, Moscow: Nauka Publ., 1979, 384 p.
3. Nesterov Yu. E. Algoritmicheskaya vypuklaya optimizatsiya [Algorithmic convex optimization]. Dr. Phys.-Math. Sci. Dissertation. Moscow: Mosk. Phys.-Techn. Inst. (State University), 2013. 367 p.
4. Nesterov Yu. Effektivnye metody v nelineinom programmirovanii [Effective methods in nonlinear programming]. Moscow: Radio i Svyaz' Publ., 1989. ISBN: 5-256-00524-3.
5. Nesterov Yu.E. Metody vypukloy optimizatsii [Methods of convex optimization]. Moscow: MCCME Publ., 2010. ISBN: 978-5-94057-623-5.
6. Polyak B.T. Minimization of nonsmooth functionals. U.S.S.R. Comput. Math. Math. Phys., 1969, vol. 9, no. 3, pp. 14–29. doi: 10.1016/0041-5553(69)90061-5
7. Gasnikov A.V., Tyurin A.I. Fast gradient descent for convex minimization problems with an oracle producing a $(\delta, L)$-model of a function at the requested point. Comput. Math. Math. Phys., 2019, vol. 59, no. 7, pp. 1085–1097. doi: 10.1134/S0965542519070078
8. Devolder O., Glineur F., Nesterov Yu. First-order methods of smooth convex optimization with inexact oracle. Math. Program., 2014, vol. 146, no. 1-2 (A), pp. 37–75. doi: 10.1007/s10107-013-0677-5
9. Hardt M., Ma T., Recht B. Gradient descent learns linear dynamical systems. J. Mach. Learn. Res., 2018, vol. 19, art. no. 29, 44 p.
10. Hinder O., Sidford A., Sohoni N.S. Near-optimal methods for minimizing star-convex functions and beyond. Proceedings of Machine Learning Research, 2020, vol. 125, pp. 1894–1938.
11. Hu Y., Li J., Yu C.K.W. Convergence rates of subgradient methods for quasi-convex optimization problems. 2019. 28 p. Available at: arXiv:1910.10879 [math.OC].
12. Jiang R., Li X. Hölderian error bounds and Kurdyka–Łojasiewicz inequality for the trust region subproblem. 2020. 30 p. Available at: arXiv:1911.11955 [math.OC].
13. Johnstone P.R., Moulin P. Faster subgradient methods for functions with Hölderian growth. 2018. 50 p. Available at: arXiv:1704.00196 [math.OC].
14. Liu M., Yang T. Adaptive accelerated gradient converging methods under Hölderian error bound condition. In: Proc. 31st Intern. Conf. on Neural Information Processing Systems (NIPS'17), 2017, pp. 3107–3117. ISBN: 9781510860964.
15. Nesterov Yu. Rounding of convex sets and efficient gradient methods for linear programming problems. Optim. Methods Softw., 2008, vol. 23, no. 1, pp. 109–128. doi: 10.1080/10556780701550059
16. Nesterov Yu. Unconstrained convex minimization in relative scale. Math. Oper. Res., 2009, vol. 34, no. 1, pp. 180–193. doi: 10.1287/moor.1080.0348
17. Nesterov Yu. Universal gradient methods for convex optimization problems. Math. Program., 2015, vol. 152, no. 1-2 (A), pp. 381–404. doi: 10.1007/s10107-014-0790-0
Cite this article as: F.S. Stonyakin, S.S. Ablaev, I.V. Baran. Adaptive gradient-type methods for optimization problems with relative error and sharp minimum, Trudy Instituta Matematiki i Mekhaniki UrO RAN, 2021, vol. 27, no. 4, pp. 175–188.