A. L. Bagno, A. M. Taras’ev. Discrete approximation of the Hamilton–Jacobi equation for the value function in an optimal control problem with infinite horizon ... P. 27–39

An infinite horizon optimal control problem is considered in which the quality functional contains an integrand with a discount factor. The main feature of the problem is that the integrand may be unbounded, which makes it possible to analyze economic growth models with linear, power, and logarithmic utility functions. A discrete approximation of the Hamilton–Jacobi equation is explored for constructing the value function of the original problem. The Hölder condition and the sublinear growth condition are derived for the solution of the discrete approximation equation. Uniform convergence of solutions of the approximation equations to the value function of the optimal control problem is shown. The obtained results can be used to construct grid approximation methods for the value function of an optimal control problem on an infinite time interval. The proposed methods are effective tools in the modeling of economic growth processes.

Keywords: discrete approximation, optimal control, Hamilton–Jacobi equation, viscosity solution, infinite horizon, value function.

The paper was received by the Editorial Office on December 1, 2017.

Aleksandr Leonidovich Bagno, doctoral student, Ural Federal University, Yekaterinburg, 620002 Russia, e-mail: bagno.alexander@gmail.com.

Aleksandr Mikhailovich Taras’ev, Dr. Phys.-Math. Sci., Prof., Krasovskii Institute of Mathematics and Mechanics, Ural Branch of the Russian Academy of Sciences, Yekaterinburg, 620990 Russia; Ural Federal University, Yekaterinburg, 620002 Russia, e-mail: tam@imm.uran.ru.


1.   Bertsekas D.P. Dynamic programming and optimal control. Vol. I. Belmont, Athena Scientific, 2017, 576 p. ISBN: 1-886529-26-4.

2.   Crandall M.G., Lions P.-L. Viscosity solutions of Hamilton–Jacobi equations. Trans. Amer. Math. Soc., 1983, vol. 277, no. 1, pp. 1–42. doi: https://doi.org/10.1090/S0002-9947-1983-0690039-8.

3.   Dolcetta I.C. On a discrete approximation of the Hamilton–Jacobi equation of dynamic programming. Appl. Math. Optim., 1983, vol. 10, no. 4, pp. 367–377. doi: https://doi.org/10.1007/BF01448394.

4.   Dolcetta I.C., Ishii H. Approximate solution of the Bellman equation of deterministic control theory. Appl. Math. Optim., 1984, vol. 11, no. 2, pp. 161–181. doi: https://doi.org/10.1007/bf01442176.

5.   Adiatulina R.A., Tarasyev A.M. A differential game of unlimited duration. J. Appl. Math. Mech., 1987, vol. 51, no. 4, pp. 415–420. doi: https://doi.org/10.1016/0021-8928(87)90077-3.

6.   Bagno A.L., Tarasyev A.M. Properties of the value function in optimal control problems with infinite horizon. Vestn. Udmurtsk. Univ. Mat. Mekh. Komp. Nauki, 2016, vol. 26, no. 1, pp. 3–14 (in Russian). doi: https://doi.org/10.20537/vm160101.

7.   Bagno A.L., Tarasyev A.M. Stability properties of the value function in an infinite horizon optimal control problem. Tr. Inst. Mat. Mekh. UrO RAN, 2017, vol. 23, no. 1, pp. 43–56 (in Russian). doi: https://doi.org/10.21538/0134-4889-2017-23-1-43-56.

8.   Krasovskii A.A., Tarasyev A.M. Dynamic optimization of investments in the economic growth models. Autom. Remote Control, 2007, vol. 68, no. 10, pp. 1765–1777. doi: https://doi.org/10.1134/S0005117907100050.

9.   Magnus J.R., Katyshev P.K., Peresetsky A.A. Ekonometrika. Nachal’nyi kurs [Econometrics: A First Course]. Moscow, Delo Publ., 2004, 576 p. ISBN: 5-7749-0055-X.

10.   Subbotin A.I. Minimaksnye neravenstva i uravneniya Gamil’tona–Yakobi [Minimax Inequalities and Hamilton–Jacobi Equations]. Moscow, Nauka Publ., 1991, 216 p.

11.   Subbotin A.I., Tarasyev A.M. Conjugate derivatives of the value function of a differential game. Soviet Math. Dokl., 1985, no. 32, pp. 162–166.

12.   Sultanova R.A. Minimaksnye resheniya uravnenii v chastnykh proizvodnykh [Minimax solutions of partial differential equations]. Cand. Sci. (Phys.-Math.) Dissertation, Ekaterinburg, 1995, 192 p.