This paper investigates the finite-horizon optimal control problem for completely unknown discrete-time linear systems, where "completely unknown" means that the system dynamics are unavailable. Compared with the infinite-horizon case, the Riccati equation (RE) of finite-horizon optimal control is time-dependent and must satisfy a terminal boundary constraint, which poses greater challenges; the completely unknown system dynamics introduce further difficulty. The main contribution of this paper is a cyclic fixed-finite-horizon-based Q-learning algorithm that approximates the optimal control input without requiring the system dynamics. The algorithm consists of two phases: a data collection phase over a fixed finite horizon and a parameter update phase. A least-squares method links the two phases, so that the optimal parameters are obtained cyclically. Finally, simulation results verify the effectiveness of the proposed cyclic fixed-finite-horizon-based Q-learning algorithm.
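To make the two-phase structure concrete, here is a minimal Python sketch of model-free finite-horizon Q-learning for a discrete-time LQ problem, in the spirit of the abstract: data are collected over a fixed finite horizon, and the Q-function kernels are updated by least squares, cyclically. This is an illustration of the idea, not the authors' exact algorithm; the matrices A, B, Q, R, S, the horizon N, and all sample sizes are hypothetical, and (A, B) are used only to simulate the plant, never by the learner.

```python
# Minimal sketch (assumptions labeled above): model-free finite-horizon
# LQ Q-learning with a data collection phase and a cyclic least-squares
# parameter update phase. The learner never reads A or B.
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 2, 1, 10                          # state dim, input dim, horizon
A = np.array([[1.0, 0.1], [0.0, 0.9]])      # plant matrix (unknown to learner)
B = np.array([[0.0], [0.1]])                # input matrix (unknown to learner)
Q, R, S = np.eye(n), np.eye(m), np.eye(n)   # stage and terminal cost weights

def quad_basis(z):
    """Basis so that z' H z = theta . quad_basis(z) for symmetric H."""
    zz = np.outer(z, z)
    i, j = np.triu_indices(len(z))
    return np.where(i == j, 1.0, 2.0) * zz[i, j]

p = (n + m) * (n + m + 1) // 2              # unknowns in each kernel H_k
buffers = [[] for _ in range(N)]            # per-step data: (z_k, x_{k+1}, cost)
P = [np.zeros((n, n)) for _ in range(N)] + [S]   # P_N = S: terminal condition
K = [np.zeros((m, n)) for _ in range(N)]    # time-varying feedback gains

for cycle in range(30):
    # --- phase 1: data collection over the fixed finite horizon ---
    x = rng.normal(size=n)
    for k in range(N):
        u = rng.normal(size=m)              # exploratory (exciting) input
        x_next = A @ x + B @ u              # measured from the plant
        z = np.concatenate([x, u])
        buffers[k].append((z, x_next, x @ Q @ x + u @ R @ u))
        x = x_next
    # --- phase 2: least-squares parameter update, backward in time ---
    for k in reversed(range(N)):
        if len(buffers[k]) < p:
            continue                        # wait until regression is well posed
        Phi = np.array([quad_basis(z) for z, _, _ in buffers[k]])
        y = np.array([c + xn @ P[k + 1] @ xn for _, xn, c in buffers[k]])
        theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        Hu = np.zeros((n + m, n + m))       # rebuild symmetric kernel H_k
        Hu[np.triu_indices(n + m)] = theta
        H = Hu + Hu.T - np.diag(np.diag(Hu))
        K[k] = np.linalg.solve(H[n:, n:], H[:n, n:].T)   # u_k* = -K_k x_k
        P[k] = H[:n, :n] - H[:n, n:] @ K[k]              # value kernel of V_k

print("learned K_0:\n", K[0])
```

In the noiseless case the backward sweep recovers the time-varying RE solution once each per-step regression has enough samples; cycling the two phases is what lets the least-squares estimates refine as more horizon-length data batches accumulate.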
[Figure captions: the flow chart of Algorithm 1; initial system state; the convergence process; the trajectories of system states; the optimal control input (repeated for two simulation cases).]