[1] Bhasin, S., Kamalapurkar, R., Johnson, M., Vamvoudakis, K. G., Lewis, F. L., Dixon, W. E.:
A novel actor-critic-identifier architecture for approximate optimal control of uncertain nonlinear systems. Automatica 49 (2013), 82-92.
DOI | MR 2999950
[2] Chen, B., Hu, J., Zhao, Y., Ghosh, B. K.:
Finite-time observer based tracking control of uncertain heterogeneous underwater vehicles using adaptive sliding mode approach. Neurocomputing 481 (2022), 322-332.
DOI
[3] Fu, X., Li, Z.:
Neural network optimal control for nonlinear system based on zero-sum differential game. Kybernetika 57 (2021), 546-566.
DOI | MR 4299463
[4] Girard, A.:
Dynamic triggering mechanisms for event-triggered control. IEEE Trans. Autom. Control 60 (2015), 1992-1997.
DOI | MR 3365092
[5] Hu, J., Chen, G., Li, H.-X.:
Distributed event-triggered tracking control of leader-follower multi-agent systems with communication delays. Kybernetika 47 (2011), 630-643.
MR 2884865 | Zbl 1227.93008
[6] Hu, J., Geng, J., Zhu, H.:
An observer-based consensus tracking control and application to event-triggered tracking. Commun. Nonlinear Sci. Numer. Simul. 20 (2015), 559-570.
DOI | MR 3251515 | Zbl 1303.93012
[7] Jiang, Y., Jiang, Z. P.:
Computational adaptive optimal control for continuous-time linear systems with completely unknown dynamics. Automatica 48 (2012), 2699-2704.
DOI | MR 2961173
[8] Khalil, H. K.:
Nonlinear Systems. Third Edition. Prentice-Hall, Upper Saddle River, NJ 2002.
Zbl 1194.93083
[9] Kiumarsi, B., Lewis, F. L.:
Actor-critic-based optimal tracking for partially unknown nonlinear discrete-time systems. IEEE Trans. Neural Netw. Learn. Syst. 26 (2015), 140-151.
DOI | MR 3449569
[11] Lewis, F., Jagannathan, S., Yesildirak, A.:
Neural Network Control of Robot Manipulators and Nonlinear Systems. Taylor and Francis, London 1999.
[12] Lewis, F. L., Vrabie, D. L., Syrmos, V. L.:
Optimal Control. Third Edition. Wiley, New York 2012.
DOI | MR 2953185
[13] Luo, R., Peng, Z., Hu, J., Ghosh, B. K.:
Adaptive optimal control of completely unknown systems with relaxed PE conditions. In: Proc. IEEE 11th Data Driven Control and Learning Systems Conference (DDCLS), Chengdu 2022, pp. 836-841.
DOI
[14] Lv, Y., Na, J., Yang, Q., Wu, X., Guo, Y.:
Online adaptive optimal control for continuous-time nonlinear systems with completely unknown dynamics. Int. J. Control 89 (2016), 99-112.
DOI | MR 3433390
[15] Luo, R., Peng, Z., Hu, J.:
On model identification based optimal control and its applications to multi-agent learning and control. Mathematics 11 (2023), 906.
DOI
[16] Makumi, W., Greene, M. L., Bell, Z., Bialy, B., Kamalapurkar, R., Dixon, W.:
Hierarchical reinforcement learning and gain scheduling-based control of a hypersonic vehicle. In: AIAA SCITECH 2023 Forum, National Harbor, MD and Online 2023, pp. 1-11.
DOI
[17] Ouyang, Y., Dong, L., Sun, C.:
Critic learning-based control for robotic manipulators with prescribed constraints. IEEE Trans. Cybern. 52 (2022), 2274-2283.
DOI
[18] Peng, Z., Luo, R., Hu, J., Shi, K., Ghosh, B. K.:
Distributed optimal tracking control of discrete-time multiagent systems via event-triggered reinforcement learning. IEEE Trans. Circuits Syst. I-Regul. Pap. 69 (2022), 3689-3700.
DOI
[19] Peng, Z., Luo, R., Hu, J., Shi, K., Nguang, S. K., Ghosh, B. K.:
Optimal tracking control of nonlinear multiagent systems using internal reinforce Q-learning. IEEE Trans. Neural Netw. Learn. Syst. 33 (2022), 4043-4055.
DOI | MR 4468295
[20] Peng, Z., Zhao, Y., Hu, J., Luo, R., Ghosh, B. K., Nguang, S. K.:
Input-output data-based output antisynchronization control of multiagent systems using reinforcement learning approach. IEEE Trans. Ind. Inform. 17 (2021), 7359-7367.
DOI
[21] Shen, M., Wang, X., Park, J. H., Yi, Y., Che, W.-W.:
Extended disturbance-observer-based data-driven control of networked nonlinear systems with event-triggered output. IEEE Trans. Syst. Man Cybern. Syst., to be published.
DOI
[22] Song, R., Lewis, F., Wei, Q., Zhang, H. G., Jiang, Z. P., Levine, D.:
Multiple actor-critic structures for continuous-time optimal control using input-output data. IEEE Trans. Neural Netw. Learn. Syst. 26 (2015), 851-865.
DOI | MR 3452493
[23] Tabuada, P.:
Event-triggered real-time scheduling of stabilizing control tasks. IEEE Trans. Autom. Control 52 (2007), 1680-1685.
DOI | MR 2352444
[24] Wang, K., Mu, C.:
Event-sampled learning for unknown nonlinear systems related to dynamic triggering method. In: Proc. IEEE Conference on Decision and Control (CDC), Jeju 2020, pp. 5200-5205.
DOI
[25] Wang, D., Mu, C., Liu, D.:
Adaptive critic designs for solving event-based $H_\infty$ control problems. In: Proc. American Control Conference (ACC), Seattle 2017, pp. 2435-2440.
DOI
[26] Wang, X., Qin, W., Park, J. H., Shen, M.:
Event-triggered data-driven control of discrete-time nonlinear systems with unknown disturbance. ISA Trans. 128 (2022), 256-264.
DOI
[27] Werbos, P. J.:
Approximate dynamic programming for real-time control and neural modeling. In: Handbook of Intelligent Control: Neural, Fuzzy, and Adaptive Approaches (D. A. White and D. A. Sofge, Eds.), Van Nostrand Reinhold, New York 1992, ch. 13.
[28] Xu, N., Niu, B., Wang, H., Huo, X., Zhao, X.:
Single-network ADP for solving optimal event-triggered tracking control problem of completely unknown nonlinear systems. Int. J. Intell. Syst. 36 (2021), 4795-4815.
DOI
[29] Xue, S., Luo, B., Liu, D., Gao, Y.:
Adaptive dynamic programming-based event-triggered optimal tracking control. Int. J. Robust Nonlinear Control 31 (2021), 7480-7497.
DOI | MR 4335306
[30] Yang, X., He, H.:
Adaptive critic designs for event-triggered robust control of nonlinear systems with unknown dynamics. IEEE Trans. Cybern. 49 (2019), 2255-2267.
DOI
[31] Yang, X., He, H., Liu, D.:
Event-triggered optimal neuro-controller design with reinforcement learning for unknown nonlinear systems. IEEE Trans. Syst. Man Cybern. Syst. 49 (2019), 1866-1878.
DOI