Mathematical Foundations of Computing, November 2019, 2(4): 315-331. doi: 10.3934/mfc.2019020

A Sim2real method based on DDQN for training a self-driving scale car

Qi Zhang, Tao Du* and Changzheng Tian

1. School of Information Science and Technology, North China University of Technology, Beijing 100144, China

2. State Key Laboratory of Turbulence and Complex Systems, College of Engineering, Peking University, Beijing 100871, China

* Corresponding author: Tao Du

Published December 2019

Self-driving based on deep reinforcement learning, as one of the most important applications of artificial intelligence, has become a popular research topic. Most current self-driving methods focus on learning an end-to-end control strategy directly from raw sensory data. Essentially, this control strategy is a mapping from images to driving behavior, and it usually suffers from low generalization ability. To improve the generalization ability of the driving behavior, a reinforcement learning method requires extrinsic rewards from the real environment, and collecting them there may damage the car. To obtain good generalization ability safely, a virtual simulation environment in which different driving scenes can be constructed is designed in Unity. A theoretical model is established and analyzed in this virtual environment and trained by a double deep Q-network (DDQN). The trained model is then migrated to a scale car in the real world; this process is called a sim2real method. The sim2real training method efficiently handles both problems: the low generalization ability and the risk of damaging the car during real-world training. Simulations and experiments are carried out to evaluate the performance and effectiveness of the proposed algorithm. Finally, it is demonstrated that the scale car in the real world acquires the capability for autonomous driving.
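The control strategy described above maps camera images to discrete driving actions and is trained with a double deep Q-network in the Unity simulator before being transferred to the scale car. Below is a minimal, self-contained PyTorch sketch of the double-DQN target update of van Hasselt et al. [20]; the network shape, action count, and hyperparameters here are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn

GAMMA = 0.99      # discount factor (assumed value)
N_ACTIONS = 15    # e.g. a discretized set of steering angles (assumed)

# Hypothetical Q-networks over flattened image features; the paper's
# network operates on segmented camera images (see Figures 4 and 5).
online_net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(online_net.state_dict())

def ddqn_loss(states, actions, rewards, next_states, dones):
    # Double DQN: the online network selects the next action, while the
    # target network evaluates it, decoupling selection from evaluation.
    q_pred = online_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        targets = rewards + GAMMA * next_q * (1.0 - dones)
    return nn.functional.smooth_l1_loss(q_pred, targets)

# Example: one minibatch of 32 transitions sampled from a replay buffer.
s = torch.randn(32, 128); a = torch.randint(N_ACTIONS, (32,))
r = torch.randn(32); s2 = torch.randn(32, 128); d = torch.zeros(32)
ddqn_loss(s, a, r, s2, d).backward()

After training converges in simulation, the learned weights are loaded onto the scale car for real-world driving, which is the transfer step the abstract calls sim2real.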

Citation: Qi Zhang, Tao Du, Changzheng Tian. A Sim2real method based on DDQN for training a self-driving scale car. Mathematical Foundations of Computing, 2019, 2 (4) : 315-331. doi: 10.3934/mfc.2019020
References:
[1]

H. Abraham, C. Lee, S. Brady, C. Fitzgerald, B. Mehler, B. Reimer and J. F. Coughlin, Autonomous vehicles, trust, and driving alternatives: a survey of consumer preferences, Massachusetts Inst. Technol, AgeLab, Cambridge, (2016), 1–16.

[2]

K. J. Aditya, Working model of self-driving car using Convolutional Neural Network, Raspberry Pi and Arduino, in 2018 Second International Conference on Electronics, Communication and Aerospace Technology, IEEE, 2018, 1630–1635.

[3]

P. Andhare and S. Rawat, Pick and place industrial robot controller with computer vision, in 2016 International Conference on Computing Communication Control and Automation, 2016, 1–4. doi: 10.1109/ICCUBEA.2016.7860048.

[4]

C. Chen, A. Seff, A. Kornhauser and J. Xiao, DeepDriving: Learning affordance for direct perception in autonomous driving, in IEEE International Conference on Computer Vision, 2015, 2722–2730. doi: 10.1109/ICCV.2015.312.

[5]

Z. Chen and X. Huang, End-to-end learning for lane keeping of self-driving cars, in IEEE Intelligent Vehicles Symposium, IEEE, 2017, 1856–1860. doi: 10.1109/IVS.2017.7995975.

[6]

F. Codevilla, M. Müller, A. Lopez, V. Koltun and A. Dosovitskiy, End-to-end driving via conditional imitation learning, in IEEE International Conference on Robotics and Automation, IEEE, 2018, 4693–4700. doi: 10.1109/ICRA.2018.8460487.

[7]

D. Dörr, D. Grabengiesser and F. Gauterin, Online driving style recognition using fuzzy logic, in 17th International IEEE Conference on Intelligent Transportation Systems, IEEE, 2014, 1021–1026. doi: 10.1109/ITSC.2014.6957822.

[8]

X. Liang, T. Wang, L. Yang and E. Xing, CIRL: controllable imitative reinforcement learning for vision-based self-driving, in Proceedings of the European Conference on Computer Vision, 2018, 604–620. doi: 10.1007/978-3-030-01234-2_36.

[9]

L. J. Lin, Reinforcement Learning for Robots Using Neural Networks, Ph.D. thesis, Carnegie Mellon University, Pittsburgh, 1993.

[10]

R. R. Meganathan, A. A. Kasi and S. Jagannath, Computer vision based novel steering angle calculation for autonomous vehicles, in 2018 Second IEEE International Conference on Robotic Computing, 2018, 143–146.

[11]

N. Shibuya, car-behavioral-cloning, GitHub repository. Available from: https://github.com/naokishibuya/car-behavioral-cloning.

[12]

V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra and M. Riedmiller, Playing Atari with deep reinforcement learning, preprint, arXiv: 1312.5602.

[13]

V. Mnih et al., Human-level control through deep reinforcement learning, Nature, 518 (2015), 529-533. doi: 10.1038/nature14236.

[14]

C. J. Pretorius, M. C. du Plessis and J. W. Gonsalves, The transferability of evolved hexapod locomotion controllers from simulation to real hardware, in 2017 IEEE International Conference on Real-time Computing and Robotics, 2017, 567–574. doi: 10.1109/RCAR.2017.8311923.

[15]

Understanding the Fatal Tesla Accident on Autopilot and the NHTSA Probe, Electrek, 2016. Available from: https://electrek.co/2016/07/01/understanding-fatal-tesla-accident-autopilot-nhtsa-probe/.

[16]

M. Sadeghzadeh, D. Calvert and H. A. Abdullah, Self-learning visual servoing of robot manipulator using explanation-based fuzzy neural networks and Q-learning, Journal of Intelligent and Robotic Systems, 78 (2015), 83-104. doi: 10.1007/s10846-014-0151-5.

[17]

R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, 2nd edition, Adaptive Computation and Machine Learning, MIT Press, Cambridge, MA, 2018.

[18]

Donkey Car documentation. Available from: https://docs.donkeycar.com/.

[19]

Donkey Car, GitHub repository. Available from: https://github.com/autorope/donkeycar.

[20]

H. van Hasselt, A. Guez and D. Silver, Deep reinforcement learning with double Q-learning, in Thirtieth AAAI Conference on Artificial Intelligence, 2016, 2094–2100.

[21]

D. Wang, J. Wen, Y. Wang, X. Huang and F. Pei, End-to-end self-driving using deep neural networks with multi-auxiliary tasks, Automotive Innovation, 2 (2019), 127-136. doi: 10.1007/s42154-019-00057-1.

[22]

C. J. Watkins and P. Dayan, Q-learning, Machine Learning, 8 (1992), 279-292.  doi: 10.1007/BF00992698.

[23]

T. Yamawaki and M. Yashima, Application of Adam to iterative learning for an in-hand manipulation task, ROMANSY 22 Robot Design, Dynamics and Control, 584 (2019), 272-279. doi: 10.1007/978-3-319-78963-7_35.


Figure 1.  The reinforcement learning scale car based on DDQN
Figure 2.  A 1:16 scale car. Donkeycar [18] is an open-source DIY self-driving platform for small-scale cars
Figure 3.  The process of reinforcement learning
Figure 4.  The architecture of the network
Figure 5.  Examples of raw images converted into segmented images
Figure 6.  The learning curve of average reward versus training episodes
Figure 7.  The scale car in the Unity simulation
Figure 8.  The road for the self-driving scale car, which contains two sharp curves and two gentle curves
Figure 9.  The trained self-driving scale car
Figure 10.  An obstacle is added to the road; the car's camera view is shown in the lower left of the figure
Figure 11.  Five obstacles in the left figure and three obstacles in the right figure
Table 1.  Performance of CNN and DDQN on the same road. Each number is the count of times the car went outside the road.

Laps (lighting)    CNN    DDQN
5 (night)            3      0
10 (daylight)        4      1
10 (night)           8      2
15 (daylight)        9      1
15 (night)          12      3
Table 2.  Performance of CNN and DDQN on the same road with obstacle(s), over five laps. Each number is the count of times the car hit the obstacle(s).

Obstacles    CNN    DDQN
3              3      0
5              4      0