Path Planning and Control for Omni-directional Mobile Robot using Q-Learning and CLIK Algorithm for Home Environment

  • Saurabh Sachan,
  • P. M. Pathak
  • Department of Mechanical and Industrial Engineering, Indian Institute of Technology Roorkee, Roorkee, 247667, India
Cite as
Sachan, S. and Pathak, P. M. (2022). Path Planning and Control for Omni-directional Mobile Robot using Q-Learning and CLIK Algorithm for Home Environment. In Proceedings of the 21st International Conference on Modelling and Applied Simulation (MAS 2022), 027. DOI: https://doi.org/10.46354/i3m.2022.mas.027

Abstract

Localization, mapping, and planning are the three crucial steps in accomplishing autonomous navigation of mobile robots in an unfamiliar environment. The implementation of reinforcement learning (RL) algorithms for the autonomous navigation of omni-directional robots is a relatively unexplored research area, and such robots have a unique capability over differential-drive robots: they can also move sideways. In this paper, therefore, an RL algorithm, Q-learning, is used to obtain a safe and shortest path from a start point (SP) to a goal point (GP) in a home environment. The path trajectories are obtained using polynomial curve fitting, and a closed-loop inverse kinematics (CLIK) algorithm is used to control a three-wheeled omni-directional mobile robot so that it follows the desired path. The simulation and plotting are done in MATLAB. The simulation results show that the suggested algorithm can effectively recognize and avoid static obstacles of different shapes and dimensions in an indoor home environment.
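As an illustrative sketch of the planning step only, and not the paper's MATLAB implementation, the Python snippet below runs tabular Q-learning on a small hypothetical occupancy grid. The grid layout, reward values, and hyperparameters (ALPHA, GAMMA, EPS, EPISODES) are all assumptions made for this example: a collision penalty encodes safety, a per-step cost encodes path shortness, and a greedy rollout of the learned Q-table yields the waypoints from SP to GP.

```python
import numpy as np

# Hypothetical 2-D occupancy grid: 0 = free cell, 1 = static obstacle.
GRID = np.array([
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
])
START, GOAL = (0, 0), (3, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

ALPHA, GAMMA, EPS, EPISODES = 0.1, 0.95, 0.2, 5000  # assumed hyperparameters
Q = np.zeros((*GRID.shape, len(ACTIONS)))            # Q-table over (row, col, action)
rng = np.random.default_rng(0)

def step(state, a):
    """Apply action a; leaving the grid or hitting an obstacle keeps the agent in place."""
    r, c = state[0] + ACTIONS[a][0], state[1] + ACTIONS[a][1]
    if not (0 <= r < GRID.shape[0] and 0 <= c < GRID.shape[1]) or GRID[r, c] == 1:
        return state, -5.0        # collision penalty keeps the path safe
    if (r, c) == GOAL:
        return (r, c), 100.0      # reward for reaching the goal point (GP)
    return (r, c), -1.0           # per-step cost favors the shortest path

for _ in range(EPISODES):
    s = START
    for _ in range(200):          # cap episode length
        # Epsilon-greedy action selection: explore with probability EPS.
        a = rng.integers(len(ACTIONS)) if rng.random() < EPS else int(np.argmax(Q[s]))
        s_next, reward = step(s, a)
        # Q-learning update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s][a] += ALPHA * (reward + GAMMA * np.max(Q[s_next]) - Q[s][a])
        s = s_next
        if s == GOAL:
            break

# Greedy rollout of the learned policy yields the planned waypoints (SP -> GP).
path, s = [START], START
while s != GOAL and len(path) < GRID.size:
    s, _ = step(s, int(np.argmax(Q[s])))
    path.append(s)
print(path)
```

In the paper's pipeline, such waypoints are then smoothed by polynomial curve fitting and tracked by the CLIK controller. A common textbook form of such a control law (the paper's exact gains and Jacobian are specific to the three-wheeled platform) is q̇ = J⁺(q)(ẋ_d + Ke), where e = x_d − x is the task-space tracking error, J⁺ the pseudoinverse of the robot Jacobian, and K a positive-definite gain matrix.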
