Automatic Modeling Method of Support Behavior for Work Support System Based on Variational Deep Embedding–Generative Adversarial Networks

  • Kohjiro Hashimoto
  • Tadashi Miyosawa
  • Tetsuyasu Yamada
  • Suwa University of Science, 5000-1 Toyohira, Chino, Nagano 391-0292, Japan
Cite as
Hashimoto K., Miyosawa T., Yamada T. (2020). Automatic Modeling Method of Support Behavior for Work Support System Based on Variational Deep Embedding–Generative Adversarial Networks. Proceedings of the 19th International Conference on Modeling & Applied Simulation (MAS 2020), pp. 123-130. DOI: https://doi.org/10.46354/i3m.2020.mas.016

Abstract

In Japan, small and medium-sized enterprises often handle people-centered work that cannot be automated. However, the shortage of work trainers has become a serious problem with the declining birthrate and aging population, and an education system for novice workers is needed. A head-mounted display (HMD) can present instruction information in the wearer's field of view in real time; it is therefore expected that even a novice worker will be able to perform complicated tasks if appropriate instruction information is designed and presented. However, it is difficult for a designer to specify the support behavior of the system by hand when many patterns of work must be supported. Therefore, it is necessary to establish a technique that can automatically generate the support behavior of the system. In this paper, we propose a deep learning model for automatically generating the support behavior of the system. The repair of a laptop computer is taken as a working example, and it is assumed that the HMD presents the next work location as the instruction information. The proposed model automatically generates both a work process model and the instruction information to be presented.
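The full model is described in the body of the paper; as a rough illustration of the combination named in the title, the following PyTorch sketch pairs a VaDE-style encoder with a Gaussian-mixture latent prior (each mixture component standing in for one work step of the work process model) together with a GAN generator and discriminator. All layer sizes, the 10-component prior, the 16-dimensional latent space, and the flattened 64×64 input are assumptions made for illustration, not the authors' actual configuration.

```python
# Minimal VaDE-GAN-style sketch (illustrative only; all dimensions and
# layer choices below are assumptions, not the paper's configuration).
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT, CLUSTERS, INPUT = 16, 10, 64 * 64  # assumed sizes

class Encoder(nn.Module):
    """q(z|x): maps a work-scene image to a Gaussian latent code."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(INPUT, 256), nn.ReLU())
        self.mu = nn.Linear(256, LATENT)
        self.logvar = nn.Linear(256, LATENT)

    def forward(self, x):
        h = self.body(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return z, mu, logvar

class Generator(nn.Module):
    """p(x|z): reconstructs (generates) a scene image from the latent code."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                                  nn.Linear(256, INPUT), nn.Sigmoid())

    def forward(self, z):
        return self.body(z)

class Discriminator(nn.Module):
    """GAN critic distinguishing real scenes from generated ones."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(INPUT, 256), nn.LeakyReLU(0.2),
                                  nn.Linear(256, 1))

    def forward(self, x):
        return self.body(x)

class GMMPrior(nn.Module):
    """VaDE-style Gaussian-mixture prior; one component per hypothetical work step."""
    def __init__(self):
        super().__init__()
        self.pi = nn.Parameter(torch.zeros(CLUSTERS))          # mixture logits
        self.mu = nn.Parameter(torch.randn(CLUSTERS, LATENT))  # cluster means
        self.logvar = nn.Parameter(torch.zeros(CLUSTERS, LATENT))

    def responsibilities(self, z):
        # log N(z | mu_c, sigma_c^2) + log pi_c, normalized over clusters;
        # the constant term cancels in the softmax.
        z = z.unsqueeze(1)                                     # (B, 1, LATENT)
        log_n = -0.5 * (self.logvar + (z - self.mu) ** 2 / self.logvar.exp())
        logits = log_n.sum(-1) + F.log_softmax(self.pi, dim=0)
        return logits.softmax(dim=1)                           # q(c|z)

# One unsupervised forward pass: the argmax cluster id plays the role of
# the inferred work step in the automatically generated work process model.
enc, gen, disc, prior = Encoder(), Generator(), Discriminator(), GMMPrior()
x = torch.rand(4, INPUT)                          # batch of flattened scenes
z, mu, logvar = enc(x)
step = prior.responsibilities(z).argmax(dim=1)    # estimated work step per image
d_real, d_fake = disc(x), disc(gen(z))            # adversarial terms
print(step.shape, d_real.shape)
```

In this reading, the VaDE branch clusters scene images into work steps without labels, while the adversarial branch encourages the generator to produce realistic scenes for each step; how the clusters are ordered into a process model and mapped to presented instruction information is specific to the paper.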
