Using a virtual dataset for deep learning: improving real-world environment re-creation for human training

  • Troyle Thomas
  • Jonathan Hurter
  • Terrence Winston
  • Dean Reed 
  • Latika “Bonnie” Eifert
  • Institute for Simulation and Training, 3100 Technology Pkwy, Orlando, FL 32708, USA
  • U.S. Army Futures Command, CCDC-SC, STTC, 12423 Research Pkwy, Orlando, FL 32826, USA
Cite as
Thomas T., Hurter J., Winston T., Reed D., Eifert L. (2020). Using a virtual dataset for deep learning: improving real-world environment re-creation for human training. Proceedings of the 19th International Conference on Modeling & Applied Simulation (MAS 2020), pp. 26-33. DOI: https://doi.org/10.46354/i3m.2020.mas.004

Abstract

An ideal tool for re-creating real-world environments as virtual environments from real-world imagery would be both effective and efficient. Such virtual environments have applications in training. This paper focuses on detecting real-world objects from an unmanned aerial system’s sensors and, in turn, injecting corresponding objects into a virtual environment. As a step towards this ideal tool, the You Only Look Once (YOLO) object detection model (a type of machine learning algorithm) was trained on virtual models of poles (e.g., light poles) and then tested on recognising poles. A precision-recall curve was used to report performance. The final analysis suggests a large domain gap between the virtual models used and their real-world counterparts, owing to the fidelity of the virtual models; our 3D-modelling technique is contrasted with other techniques from previous literature. Further, this paper details a novel Unity Terrain Importer tool as it applies to a re-creation pipeline. The importer tool aims to reduce current fidelity and performance limitations in the Unity game engine.
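As an illustration of the evaluation summarised above, the sketch below shows one common way to build a precision-recall curve for a single object class (here, poles) from detector output. It is not the authors' evaluation code: the 0.5 IoU matching threshold, the data layout (a confidence score plus a corner-format bounding box per detection), and the greedy matching scheme are assumptions made for demonstration.

```python
# Minimal sketch (illustrative, not the authors' evaluation code): computing a
# precision-recall curve for single-class detections, assuming each detection
# carries a confidence score and is matched to ground truth by IoU >= 0.5.

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def precision_recall_curve(detections, ground_truth, iou_threshold=0.5):
    """detections: list of (confidence, box); ground_truth: list of boxes.
    Returns parallel lists of precision and recall values, one point per
    detection, sweeping the confidence threshold from high to low."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    matched = [False] * len(ground_truth)
    tp = fp = 0
    precisions, recalls = [], []
    for _, box in detections:
        # Greedily match the detection to the best still-unmatched ground-truth box.
        best_iou, best_idx = 0.0, -1
        for i, gt_box in enumerate(ground_truth):
            overlap = iou(box, gt_box)
            if overlap > best_iou and not matched[i]:
                best_iou, best_idx = overlap, i
        if best_iou >= iou_threshold:
            matched[best_idx] = True
            tp += 1
        else:
            fp += 1
        precisions.append(tp / (tp + fp))
        recalls.append(tp / len(ground_truth))
    return precisions, recalls

# Example: two pole detections evaluated against one ground-truth pole.
if __name__ == "__main__":
    gt = [(10, 10, 20, 60)]
    dets = [(0.9, (11, 12, 21, 58)), (0.4, (100, 100, 110, 150))]
    p, r = precision_recall_curve(dets, gt)
    print(list(zip(p, r)))  # [(1.0, 1.0), (0.5, 1.0)]
```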
