An ideal tool for re-creating real-world environments as virtual environments would do so effectively and efficiently by utilising real-world imagery. Such virtual environments have applications in training. This paper focuses on how to detect real-world objects from an unmanned aerial system's sensors and, in turn, inject corresponding objects into a virtual environment. As a step towards this ideal tool, the You Only Look Once (YOLO) object detection model, a machine learning algorithm, was trained on virtual models of poles (e.g., light poles) and then tested on recognising poles. Performance was evaluated with a precision-recall curve.
Final analysis suggests a large domain gap between the virtual models used and their real-world counterparts, attributable to the limited fidelity of the virtual models; our 3D-modelling technique is contrasted with other techniques from previous literature. Further, this paper details a novel Unity Terrain Importer tool and its application within a re-creation pipeline. The importer tool is designed to reduce current fidelity and performance limitations in the Unity game engine.
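As context for the precision-recall evaluation mentioned above, the sketch below shows one standard way such a curve can be computed from ranked detections, assuming each detection has already been matched against ground-truth poles (for example, via an IoU threshold). The function name, variable names, and example values are illustrative assumptions rather than details taken from this paper.

```python
import numpy as np

def precision_recall_curve(confidences, is_true_positive, num_ground_truth):
    """Precision and recall at each rank, with detections sorted by confidence."""
    order = np.argsort(confidences)[::-1]          # highest confidence first
    hits = np.asarray(is_true_positive, dtype=float)[order]
    tp = np.cumsum(hits)                           # cumulative true positives
    fp = np.cumsum(1.0 - hits)                     # cumulative false positives
    precision = tp / np.maximum(tp + fp, 1e-9)     # share of detections that are real poles
    recall = tp / max(num_ground_truth, 1)         # share of real poles that were detected
    return precision, recall

# Hypothetical detector output: five detections scored against four annotated poles.
conf = [0.95, 0.90, 0.70, 0.60, 0.30]
matched = [True, True, False, True, False]
p, r = precision_recall_curve(conf, matched, num_ground_truth=4)
print(list(zip(r.round(2), p.round(2))))
```

Sweeping the confidence threshold in this way traces out the precision-recall trade-off used to summarise detector performance.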