Acoustic AR-TA agent using footsteps corresponding to audience members' participating attitudes

  • Yuki Kitagishi (a)
  • Tomoko Yonezawa (b)
  • (a) Kansai University, Graduate School of Informatics
  • (b) Kansai University, Faculty of Informatics
Cite as
Y. Kitagishi, T. Yonezawa (2018). Acoustic AR-TA agent using footsteps corresponding to audience members' participating attitudes. Proceedings of the 4th International Conference of the Virtual and Augmented Reality in Education (VARE 2018), pp. 113-122. DOI: https://doi.org/10.46354/i3m.2018.vare.018

Abstract

We propose an acoustic AR-TA agent (AATA) that reacts to audience members' participating attitudes in a lecture. In one-to-many communication such as a lecture with an audience of more than 100, some audience members do not focus on the lecture, and it is difficult to draw their attention back to it. To improve such audience members' attitudes, we propose an AATA expressed by moving localized footstep sounds with a direction-controllable parametric speaker. Based on the hypothesis that footsteps approaching an audience member will indirectly make her/him notice her/his problematic attitude, we conducted two experiments. The results show that the participants felt as though someone was walking around them when they perceived the movement of the footsteps, as if a walking lecturer or TA were the source of the footsteps. Accordingly, we discuss the potential of the AATA's movement to warn audience members.
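The abstract describes the approach only at a high level. As a minimal sketch of how the moving footstep sound could be driven (not the authors' implementation; the speaker mount position, stride length, step interval, and straight-line path below are illustrative assumptions), each virtual footstep position can be converted into a pan angle for a steerable parametric speaker:

    # Minimal sketch: move a virtual footstep source toward an audience
    # seat and convert each step position into a pan angle for a
    # direction-controllable parametric speaker. All parameters are
    # illustrative assumptions.
    import math
    import time

    SPEAKER_POS = (0.0, 0.0)   # assumed mount point of the steerable speaker (m)
    STEP_LENGTH = 0.7          # assumed stride of the virtual walker (m)
    STEP_INTERVAL = 0.5        # assumed time between footsteps (s)

    def pan_angle(target):
        """Angle (degrees) the speaker must turn to aim at `target`."""
        dx = target[0] - SPEAKER_POS[0]
        dy = target[1] - SPEAKER_POS[1]
        return math.degrees(math.atan2(dy, dx))

    def walk_footsteps(start, seat):
        """Yield ((x, y), angle) for each footstep from `start` toward `seat`."""
        dx, dy = seat[0] - start[0], seat[1] - start[1]
        dist = math.hypot(dx, dy)
        steps = max(1, int(dist / STEP_LENGTH))
        for i in range(steps + 1):
            t = i / steps
            pos = (start[0] + t * dx, start[1] + t * dy)
            yield pos, pan_angle(pos)

    if __name__ == "__main__":
        # Approach a seat 4 m ahead and 2 m to the right of the lectern.
        for pos, angle in walk_footsteps(start=(0.0, 1.0), seat=(2.0, 5.0)):
            print(f"play footstep at ({pos[0]:.2f}, {pos[1]:.2f}) m, "
                  f"steer speaker to {angle:.1f} deg")
            time.sleep(STEP_INTERVAL)

In a real setup the printed command would instead trigger a footstep sample and actuate the speaker's pan mechanism; the linear interpolation is the simplest path model and could be replaced by any walking trajectory.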
