A Review of Multimodal Interaction Technique in Augmented Reality Environment

Siti Soleha Muhammad Nizam, Rimaniza Zainal Abidin, Nurhazarifah Che Hashim, Meng Chun Lam, Haslina Arshad, Nazatul Aini Abd Majid

Abstract


Augmented Reality (AR) supports several types of interaction techniques, such as 3D interaction, natural interaction, tangible interaction, spatially aware interaction and multimodal interaction. Interaction in AR typically relies on a unimodal technique, in which the user interacts with AR content through a single modality such as gesture, speech or touch. The combination of two or more modalities is called multimodal interaction. Multimodality can make human-computer interaction more efficient and can improve the user experience, because many issues arise when users rely on a single modality in an AR environment, such as the "fat finger" problem on touch screens. Recent research shows that multimodal interfaces (MMI) have been explored in AR environments and applied in various domains. This paper presents an empirical study of some key aspects and issues in multimodal interaction for augmented reality, touching on interaction techniques and system frameworks. We address two questions: which interaction techniques have been used to perform multimodal interaction in AR environments, and which integrated components are applied in multimodal interaction AR frameworks. These two questions are analysed to identify trends in the multimodal field, which is the main contribution of this paper. We found that gesture, speech and touch are the modalities most frequently combined to manipulate virtual objects. Most of the integrated components in MMI AR frameworks are discussed only at the level of the framework's conceptual components or the information-centred design between those components. Finally, we conclude the paper with ideas for future work in this field.
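To illustrate the idea of combining modalities, the sketch below fuses a speech event (which supplies the verb) with a gesture event (which supplies the target object and spatial parameters) into a single command on a virtual object. This is a minimal, hypothetical example of the fusion concept the review discusses; the event types and field names are assumptions, and real AR frameworks emit much richer data (timestamps, confidence scores, 3D poses).

```python
from dataclasses import dataclass

# Hypothetical event types for illustration; not taken from any
# specific framework surveyed in the paper.
@dataclass
class SpeechEvent:
    command: str            # recognized verb, e.g. "move", "rotate"

@dataclass
class GestureEvent:
    target: str             # virtual object the user points at
    vector: tuple           # direction/magnitude of the gesture

def fuse(speech: SpeechEvent, gesture: GestureEvent) -> dict:
    """Combine complementary modalities into one action:
    speech contributes the operation, gesture contributes
    the object and its spatial parameters."""
    return {
        "action": speech.command,
        "object": gesture.target,
        "params": gesture.vector,
    }

action = fuse(SpeechEvent("move"), GestureEvent("cube", (1.0, 0.0, 0.0)))
print(action)  # {'action': 'move', 'object': 'cube', 'params': (1.0, 0.0, 0.0)}
```

In practice a fusion engine would also align the two event streams in time and resolve conflicts between modalities, which is where most of the framework-level design effort in the surveyed systems lies.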


Keywords


review; multimodal interaction technique; augmented reality






DOI: http://dx.doi.org/10.18517/ijaseit.8.4-2.6824




Published by INSIGHT - Indonesian Society for Knowledge and Human Development