Visual Commands for Control of Food Assistance Robot

Javier O. Pinzón-Arenas, Robinson Jimenez-Moreno

Abstract


Assistance robots improve people's quality of life in residential and office tasks, especially for people with physical limitations. For the elderly or people with upper-limb motor disabilities, an assistance robot for food support is necessary. This development is based on a mixed environment, in which a real and a virtual environment work interactively. A camera located 60 cm in front of the user provides a sufficient visual range to capture the user's hand gestures for the commands. Pattern recognition based on a deep learning algorithm is performed with convolutional neural networks to identify the user's hand gestures. This work presents the network's training and the results of executing the robot commands. A virtual environment is presented in which a robotic arm with a spoon-like effector is driven by a machine vision system that recognizes eight different types of commands for the robot, achieved by training a Faster R-CNN network on a database of 640 images and reaching a system performance of over 95%. The average time to execute a cycle, from detecting and identifying the command gesture to moving the robot towards the food and back in front of the user, is 21 seconds, making the development useful for real-time applications.
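The dispatch stage summarized above, in which a detected gesture is mapped to one of eight robot commands, can be sketched as follows. This is a minimal illustrative sketch only: the gesture labels, the command names, the `dispatch_command` function, and the confidence threshold are assumptions for demonstration, not the authors' implementation (the paper's detector is a Faster R-CNN, which is stubbed out here).

```python
# Hypothetical sketch of the gesture-to-command dispatch described in the
# abstract: a detector (e.g., a Faster R-CNN, not implemented here) returns
# a gesture label and a confidence score, which is mapped to one of eight
# robot commands. All names below are illustrative assumptions.

# Eight gesture classes mapped to robot commands (illustrative labels).
GESTURE_COMMANDS = {
    "open_hand":     "start_feeding",
    "closed_fist":   "stop",
    "one_finger":    "select_food_1",
    "two_fingers":   "select_food_2",
    "three_fingers": "select_food_3",
    "thumb_up":      "confirm",
    "thumb_down":    "cancel",
    "palm_side":     "return_home",
}

# Illustrative per-detection confidence cutoff (the paper reports >95%
# overall system performance, which is not the same quantity).
CONFIDENCE_THRESHOLD = 0.95

def dispatch_command(label, score):
    """Return the robot command for a detected gesture, or None when the
    detection falls below the threshold or the label is unknown."""
    if score < CONFIDENCE_THRESHOLD:
        return None
    return GESTURE_COMMANDS.get(label)
```

A real pipeline would feed each camera frame through the trained detector and pass its highest-scoring detection into `dispatch_command`; rejecting low-confidence detections avoids moving the arm on spurious gestures.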

Keywords


Convolutional neural network; faster R-CNN; assistance robot; virtual environment.

References


Elly A. Konijn, Johan F. Hoorn, Robot tutor and pupils' educational ability: Teaching the times tables, Computers & Education, Volume 157, 2020, 103970, ISSN 0360-1315, https://doi.org/10.1016/j.compedu.2020.103970.

Beiqun Zhao, Hannah M. Hollandsworth, Arielle M. Lee, Jenny Lam, Nicole E. Lopez, Benjamin Abbadessa, Samuel Eisenstein, Bard C. Cosman, Sonia L. Ramamoorthy, Lisa A. Parry, Making the Jump: A Qualitative Analysis on the Transition From Bedside Assistant to Console Surgeon in Robotic Surgery Training, Journal of Surgical Education, Volume 77, Issue 2, 2020, Pages 461-471, ISSN 1931-7204, https://doi.org/10.1016/j.jsurg.2019.09.015.

Giancarlo Albo, Elisa De Lorenzis, Andrea Gallioli, Luca Boeri, Stefano P. Zanetti, Fabrizio Longo, Bernardo Rocco, Emanuele Montanari, Role of Bed Assistant During Robot-assisted Radical Prostatectomy: The Effect of Learning Curve on Perioperative Variables, European Urology Focus, Volume 6, Issue 2, 2020, Pages 397-403, ISSN 2405-4569, https://doi.org/10.1016/j.euf.2018.10.005.

L. Bresler, M. Perez, J. Hubert, J.P. Henry, C. Perrenot, Residency training in robotic surgery: The role of simulation, Journal of Visceral Surgery, Volume 157, Issue 3, Supplement 2, 2020, Pages S123-S129, ISSN 1878-7886, https://doi.org/10.1016/j.jviscsurg.2020.03.006.

Yoji Yamada, Yasuhiro Akiyama, Chapter 14 - Physical Assistant Robot Safety, Editor(s): Jacob Rosen, Peter Walker Ferguson, Wearable Robotics, Academic Press, 2020, Pages 275-299, ISBN 9780128146590, https://doi.org/10.1016/B978-0-12-814659-0.00014-X.

Francesco Lanza, Valeria Seidita, Antonio Chella, Agents and robots for collaborating and supporting physicians in healthcare scenarios, Journal of Biomedical Informatics, Volume 108, 2020, 103483, ISSN 1532-0464, https://doi.org/10.1016/j.jbi.2020.103483.

Jasmin Grischke, Lars Johannsmeier, Lukas Eich, Leif Griga, Sami Haddadin, Dentronics: Towards robotics and artificial intelligence in dentistry, Dental Materials, Volume 36, Issue 6, 2020, Pages 765-778, ISSN 0109-5641, https://doi.org/10.1016/j.dental.2020.03.021.

Ayman A Abu Baker, Yazeed Ghadi, Mobile robot controller using novel hybrid system, International Journal of Electrical and Computer Engineering (IJECE), Vol 10, No 1: February 2020, p. 1027-1034.

Pietro Perconti, Alessio Plebe, Deep learning and cognitive science, Cognition, Volume 203, 2020, 104365, ISSN 0010-0277, https://doi.org/10.1016/j.cognition.2020.104365.

Qing Gao, Jinguo Liu, Zhaojie Ju, Robust real-time hand detection and localization for space human–robot interaction based on deep learning, Neurocomputing, Volume 390, 2020, Pages 198-206, ISSN 0925-2312, https://doi.org/10.1016/j.neucom.2019.02.066.

Jiménez-Moreno Robinson, Pinzón-Arenas Javier Orlando, Pachón-Suescún César Giovany, Assistant robot through deep learning, International Journal of Electrical and Computer Engineering (IJECE), Vol 10, No 1: February 2020, p. 1053-1062.

Ivet Rafegas, Maria Vanrell, Luís A. Alexandre, Guillem Arias, Understanding trained CNNs by indexing neuron selectivity, Pattern Recognition Letters, 2019, ISSN 0167-8655. https://doi.org/10.1016/j.patrec.2019.10.013.

Paula Useche, Robinson Jimenez-Moreno, Javier Martinez Baquero, Algorithm of detection, classification and gripping of occluded objects by CNN techniques and Haar classifiers, International Journal of Electrical and Computer Engineering (IJECE), Vol 10, No 5: October 2020, p. 4712-4720.

A-reum Lee, Yongwon Cho, Seongho Jin, Namkug Kim, Enhancement of surgical hand gesture recognition using a capsule network for a contactless interface in the operating room, Computer Methods and Programs in Biomedicine, Volume 190, 2020, 105385, ISSN 0169-2607, https://doi.org/10.1016/j.cmpb.2020.105385.

Safa Ameur, Anouar Ben Khalifa, Med Salim Bouhlel, A novel hybrid bidirectional unidirectional LSTM network for dynamic hand gesture recognition with Leap Motion, Entertainment Computing, Volume 35, 2020, 100373, ISSN 1875-9521, https://doi.org/10.1016/j.entcom.2020.100373.

Linpu Fang, Ningxin Liang, Wenxiong Kang, Zhiyong Wang, David Dagan Feng, Real-time hand posture recognition using hand geometric features and Fisher Vector, Signal Processing: Image Communication, Volume 82, 2020, 115729, ISSN 0923-5965, https://doi.org/10.1016/j.image.2019.115729.

Yifan Zhang, Lei Shi, Yi Wu, Ke Cheng, Jian Cheng, Hanqing Lu, Gesture recognition based on deep deformable 3D convolutional neural networks, Pattern Recognition, Volume 107, 2020, 107416, ISSN 0031-3203, https://doi.org/10.1016/j.patcog.2020.107416.

Adithya V., Rajesh R., A Deep Convolutional Neural Network Approach for Static Hand Gesture Recognition, Procedia Computer Science, Volume 171, 2020, Pages 2353-2361, ISSN 1877-0509, https://doi.org/10.1016/j.procs.2020.04.255.

Edwin Jonathan Escobedo Cardenas, Guillermo Camara Chavez, Multimodal hand gesture recognition combining temporal and pose information based on CNN descriptors and histogram of cumulative magnitudes, Journal of Visual Communication and Image Representation, Volume 71, 2020, 102772, ISSN 1047-3203, https://doi.org/10.1016/j.jvcir.2020.102772.

A. Mariani, E. Pellegrini and E. De Momi, "Skill-oriented and Performance-driven Adaptive Curricula for Training in Robot-Assisted Surgery using Simulators: a Feasibility Study," in IEEE Transactions on Biomedical Engineering, doi: 10.1109/TBME.2020.3011867.

Christian Brecher, Stephan Wein, Xiaomei Xu, Simon Storms, Werner Herfs, Simulation Framework for Virtual Robot Programming in Reconfigurable Production Systems, Procedia CIRP, Volume 86, 2019, Pages 98-103, ISSN 2212-8271, https://doi.org/10.1016/j.procir.2020.01.045.

Handy Wicaksono, Claude Sammut, A cognitive robot equipped with autonomous tool innovation expertise, International Journal of Electrical and Computer Engineering (IJECE), Vol 10, No 2: April 2020, p. 2200-2207.

Ren, Shaoqing, et al. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems. 2015. p. 91-99.

He, Kaiming, et al. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. p. 770-778.




DOI: http://dx.doi.org/10.18517/ijaseit.11.4.12694



Published by INSIGHT - Indonesian Society for Knowledge and Human Development