An Application of a Visual Cognitive Assistance System for Braille Block Recognition Based on the Raspberry Pi System
Assistive products for blind and vision-impaired persons - Tactile walking surface indicators, ISO 23599, 2012.
Braille Blocks for the Visually Impaired, KS F 4561, Korean Agency for Technology and Standard, 2013.
M. C. Ghilardi, R. C. Macedo, I. H. Manssour, “A new Approach for Automatic Detection of Tactile Paving Surfaces in Sidewalks,” Procedia Computer Science, vol. 80, pp. 662–672, Jun. 2016.
K. Zhou, C. Hu, H. Zhang, Y. Hu, B. Xie, “Why do we hardly see people with visual impairments in the street? A case study of Changsha, China,” Applied Geography, vol. 110, article 102043, Sep. 2019.
D. Kanoulas, N. G. Tsagarakis, M. Vona, “Curved patch mapping and tracking for irregular terrain modeling: Application to bipedal robot foot placement,” Robotics and Autonomous Systems, vol. 119, pp. 13-30, Sep. 2019.
Ministry of Land, Infrastructure and Transport, Actual Condition Survey on Mobility Conveniences for Transportation Poor, South Korea, 2013.
V. Tink, S. Porritt, D. Allinson, D. Loveday, “Measuring and mitigating overheating risk in solid wall dwellings retrofitted with internal wall insulation,” Building and Environment, vol. 141, pp. 247-261, Aug. 2018.
M. Lee, I. Park, “Performance evaluation of local descriptors for maximally stable extremal regions,” Journal of Visual Communication and Image Representation, vol. 47, pp. 62-72, Aug. 2017.
L. M. Dang, S. I. Hassan, S. Im, I. Mehmood, H. Moon, “Utilizing text recognition for the defects extraction in sewers CCTV inspection videos,” Computers in Industry, vol. 99, pp. 96-109, Aug. 2018.
G. A. Robby, A. Tandra, I. Susanto, J. Harefa, A. Chowanda, “Implementation of Optical Character Recognition using Tesseract with the Javanese Script Target in Android Application,” Procedia Computer Science, vol. 157, pp. 499-505, Oct. 2019, DOI: 10.1016/j.procs.2019.09.006, [Online].
C. Clausner, S. Pletschacher, A. Antonacopoulos, “Flexible character accuracy measure for reading-order-independent evaluation,” Pattern Recognition Letters, vol. 131, pp. 390-397, Mar. 2020, DOI: 10.1016/j.patrec.2020.02.003, [Online].
S. Yousfi, S. Berrani, C. Garcia, “Contribution of recurrent connectionist language models in improving LSTM-based Arabic text recognition in videos,” Pattern Recognition, vol. 64, pp. 245-254, Apr. 2017.
V. C. Kieu, F. Cloppet, N. Vincent, “Adaptive fuzzy model for blur estimation on document images,” Pattern Recognition Letters, vol. 86, pp. 42-48, Jan. 2017.
T. Froneman, D. Heever, K. Dellimore, “Development of a wearable support system to aid the visually impaired in independent mobilization and navigation,” in Proc. IEEE EMBC, Jeju Island, Korea, 2017, pp. 783-786.
K. Y. Chan, U. Engelke, N. Abhayasinghe, “An edge detection framework conjoining with IMU data for assisting indoor navigation of visually impaired persons,” Expert Systems with Applications, vol. 67, pp. 272-284, Jan. 2017.
R. Tapu, B. Mocanu, T. Zaharia, “Wearable assistive devices for visually impaired: A state of the art survey,” Pattern Recognition Letters, vol. 137, pp. 37-52, Sep. 2020.
A. P. Finn, F. Tripp, D. Whitaker, L. Vajzovic, “Synergistic Visual Gains Attained using Argus II Retinal Prosthesis with OrCam MyEye,” Ophthalmology Retina, vol. 2, pp. 382-384, Apr. 2018.
N. Niknejad, W. B. Ismail, A. Mardani, H. Liao, I. Ghani, “A comprehensive overview of smart wearables: The state of the art literature, recent advances, and future challenges,” Engineering Applications of Artificial Intelligence, vol. 90, article 103529, Apr. 2020.
MIT News, “Wearable system helps visually impaired users navigate”, May, 2017. [Online]. Available: http://news.mit.edu/2017/wearable-visually-impaired-users-navigate-0531.
M. C. Rodriguez-Sanchez, J. Martinez-Romo, “GAWA – Manager for accessibility Wayfinding apps,” International Journal of Information Management, vol. 37, no. 6, pp. 505-519, Dec. 2017.
S. Spagnol, R. Hoffmann, M. H. Martínez, R. Unnthorsson, “Blind wayfinding with physically-based liquid sounds,” International Journal of Human-Computer Studies, vol. 115, pp. 9-19, Jul. 2018.
M. Murata, D. Ahmetovic, D. Sato, H. Takagi, K. M. Kitani, C. Asakawa, “Smartphone-based localization for blind navigation in building-scale indoor environments,” Pervasive and Mobile Computing, vol. 57, pp. 14-32, Jul. 2019.
A. Abdurrasyid, I. Indrianto, R. Arianto, “Detection of immovable objects on visually impaired people walking aids,” TELKOMNIKA, vol. 17, no. 2, pp. 580-585, 2019, DOI: 10.12928/telkomnika.v17i2.9933, [Online].
I. Ouali, M. S. Sassi, M. B. Hallima, A. Wali, “A New Architecture based AR for Detection and Recognition of Objects and Text to Enhance Navigation of Visually Impaired People,” Procedia Computer Science, vol. 176, pp. 602-611, Oct. 2020, DOI: 10.1016/j.procs.2020.08.062, [Online].
C. Tsirmpas, A. Rompas, O. Fokou, D. Koutsouris, “An indoor Navigation system for visually impaired and elderly people based on Radio Frequency Identification (RFID),” Information Sciences, vol. 320, pp. 288-305, Nov. 2015.
G. L. Gragnani, S. Bergamaschi, C. Montecucco, “Algorithm for an indoor automatic vehicular system based on active RFIDs,” ICT Express, vol. 3, no. 4, pp. 188-192, Dec. 2017, DOI: 10.1016/j.icte.2017.11.012, [Online].
J. Cho, J. Song, J. Kwok, M. Kang, “Braille Block Detection System using Back-projection in Image Processing”, in Proc. KIIT, Gwangju, Korea, 2018, pp. 299-301.
T. Yoshida, A. Ohya, S. Yuta, “Braille Block Detection for Autonomous Mobile Robot Navigation,” in Proc. IEEE/RSJ ICIRS, Las Vegas, USA, 2000, pp. 633-638.
R. Joshi, S. Yadav, M. Dutta, C. Travieso-Gonzalez, “Efficient Multi-Object Detection and Smart Navigation Using Artificial Intelligence for Visually Impaired People,” Entropy, vol. 22, no. 9, 941, Aug. 2020, DOI: 10.3390/e22090941, [Online].
A. Khalifa, E. Badr, H. Elmahdy, “A survey on human detection surveillance systems for Raspberry Pi,” Image and Vision Computing, vol. 85, pp. 1-13, May 2019.
N. Gupta, A. Agarwal, “Object Identification using Super Sonic Sensor: Arduino Object Radar,” in Proc. SMART, Moradabad, India, 2018, pp. 92-95.
Z. Zhu, Y. Cheng, “Application of attitude tracking algorithm for face recognition based on OpenCV in the intelligent door lock,” Computer Communications, vol. 154, pp. 390-397, Mar. 2020.
K. Jaskolka, J. Seiler, F. Beyer, A. Kaup, “A Python-based laboratory course for image and video signal processing on embedded systems,” Heliyon, vol. 5, e02560, Oct. 2019, DOI: 10.1016/j.heliyon.2019.e02560, [Online].
S. Laaroussi, A. Baataoui, A. Halli, S. Khalid, “A dynamic mosaicking method for finding an optimal seamline with Canny edge detector,” Procedia Computer Science, vol. 148, pp. 618-626, Feb. 2019, DOI: 10.1016/j.procs.2019.01.050, [Online].
Y. Meng, Z. Zhang, H. Yin, T. Ma, “Automatic detection of particle size distribution by image analysis based on local adaptive canny edge detection and modified circular Hough transform,” Micron, vol. 106, pp. 34-41, Mar. 2018.
I. S. Masad, A. Al-Fahoum, I. Abu-Qasmieh, “Automated measurements of lumbar lordosis in T2-MR images using decision tree classifier and morphological image processing,” Engineering Science and Technology, an International Journal, vol. 22, pp. 1027-1034, Aug. 2019, DOI: 10.1016/j.jestch.2019.03.002, [Online].
Microsoft Speech SDK 5.1 Documentation, Microsoft.
Published by INSIGHT - Indonesian Society for Knowledge and Human Development