Object Searching on Real-Time Video Using Oriented FAST and Rotated BRIEF Algorithm

Faisal Dharma Adhinata, Agus Harjoko, Wahyono


Pre-processing and feature extraction are the primary stages in object searching on video data. Processing every frame of a video is inefficient: frames that carry the same information should be passed to the next stage only once. The feature extraction algorithms most often applied to video frames are SIFT and SURF; SIFT is very accurate but slow, while SURF is fast but less accurate. Object searching on real-time video therefore requires keyframe selection and feature extraction methods that are both fast and accurate. In this work, the video is pre-processed by extracting it into frames, and the mutual information entropy method is used for keyframe selection. Features are extracted from the keyframes using the ORB algorithm, and multiple objects are detected by clustering the extracted features. The features in each cluster are then matched against the features of the query image, and the matching results are used to retrieve the corresponding frame information from the video. The experiments show that keyframe selection is beneficial for real-time video processing because selecting keyframes is faster than extracting features from every frame. Feature extraction with the ORB algorithm is about twice as fast as the SIFT and SURF algorithms, with accuracy close to that of SIFT. The results of this study can be developed into a security warning system for public places, in particular helping security personnel provide evidence of criminal cases from video.
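The keyframe-selection step described in the abstract, comparing consecutive frames by the mutual information of their intensity distributions and keeping a frame only when it differs enough from the last keyframe, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the histogram binning, the function names, and the threshold value are assumptions.

```python
import numpy as np

def mutual_information(frame_a, frame_b, bins=32):
    """Mutual information between the intensity distributions of two
    grayscale frames, estimated from their joint histogram (in bits)."""
    joint, _, _ = np.histogram2d(frame_a.ravel(), frame_b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()          # joint probability
    px = pxy.sum(axis=1)               # marginal of frame_a
    py = pxy.sum(axis=0)               # marginal of frame_b
    nz = pxy > 0                       # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] /
                                          (px[:, None] * py[None, :])[nz])))

def select_keyframes(frames, threshold=1.0):
    """Keep a frame as a new keyframe only when its mutual information
    with the previous keyframe drops below the threshold, i.e. when the
    frame content has changed enough."""
    keyframes = [frames[0]]
    for frame in frames[1:]:
        if mutual_information(keyframes[-1], frame) < threshold:
            keyframes.append(frame)
    return keyframes
```

Identical frames share almost all information (high mutual information) and are skipped, so downstream ORB extraction runs on far fewer frames than the raw video contains.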


Keywords: object searching; real-time video; keyframe selection; mutual information entropy; ORB.
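The matching step pairs ORB descriptors from a keyframe cluster with descriptors from the query image. ORB produces 256-bit binary descriptors (packed into 32 bytes), so matching uses Hamming distance rather than Euclidean distance. The sketch below shows brute-force Hamming matching with a ratio test; it is an assumed illustration of the general technique, not the authors' code, and the `ratio` value is a common default rather than a value from the paper.

```python
import numpy as np

def hamming_distance(d1, d2):
    """Number of differing bits between two binary descriptors,
    each packed as 32 uint8 bytes (256 bits, as ORB produces)."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

def match_descriptors(query, target, ratio=0.8):
    """Brute-force matching: for each query descriptor, accept the
    nearest target descriptor only if it is clearly better than the
    second nearest (Lowe-style ratio test)."""
    matches = []
    for qi, qd in enumerate(query):
        dists = sorted((hamming_distance(qd, td), ti)
                       for ti, td in enumerate(target))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))  # (query idx, target idx)
    return matches
```

A high number of accepted matches between the query image and a cluster of keyframe features indicates that the searched object appears in that frame.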





DOI: http://dx.doi.org/10.18517/ijaseit.11.6.12043



Published by INSIGHT - Indonesian Society for Knowledge and Human Development