An Elastic Frame Rate Up-Conversion for Sequential Omnidirectional Images

Arief Suryadi Satyawan, Salita Ulitia Prini, Syed Abdul Rahman Abu-Bakar, Yusuf Nur Wijayanto

Abstract


A distinct innovation in the development of frame rate up-conversion (FRUC) for omnidirectional video is presented in this paper. The innovation allows the number of omnidirectional frames that the main algorithm creates to be determined by the motion characteristics of the objects in the omnidirectional video itself. This elastic frame production is achievable because the main algorithm is based on an optical flow method with a self-improvement mechanism; the optical flow concept has proven to adapt well to the characteristics of such video. Experimental results on ten omnidirectional videos show that the proposed algorithm, named elastic frame rate up-conversion (E-FRUC), performs and adapts remarkably well. This achievement is confirmed by both the excellent visual quality and the high PSNR (33 to 59 dB) of the reconstructed omnidirectional frames (RFs). E-FRUC generates at least one RF from two consecutive omnidirectional images, even when these images are synthetically designed with ideal motion. The more complex the motion of the objects in the omnidirectional video, the more RFs E-FRUC creates. By using E-FRUC, the continuity of motion of moving objects is preserved; it can therefore maintain the frame rate of an omnidirectional video with missing frames during playback, a condition that typically arises when the video is transmitted through error-prone telecommunication networks.
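The abstract describes flow-based frame interpolation in which the number of synthesized frames grows with motion complexity. The sketch below is a minimal illustration of that idea, not the authors' E-FRUC algorithm: given a precomputed dense flow field, it chooses an "elastic" number of intermediate frames from the peak motion magnitude, synthesizes them by naive forward warping, and evaluates quality with the PSNR metric the paper reports. The `step` threshold and the warping/blending scheme are assumptions for illustration only.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between two frames, in dB."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def elastic_frame_count(flow, step=4.0):
    """Pick the number of intermediate frames from the peak motion.

    Larger motions get more interpolated frames (the 'elastic' idea);
    at least one frame is always produced. `step` (pixels of motion per
    interpolated frame) is an assumed tuning constant.
    """
    return max(1, int(np.ceil(np.abs(flow).max() / step)))

def interpolate_frames(f0, f1, flow, n):
    """Synthesize n frames between grayscale frames f0 and f1.

    flow[y, x] = (dy, dx) is a dense forward flow from f0 to f1.
    Naive forward warp: pixels of f0 are pushed a fraction t along the
    flow; untouched pixels keep their f0 value (no occlusion handling),
    and the result is blended with f1 by temporal distance.
    """
    h, w = f0.shape
    ys, xs = np.mgrid[0:h, 0:w]
    out = []
    for k in range(1, n + 1):
        t = k / (n + 1)  # temporal position in (0, 1)
        ty = np.clip(np.rint(ys + t * flow[..., 0]).astype(int), 0, h - 1)
        tx = np.clip(np.rint(xs + t * flow[..., 1]).astype(int), 0, w - 1)
        warped = f0.astype(np.float64)
        warped[ty, tx] = f0[ys, xs]          # push f0 pixels toward f1
        out.append((1 - t) * warped + t * f1.astype(np.float64))
    return out
```

In practice the flow field would come from the optical-flow stage (e.g., a Lucas-Kanade-style estimator, as referenced in the paper) rather than being given; the elastic count then follows directly from the estimated motion.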

Keywords


Frame rate up-conversion; elasticity; omnidirectional video; optical flow.






DOI: http://dx.doi.org/10.18517/ijaseit.12.1.12963




Published by INSIGHT - Indonesian Society for Knowledge and Human Development