Generation of Batik Patterns Using Generative Adversarial Network with Content Loss Weighting

Agus Eko Minarno, Toton Dwi Antoko, Yufis Azhar

Abstract


Not every craftsman who draws Batik can draw every type of Batik, and producing a piece of Batik by hand takes a long time, from weeks to months. Image generation is an essential task in computer vision, and one popular method is the Generative Adversarial Network (GAN), commonly used to generate new data from an existing dataset. One GAN model, BatikGAN SL, generates Batik images by combining two input Batik patterns into a new Batik image; however, the generated image currently does not preserve the input patterns. This study therefore extends BatikGAN SL with a content loss function, taken from the Neural Style Transfer method, weighted by a hyperparameter (the style loss function from that method has already been implemented in BatikGAN SL). The dataset consists of 326 Batik patch images and 163 real Batik images. This paper compares the BatikGAN SL model from previous studies with the proposed model that applies the weighted content loss function. Evaluation uses the Fréchet Inception Distance (FID), both FID Global and FID Local. The proposed model produces a collection of Batik images and achieves test scores of 42 on FID Global and 16 on FID Local, obtained with a content loss weight of 1.
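To make the proposed modification concrete, the following is a minimal sketch (not the authors' released code) of one common way to add a weighted content loss to a GAN generator objective: VGG-19 features, as used in Neural Style Transfer, are compared between the generated Batik image and the input pattern, and a hyperparameter scales the term. The framework (PyTorch), the chosen VGG-19 layer, the adversarial loss form, and all identifiers here are illustrative assumptions.

# Hedged sketch, not the paper's implementation: a weighted content loss added
# to a GAN generator loss, in the spirit of BatikGAN SL with content loss weighting.
import torch
import torch.nn as nn
import torchvision.models as models

class VGGContentExtractor(nn.Module):
    """Frozen VGG-19 truncated at an intermediate layer to expose content features.
    The cut-off layer (here up to relu4_1) is an illustrative choice, not the paper's."""
    def __init__(self, cutoff: int = 21):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features
        self.features = nn.Sequential(*list(vgg.children())[:cutoff]).eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Note: ImageNet normalization of x is omitted here for brevity.
        return self.features(x)

def generator_loss(fake_logits: torch.Tensor,
                   generated: torch.Tensor,
                   content_target: torch.Tensor,
                   extractor: VGGContentExtractor,
                   content_weight: float = 1.0) -> torch.Tensor:
    """Adversarial term plus content_weight * MSE between VGG feature maps."""
    adversarial = nn.functional.binary_cross_entropy_with_logits(
        fake_logits, torch.ones_like(fake_logits))
    with torch.no_grad():
        target_features = extractor(content_target)
    content = nn.functional.mse_loss(extractor(generated), target_features)
    return adversarial + content_weight * content

The abstract reports that the best FID scores (42 Global, 16 Local) were obtained with the content loss weight set to 1, which in this sketch corresponds to calling generator_loss(..., content_weight=1.0).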

Keywords


Generative adversarial network; BatikGAN SL; image generation; batik.

Full Text:

PDF

References


Y. Azhar, Moch. C. Mustaqim, and A. E. Minarno, “Ensemble convolutional neural network for robust batik classification,” IOP Conference Series: Materials Science and Engineering, vol. 1077, no. 1, p. 012053, Feb. 2021, doi: 10.1088/1757-899X/1077/1/012053.

A. E. Minarno, F. D. S. Sumadi, H. Wibowo, and Y. Munarko, “Classification of batik patterns using K-Nearest neighbor and support vector machine,” Bulletin of Electrical Engineering and Informatics, vol. 9, no. 3, pp. 1260–1267, Jun. 2020, doi: 10.11591/EEI.V9I3.1971.

A. E. Minarno, F. D. S. Sumadi, Y. Munarko, W. Y. Alviansyah, and Y. Azhar, “Image Retrieval using Multi Texton Co-occurrence Descriptor and Discrete Wavelet Transform,” 2020 8th International Conference on Information and Communication Technology, ICoICT 2020, Jun. 2020, doi: 10.1109/ICOICT49345.2020.9166361.

A. E. Minarno, Moch. C. Mustaqim, Y. Azhar, W. A. Kusuma, and Y. Munarko, “Deep Convolutional Generative Adversarial Network Application in Batik Pattern Generator,” 2021 9th International Conference on Information and Communication Technology (ICoICT), pp. 54–59, Aug. 2021, doi: 10.1109/ICOICT52021.2021.9527514.

G. Tian, Q. Yuan, T. Hu, and Y. Shi, “Auto-generation system based on fractal geometry for batik pattern design,” Applied Sciences (Switzerland), vol. 9, no. 11, 2019, doi: 10.3390/app9112383.

P. D. Kusuma, “Modified sine wave based model in madurese batik pattern generation,” Journal of Theoretical and Applied Information Technology, vol. 97, no. 23, pp. 3557–3569, 2019.

L. A. Gatys, A. S. Ecker, and M. Bethge, “A Neural Algorithm of Artistic Style,” arXiv preprint arXiv:1508.06576, Sep. 2015.

C. Zhou, Z. Gu, Y. Gao, and J. Wang, “An improved style transfer algorithm using feedforward neural network for real-time image conversion,” Sustainability (Switzerland), vol. 11, no. 20, pp. 1–15, 2019, doi: 10.3390/su11205673.

Y. A. Irawan and A. Widjaja, “Pembangkitan Pola Batik dengan Menggunakan Neural Transfer Style dengan Penggunaan Cost Warna,” Jurnal Teknik Informatika dan Sistem Informasi, vol. 6, no. 2, pp. 324–341, 2020, doi: 10.28932/jutisi.v6i2.2698.

G. Atarsaikhan, B. K. Iwana, and S. Uchida, “Guided neural style transfer for shape stylization,” PLoS ONE, vol. 15, no. 6, 2020, doi: 10.1371/journal.pone.0233489.

H. Kwon, H. Yoon, and K. W. Park, “CAPTCHA image generation: Two-step style-transfer learning in deep neural networks,” Sensors (Switzerland), vol. 20, no. 5, pp. 1–14, 2020, doi: 10.3390/s20051495.

I. J. Goodfellow et al., “Generative adversarial nets,” Advances in Neural Information Processing Systems, vol. 27, pp. 2672–2680, 2014.

W. Wang, A. Wang, Q. Ai, C. Liu, and J. Liu, “AAGAN: Enhanced Single Image Dehazing with Attention-to-Attention Generative Adversarial Network,” IEEE Access, vol. 7, pp. 173485–173498, 2019, doi: 10.1109/ACCESS.2019.2957057.

O. Kupyn, T. Martyniuk, J. Wu, and Z. Wang, “DeblurGAN-v2: Deblurring (orders-of-magnitude) faster and better,” Proceedings of the IEEE International Conference on Computer Vision, pp. 8877–8886, Oct. 2019, doi: 10.1109/ICCV.2019.00897.

Z. Yuan et al., “SARA-GAN: Self-Attention and Relative Average Discriminator Based Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction,” Frontiers in Neuroinformatics, vol. 14, Nov. 2020, doi: 10.3389/fninf.2020.611666.

Q. Jin, R. Lin, and F. Yang, “E-WACGAN: Enhanced Generative Model of Signaling Data Based on WGAN-GP and ACGAN,” IEEE Systems Journal, vol. 14, no. 3, pp. 3289–3300, 2020, doi: 10.1109/JSYST.2019.2935457.

M. Abdurrahman, N. H. Shabrina, and D. K. Halim, “Generative adversarial network implementation for batik motif synthesis,” in Proceedings of 2019 5th International Conference on New Media Studies, CONMEDIA 2019, 2019, pp. 63–67, doi: 10.1109/CONMEDIA46929.2019.8981834.

Y. Huang, J. Su, J. Wang, and S. Ji, “Batik-DG: Improved DeblurGAN for batik crack pattern generation,” in IOP Conference Series: Materials Science and Engineering, 2020, vol. 790, no. 1, doi: 10.1088/1757-899X/790/1/012034.

W. T. Chu and L. Y. Ko, “BatikGAN: A Generative Adversarial Network for Batik Creation,” in MMArt-ACM 2020 - Proceedings of the 2020 Joint Workshop on Multimedia Artworks Analysis and Attractiveness Computing in Multimedia, 2020, no. 1, pp. 13–18, doi: 10.1145/3379173.3393710.

Y. Gultom, R. J. Masikome, and A. M. Arymurthy, “Batik Classification Using Deep Convolutional Network Transfer Learning,” Jurnal Ilmu Komputer dan Informasi (Journal of Computer Science and Information), vol. 11, no. 2, pp. 59–66, 2018.

A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” 4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings, pp. 1–16, 2016.

S. G. K. Patro and K. K. Sahu, “Normalization: A Preprocessing Stage,” IARJSET, pp. 20–22, 2015, doi: 10.17148/iarjset.2015.2305.

S. Ganguli, P. Garzon, and N. Glaser, “GeoGAN: A Conditional GAN with Reconstruction and Style Loss to Generate Standard Layer of Maps from Satellite Images,” arXiv, 2019.

J. Xiao, J. Wang, S. Cao, and B. Li, “Application of a Novel and Improved VGG-19 Network in the Detection of Workers Wearing Masks,” Journal of Physics: Conference Series, vol. 1518, no. 1, p. 012041, 2020, doi: 10.1088/1742-6596/1518/1/012041.

A. Borji, “Pros and cons of GAN evaluation measures,” Computer Vision and Image Understanding, vol. 179, pp. 41–65, 2019, doi: 10.1016/j.cviu.2018.10.009.




DOI: http://dx.doi.org/10.18517/ijaseit.13.1.16201




Published by INSIGHT - Indonesian Society for Knowledge and Human Development