Facial Skin Type Analysis Using Few-shot Learning with Prototypical Networks

Quan Fong Yeo, Shih Yin Ooi, Ying Han Pang, Ying Huey Gan


Facial skin type analysis is a critical task in several fields, including dermatology, cosmetics, and biometrics, and has been the subject of significant research in recent years. Traditional facial skin type analysis approaches rely on large, labeled datasets, which can be time-consuming and costly to collect. This study proposes a novel few-shot learning (FSL) approach for facial skin type analysis that can accurately classify skin types with limited labeled data. A diverse dataset of facial images with varying skin tones and conditions was curated. The proposed approach leverages pre-trained deep neural networks and FSL algorithms based on prototypical networks (PNs) and matching networks (MNs) to address the challenge of limited labeled data. Importantly, this study has significant implications for improving access to dermatological care, especially in underserved populations, as many individuals are unaware of their skin type, which can lead to ineffective or even harmful skincare practices. Our approach can help individuals quickly determine their skin type and develop a personalized skincare routine based on their unique skin characteristics. The experimental results demonstrate the effectiveness of the proposed approach: PNs achieved their highest accuracy, 95.78 ± 2.79%, in the 2-way, 10-shot, 15-query scenario, while MNs achieved their highest accuracy, 90.33 ± 4.10%, in the 2-way, 5-shot, 10-query scenario. In conclusion, this study highlights the potential of FSL and deep neural networks to overcome the limitations of traditional approaches to facial skin analysis, offering a promising avenue for future research in this field.
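The prototypical-network classification step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes image embeddings have already been produced by a pre-trained backbone, and uses plain NumPy vectors in their place. In an N-way K-shot episode, each class prototype is the mean of that class's K support embeddings, and each query is assigned to the nearest prototype by squared Euclidean distance, following Snell et al.

```python
import numpy as np

def prototypical_classify(support, support_labels, queries):
    """Nearest-prototype classification for one few-shot episode.

    support: (N*K, D) support-set embeddings
    support_labels: (N*K,) integer class labels
    queries: (Q, D) query embeddings
    Returns the predicted class label for each query.
    """
    classes = np.unique(support_labels)
    # Each class prototype is the mean of that class's support embeddings.
    prototypes = np.stack(
        [support[support_labels == c].mean(axis=0) for c in classes]
    )
    # Squared Euclidean distance from every query to every prototype.
    dists = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    # Assign each query to its nearest prototype.
    return classes[dists.argmin(axis=1)]

# Toy 2-way, 2-shot episode with 2-D "embeddings".
support = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]])
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.05, 0.05], [0.95, 0.90]])
preds = prototypical_classify(support, labels, queries)
```

In training, these distances would be negated and passed through a softmax to produce class probabilities for a cross-entropy loss; matching networks differ mainly in comparing each query against individual support examples via an attention-weighted sum rather than against class means.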


Skin type analysis; skincare products recommendation; deep learning; few-shot learning; convolutional neural network





DOI: http://dx.doi.org/10.18517/ijaseit.13.6.19040



Published by INSIGHT - Indonesian Society for Knowledge and Human Development