An Automated Statechart Diagram Assessment using Semantic and Structural Similarities

Reza Fauzan, Daniel Siahaan, Siti Rochimah, Evi Triandini

Abstract


The statechart diagram is a behavioral diagram in the Unified Modeling Language (UML). Statechart diagrams are widely taught in computer science programs, and assessment is an essential part of teaching and learning. A teacher is expected to grade objectively, yet objectivity can suffer from inconsistency and fatigue, so automatic assessment is valuable: it helps teachers save time when grading answers submitted by many students. We propose a method that evaluates statechart diagrams automatically by combining semantic and structural similarities. The semantic comparison is based on the lexical information of the states and transitions in the two diagrams, using a combination of cosine similarity, Wu-Palmer similarity, and WordNet. The structural assessment compares the structure of the two diagrams using the greedy graph edit distance; the structure is obtained by translating each diagram into several graphs, which are divided into two types of subgraphs, the intraSim subgraph and the interSim subgraph. Our results demonstrate that the proposed method agrees well with the teacher's assessment of the statechart diagrams, with an agreement value in the almost-perfect range. We also observe that, during assessment, teachers attend to the structure of the statechart diagram rather than to its lexical content.
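
To make the two similarity components concrete, the short Python sketch below illustrates the general idea only: WordNet-based Wu-Palmer similarity between state or transition labels (via NLTK, assuming the WordNet corpus has been downloaded), and a generic greedy node-matching approximation of graph edit distance with unit costs. The function names, cost scheme, and graph encoding are illustrative assumptions and do not reproduce the paper's exact intraSim/interSim construction or scoring.

# Illustrative sketch only; assumes NLTK with the WordNet corpus installed
# (nltk.download('wordnet')). This is not the authors' implementation.
from nltk.corpus import wordnet as wn

def wup_label_similarity(label_a, label_b):
    """Best Wu-Palmer similarity over all WordNet synset pairs of two labels."""
    best = 0.0
    for s1 in wn.synsets(label_a.lower()):
        for s2 in wn.synsets(label_b.lower()):
            score = s1.wup_similarity(s2)  # None when the synsets share no common path
            if score is not None and score > best:
                best = score
    return best

def greedy_edit_distance(nodes_a, edges_a, nodes_b, edges_b, node_sim):
    """Generic greedy approximation of graph edit distance with unit costs.
    Each node of graph A is matched to its most similar unmatched node of B;
    unmatched nodes and unmapped edges are charged as insertions/deletions."""
    mapping, unused, cost = {}, set(nodes_b), 0.0
    for a in nodes_a:
        best_b = max(unused, key=lambda b: node_sim(a, b), default=None)
        if best_b is None or node_sim(a, best_b) == 0.0:
            cost += 1.0                        # node of A is deleted
        else:
            mapping[a] = best_b
            unused.discard(best_b)
            cost += 1.0 - node_sim(a, best_b)  # relabeling cost
    cost += len(unused)                        # remaining nodes of B are inserted
    mapped = {(mapping[u], mapping[v]) for u, v in edges_a
              if u in mapping and v in mapping}
    cost += len(mapped ^ set(edges_b))         # edge insertions and deletions
    cost += sum(1 for u, v in edges_a
                if u not in mapping or v not in mapping)  # edges lost with deleted nodes
    return cost

# Example: two tiny statecharts encoded as (states, transitions).
a = (["Idle", "Running"], [("Idle", "Running")])
b = (["Waiting", "Active"], [("Waiting", "Active")])
print(greedy_edit_distance(a[0], a[1], b[0], b[1], wup_label_similarity))

In the paper's approach, label similarities of this kind feed the semantic score, while the greedy matching above merely stands in for the greedy graph edit distance that the method applies to the intraSim and interSim subgraphs.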

Keywords


Automatic assessment; semantic assessment; statechart diagram; structural assessment; UML assessment.



DOI: http://dx.doi.org/10.18517/ijaseit.11.6.13372



