By Jeffrey Cross

Leveraging Machine Learning for Enhanced Assessment in Engineering Education

The Tuning Test Item Bank Project announces a significant step forward in its mission to improve assessment practices. A recent research paper, titled "Automated Essay Grading of Constructive Response Test Responses for Mechanical Engineering Students," explores the use of Machine Learning (ML) for automated grading within the project. This paper was presented at the prestigious IEEE International Conference on Teaching, Assessment and Learning for Engineering (TALE) 2023, held at the University of Auckland.

The paper's presentation at TALE 2023 signifies the project's commitment to sharing its findings with the wider educational assessment community. TALE is a premier conference dedicated to advancing teaching, assessment, and learning in engineering education. Presenting at this forum underscores the project's contribution to the field and its potential impact.

The research offers promising insights into using ML for automated grading within the Tuning Test framework. This approach holds the potential to significantly improve efficiency and consistency in assessment, especially as the project expands its test bank and scales its operations across ASEAN universities. The project team is currently analyzing the paper's findings to evaluate the feasibility of integrating ML into the grading system. Streamlining the grading process through ML can free up resources for further test development and analysis, ultimately strengthening the project's impact on educational assessment in Southeast Asia.

The abstract of the paper is as follows:

Machine-learning applications in education have increased with the appearance of large language models. While automated essay grading (AEG) has been studied extensively in the past, most of these studies have focused on evaluating English competence rather than assessing knowledge competence in an engineering field. This study aimed to develop an AEG model to evaluate students' mechanical engineering Constructive Response Test (CRT) question responses, which were instructor graded. Because of the small number of student responses (45), a synthesized set of responses was also generated using text-to-text paraphrasing models. A neural network grading engine was built and trained to assess comprehension utilizing Bidirectional Encoder Representations from Transformers (BERT) and related models on student and synthesized responses. The study showed that the NLP-based AEG model achieved high accuracy and a higher degree of consistency in grading student responses compared to instructor grading.

D. T. Dung, F. Triawan, H. Mima and J. S. Cross, "Automated Essay Grading of Constructive Response Test Responses for Mechanical Engineering Students," 2023 IEEE International Conference on Teaching, Assessment and Learning for Engineering (TALE), Auckland, New Zealand, 2023, pp. 1-4, doi: 10.1109/TALE56641.2023.10398404.
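The core idea in the abstract, embedding a student response and scoring it against instructor-graded reference answers, can be sketched in miniature. The toy below uses a bag-of-words cosine similarity as a stand-in for BERT sentence embeddings; the function names, the 0-10 scale, and the example answers are illustrative assumptions, not details from the paper.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a BERT sentence embedding: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def grade(response, references):
    # Score a response against instructor-graded reference answers:
    # take the best similarity and map it onto a 0-10 scale.
    best = max(cosine(embed(response), embed(r)) for r in references)
    return round(best * 10)

# Hypothetical reference answer for a mechanical engineering CRT item.
refs = ["stress is force per unit area acting on a material"]
print(grade("stress is the force per unit area on a material", refs))
```

In the actual study a trained neural network replaces this similarity heuristic, but the interface is the same: text in, grade out, calibrated against instructor-graded responses.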
