How Do EFL Learners Process and Uptake Criterion Automated Corrective Feedback? Insights from Two Case Studies

Document Type : Original Article

Authors

1 Hue University of Foreign Languages and International Studies, Viet Nam

2 School of Languages & Linguistics, Faculty of Arts, The University of Melbourne, Melbourne, Australia

Abstract

Research has suggested that the type of feedback learners receive can affect whether they understand the feedback, the extent to which they engage with it, and whether they incorporate it into their revised drafts. However, to date, only a small number of studies have investigated learner engagement with corrective feedback provided by automated writing evaluation tools, and of those, few have considered in depth the impact of the type of automated feedback on engagement. This multiple-case study examines two EFL learners’ engagement with the two forms of corrective feedback provided by Criterion, categorised as generic and specific, and the factors that can explain the nature of their engagement. Data were collected from the learners’ first and revised drafts of multiple essays submitted to Criterion, screencasts of the students’ think-aloud procedures while revising essays, and stimulated recall interviews. Findings indicate that the learners showed a higher uptake rate and more successful error corrections in response to generic than to specific feedback. However, the mental effort they expended when cognitively engaging with the feedback differed, which could be explained in terms of individual learning goals, feedback quality, and the nature of the tagged errors. These findings have relevant implications for utilising automated corrective feedback in L2 writing classes.

Keywords