
Providing effective feedback on whole-phrase input in computer-assisted language learning

By Alison M.L. Fowler

Abstract

An important advantage of online assessment is that answer data can be easily stored and later analysed with a view to establishing the efficacy of the assessment methodology. A five-year study of the effectiveness of online grammar exercises has been carried out at the University of Kent. The exercises featured require input in the form of whole sentences, since this is a more authentic test of language skills than single-word input or multiple choice. Error feedback is generic (indicating where errors have occurred) rather than specific (indicating the exact nature of the errors), because the error-diagnosis system has been designed to be completely language-independent. The study aimed to gauge whether this type of feedback is effective in enabling students to:

· identify the types of mistakes in their input;
· rectify the mistakes;
· learn from the mistakes and apply that learning to subsequent problems.

There was initial concern that the generic feedback might not provide enough detail to enable users to understand and correct their errors; however, extensive use by the University's Spanish department has shown that this type of mark-up is very effective. Chapelle (1998) stresses that it is important for learners to be given the opportunity to correct their linguistic errors. Users of this system, having failed to answer a question correctly on their first attempt, are permitted a second attempt. The logged data make it abundantly clear that where users make mistakes on their first attempt (and they generally do, since the material is designed to be testing), there is almost always a significant improvement on the second attempt. This alone would suggest that the feedback mode is effective, but it is not enough to prove the pedagogical efficacy of this means of exercise presentation, so more detailed analysis was performed. Over several years of trials more than 100,000 answers have been logged, and every answer has been analysed.
It can be shown that, for well-designed exercises, students improve in three ways as they progress through an exercise:

· more questions are answered correctly on the first attempt;
· overall question scores (i.e. the average of first and second attempts at questions) improve;
· thinking time for formulating answers decreases.

The degree of increase in accuracy and decrease in thinking time is exercise-dependent, but the overall picture shows clearly that the generic, language-independent feedback is indeed effective. Moreover, it is easy to identify poorly designed exercises, since they do not exhibit the characteristics listed above.
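The abstract does not specify the algorithm behind the generic mark-up, but the works it cites on string-to-string correction and sequence comparison suggest an alignment-based approach. As a purely illustrative sketch (the function name, bracket notation, and use of word-level alignment are assumptions, not the paper's method), generic feedback can be produced by aligning the student's sentence against a model answer and flagging where they diverge, without classifying the error:

```python
import difflib

def generic_markup(expected: str, given: str) -> str:
    """Mark WHERE a student's answer diverges from the model answer,
    without saying WHAT kind of error it is (language-independent).
    Wrong or extra words are bracketed; a missing word becomes [___]."""
    exp_words = expected.split()
    got_words = given.split()
    sm = difflib.SequenceMatcher(a=exp_words, b=got_words)
    out = []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "equal":
            out.extend(got_words[j1:j2])          # correct words pass through
        elif op in ("replace", "insert"):
            out.extend("[" + w + "]" for w in got_words[j1:j2])  # wrong/extra
        elif op == "delete":
            out.append("[___]")                   # something is missing here
    return " ".join(out)

print(generic_markup("el gato negro duerme", "el gato negra duerme"))
# el gato [negra] duerme
```

The learner sees only that the third word is wrong, not that it is a gender-agreement error; diagnosing the error type is left to the learner, which is the pedagogical point of generic feedback.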

Topics: QA76
Publisher: Professional Development
Year: 2008
OAI identifier: oai:kar.kent.ac.uk:23993

Citations

  1. (2007). A new method for parsing student text to support computer-assisted assessment of free text answers.
  2. (2007). A review of technology choice for teaching language skills and areas in the CALL literature.
  3. (1975). An extension of the string-to-string correction problem.
  4. (1990). An intelligent language tutoring system.
  5. (1966). Binary codes capable of correcting deletions, insertions and reversals.
  6. (2006). Bridging the gap between assessment, learning and teaching.
  7. (1997). Direct approaches in L2 instruction: A turning point in communicative language teaching.
  8. (1993). Does feedback enhance computer-assisted language learning?
  9. (1999). Error diagnosis for language learning systems.
  10. (1991). Focus on form: a design feature in language teaching methodology.
  11. (2003). Focusing on form: Student engagement with teacher feedback.
  12. (1999). From the developer to the learner: describing grammar – learning grammar.
  13. (1999). Language Awareness: Implications for the Language Curriculum.
  14. (2003). Language learning online: designing towards user acceptability.
  15. (2005). Les enjeux de la création d’un environnement d’apprentissage électronique axé sur la compréhension orale à l’aide du système auteur IDIOMA-TIC.
  16. (2003). Lexically driven error detection and correction.
  17. (2003). Linguistic knowledge and reasoning for diagnosis and feedback.
  18. (2003). Multiple learner errors and meaningful feedback.
  19. (1990). Sequence comparison applied to correction and markup of multi-word responses.
  20. (1996). The case against grammar correction in L2 writing classes.
  21. (2007). Tool mediation in focus on form activities: case studies in a grammar-exploring environment.
