
A review of computer-assisted assessment

By Gráinne Conole and Bill Warburton

Abstract

Pressure for better measurement of stated learning outcomes has resulted in demand for more frequent assessment. Available resources are perceived as static or dwindling, while information and communications technology (ICT) is seen as a way to increase productivity by automating assessment tasks. This paper reviews computer-assisted assessment (CAA) and suggests future developments. A search of the CAA-related literature of the past decade was conducted to trace the development of CAA from the beginnings of its large-scale use in higher education. Lack of resources, individual inertia and risk propensity are key barriers for individual academics, while proper resourcing and cultural factors outweigh technical barriers at the institutional level.

Topics: LB Theory and practice of education, LC1022 - 1022.25 Computer-assisted Education
Publisher: Taylor and Francis Ltd
Year: 2005
DOI identifier: 10.1080/0968776042000339772
OAI identifier: oai:generic.eprints.org:587/core5
