Pragmatic meta analytic studies: learning the lessons from naturalistic evaluations of multiple cases

By Paul Lam, Carmel McNaught and Kin-Fai Cheng

Abstract

This paper explores the concept of pragmatic meta‐analytic studies in eLearning. Much educational technology literature focuses on developers and teachers describing and reflecting on their experiences. Few connections are made between these experiential ‘stories’. The data set is fragmented and offers few generalisable lessons. The field needs guidelines about what can be learnt from such single‐case reports. The pragmatic meta‐analytic studies described in this paper have two common aspects: (1) the cases are related in some way, and (2) the data are authentic, that is, the evaluations have followed a naturalistic approach. We suggest that examining a number of such cases is best done by a mixed‐methods approach with an emphasis on qualitative strategies. In the paper, we provide an overview of 63 eLearning cases. Three main meta‐analytic strategies were used: (1) meta‐analysis of the perception of usefulness across all cases, (2) meta‐analysis of recorded benefits and challenges across all cases, and (3) meta‐analysis of smaller groups of cases where the learning design and/or use of technology are similar. This study indicated that in Hong Kong the basic and non‐interactive eLearning strategies are often valued by students, while their perceptions of interactive strategies that are potentially more beneficial fluctuate. One possible explanation relates to the level of risk that teachers and students are willing to take in venturing into more innovative teaching and learning strategies.

Topics: LB Theory and practice of education, LC1022 - 1022.25 Computer-assisted Education
Publisher: Taylor and Francis Ltd
Year: 2008
DOI identifier: 10.1080/09687760802315879
OAI identifier: oai:generic.eprints.org:743/core5
