WinEcon Fiscal Pathways: A Computer Based Learning Module for the Subject Macroeconomic Theory and Policy
CAL evaluation: Future directions
Formal, experimental methods have proved increasingly difficult to implement, and lack the capacity to generate detailed results when evaluating the impact of CAL on teaching and learning. The rigid nature of experimental design restricts the scope of investigations and the conditions in which studies can be conducted. It has also consistently failed to account for all influences on learning. In innovative CAL environments, practical and theoretical development depends on the ability to investigate the full range of such influences. Over the past five years, a customizable evaluation framework has been developed specifically for CAL research. The conceptual approach is defined as Situated Evaluation of CAL (SECAL), and its primary focus is the quality of learning outcomes. Two important principles underpin this development. First, the widely accepted need to evaluate in authentic contexts includes examination of the combined effects of CAL with other resources and influential aspects of the learning environment. Second, evaluation design is based on a critical approach and qualitative, case-based research. Positive outcomes from applications of SECAL include the ease with which practical and situation-specific requirements are satisfied and the relatively low cost of evaluation studies. Although there is little scope to produce generalizable results in the short term, the difficulty of doing so in experimental studies suggests that this objective is hard to achieve in educational research generally. A more realistic, longer-term aim is the development of grounded theory based on common findings from individual cases.
Educational Technology Topic Guide
This guide aims to contribute to what we know about the relationship between educational technology (edtech) and educational outcomes by addressing the following overarching question: What is the evidence that the use of edtech, by teachers or students, impacts teaching and learning practices, or learning outcomes? It also offers recommendations to support advisors to strengthen the design, implementation and evaluation of programmes that use edtech.
We define edtech as the use of digital or electronic technologies and materials to support teaching and learning. Since technology alone does not enhance learning, evaluations must also consider how programmes are designed and implemented, how teachers are supported, how communities are developed and how outcomes are measured (see http://tel.ac.uk/about-3/, 2014).
Effective edtech programmes are characterised by:
a clear and specific curriculum focus
the use of relevant curriculum materials
a focus on teacher development and pedagogy
evaluation mechanisms that go beyond outputs.
These findings come from a wide range of technology use including:
interactive radio instruction (IRI)
classroom audio or video resources accessed via teachers’ mobile phones
student tablets and eReaders
computer-assisted learning (CAL) to supplement classroom teaching.
However, there are also examples of large-scale investment in edtech – particularly computers for student use – that produce limited educational outcomes. We need to know more about:
how to support teachers to develop appropriate, relevant practices using edtech
how such practices are enacted in schools, and what factors contribute to or militate against successful outcomes.
Recommendations:
1. Edtech programmes should focus on enabling educational change, not delivering technology. In doing so, programmes should provide adequate support for teachers and aim to capture changes in teaching practice and learning outcomes in evaluation.
2. Advisors should support proposals that further develop successful practices or that address gaps in evidence and understanding.
3. Advisors should discourage proposals that have an emphasis on technology over education, weak programmatic support or poor evaluation.
4. In design and evaluation, value-for-money metrics and cost-effectiveness analyses should be carried out.
Facilitating Classroom Economics Experiments with an Emerging Technology: The Case of Clickers
The authors discuss how they used an audience response system (ARS) to facilitate pit market trading in an applied microeconomics class and report on the efficacy of the approach. Using the ARS to facilitate active learning by engaging students in economics experiments has pedagogical advantages over both the labor-intensive pencil-and-paper approach and the capital-intensive route of relying on networked or online computer labs, which often preclude or restrict face-to-face student interaction. Conducting experiments this way thus adds a further advantage to this increasingly popular classroom technology, alongside such conventional functions as taking attendance and administering quizzes.
Teaching/Communication/Extension/Profession
AN ANALYSIS OF ONLINE EXAMINATIONS IN COLLEGE COURSES
This research evaluates the use of online examinations in college courses from both instructor and student perspectives. Instructional software was developed at Kansas State University to administer online homework assignments and examinations. Survey data were collected from two classes to measure the level of student support for online examinations. The determinants of that support were identified and quantified using logistic regression analysis.
Teaching/Communication/Extension/Profession
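The abstract does not specify the study's model or variables, but the general technique it names, logistic regression on survey responses, can be sketched roughly. The example below is a minimal pure-Python illustration with invented features and data (prior computer experience and GPA as hypothetical predictors of support for online testing); it is not the study's actual specification.

```python
# Hypothetical sketch: estimating determinants of a binary outcome
# (support for online testing, 1/0) with a minimal logistic regression
# fit by batch gradient descent. All data and features are invented.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Return weights [intercept, w1, w2, ...] fit by gradient descent."""
    n_features = len(X[0])
    w = [0.0] * (n_features + 1)  # w[0] is the intercept
    for _ in range(epochs):
        grad = [0.0] * (n_features + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = sigmoid(z) - yi  # residual for this observation
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / len(y) for wj, g in zip(w, grad)]
    return w

# Invented, standardized predictors: [computer experience, GPA].
X = [[1.2, 0.5], [-0.3, 1.1], [0.8, -0.4], [-1.0, -1.2], [0.5, 0.9], [-0.7, 0.2]]
y = [1, 1, 1, 0, 1, 0]
weights = fit_logistic(X, y)
```

In practice a study like this would use a statistical package that also reports standard errors and significance tests; the hand-rolled fit above only shows the mechanics of the estimator.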
Making Large Classes Small(er): Assessing the Effectiveness Of a Hybrid Teaching Technology
This paper examines learning outcomes in a one-semester introductory microeconomics course in which contact time with the instructor was reduced by two-thirds and students were expected to view pre-recorded lectures online and come to class prepared to engage in discussion. Students were pre- and post-tested using the Test of Understanding in College Economics (TUCE-4). Learning outcomes, as measured by the change in test scores, are found to be as good as or better than calibrating data for groups assessed using the TUCE-4. In addition to being a more enjoyable course for the instructor, the course design can be part of a more self-directed curriculum that uses available resources more efficiently to achieve learning objectives similar to those of a lecture-based introductory course.
active learning, assessment, computer-assisted instruction, introductory microeconomics
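The pre/post measurement described above amounts to computing score changes per student. As a rough illustration only, with invented scores and an assumed 30-item test form, the raw and normalized gain calculations look like this:

```python
# Hypothetical sketch of a pre/post comparison: learning outcomes measured
# as the change in test scores. The score lists, and the assumption of a
# 30-item form, are illustrative, not the paper's data.
pre = [12, 15, 9, 18, 14]
post = [18, 19, 14, 22, 17]

# Raw gain per student and the class mean gain.
gains = [b - a for a, b in zip(pre, post)]
mean_gain = sum(gains) / len(gains)

# Normalized gain: fraction of the possible improvement actually achieved,
# which makes gains comparable across students with different pretest scores.
MAX_SCORE = 30  # assumed number of items on the form
norm_gains = [(b - a) / (MAX_SCORE - a) for a, b in zip(pre, post)]
mean_norm_gain = sum(norm_gains) / len(norm_gains)
```

Comparing such gains against published calibration data for the same instrument, as the paper does, then only requires the benchmark group's mean gain on the same scale.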
- …