
    System upgrade: realising the vision for UK education

    A report summarising the findings of the TEL programme in the wider context of technology-enhanced learning, and offering recommendations for future strategy in the area, was launched on 13th June at the House of Lords to a group of policymakers, technologists and practitioners chaired by Lord Knight. The report – a major outcome of the programme – was written by TEL director Professor Richard Noss and a team of experts in various fields of technology-enhanced learning. It features the programme's 12 recommendations for using technology-enhanced learning to upgrade UK education.

    Cognitive and affective perspectives on immersive technology in education

    This research explains the rationale behind the use of mobile learning technologies. It involves a qualitative study among children to better understand their opinions and perceptions of the educational applications (apps) available on their mobile devices, including smartphones and tablets. The researchers organised semi-structured, face-to-face interview sessions with primary school students who were using mobile technologies at their school. The students reported that their engagement with the educational apps had improved their competencies: they acquired relational and communicative skills as they collaborated in teams. A few students, however, did not perceive the educational apps on their mobile devices as useful or easy to use. The study indicates that the participants had different skill sets, as they exhibited different learning abilities. In conclusion, this contribution opens up avenues for future research in this promising field of study.

    Beyond model answers: learners’ perceptions of self-assessment materials in e-learning applications

    The importance of feedback as an aid to self-assessment is widely acknowledged. A common form of feedback used widely in e-learning is the model answer. However, model answers are deficient in many respects. In particular, the notion of a 'model' answer implies the existence of a single correct answer applicable across multiple contexts, with no scope for permissible variation. This reductive assumption rarely holds for complex problems that are supposed to test students' higher-order learning. Nevertheless, the challenge remains of how to support students as they assess their own performance using model answers and other forms of non-verificational 'feedback'. To explore this challenge, the research examined a management development e-learning application and investigated the effectiveness of the model answers that followed its problem-based questions. The research was exploratory, using semi-structured interviews with 29 adult learners employed in a global organisation. Given interviewees' generally negative perceptions of the model answers, they were asked to describe their ideal form of self-assessment materials and to evaluate nine alternative designs. The results suggest that, as support for higher-order learning, self-assessment materials that merely present an idealised model answer are inadequate. As alternatives, learners preferred materials that helped them understand what behaviours to avoid (and not just what to 'do'), how to think through the problem (i.e. critical thinking skills), and the key issues that provide a framework for thinking. These findings have broader relevance within higher education, particularly in postgraduate programmes for business students, where the importance of prior business experience is emphasised and the profile of students is similar to that of the participants in this research.

    Assessing computational thinking process using a multiple evaluation approach

    This study explored ways in which the Computational Thinking (CT) process can be evaluated in a classroom environment. Thirty children aged 10–11 years, from a primary school in London, took part in an eight-month game-making project using the Scratch and Alice 2.4 applications. For this paper, data from participant observations, informal conversations, problem-solving sheets, semi-structured interviews and the children's completed games were used to make sense of the elements of the computational thinking process, and of approaches to evaluating these elements in a computer game design context. The discussion of what CT consists of highlighted the complex structure of computational thinking and the interaction between elements of artificial intelligence (AI), computer science, cognitive science, the learning sciences and psychology. It also emphasised the role of metacognition in the CT process. These arguments illustrate that it is not possible to evaluate CT using programming constructs alone, as the CT process provides opportunities for developing many other skills and concepts. Therefore, a multiple evaluation approach should be adopted to capture the full learning scope of the CT process. Drawing on the literature review and the findings of the data analysis, I propose a multiple-approach evaluation model in which 'computational concepts', 'metacognitive practices' and 'learning behaviours' are the main elements of the CT process. Additionally, in order to investigate these dimensions within a game-making context, computer game design was also included in the evaluation model.

    Student Learning-Game Designs: Emerging Learning Trajectories
