19,036 research outputs found

    Assessment @ Bond


    The evaluation of next generation learning technologies: the case of mobile learning

    Mobile learning is at the leading edge of learning technologies and is at present characterised by pilots and trials that allow mobile technologies to be tested in a variety of learning contexts. The sustained deployment of mobile learning will depend on these pilots and trials, especially their evaluation methodology and reporting. The paper examines a sample of current evaluation practice, based on evidence drawn from conference proceedings, published case studies and other accounts in the literature, and draws on the authors' work in collecting case studies of mobile learning from a range of recent projects. The issues discussed include the apparent objectives of the documented pilots or trials, the nature of the evaluations, the instruments and techniques used, and the presentation of findings. The paper reflects on the quality of evaluation in mobile learning pilots and trials, in the broader context of evolving practices in the evaluation of educational technologies

    A Study on the Construction of Dynamic Assessment Model of College English Flipped Classroom Based on Mobile Learning

    With the rapid development of the Internet and the popularity of mobile devices, mobile learning and the flipped classroom have been widely used in college English teaching in China. As innovative digital learning and teaching modes, both emphasize the autonomy and individuation of learning and break through the limits of time and space. A more scientific and accurate assessment model is therefore urgently needed to evaluate changes in students’ knowledge, skills, learning attitudes and other aspects across the whole process of mobile learning. From the perspective of Dynamic Evaluation Theory, this paper moves beyond the drawbacks of the traditional static assessment model and applies dynamic assessment to every stage of mobile learning and the flipped classroom. It constructs a diversified dynamic assessment system for college English that comprehensively evaluates students’ academic performance by means of information technology and network resources. After two semesters of experimental study, it verifies the effectiveness of the new dynamic assessment model using data collected from questionnaires, interviews and tests. The implementation of the innovative assessment model not only prompts teachers to improve teaching management, but also enhances students’ learning efficiency and language application ability, thus improving the quality of college English teaching. It opens up a new prospect for a more standardized and scientific college English assessment system

    LSDA responds: towards a unified e-learning strategy


    Squaring the circle: a new alternative to alternative-assessment

    Many quality assurance systems rely on high-stakes assessment for course certification. Such methods are not as objective as they might appear; they can have detrimental effects on student motivation and may lack relevance to the needs of degree courses increasingly oriented to vocational utility. Alternative assessment methods can show greater formative and motivational value for students but are not well suited to the demands of course certification. The widespread use of virtual learning environments and electronic portfolios generates substantial learner activity data to enable new ways of monitoring and assessing students through Learning Analytics. These emerging practices have the potential to square the circle by generating objective, summative reports for course certification while at the same time providing formative assessment to personalise the student experience. This paper introduces conceptual models of assessment to explore how traditional reliance on numbers and grades might be displaced by new forms of evidence-intensive student profiling and engagement
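
    As a rough illustration of the dual use of learner activity data sketched in this abstract, the short Python example below shows how a single activity log might yield both an objective summative summary for certification and formative prompts for personalising the student experience. The event structure, function names and threshold are hypothetical, chosen for the sketch rather than drawn from the paper.

        # Illustrative only: hypothetical event records and scoring, not the paper's method.
        from collections import defaultdict
        from dataclasses import dataclass

        @dataclass
        class ActivityEvent:
            student_id: str
            task: str        # e.g. a VLE exercise or e-portfolio entry
            score: float     # 0.0-1.0, performance against the task's criteria

        def summative_report(events):
            """Aggregate activity into an objective per-student summary for certification."""
            totals = defaultdict(list)
            for event in events:
                totals[event.student_id].append(event.score)
            return {sid: sum(scores) / len(scores) for sid, scores in totals.items()}

        def formative_feedback(events, threshold=0.6):
            """Flag weaker tasks per student so the learning experience can be personalised."""
            advice = defaultdict(list)
            for event in events:
                if event.score < threshold:
                    advice[event.student_id].append(
                        f"Revisit '{event.task}' (scored {event.score:.0%})")
            return dict(advice)

        log = [ActivityEvent("s1", "essay draft", 0.8),
               ActivityEvent("s1", "peer review", 0.4),
               ActivityEvent("s2", "essay draft", 0.9)]
        print(summative_report(log))    # summative view of the same data
        print(formative_feedback(log))  # formative view of the same data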

    Review of research and evaluation on improving adult literacy and numeracy skills

    The purposes of this literature review are threefold. First, this review summarises findings of research from the last decade in six fields identified by the Department for Business, Innovation and Skills (BIS) as critical to its forward planning: (1) the economic, personal and social returns to learning; (2) the quality and effectiveness of provision; (3) the number of learning hours needed for skills gain; (4) learner persistence; (5) the retention and loss of skills over time; (6) the literacy and numeracy skills that are needed. Second, this review assesses the evidence base in terms of its quality and robustness, identifying gaps and recommending ways in which the evidence base can be extended and improved. Third, this review attempts to interpret the evidence base to suggest, where possible, how returns to ALN learning for individuals, employers and wider society might be increased through effective and cost-effective interventions

    English Speaking and Listening Assessment Project - Baseline. Bangladesh

    This study seeks to understand the current practices of English Language Teaching (ELT) and assessment at the secondary school level in Bangladesh, with specific focus on speaking and listening skills. The study draws upon prior research on general ELT practices, English language proficiencies and assessment practices in Bangladesh. The study aims to provide some baseline evidence about the way speaking and listening are taught currently, whether these skills are assessed informally, and if so, how this is done. The study addresses two research questions: 1. How ready are English Language Teachers in government-funded secondary schools in Bangladesh to implement continuous assessment of speaking and listening skills? 2. Are there identifiable contextual factors that promote or inhibit the development of effective assessment of listening and speaking in English? These questions were addressed with a mixed-methods design, drawing upon prior quantitative research and new qualitative fieldwork in 22 secondary schools across three divisions (Dhaka, Sylhet and Chittagong). At the suggestion of DESHE, the sample also included 2 of the ‘highest performing’ schools from Dhaka city. There are some signs of readiness for effective school-based assessment of speaking and listening skills: teachers, students and community members alike are enthusiastic for a greater emphasis on speaking and listening skills, which are highly valued. Teachers and students are now speaking mostly in English, and most teachers also attempt to organise some student talk in pairs or groups, at least briefly. Yet several factors limit students’ opportunities to develop skills at the level of CEFR A1 or A2. Firstly, teachers generally do not yet have sufficient confidence, understanding or competence to introduce effective teaching or assessment practices at CEFR A1-A2. In English lessons, students generally make short, predictable utterances or recite texts. No lessons were observed in which students had an opportunity to develop or demonstrate language functions at CEFR A1-A2. Secondly, teachers acknowledge a washback effect from final examinations, agreeing that inclusion of marks for speaking and listening would ensure teachers and students took these skills more seriously during lesson time. Thirdly, almost two thirds of secondary students achieve no CEFR level, suggesting many enter and some leave secondary education with limited communicative English language skills. One possible contributor to this may be that almost half (43%) of the ELT population are themselves only at the target level for students (CEFR A2), whilst approximately one in ten teachers (12%) do not achieve the student target (being at A1 or below). Fourthly, the Bangladesh curriculum student competency statements are generic and broad, providing little support to the development of teaching or assessment practices. The introduction and development of effective teaching and assessment strategies at CEFR A1-A2 requires a profound shift in teachers’ understanding and practice. We recommend that: 1. Future sector-wide programmes provide sustained support to develop teachers' competence in teaching and assessment of speaking and listening skills at CEFR A1-A2; 2. Options are explored for introducing assessment of these skills in terminal examinations; 3. Mechanisms are identified for improving teachers' own speaking and listening skills; 4. Student competency statements within the Bangladesh curriculum are revised to provide more guidance to teachers and students

    MASELTOV Deliverable Report 7.2: Feedback and Progress Indicators

    This document explores the range of feedback and progress indicators (FPIs) that can be used to support incidental, mobile learning for the target MASELTOV audience, recent immigrants to the EU. We propose that feedback and progress indicators (we differentiate between the two) should play an instrumental role in helping learners reflect upon individual, often isolated learning episodes mediated by single MASELTOV services, enabling them to reconceive these episodes as constituent elements of a coherent, larger learning journey. The goal of feedback and progress indicators is to support motivation for learning and, through this, the social inclusion of recent immigrants. Our underpinning assumption is that the MASELTOV software designers’ goal should be to encourage not just the resolution of immediate challenges (e.g. finding a doctor, translating a sign) but a user’s reflection on their continuing progress towards integration into the host country, including improving their language skills. We define feedback as responses to a learner’s performance against criteria of quality and as a means of directing and encouraging the learner; and progress indicators as responses indicating the current position of a learner within a larger activity or journey (often related to time). Drawing partly from the worlds of web-based language learning and video games, we identify which feedback and progress indicators may best support incidental mobile learning, and the major challenges faced. For some MASELTOV services, feedback and progress indicators for large-scale learning journeys are less apparent (e.g. TextLens, the MASELTOV tool that enables a user to take a photo of a sign and convert the image into text, potentially for future viewing or translation), while some services are explicitly educational (e.g. language lessons). However, we see all of these as potentially part of an ecology of services that can support social inclusion, so all tools should include FPIs that encourage broader learning goals. In this document we draw on the Common European Framework of Reference for Languages as appropriate, and also reflect on learner perspectives (derived from WP2 and WP9 findings) to identify suitable FPIs, informed as well by the academic literature. Furthermore, we recommend FPIs that would be suitable for the MASELTOV tools and services. The remainder of the deliverable addresses the four identified key areas where mobile incidental learning particularly requires FPIs: 1. encouraging reflection; 2. future goal setting; 3. planning; 4. social learning. It should be noted that this document is a high-level review, identifying significant literature and key examples of FPIs in practice. This document therefore offers recommendations in general terms. Decisions about specific FPIs to be implemented will be made in coordination with technical partners to identify which MASELTOV services and tools will support which specific feedback and progress indicators, and how they will be implemented within the system
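
    A minimal Python sketch of the distinction the deliverable draws between feedback (a response to performance against quality criteria) and a progress indicator (the learner's position within a larger activity or journey). The class names, the language-lesson example and the threshold below are assumptions made purely for illustration and are not part of the MASELTOV system.

        # Illustrative only: class names, service example and threshold are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class Feedback:
            """Response to a learner's performance against criteria of quality."""
            criterion: str
            met: bool
            message: str

        @dataclass
        class ProgressIndicator:
            """Current position of a learner within a larger activity or journey."""
            journey: str
            completed_steps: int
            total_steps: int

            @property
            def fraction_complete(self) -> float:
                return self.completed_steps / self.total_steps

        def after_language_lesson(score, lessons_done, lessons_total):
            """Return both kinds of FPI for one (hypothetical) lesson episode."""
            feedback = Feedback(
                criterion="pronunciation accuracy",
                met=score >= 0.7,
                message="Well done" if score >= 0.7 else "Try the listening exercise again",
            )
            progress = ProgressIndicator(
                journey="CEFR A1 language course",
                completed_steps=lessons_done,
                total_steps=lessons_total,
            )
            return feedback, progress

        fb, pi = after_language_lesson(score=0.65, lessons_done=3, lessons_total=20)
        print(fb.message)                                            # feedback on this episode
        print(f"{pi.fraction_complete:.0%} of the {pi.journey} complete")  # position in the journey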

    Technology-supported assessment
