
    Massive Open Online Courses (MOOCs): Emerging Trends in Assessment and Accreditation

    In 2014, Massive Open Online Courses (MOOCs) are expected to witness phenomenal growth in student registration compared to previous years (Lee, Stewart, & Claugar-Pop, 2014). As MOOCs continue to grow in number, there has been an increasing focus on assessment and evaluation. Because of the huge enrollments in a MOOC, it is impossible for the instructor to grade homework and evaluate each student individually. The enormous data generated by learners in a MOOC can be used to develop and refine automated assessment techniques. As a result, “Smart Systems” are being designed to track and predict learner behavior while students complete MOOC assessments. These automated assessments can automatically score and provide feedback to students on multiple-choice questions, mathematical problems, and essays. Automated assessments help teachers with grading and also support students in the learning process. These assessments are prompt, consistent, and support objectivity in assessment and evaluation (Ala-Mutka, 2005). This paper reviews the emerging trends in MOOC assessments and their application in supporting student learning and achievement. The paper concludes by describing how assessment techniques in MOOCs can help to maximize learning outcomes.
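
    As an illustration of the automated scoring described above, here is a minimal sketch in Python for the multiple-choice case; the item bank, answers, and feedback messages are hypothetical and not drawn from the paper.

        # Minimal sketch of automated multiple-choice scoring with instant
        # feedback. Items and feedback messages are invented illustrations.
        ITEM_BANK = {
            "q1": {"answer": "b", "feedback": "Review the definition of a MOOC cohort."},
            "q2": {"answer": "d", "feedback": "Revisit the section on peer assessment."},
        }

        def grade(responses):
            """Score {item_id: chosen_option} and collect diagnostic feedback."""
            score, notes = 0, []
            for item_id, chosen in responses.items():
                item = ITEM_BANK[item_id]
                if chosen == item["answer"]:
                    score += 1
                else:
                    notes.append(f"{item_id}: {item['feedback']}")
            return score, notes

        score, notes = grade({"q1": "b", "q2": "a"})
        print(f"Score: {score}/{len(ITEM_BANK)}")
        print("\n".join(notes))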

    Deep Learning Approach for cognitive competency assessment in Computer Programming subject

    This research examines the competencies that are essential for a lecturer or instructor to evaluate students on the basis of automated assessments. Competencies are the skills, knowledge, abilities, and behaviors required to perform a given task, whether in a learning or a working environment. The significance of this research is that it will assist students who are having difficulty with a Computer Programming Language course to identify their weaknesses using a Deep Learning approach. Higher education institutions face a problem in assessing students by competency level because they still mark assessments manually. In order to measure intelligence, it is necessary to identify the cluster of abilities or skills in which intelligence expresses itself; this grouping of skills and abilities is referred to as "competency". An automated assessment, in turn, is a problem-solving activity in which the student and the computer interact with no other human intervention. This review collects the different techniques that have been used, and its findings identify the main gap within the studied areas, which motivates our key research topic of interest.
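
    The abstract does not name a specific architecture, so the sketch below is only an illustration of the idea: a small feedforward network (scikit-learn's MLPClassifier) that maps automated-assessment features to a competency level. The features, labels, and data are invented.

        # Hypothetical sketch: predict a competency level from
        # automated-assessment features. Data are invented; the paper
        # does not specify a network architecture.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        # Per student: [quiz score, lab score, avg. attempts, hours on task]
        X = np.array([
            [0.35, 0.40, 5.1, 2.0],
            [0.62, 0.55, 3.2, 4.5],
            [0.90, 0.85, 1.4, 6.0],
            [0.30, 0.50, 4.8, 1.5],
            [0.70, 0.65, 2.9, 5.0],
            [0.95, 0.92, 1.1, 7.2],
        ])
        y = ["novice", "intermediate", "competent",
             "novice", "intermediate", "competent"]

        clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
        clf.fit(X, y)
        print(clf.predict([[0.55, 0.60, 3.0, 4.0]]))  # e.g. ['intermediate']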

    Assessment basics: How to implement an effective student learning assessment process

    Assessment Basics is a workshop focused on understanding the purpose behind student learning assessment, strategies to guide curricular integration, and techniques for using data for program improvement. Participants will practice leadership techniques to transfer into their own context. Experience with assessment processes will equip academic chairpersons with skills to guide faculty involvement and promote self-review of data that leads to continual program improvement. Assessments in higher education are crucial in providing indicators of educational effectiveness and of the quality of an institution's educational offerings. Assessments help students, instructors, and administrators answer various questions about student development, the value of specific courses, and the credibility of an institution. This session will enable participants to understand the foundation of learning expected in their program. Strategies to involve faculty in defining programmatic learning expectations that reflect course outcomes lead to the construction of a curricular structure designed to guide student learning. The processes shared can guide assessment of individual student learning within courses to provide achievement data that documents program effectiveness and guides improvement decisions. Participants experience assessment tools that provide valid data to measure how well a program prepares students for educational and career-relevant learning objectives, as well as confirm applied transfer of general education learning expectations. Examples of automated data collection and reporting provide insights toward an integrated culture of assessment and continual improvement. The techniques experienced provide students with evidence of their development across courses, give faculty actionable feedback over time, and give academic leaders insight into how well students are performing against program learning objectives, which can inform remediation efforts needed to address development gaps and improve educational quality. The session will be recorded, edited, and posted with the associated document to enhance participants' learning and to help each academic chairperson transfer the suggestions into their own context.
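
    A minimal sketch of the kind of automated data collection and reporting mentioned above: rolling course-level assessment results up to program learning objectives. The objectives, courses, scores, and attainment threshold are invented for illustration.

        # Roll course-level assessment results up to program objectives.
        from collections import defaultdict
        from statistics import mean

        # (course, program objective, mean score on the aligned assessment)
        results = [
            ("BIO101", "PLO1: scientific reasoning", 0.78),
            ("BIO210", "PLO1: scientific reasoning", 0.84),
            ("BIO210", "PLO2: written communication", 0.66),
            ("BIO330", "PLO2: written communication", 0.71),
        ]

        by_objective = defaultdict(list)
        for course, objective, score in results:
            by_objective[objective].append(score)

        THRESHOLD = 0.70  # attainment target set by the program
        for objective, scores in by_objective.items():
            status = "met" if mean(scores) >= THRESHOLD else "needs attention"
            print(f"{objective}: mean {mean(scores):.2f} ({status})")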

    The use of computer-based assessments in a field biology module

    Formative computer-based assessments (CBAs) for self-instruction were introduced into a Year-2 field biology module. These CBAs were provided in a ‘tutorial’ mode, where each question had context-related diagnostic feedback and tutorial pages, and a self-test mode, where the same CBA returned only a score. The summative assessments remained unchanged and consisted of an unseen CBA and written reports of field investigations. Compared with the three previous year-cohorts, the mean score for the summative CBA increased after the introduction of formative CBAs, whereas mean scores for written reports did not change. It is suggested that the increase in summative CBA mean score reflects the effectiveness of the formative CBAs in widening the students’ knowledge base. Evaluation of all assessments using an Assessment Experience Questionnaire indicated that they satisfied the ‘11 conditions under which assessment supports student learning’. Additionally, evidence is presented that the formative CBAs enhanced self-regulated student learning.
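
    A minimal sketch of the two delivery modes described above, assuming a shared item bank: 'tutorial' mode returns diagnostic feedback alongside the score, while 'self-test' mode returns the score alone. The items and feedback text are hypothetical.

        # One item bank, two delivery modes.
        ITEMS = [
            {"answer": "a", "feedback": "See the tutorial page on identification keys."},
            {"answer": "c", "feedback": "Revisit the notes on abundance vs. density."},
        ]

        def run_cba(responses, mode="tutorial"):
            score = sum(r == item["answer"] for r, item in zip(responses, ITEMS))
            if mode == "self-test":
                return {"score": score}  # score only
            notes = [item["feedback"]
                     for r, item in zip(responses, ITEMS) if r != item["answer"]]
            return {"score": score, "feedback": notes}  # score plus diagnostics

        print(run_cba(["a", "b"], mode="tutorial"))
        print(run_cba(["a", "b"], mode="self-test"))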

    Diversity in numbers: Connecting students to their world through quantitative skills

    BACKGROUND: Student underperformance on quantitative skills (QS, e.g. numeracy, statistics) is an enduring and increasing challenge in the tertiary education sector globally. A review of science programs across 13 Australian universities suggests QS teaching is often concentrated in one 100-level unit and one to three units later in the degree (Matthews et al., 2012), providing little opportunity for vertical QS development.
    AIMS: The Diversity in Numbers (DiN) project, funded by the Australian Council of Deans of Science (ACDS), evaluates an alternative curricular model for numeracy skills development: scaffolded, course-wide implementation of digital numeracy modules with embedded interactive content and rich automated feedback to maximise learning.
    DESCRIPTION OF INTERVENTION: Four pilot modules have been developed, each focusing on a core QS concept (e.g. statistical testing, unit conversions) and framed around a published article relevant to unit content, to expand student awareness of numbers as a tool to explore global diversity. This lens is central to the project's intention of addressing the ongoing lack of diversity among STEM graduates and within the STEM workforce.
    RESULTS AND CONCLUSIONS: Preliminary data will explore the impact of DiN modules on student engagement (through student feedback and Learning Management System analytics), numeracy anxiety (through pre- and post-module anxiety assessments) and learning (through performance on numeracy-related assessments).
    REFERENCES: Matthews, K. E., Belward, S., Coady, C., Rylands, L., Simbag, V., Adams, P., Peleaz, N., Thompson, K., Parry, M., & Tariq, V. (2012). The state of quantitative skills in undergraduate science education: Findings from an Australian study. Australian Government, Office for Learning and Teaching.
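
    As a sketch of what one DiN-style item with rich automated feedback might look like (the abstract does not give module content, so the question and feedback below are invented), consider a unit-conversion check that keys its feedback to common errors.

        # A unit-conversion item with feedback keyed to common mistakes.
        def check_conversion(answer_mm, length_m=0.25):
            correct = length_m * 1000  # metres -> millimetres
            if abs(answer_mm - correct) < 1e-9:
                return "Correct: 0.25 m = 250 mm."
            if abs(answer_mm - length_m * 100) < 1e-9:
                return "That is centimetres; millimetres need a factor of 1000."
            if abs(answer_mm - length_m / 1000) < 1e-9:
                return "You divided by 1000; a smaller unit means multiplying."
            return "Not quite. Multiply metres by 1000 to get millimetres."

        print(check_conversion(250.0))  # correct
        print(check_conversion(25.0))   # common error: converted to cm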

    Alternative model for the administration and analysis of research-based assessments

    Research-based assessments represent a valuable tool for both instructors and researchers interested in improving undergraduate physics education. However, the historical model for disseminating and propagating conceptual and attitudinal assessments developed by the physics education research (PER) community has not resulted in widespread adoption of these assessments within the broader community of physics instructors. Within this historical model, assessment developers create high quality, validated assessments, make them available for a wide range of instructors to use, and provide minimal (if any) support to assist with administration or analysis of the results. Here, we present and discuss an alternative model for assessment dissemination, which is characterized by centralized data collection and analysis. This model provides a greater degree of support for both researchers and instructors in order to more explicitly support adoption of research-based assessments. Specifically, we describe our experiences developing a centralized, automated system for an attitudinal assessment we previously created to examine students' epistemologies and expectations about experimental physics. This system provides a proof-of-concept that we use to discuss the advantages associated with centralized administration and data collection for research-based assessments in PER. We also discuss the challenges that we encountered while developing, maintaining, and automating this system. Ultimately, we argue that centralized administration and data collection for standardized assessments is a viable and potentially advantageous alternative to the default model characterized by decentralized administration and analysis. Moreover, with the help of online administration and automation, this model can support the long-term sustainability of centralized assessment systems.
    Comment: 7 pages, 1 figure, accepted in Phys. Rev. PE
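
    A sketch of the centralized-collection idea, assuming a simple HTTP service rather than the authors' actual system: courses POST student responses to one endpoint, which scores and stores them so analysis happens in one place. The endpoint, payload schema, and answer key are hypothetical.

        # Hypothetical centralized collection service (Flask).
        from flask import Flask, request, jsonify

        app = Flask(__name__)
        SUBMISSIONS = []  # stand-in for a real database

        ANSWER_KEY = {"q1": "agree", "q2": "disagree"}  # illustrative only

        @app.post("/submit")
        def submit():
            data = request.get_json()  # {"course": ..., "responses": {...}}
            score = sum(v == ANSWER_KEY.get(k)
                        for k, v in data["responses"].items())
            SUBMISSIONS.append({"course": data["course"], "score": score})
            return jsonify({"score": score, "n_items": len(ANSWER_KEY)})

        if __name__ == "__main__":
            app.run()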

    Psychometrics in Practice at RCEC

    A broad range of topics is dealt with in this volume: from combining the psychometric generalizability and item response theories to ideas for an integrated formative use of data-driven decision making, assessment for learning, and diagnostic testing. A number of chapters pay attention to computerized (adaptive) and classification testing. Other chapters treat the quality of testing in a general sense, but for topics like maintaining standards or the testing of writing ability, the quality of testing is dealt with more specifically. All authors are connected to RCEC as researchers. They each present one of their current research topics and provide some insight into the focus of RCEC. The selection of topics and the editing are intended to make the book of special interest to educational researchers, psychometricians, and practitioners in educational assessment.
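
    For readers unfamiliar with the item response theory mentioned above, a small illustration: under the two-parameter logistic (2PL) model, the probability that a test taker of ability theta answers an item correctly is 1 / (1 + exp(-a(theta - b))). The parameter values below are arbitrary.

        # 2PL item response model: a = discrimination, b = difficulty.
        import math

        def p_correct(theta, a, b):
            return 1.0 / (1.0 + math.exp(-a * (theta - b)))

        for theta in (-1.0, 0.0, 1.0):
            print(f"theta={theta:+.1f}: P(correct) = {p_correct(theta, 1.2, 0.0):.2f}")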

    Assessments in Mathematics, undergraduate degree

    In what follows, we question the validity of multiple-choice questionnaires for undergraduate-level mathematics courses. Our study is based on courses given at major French universities to large audiences.
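
    One quantitative way to frame the validity concern (an illustration, not taken from the paper): a student answering entirely at random still passes a multiple-choice exam with non-negligible probability. The exam size, option count, and pass mark below are arbitrary.

        # Probability of passing an MCQ exam by pure guessing (binomial tail).
        from math import comb

        def p_pass_by_guessing(n_items=20, n_options=4, pass_mark=10):
            p = 1 / n_options
            return sum(comb(n_items, k) * p**k * (1 - p)**(n_items - k)
                       for k in range(pass_mark, n_items + 1))

        print(f"P(pass by guessing) = {p_pass_by_guessing():.4f}")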
