22,161 research outputs found
Recommended from our members
Teaching linguistics: Gotta catch ’em all: Skills grading in undergraduate linguistics
Dissatisfied with traditional grading, we developed a grading system to directly assess whether students have mastered course material. We identified the set of skills students need to master in a course and provided multiple opportunities for students to demonstrate mastery of each skill. We describe in detail how we implemented the system for two undergraduate courses, Introductory Phonetics and Phonology I. Our goals were to decrease student stress, increase student learning and make students’ study efforts more effective, increase students’ metacognitive awareness, promote a growth mindset, encourage students to aim for mastery rather than partial credit, be fairer to students facing structural and institutional disadvantages, reduce our time spent on grading, and facilitate compliance with new accreditation requirements. Our own reflections and student feedback indicate that many of these goals were met.
Scholarly insight Autumn 2017: a Data Wrangler perspective
As the OU is going through several fundamental changes, it is important that strategic decisions made by Faculties and senior management are informed by evidence-based research and insights. One way Data Wranglers provide insight into the longitudinal development and performance of OU modules is the Key Metric Report 2017. A particular new element is that data can now also be unpacked and visualised at Nation level. As evidenced by the Nation-level reporting, there are substantial variations in success across the four Nations, and we hope that our interactive dashboards allow OU staff to unpack the underlying data.
The second way Data Wranglers provide insight to Faculties and Units is through the Scholarly insight report series. Building on the previous two reports, in which we reported substantial variation and inconsistencies in learning designs and assessment practices within qualifications across the OU, in this Scholarly insight Autumn 2017 report we address four big pedagogical questions that were framed and co-constructed together with the Faculties and LTI units. Many Faculties and colleagues reacted positively to our Scholarly insight Spring 2017 report, in which for the first time we were able to show empirically that students experienced substantial variations in success within 12 large OU qualifications. As evidenced in that report, 55% of the variation in students’ success over time was explained by OU institutional factors (i.e., how students were assessed within their respective module, and how well students were able to transition from the learning design of one module to the next), rather than by students’ characteristics, engagement, and behaviour.
We have received several queries and questions from Faculties and Units about how to better understand these students’ journeys, and how qualifications and module designs could be better aligned within their respective qualification(s). As these are complex conceptual and Big Pedagogy questions, in Chapter 1 we continued these complex analyses by looking at the transitional processes of the first two modules that OU students take, and how well aligned these modules and qualification paths are. In Chapter 2, we explored the more fine-grained, qualitative, and lived experiences of 19 students across a range of qualifications to understand how OU grading practices and (in)consistencies of assessment and feedback influenced their affect, behaviour, and cognition. In addition to building on previous topics, we introduced two new Scholarly insights in Chapter 3 and Chapter 4. As the OU is increasingly using learning analytics to support our staff and students, in Chapter 3 we analysed the impact of giving Predictive Learning Analytics to over 500 Associate Lecturers across 31 modules on student retention. Finally, in Chapter 4 we explored the impact of first presentations of new modules on pass rates and satisfaction, whereby we were able to bust another myth that may have profound implications for Student First Transformation.
Working organically in various Faculty sub-group meetings and LTI Units, and in a Google doc with various key stakeholders in the Faculties, we hope that our Scholarly insights can help to inform our staff, but also spark some ideas about how to further improve our module designs and qualification pathways. Of course, we are keen to hear what other topics require Scholarly insight.
Educational Reform and Disadvantaged Students: Are They Better Off or Worse Off?
This paper analyzes the effects of increased academic standards on both average achievement levels and equality of opportunity. The five policies evaluated are: (1) universal Curriculum-Based External Exit Exam Systems (CBEEES), (2) voluntary curriculum-based external exit exam systems with partial coverage, such as New York State Regents exams in 1992, (3) state minimum competency graduation tests, (4) state-defined minimums for the total number of courses students must take and pass to get a high school diploma, and (5) state-defined minimums for the number of academic courses necessary to get a diploma. We use international data to evaluate the effects of CBEEES. High school graduation standards differ substantially across states in the U.S. This allowed us to measure policy effects on student achievement and labor market success after high school by comparing states in a multiple regression framework.
Our analysis shows that only two of the policies examined both raise overall achievement and reduce achievement gaps: universal CBEEES and higher academic course graduation requirements. The other policies were less successful in raising achievement and enhancing equality of opportunity.
Variability in the Effectiveness of Psychological Interventions based on Machine Learning in STEM Education
This manuscript presents a framework to investigate the variability in the effectiveness of psychological interventions supported by Machine Learning (ML) based early-warning systems (EWS) in science, technology, engineering, and mathematics education. It emphasizes the importance of investigating the resulting variability and suggests that effective EWS cannot be designed without a deeper understanding of that variability. The framework uses an ML-based model to predict students’ academic performance early in the semester for a sophomore-level Computer Science course at a public university in the United States. The students were given psychological interventions by sending them their end-of-term performance forecast three times during the semester. A randomized control trial was designed to determine whether the interventions made an overall positive impact on students’ academic performance and whether there was variability in their impact. Results suggested that although the interventions improved academic performance, they were not equally effective at different performance levels, and that students at the same level reacted differently to these interventions.
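The randomized control trial described above asks whether the intervention’s effect varies across performance levels. A minimal sketch of that kind of subgroup comparison is below; it is not the authors’ analysis, and every record and level label is a hypothetical assumption used purely for illustration:

```python
# Illustrative sketch (not the study's code): estimating how an intervention's
# effect varies by predicted performance level in a randomized control trial.
# Records: (predicted_level, treated, passed); all values are hypothetical.
records = [
    ("low", 1, 1), ("low", 1, 0), ("low", 0, 0), ("low", 0, 0),
    ("mid", 1, 1), ("mid", 1, 1), ("mid", 0, 1), ("mid", 0, 0),
    ("high", 1, 1), ("high", 1, 1), ("high", 0, 1), ("high", 0, 1),
]

def pass_rate(level, treated):
    """Pass rate within one (performance level, treatment arm) cell."""
    group = [passed for lv, t, passed in records
             if lv == level and t == treated]
    return sum(group) / len(group)

# Per-level treatment effect: pass rate difference, treated minus control.
effects = {lv: pass_rate(lv, 1) - pass_rate(lv, 0)
           for lv in ("low", "mid", "high")}
```

With this toy data the effect is positive for the low and mid groups but zero for the high group, mirroring the paper’s finding that interventions are not equally effective at all performance levels.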
Predicting underperformance from students in upper level engineering courses
Recent research in academic analytics has focused on predicting student performance within, and sometimes across, courses for the purpose of informing early interventions. While such an endeavor has obvious merit, modern constructivist learning theory places importance on more individualized support for students. In keeping with this theory, this research describes the development of a model that predicts student performance within a course relative to the student’s past academic performance. The study uses the minimum sources of data possible while still developing an accurate model. Useful logistic models were developed using data from the institution’s student information system, learning management system, and grade books. While each source of data was able to predict student success independently, the most accurate model contained data from both the grade book and the student information system. These models were able to accurately identify students on track to underperform relative to their own cumulative grade point averages within the first seven weeks of a course, aligning with the studied institution’s existing requirements for a manual early intervention system.
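The kind of logistic model described above can be sketched in a few lines. This is an illustrative toy, not the study’s actual model: the two features (week-7 grade-book average minus cumulative GPA, and scaled LMS activity) and all data points are hypothetical assumptions:

```python
# Hypothetical sketch: a logistic model flagging students likely to
# underperform relative to their own cumulative GPA (label 1 = underperformed).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Plain stochastic gradient descent on the logistic loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)

# Toy features: [week-7 grade-book average minus cumulative GPA (0-4 scale),
#                LMS logins per week, scaled to 0-1]. All values invented.
X = [[-1.2, 0.2], [-0.8, 0.3], [-0.5, 0.4], [0.1, 0.8],
     [0.4, 0.9], [0.7, 1.0], [-0.9, 0.1], [0.5, 0.7]]
y = [1, 1, 1, 0, 0, 0, 1, 0]

w, b = train_logistic(X, y)
at_risk = predict(w, b, [-1.0, 0.2]) > 0.5  # flag for early intervention
```

A student whose early grade-book average sits well below their cumulative GPA and whose LMS activity is low is flagged; in practice a threshold other than 0.5 might be tuned to the institution’s intervention capacity.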
Strengthening Incentives for Student Effort and Learning: Michigan’s Merit Award Program?
[Excerpt] One of the primary reasons American students learn a good deal less during secondary school than students in other industrialized nations is that they devote less time and intellectual energy to the task. Accountability systems designed to get teachers to try harder and set higher standards will not produce more student learning if, as one high school teacher put it, “students are sitting back in their desks, arms crossed, waiting for their teachers to make them smart” (Zoch, 1998, p. 70).
Learning is not a passive act; it requires the time and active involvement of the learner. In a classroom with 1 teacher and 25 students, there are 25 learning hours spent for every hour of teaching time. Learning takes work, and that work is generally not going to be as much fun as hanging out with friends or watching TV. If students cannot be motivated to give up some time socializing or watching TV so that they can learn difficult material and develop high-level skills, the time and talents of teachers will be wasted.
The Role of End-of-Course Exams and Minimum Competency Exams in Standards-Based Reforms
[Excerpt] Educational reformers and most of the American public believe that most teachers ask too little of their pupils. These low expectations, they believe, result in watered-down curricula and a tolerance of mediocre teaching and inappropriate student behavior. The result is that the prophecy of low achievement becomes self-fulfilling. Although research has shown that learning gains are substantially larger when students take more demanding courses, only a minority of students enroll in these courses. There are several reasons for this. Guidance counselors in many schools allow only a select few into the most challenging courses. While most schools give students and parents the authority to overturn counselor recommendations, many families are unaware they have that power or are intimidated by the counselor’s prediction of failure in the tougher class. As one student put it: “African-American parents, they settle for less, not knowing they can get more for their students.”
Predicting NCLEX-RN performance: an exploration of student demographics, pre-program factors, and nursing program factors
Nursing programs are experiencing a decline in National Council Licensure Examination for Registered Nurses (NCLEX-RN) pass rates among graduates. While researchers have attempted to identify predictors of performance on the NCLEX-RN, identification of predictors remains elusive. Although the literature is replete with studies exploring NCLEX-RN predictors, prediction under the new 2013 NCLEX test plan and passing standards is not well established. Considering the ever-evolving diversity in students, combined with recent changes in the NCLEX-RN, further exploration of predictors of performance is warranted. Using a correlational design, the study sought to identify the predictors of NCLEX-RN performance for Bachelor of Science in Nursing (BSN) graduates. The focal research question for this study was, “Do baccalaureate nursing students’ academic outcomes predict NCLEX-RN performance?” To answer this primary question, the researcher conducted a retrospective review of student records at a single pre-licensure BSN program. A binary logistic regression was performed to model the relationship between academic outcomes and NCLEX-RN outcomes. The analysis revealed that a combination of nursing program academic outcomes predicted NCLEX-RN performance. In particular, the Adult Health course exam average, the score on the Adult Health ATI exam, ATI Comprehensive Predictor performance, and graduation GPA can predict NCLEX-RN outcomes when controlling for student profile characteristics and academic factors. This study suggests that nursing exam scores and standardized test scores can aid in predicting NCLEX-RN performance for BSN graduates. Findings from this study can provide nursing educators a foundation for understanding the factors associated with NCLEX-RN performance and offer a framework for identifying students who are at risk of NCLEX-RN failure.
Moreover, the study findings can provide insight into the additional needs of students preparing for the NCLEX-RN and guide educators in developing early intervention programs for high-risk students. Given the national decline in NCLEX-RN pass rates, early identification of at-risk students and implementation of interventions targeting high-risk students can offer a solution for reducing the number of graduates unprepared for the NCLEX-RN and alleviate the burden associated with failure.
Standards-based grading at the secondary level: A review of literature
Grading systems have become a topic of interest at the secondary level. Even though the shift to standards-based grading is a daunting task for districts, post-secondary educational institutions are taking on grading reform at even higher educational levels. It is challenging for secondary teachers to narrow down the key elements of grading and prepare for the shift in mindset needed for a standards-based system. Research shows beneficial elements for learning after implementing a full standards-based system, but transforming a district’s grading system is a major undertaking. While there are benefits for all stakeholders involved when reforming beliefs about grading practices, many districts recognize several elements that make this implementation such a difficult shift. The review examines standards-based systems at the secondary and post-secondary levels to find key elements of a standards-based system, benefits, drawbacks, and to analyze implementation at both levels of education