Analyzing the Gender Gap on an Entrance Exam for Mathematically Talented Students
We investigate the qualifying entrance exam for the University of Minnesota Talented Youth Mathematics Program (UMTYMP), a five-year accelerated program covering high school- and undergraduate-level mathematics. The exam is used to assess the computational, numerical reasoning, and geometric skills of hundreds of fifth-, sixth-, and seventh-grade students annually. It has accurately identified qualified students in past years, but female participants have consistently had lower overall scores. Believing that female applicants are equally well qualified, in 2011 we began an extensive investigation into the structure and content of the exam to determine the possible sources of these differences. After gathering and analyzing data, we made relatively modest changes in 2012 that essentially eliminated the gender bias on one version of the entrance exam, increasing the percentage of females who qualified. The other, unmodified versions in 2012 exhibited the typical gender difference of previous years. We continue to analyze the possible reasons for the gender differences while monitoring overall student performance upon entering the Program.
Increasing Statistical Literacy by Exploiting Lexical Ambiguity of Technical Terms
Instructional inattention to language poses a barrier for students in entry-level science courses, in part because students may perceive a subject as difficult solely because they do not understand its vocabulary. In addition, the technical use of terms that have different everyday meanings may cause students to misinterpret statements made by instructors, leading to an incomplete or incorrect understanding of the domain. Terms that have different technical and everyday meanings are said to have lexical ambiguity, and statistics, as a discipline, has many lexically ambiguous terms. This paper presents a cyclic process for designing activities to address lexical ambiguity in statistics. In addition, it describes three short activities, designed to have a high impact on student learning, that address two different lexically ambiguous words or word pairs in statistics. Preliminary student-level data are used to assess the efficacy of the activities, and future directions for the development of activities and research about lexical ambiguity in statistics in particular, and STEM in general, are discussed.
Using Lexical Analysis Software to Assess Student Writing in Statistics
Meaningful assessments that reveal student thinking are vital to the success of addressing the GAISE recommendation: use assessments to improve and evaluate student learning. Constructed-response questions, also known as open-response or short-answer questions, in which students must write an answer in their own words, have been shown to reveal students' understanding better than multiple-choice questions, but they are much more time-consuming to grade for classroom use or to code for research purposes. This paper describes and illustrates the use of two different software packages to analyze open-response data collected from undergraduate students' writing. The analysis and results produced by the two packages are contrasted with each other and with the results obtained from hand coding of the same data sets. The article concludes with a discussion of the advantages and limitations of the analysis options for statistics education research.