6 research outputs found

    The Drunkard's Search: Student Evaluation in Assessing Teaching Effectiveness

    As social scientists, we understand the problem of the Drunkard's Search -- the lure and perils of using easy-to-obtain but irrelevant data -- yet we are employed by institutions that are clearly searching under the lamppost for data to use in employment decisions. Researchers from various disciplines have studied and lamented the biases inherent in student course evaluations. Studies have found that these evaluations show systematic bias against women and people of color. They also may mask poor teaching practices, as they are better measures of popularity than teaching effectiveness. Each year another study is released, leading to momentary hand wringing about the weakness of course evaluations as a means of assessing faculty and warning against their use in tenure and promotion decisions. Rather than adopting alternative means of evaluating faculty teaching and thinking creatively about student feedback, administrators and faculty leaders default to these surveys, claiming there is no other way to collect data on teaching. These course surveys continue to be used despite the mounting evidence that they provide not only no evidence of teaching effectiveness, but bad evidence that does damage to faculty both directly and indirectly. In an effort to raise the profile of these discussions and push for tangible change, we offer a comprehensive literature review of the existing research on student course evaluations and their biases.

    How Business Students Use Online Faculty Evaluations and Business Faculty’s Perception of Their Students’ Usage

    Student evaluations are an important aspect of business pedagogy. Social media-based evaluations, such as RateMyProfessors.com, empower students to evaluate faculty anonymously. A perusal of the literature indicates that little to no prior research has been conducted on faculty perceptions of student usage of online evaluations. We posit that business students embody unique characteristics that influence their usage. We examine whether business students use RateMyProfessors.com in an ethical manner (i.e., honestly and without grade-related bias) and moderately (i.e., not only to rant or rave), whether gender differences exist in evaluations, and how confident students are in their evaluative abilities. We also posit that business faculty will understand how their students use online faculty evaluations. We summarize and discuss the empirical analysis of the hypotheses.

    Professor Gender, Age, and “Hotness” in Influencing College Students’ Generation and Interpretation of Professor Ratings

    Undergraduate psychology students rated expectations of a bogus professor (randomly designated a man or woman and hot versus not hot) based on an online rating and sample comments as found on RateMyProfessors.com (RMP). Five professor qualities were derived using principal components analysis (PCA): dedication, attractiveness, enhancement, fairness, and clarity. Participants rated current psychology professors on the same qualities. Current professors were divided based on gender (man or woman), age (under 35 or 35 and older), and attractiveness (at or below the median or above the median). Multivariate analysis of covariance (MANCOVA) showed that students expected hot professors to be more attractive but lower in clarity. They rated current professors as lowest in clarity when a man and 35 or older. Current professors were rated significantly lower in dedication, enhancement, fairness, and clarity when rated at or below the median on attractiveness. Results, together with previous research, suggest that numerous factors, largely out of professors' control, influence how students interpret and create professor ratings. Caution is therefore warranted in using online ratings to select courses or make hiring and promotion decisions.

    Student evaluation of teaching, social influence dynamics, and teachers' choices: An evolutionary model

    The issue of Student Evaluation of Teaching has been explored by a large literature across many decades. However, the role of social influence factors in determining teachers' responses to a given incentive and evaluation framework has been left basically unexplored. This paper makes a first attempt in this vein by considering an evolutionary game-theoretic context where teachers face a two-stage process in which their rating depends on both students' evaluation of their course and on retrospective students' evaluation of their teaching output in view of students' performance in a related follow-up course. We find that both high effort (difficult course offered) and low effort (easy course offered) outcomes may emerge, leading either to a socially optimal outcome for teachers or not, according to cases. Moreover, there may be a potential conflict between the optimal outcome for students and for teachers. We also consider possible ways to generalize our model in future research.
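    The kind of bistability this abstract describes -- where either an all-high-effort or an all-low-effort teaching population can emerge -- can be illustrated with a minimal replicator-dynamics toy model. This is a hedged sketch under assumed payoffs, not the paper's actual model: the payoff functions `high` and `low` and all numeric parameters below are illustrative inventions.

    ```python
    # Minimal replicator-dynamics sketch of a teacher population choosing
    # between "high effort" (difficult course) and "low effort" (easy course).
    # Payoffs are assumptions for illustration only: easy courses earn a fixed
    # immediate-evaluation payoff, while high effort pays off more as it
    # becomes common (standing in for social influence / retrospective
    # evaluation in a follow-up course).

    def replicator_step(x, payoff_high, payoff_low, dt=0.01):
        """One Euler step of replicator dynamics for the share x of
        high-effort teachers."""
        a = payoff_high(x)
        b = payoff_low(x)
        avg = x * a + (1 - x) * b          # population-average payoff
        return x + dt * x * (a - avg)      # share grows if above average

    def simulate(x0, payoff_high, payoff_low, steps=5000):
        """Iterate the dynamics from an initial high-effort share x0."""
        x = x0
        for _ in range(steps):
            x = replicator_step(x, payoff_high, payoff_low)
        return x

    # Illustrative payoffs (assumptions): high effort is socially reinforced,
    # low effort yields a constant payoff from easy immediate evaluations.
    high = lambda x: 0.4 + 0.8 * x
    low = lambda x: 0.7

    # The two payoffs cross at x = 0.375, so the outcome depends on the
    # initial share: below the threshold the population converges toward
    # all-low-effort, above it toward all-high-effort.
    print(round(simulate(0.2, high, low), 2))  # → 0.0 (low-effort outcome)
    print(round(simulate(0.6, high, low), 2))  # → 1.0 (high-effort outcome)
    ```

    The two basins of attraction mirror the paper's finding that either equilibrium can emerge "according to cases"; which one is socially optimal depends on how the assumed payoffs relate to student outcomes.
    
    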