    How Faculty Attitudes and Expectations toward Student Nationality Affect Writing Assessment

    Earlier research on assessment suggests that even when Native English Speaker (NES) and Non-Native English Speaker (NNES) writers make similar errors, faculty tend to assess the NNES writers more harshly. Studies indicate that evaluators may be particularly severe when grading NNES writers holistically. In an effort to provide more recent data on how faculty perceive student writers based on their nationalities, researchers at two medium-sized Midwestern universities surveyed and interviewed faculty to determine whether such discrepancies continue to exist between assessments of international and American writers, to identify what preconceptions faculty may have regarding international writers, and to explore how these notions may affect their assessment of such writers. Results indicate that while faculty continue to rate international writers lower when scoring analytically, they consistently evaluate those same writers higher when scoring holistically.

    And Then a Miracle Occurs: The Use of Computers to Score Student Writing

    The machine scoring of student writing stands as one of the hot topics in writing assessment. Companies promote these products as time- and money-saving. However, the salient question remains: Is this technology appropriate for use in the second language writing (SLW) classroom? Administrators and second language writing professionals often seem to be at odds when it comes to the use of such programs. Proponents typically argue that electronic grading is of great benefit, mainly because it facilitates scoring large numbers of student essays in a short time. Scoring efficiency appeals mainly to administrators searching for cost-effective ways to provide classroom writing instruction. Equally appealing to administrators is the notion that class size can be increased as the burden of grading is removed from the teacher. However, many second language writing professionals are dismayed by the notion of a computer scoring or responding to student writing. Although it is important that practitioners not rely solely on their initial response, it is natural that they express concern. However, as researchers, we recognize the need to thoroughly examine the topic, weighing both positive and negative outcomes of the use of such platforms. This issue needs to be studied from multiple perspectives so that teachers are informed about using computers to assess student writing. In this paper, the views of educators, administrators, and developers of artificial intelligence are examined with respect to the use of machines to score student writing. These programs are then situated in the context of writing assessment theory, and their use is critiqued in terms of pedagogical value. The paper concludes with an exploration of both the consequences and potential benefits of using these systems in second language writing classrooms, as well as suggestions to help second language writing professionals work with administrators pushing for this type of assessment for instructional purposes.

    All Teachers are Language Teachers: How Language Acquisition and Writing Assessment Affect Student Success

    Assessment is a perennial issue in the teaching of writing. And although many teachers may dread it, we cannot disregard the importance of reliable and ethical assessment. Informed assessment of student writing remains an important component of the classroom and of a teacher's repertoire. Further complicating writing assessment are issues involving language acquisition, which English language learners bring to the classroom. Second language writers, however, are not alone in their need for rhetorical and language skills. Both second language writers and native speakers of English struggle with rhetorical issues (where there may be overlap) and language issues (where there are differences). As students progress from grade school to high school and college, they encounter more complex discipline-specific genres ridden with difficult-to-process language. These heavy linguistic demands sometimes seem to obstruct progress, and students may struggle with tasks not because of cognition but because of language. As de Oliveira (2016) reminds us, "All students are language learners. All teachers are language teachers." This presentation explores that theme and reviews principles for promoting student success in writing through teaching and assessment, regardless of students' language status.

    The Quagmire of Assessment for Placement: Talking out of Both Sides of Our Mouths

    My interest in writing assessment stems from perceived injustices I witnessed while teaching English at a small western Pennsylvania community college. The philosophy at the institution in 1989, especially for basic and English as a second language (ESL) writers, was that students needed to learn grammar before they could ever hope to write. One of the courses I was assigned to teach, Developmental English I, especially stressed grammar, spelling, vocabulary, and short paragraph writing. The first class meeting of Developmental English I troubled me, for many of the students complained bitterly about their placement in what they referred to as "bonehead" or "dummy" English. They wanted to know how they had gotten there; I was unable to tell them, but I promised to investigate.

    The Promise of Directed Self-Placement for Second Language Writers

    Evaluation is far from being a neutral process. In recent years, tests have commanded increasing influence, which has implications for both individuals and society (Crusan, 2010b). One purpose for testing - writing placement, or "where to put students" (Yancey, 1999, p. 485) - has presented particular challenges, especially in writing programs at the college level. Further, because the first-year experience in college so clearly defines the academic success or failure of students (di Gennaro, 2008), placement deserves teachers' attention. Scholars have long recognized the challenges of writing placement, both in L1 and L2, calling it "the knottiest of our assessment problems" (White, 2008, p. 141; see Crusan, 2002, 2006, 2010a, 2010b; di Gennaro, 2006, 2008; Huot, 1994; Hamp-Lyons, 2002, 2011; Haswell, 1998; O'Neill, Moore, & Huot, 2009; Royer & Gilles, 2003; Weigle, 2002; White, 2008; Yancey, 1999). Placement into composition courses has been a perennially thorny issue for students whose first language is not English but who are matriculated students at American universities. At the heart of this conundrum is the question of method. How can we evaluate students? What tools can we use to evaluate them? Though these questions have been asked and answered countless times by countless writing programs, some answers have been more acceptable and successful than others. In this article, I focus on one specific placement method: directed self-placement (DSP), its varieties, advantages, and disadvantages. The strength of a writing program often lies in its assessment techniques (Crusan, 2010a, p. 30), so I strongly suggest that writing programs consider DSP as one option for placement of second language writers.

    Translingual Pedagogy and Writing Assessment: A Harmonious Union

    Discussing assessment in the teaching of writing within a translingual framework, the third panelist specifically addresses the issue of grades and questions both the appropriateness and the practicality of applying principles of translingual pedagogy to the assessment of student writing.

    Machine Scoring of Student Essays: Truth and Consequences (review)

    For some time, it has been claimed that a divide exists between commercial test developers and the academic community (White, 1990, 1996). Nowhere is this division more apparent than in regard to the machine scoring of essays [a.k.a. automated essay scoring (AES) or automated writing evaluation (AWE)]. In 2003, Shermis and Burstein published an edited collection on automated essay scoring (Automated essay scoring: A cross-disciplinary perspective), examining psychometric issues and explaining in detail how automated essay scorers such as e-rater®, IntelliMetric™, and Intelligent Essay Assessor (IEA) work. Subsequently, Ericsson and Haswell published Machine scoring of student essays: Truth and consequences in 2006, touted by some as the academy's answer to the Shermis and Burstein contribution. Both collections are fervent in their positions and serve as important equalizers in the discussion of using technology to assess writing.

    Assessment in the Second Language Writing Classroom

    Assessment in the Second Language Writing Classroom is a teacher- and prospective-teacher-friendly book, uncomplicated by the language of statistics. The book is for those who teach and assess second language writing in several different contexts: the IEP, the developmental writing classroom, and the sheltered composition classroom. In addition, teachers who work with a mixed population or teach cross-cultural composition will find the book a valuable resource. Other books have thoroughly covered the theoretical aspects of writing assessment, but none has focused as heavily as this book does on the pragmatic classroom aspects of writing assessment. Further, no book to date has included an in-depth examination of the machine scoring of writing and its effects on second language writers. Crusan not only makes a compelling case for becoming knowledgeable about L2 writing assessment but also offers the means to do so. Her highly accessible, thought-provoking presentation of the conceptual and practical dimensions of writing assessment, both for the classroom and on a larger scale, promises to engage readers who have previously found the technical detail of other works on assessment off-putting, as well as those who have had no previous exposure to the study of assessment at all.
