
    Learner Fit in Scaling Up Automated Writing Evaluation

    Valid evaluations of automated writing evaluation (AWE) design, development, and implementation should integrate the learners’ perspective in order to ensure the attainment of desired outcomes. This paper explores the learner fit quality of the Research Writing Tutor (RWT), an emerging AWE tool tested with L2 writers at an early stage of its development. Employing a mixed-methods approach, the authors sought to answer questions regarding the nature of learners’ interactional modifications with RWT and their perceptions of the appropriateness of its feedback on the communicative effectiveness of research article Introduction discourse. The findings reveal that RWT’s move-, step-, and sentence-level feedback provides various opportunities for learners to engage with the revision task at a useful level of difficulty and stimulates interaction appropriate to their individual characteristics. The authors also discuss usefulness, user-friendliness, and trust as important concepts inherent to appropriateness.
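    To make the three feedback levels concrete, the following is a minimal, hypothetical sketch of how move-, step-, and sentence-level feedback might be represented; the class, field names, and labels are illustrative assumptions, not RWT's actual data model.

```python
from dataclasses import dataclass

# Hypothetical sketch only: names and labels are illustrative, not RWT's API.
@dataclass
class SentenceFeedback:
    text: str      # the learner's sentence
    move: str      # rhetorical move (e.g., "Establishing a niche")
    step: str      # finer-grained step within the move
    comment: str   # feedback on communicative effectiveness

draft_feedback = [
    SentenceFeedback(
        text="Little is known about learner fit in AWE tools.",
        move="Establishing a niche",
        step="Indicating a gap",
        comment="Clear gap statement; consider citing prior work.",
    ),
]

for fb in draft_feedback:
    print(f"[{fb.move} > {fb.step}] {fb.comment}")
```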

    Psychometrics in Practice at RCEC

    A broad range of topics is dealt with in this volume: from combining the psychometric generalizability and item response theories to ideas for an integrated formative use of data-driven decision making, assessment for learning, and diagnostic testing. A number of chapters pay attention to computerized (adaptive) and classification testing. Other chapters treat the quality of testing in a general sense, while for topics like maintaining standards or the testing of writing ability, the quality of testing is dealt with more specifically. All authors are connected to RCEC as researchers. They each present one of their current research topics and provide some insight into the focus of RCEC. The topics were selected and edited to make the book of special interest to educational researchers, psychometricians, and practitioners in educational assessment.

    Exploring learner perceptions of and interaction behaviors using the Research Writing Tutor for research article Introduction section draft analysis

    The swiftly escalating popularity of automated writing evaluation (AWE) software in recent years has compelled much study into its potential for effective pedagogical use (Chen & Cheng, 2008; Cotos, 2011; Warschauer & Ware, 2006). Research on the effectiveness of AWE tools has concentrated primarily on determining learners’ achieved output (Warschauer & Ware, 2006) and emphasized the attainment of linguistic goals (Escudier et al., 2011); however, in-process investigations of users’ interactions with and perceptions of AWE tools remain sparse (Shute, 2008; Ware, 2011). This dissertation employed a mixed-methods approach to investigate how 11 graduate student language learners interacted with and perceived the Research Writing Tutor (RWT), a web-based AWE tool that provides discourse-oriented, discipline-specific feedback on users’ section drafts of empirical research papers. A variety of data was collected and analyzed to capture a multidimensional depiction of learners’ first-time interactions with the RWT; data comprised learners’ pre-task demographic survey responses, screen recordings of students’ interactions with the RWT, individual users’ interactional reports archived in the RWT database, instructor and researcher observations of students’ in-class RWT interactions, stimulated recall transcripts, and post-task survey responses. Descriptive statistics of the Likert-scale response data were calculated, and open-ended survey responses and stimulated recall transcripts were analyzed using open coding discourse analysis techniques or Systemic Functional Linguistic (SFL) appreciation resource analysis (Martin & Rose, 2003), prior to triangulating data for certain research questions. Results showed that participants found the RWT to be useful and were positive about the tool’s future helpfulness, provided that issues in feedback accuracy were improved. However, the participants also cited wavering trust in the RWT and its automated feedback, seemingly originating from their observations of RWT feedback inaccuracies. Systematized observations of learners’ actual and reported RWT interaction behaviors showed both unique and patterned behaviors and strategies for using the RWT for draft revision. Participants cited learner variables, such as technological background and comfort levels using computers, personality, status as a non-native speaker of English, discipline of study, and preferences for certain forms of feedback, as impacting their experience with the RWT. Findings from this research may help enlighten potential pedagogical uses of AWE programs in the university writing classroom as well as help inform the design of AWE tasks and tools to facilitate individualized learning experiences for enhanced writing development.
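    As a small illustration of the quantitative strand described above, the sketch below computes descriptive statistics for 5-point Likert-scale survey items; the item wording and response values are invented stand-ins, not the study's actual instrument or data.

```python
import statistics

# Invented example responses on a 5-point Likert scale (1 = strongly disagree,
# 5 = strongly agree); these are placeholders, not the dissertation's data.
responses = {
    "The RWT feedback was useful":        [4, 5, 3, 4, 4, 5, 2, 4, 3, 4, 5],
    "I trusted the automated feedback":   [3, 2, 4, 3, 2, 3, 4, 2, 3, 3, 2],
}

for item, scores in responses.items():
    print(f"{item}: n={len(scores)}, "
          f"mean={statistics.mean(scores):.2f}, "
          f"median={statistics.median(scores)}, "
          f"mode={statistics.mode(scores)}, "
          f"sd={statistics.stdev(scores):.2f}")
```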

    Computer-Assisted Research Writing in the Disciplines

    It is arguably very important for students to acquire writing skills from kindergarten through high school. In college, students must further develop their writing in order to continue on to graduate school successfully. Moreover, they have to be able to write good theses, dissertations, conference papers, journal manuscripts, and other research genres to obtain their graduate degree. However, opportunities to develop research writing skills are often limited to traditional student-advisor discussions (Pearson & Brew, 2002). Part of the problem is that graduate students are expected to be good at such writing because if they “can think well, they can write well” (Turner, 2012, p. 18). Education and academic literacy specialists oppose this assumption. They argue that advanced academic writing competence is too complex to be automatically acquired while learning about or doing research (Aitchison & Lee, 2006). Aspiring student-scholars need to practice and internalize a style of writing that conforms to discipline-specific conventions, which are norms of writing in particular disciplines such as Chemistry, Engineering, Agronomy, and Psychology. Motivated by this need, the Research Writing Tutor (RWT) was designed to assist the research writing of graduate students. RWT leverages the conventions of scientific argumentation in one of the most impactful research genres – the research article. This chapter first provides a theoretical background for research writing competence. Second, it discusses the need for technology that would facilitate the development of this competence. The description of RWT as an exemplar of such technology is then followed by a review of evaluation studies. The chapter concludes with recommendations for RWT integration into the classroom and with directions for further development of this tool.

    Automatic Scaling of Text for Training Second Language Reading Comprehension

    For children learning their first language, reading is one of the most effective ways to acquire new vocabulary. Studies link students who read more with larger and more complex vocabularies. For second language learners, there is a substantial barrier to reading: even books written for early first language readers assume a base vocabulary of nearly 7,000 word families and a nuanced understanding of grammar. This project will look at ways that technology can help second language learners overcome this high barrier to entry, and at the effectiveness of learning through reading for adults acquiring a foreign language. Through the implementation of Dokusha, an automatic graded reader generator for Japanese, this project will explore how advancements in natural language processing can be used to automatically simplify text for extensive reading in Japanese as a foreign language.
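    One common heuristic behind graded readers is lexical coverage: the share of running words in a text that fall within the learner's known vocabulary, with extensive-reading research often targeting roughly 98% coverage. Below is a minimal sketch of that idea, assuming a toy English word list; Dokusha itself targets Japanese and relies on richer NLP (segmentation, grammar simplification) than this illustration.

```python
import re

# Toy stand-in for a learner's known vocabulary (real systems use graded
# word-family lists of thousands of entries).
KNOWN_WORDS = {"the", "a", "dog", "ran", "to", "park", "and", "played"}

def lexical_coverage(text: str, known: set) -> float:
    """Fraction of running words in `text` found in the known-word set."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return sum(1 for t in tokens if t in known) / len(tokens)

text = "The dog ran to the park and played joyfully."
cov = lexical_coverage(text, KNOWN_WORDS)
# Coverage well below ~98% suggests the text needs simplification or glossing.
print(f"coverage = {cov:.0%}")
```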

    Research Conference 2022: Reimagining assessment: Proceedings and program

    The focus of this year’s Research Conference is on the use of assessment to support improved teaching and learning. The conference is titled ‘Reimagining assessment’ because we believe there is a need to transform the essential purposes of educational assessment to provide better information about the deep conceptual learning, skills, competencies, and personal attributes that teachers and schools now have as objectives for student learning and development. Reimagined assessments must now be focused on monitoring learning across this broader range of intended outcomes and provide quality information about the points individuals have reached in their long-term development.

    Effects of DDL technology on genre learning

    To better understand the promising effects of data-driven learning (DDL) on language learning processes and outcomes, this study explored DDL learning events enabled by the Research Writing Tutor (RWT), a web-based platform containing an English language corpus annotated to enhance rhetorical input, a concordancer that was searchable for rhetorical functions, and an automated writing evaluation engine that generated rhetorical feedback. Guided by current approaches to teaching academic writing (Lea & Street, 1998; Lillis, 2001; Swales, 2004) and the knowledge-telling/knowledge-transformation model of Bereiter and Scardamalia (1987), we set out to examine whether and how direct corpus uses afforded by RWT impact novice native and non-native writers’ genre learning and writing improvement. In an embedded mixed-methods design, written responses to DDL tasks and writing progress from first to last drafts were recorded from 23 graduate students in separate one-semester courses at a US university. The qualitative and quantitative data sets were used for within-student, within-group, and between-group comparisons, the two independent variables for the latter being course section and language background. Our findings suggest that exploiting technology-mediated corpora can foster novice writers’ exploration and application of genre conventions, enhancing development of rhetorical, formal, and procedural aspects of genre knowledge.
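    The core mechanism here, a concordancer searchable by rhetorical function, can be illustrated with a short sketch over a move-annotated corpus; the corpus entries and move labels below are invented examples, not RWT's actual data or interface.

```python
from typing import Iterator, Optional

# Invented move-annotated corpus: (rhetorical move, sentence) pairs.
corpus = [
    ("Establishing a territory", "Genre-based pedagogy has attracted growing attention."),
    ("Establishing a niche", "However, little research has examined this effect in L2 writing."),
    ("Establishing a niche", "To date, no study has compared these two feedback types."),
]

def concordance(move_query: str, keyword: Optional[str] = None) -> Iterator[str]:
    """Yield sentences annotated with the queried rhetorical move,
    optionally filtered by a keyword, as in a DDL lookup task."""
    for move, sentence in corpus:
        if move == move_query and (keyword is None or keyword.lower() in sentence.lower()):
            yield sentence

# A learner exploring how published writers signal a gap in the literature:
for hit in concordance("Establishing a niche", keyword="study"):
    print(hit)
```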

    Decreasing the human coding burden in randomized trials with text-based outcomes via model-assisted impact analysis

    For randomized trials that use text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by trained human raters. This process, the current standard, is both time-consuming and limiting: even the largest human coding efforts are typically constrained to measure only a small set of dimensions across a subsample of available texts. In this work, we present an inferential framework that can be used to increase the power of an impact assessment, given a fixed human-coding budget, by taking advantage of any “untapped” observations, those documents not manually scored due to time or resource constraints, as a supplementary resource. Our approach, a methodological combination of causal inference, survey sampling methods, and machine learning, has four steps: (1) select and code a sample of documents; (2) build a machine learning model to predict the human-coded outcomes from a set of automatically extracted text features; (3) generate machine-predicted scores for all documents and use these scores to estimate treatment impacts; and (4) adjust the final impact estimates using the residual differences between human-coded and machine-predicted outcomes. As an extension to this approach, we also develop a strategy for identifying an optimal subset of documents to code in Step 1 in order to further enhance precision. Through an extensive simulation study based on data from a recent field trial in education, we show that our proposed approach can be used to reduce the scope of a human-coding effort while maintaining nominal power to detect a significant treatment impact.
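    The four steps lend themselves to a compact illustration. The following is a minimal sketch on synthetic data; the ridge regression model, feature count, coding budget, and simple random coding sample are placeholder assumptions, not the authors' exact specification.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 10))             # automatically extracted text features
treat = rng.integers(0, 2, size=n)       # randomized treatment indicator
y = X @ rng.normal(size=10) + 0.5 * treat + rng.normal(size=n)  # "human" score

# Step 1: select and human-code a random subsample (fixed coding budget).
coded = rng.choice(n, size=400, replace=False)
is_coded = np.zeros(n, dtype=bool)
is_coded[coded] = True

# Step 2: fit a model predicting human-coded outcomes from text features.
model = Ridge().fit(X[is_coded], y[is_coded])

# Step 3: machine-predicted scores for ALL documents, then a first impact estimate.
y_hat = model.predict(X)
impact_pred = y_hat[treat == 1].mean() - y_hat[treat == 0].mean()

# Step 4: adjust using residuals (human minus predicted) on the coded sample,
# which corrects for systematic prediction error in the machine scores.
resid = y[is_coded] - y_hat[is_coded]
in_treat = treat[is_coded] == 1
impact = impact_pred + (resid[in_treat].mean() - resid[~in_treat].mean())
print(f"adjusted impact estimate: {impact:.3f} (true effect: 0.5)")
```

    The residual adjustment in Step 4 is what keeps the estimate honest: even if the machine scores are biased, the correction from the human-coded subsample pulls the estimate back toward the truth, while the uncoded documents add precision.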