Learner Fit in Scaling Up Automated Writing Evaluation
Valid evaluations of automated writing evaluation (AWE) design, development, and implementation should integrate the learners' perspective in order to ensure the attainment of desired outcomes. This paper explores the learner fit quality of the Research Writing Tutor (RWT), an emerging AWE tool tested with L2 writers at an early stage of its development. Employing a mixed-methods approach, the authors sought to answer questions regarding the nature of learners' interactional modifications with RWT and their perceptions of the appropriateness of its feedback on the communicative effectiveness of research article Introduction discourse. The findings reveal that RWT's move-, step-, and sentence-level feedback provides various opportunities for learners to engage with the revision task at a useful level of difficulty and stimulates interaction appropriate to their individual characteristics. The authors also discuss insights about usefulness, user-friendliness, and trust as important concepts inherent to appropriateness.
Psychometrics in Practice at RCEC
A broad range of topics is dealt with in this volume: from combining the psychometric generalizability and item response theories to ideas for an integrated formative use of data-driven decision making, assessment for learning, and diagnostic testing. A number of chapters pay attention to computerized (adaptive) and classification testing. Other chapters treat the quality of testing in a general sense, but for topics like maintaining standards or the testing of writing ability, the quality of testing is dealt with more specifically.
All authors are connected to RCEC as researchers. They each present one of their current research topics and provide some insight into the focus of RCEC. The topics were selected and edited so that the book should be of special interest to educational researchers, psychometricians, and practitioners in educational assessment.
Exploring learner perceptions of and interaction behaviors using the Research Writing Tutor for research article Introduction section draft analysis
The rapidly growing popularity of automated writing evaluation (AWE) software in recent years has prompted considerable research into its potential for effective pedagogical use (Chen & Cheng, 2008; Cotos, 2011; Warschauer & Ware, 2006). Research on the effectiveness of AWE tools has concentrated primarily on determining learners' achieved output (Warschauer & Ware, 2006) and emphasized the attainment of linguistic goals (Escudier et al., 2011); however, in-process investigations of users' interactions with and perceptions of AWE tools remain sparse (Shute, 2008; Ware, 2011). This dissertation employed a mixed-methods approach to investigate how 11 graduate student language learners interacted with and perceived the Research Writing Tutor (RWT), a web-based AWE tool which provides discourse-oriented, discipline-specific feedback on users' section drafts of empirical research papers. A variety of data was collected and analyzed to capture a multidimensional depiction of learners' first-time interactions with the RWT; data comprised learners' pre-task demographic survey responses, screen recordings of students' interactions with the RWT, individual users' interactional reports archived in the RWT database, instructor and researcher observations of students' in-class RWT interactions, stimulated recall transcripts, and post-task survey responses. Descriptive statistics of the Likert-scale response data were calculated, and open-ended survey responses and stimulated recall transcripts were analyzed using open coding discourse analysis techniques or Systemic Functional Linguistic (SFL) appreciation resource analysis (Martin & Rose, 2003), prior to triangulating data for certain research questions. Results showed that participants found the RWT to be useful and held positive attitudes about the tool's future helpfulness if issues in feedback accuracy were improved.
However, the participants also cited wavering trust in the RWT and its automated feedback, seemingly originating from their observations of RWT feedback inaccuracies. Systematized observations of learners' actual and reported RWT interaction behaviors showed both unique and patterned behaviors and strategies for using the RWT for draft revision. The participants cited learner variables, such as technological background and comfort level using computers, personality, status as a non-native speaker of English, discipline of study, and preferences for certain forms of feedback, as impacting their experience with the RWT. Findings from this research may help illuminate potential pedagogical uses of AWE programs in the university writing classroom as well as inform the design of AWE tasks and tools that facilitate individualized learning experiences for enhanced writing development.
How to design for persistence and retention in MOOCs?
Design of educational interventions is typically carried out following a design cycle involving phases of investigation, conceptualization, prototyping, implementation, execution and evaluation. This cycle can be applied at different levels of granularity e.g. learning activity, module, course or programme.
In this paper we consider an aspect of learner behavior that can be critical to the success of many MOOCs, namely persistence in study, and the related theme of learner retention. We reflect on the impact that considering these factors can have on design decisions at different stages in the design cycle, with the aim of enhancing MOOC design in relation to learner persistence and retention, with particular attention to the European context.
Computer-Assisted Research Writing in the Disciplines
It is arguably very important for students to acquire writing skills from kindergarten through high school. In college, students must further develop their writing in order to successfully continue on to graduate school. Moreover, they have to be able to write good theses, dissertations, conference papers, journal manuscripts, and other research genres to obtain their graduate degree. However, opportunities to develop research writing skills are often limited to traditional student-advisor discussions (Pearson & Brew, 2002). Part of the problem is that graduate students are expected to be good at such writing because if they "can think well, they can write well" (Turner, 2012, p. 18). Education and academic literacy specialists oppose this assumption. They argue that advanced academic writing competence is too complex to be automatically acquired while learning about or doing research (Aitchison & Lee, 2006). Aspiring student-scholars need to practice and internalize a style of writing that conforms to discipline-specific conventions, which are norms of writing in particular disciplines such as Chemistry, Engineering, Agronomy, and Psychology. Motivated by this need, the Research Writing Tutor (RWT) was designed to assist the research writing of graduate students. RWT leverages the conventions of scientific argumentation in one of the most impactful research genres, the research article. This chapter first provides a theoretical background for research writing competence. Second, it discusses the need for technology that would facilitate the development of this competence. The description of RWT as an exemplar of such technology is then followed by a review of evaluation studies. The chapter concludes with recommendations for RWT integration into the classroom and with directions for further development of this tool.
Automatic Scaling of Text for Training Second Language Reading Comprehension
For children learning their first language, reading is one of the most effective ways to acquire new vocabulary. Studies link students who read more with larger and more complex vocabularies. For second language learners, there is a substantial barrier to reading. Even the books written for early first language readers assume a base vocabulary of nearly 7000 word families and a nuanced understanding of grammar. This project will look at ways that technology can help second language learners overcome this high barrier to entry, and the effectiveness of learning through reading for adults acquiring a foreign language. Through the implementation of Dokusha, an automatic graded reader generator for Japanese, this project will explore how advancements in natural language processing can be used to automatically simplify text for extensive reading in Japanese as a foreign language.
Research Conference 2022: Reimagining assessment: Proceedings and program
The focus of this year's Research Conference is on the use of assessment to support improved teaching and learning. The conference is titled "Reimagining assessment" because we believe there is a need to transform the essential purposes of educational assessment to provide better information about the deep conceptual learning, skills, competencies, and personal attributes that teachers and schools now have as objectives for student learning and development. Reimagined assessments must now be focused on monitoring learning across this broader range of intended outcomes and provide quality information about the points individuals have reached in their long-term development.
Effects of DDL technology on genre learning
To better understand the promising effects of data-driven learning (DDL) on language learning processes and outcomes, this study explored DDL learning events enabled by the Research Writing Tutor (RWT), a web-based platform containing an English language corpus annotated to enhance rhetorical input, a concordancer that was searchable for rhetorical functions, and an automated writing evaluation engine that generated rhetorical feedback. Guided by current approaches to teaching academic writing (Lea & Street, 1998; Lillis, 2001; Swales, 2004) and the knowledge-telling/knowledge-transformation model of Bereiter and Scardamalia (1987), we set out to examine whether and how direct corpus uses afforded by RWT impact novice native and non-native writers' genre learning and writing improvement. In an embedded mixed-methods design, written responses to DDL tasks and writing progress from first to last drafts were recorded from 23 graduate students in separate one-semester courses at a US university. The qualitative and quantitative data sets were used for within-student, within-group, and between-group comparisons; for the latter, the two independent variables were course section and language background. Our findings suggest that exploiting technology-mediated corpora can foster novice writers' exploration and application of genre conventions, enhancing development of rhetorical, formal, and procedural aspects of genre knowledge.
Scholarly insight Spring 2018: a Data wrangler perspective
In the movie classic Back to the Future, a young Michael J. Fox is able to explore the past in a time machine developed by the slightly bizarre but exquisite Dr Brown. Unexpectedly, some small interventions changed the course of history a bit along Fox's adventures. In this fourth Scholarly Insight Report we have explored two innovative approaches to learning from OU data of the past, which hopefully in the future will make a large difference in how we support our students and design and implement our teaching and learning practices. In Chapter 1, we provide an in-depth analysis of 50,000 comments expressed by students through the Student Experience on a Module (SEAM) questionnaire. By analysing over 2.5 million words using big data approaches, our scholarly insights indicate that not all student voices are heard. Furthermore, our big data analysis points to useful avenues for exploring how student voices change over time, and for which particular modules emergent themes might arise.
In Chapter 2 we present our second innovative approach: a proof of concept of qualification pathways using graph approaches. By exploring existing data from one qualification (i.e., Psychology), we show that students make a range of pathway choices during their qualification, some of which are more successful than others. As highlighted in our previous Scholarly Insight Reports, getting data from a qualification perspective within the OU is a difficult and challenging process, and the proof of concept provided in Chapter 2 might provide a way forward to better understand and support the complex choices our students make.
In Chapter 3, we take a slightly more practically oriented and perhaps down-to-earth approach, focussing on the lessons learned with Analytics4Action. Over the last four years nearly a hundred modules have made more active use of data and insights into module presentation to support their students. In Chapter 3 several good practices are described by the LTI/TEL learning design team, as well as three innovative case studies which we hope will inspire you to try something new as well.
Working organically in various Faculty sub-group meetings and LTI Units, and in a Google Doc with various key stakeholders in the Faculties, we hope that our scholarly insights can help inform our staff, but also spark some ideas about how to further improve our module designs and qualification pathways. Of course, we are keen to hear what other topics require scholarly insight. We hope that you see some potential in the two innovative approaches, and perhaps you might want to try some new ideas in your module. While a time machine has not really been invented yet, with the increasingly rich and fine-grained data about our students and our learning practices we are getting closer to understanding what really drives our students.
Decreasing the human coding burden in randomized trials with text-based outcomes via model-assisted impact analysis
For randomized trials that use text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by trained human raters. This process, the current standard, is both time-consuming and limiting: even the largest human coding efforts are typically constrained to measure only a small set of dimensions across a subsample of available texts. In this work, we present an inferential framework that can be used to increase the power of an impact assessment, given a fixed human-coding budget, by taking advantage of any "untapped" observations (those documents not manually scored due to time or resource constraints) as a supplementary resource. Our approach, a methodological combination of causal inference, survey sampling methods, and machine learning, has four steps: (1) select and code a sample of documents; (2) build a machine learning model to predict the human-coded outcomes from a set of automatically extracted text features; (3) generate machine-predicted scores for all documents and use these scores to estimate treatment impacts; and (4) adjust the final impact estimates using the residual differences between human-coded and machine-predicted outcomes. As an extension to this approach, we also develop a strategy for identifying an optimal subset of documents to code in Step 1 in order to further enhance precision. Through an extensive simulation study based on data from a recent field trial in education, we show that our proposed approach can be used to reduce the scope of a human-coding effort while maintaining nominal power to detect a significant treatment impact.
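The four-step procedure described in this abstract can be sketched in a few lines of NumPy. Everything below is illustrative rather than the authors' implementation: the data are synthetic, the "text features" are random numbers standing in for automatically extracted features, and an ordinary least-squares fit stands in for whatever machine learning model would be used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: n documents with text features X, a random
# treatment indicator Z, and outcomes Y with a true treatment impact of 0.4.
n = 1000
X = rng.normal(size=(n, 5))
Z = rng.integers(0, 2, size=n)
Y = X @ np.array([1.0, 0.5, 0.0, 0.0, 0.0]) + 0.4 * Z + rng.normal(scale=0.5, size=n)

# Step 1: select and human-code a random sample of documents (200 of 1000).
coded = np.zeros(n, dtype=bool)
coded[rng.choice(n, size=200, replace=False)] = True

# Step 2: fit a model predicting the human-coded outcomes from text features
# (least squares with an intercept, as a stand-in for any ML model).
Xd = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Xd[coded], Y[coded], rcond=None)

# Step 3: machine-predict scores for ALL documents; naive impact estimate
# is the treatment-control difference in mean predicted scores.
Y_hat = Xd @ beta
naive_impact = Y_hat[Z == 1].mean() - Y_hat[Z == 0].mean()

# Step 4: adjust using residual differences on the human-coded subsample.
resid = Y[coded] - Y_hat[coded]
Zc = Z[coded]
adjusted_impact = naive_impact + resid[Zc == 1].mean() - resid[Zc == 0].mean()

print(round(adjusted_impact, 3))
```

Because the model here never sees the treatment indicator, the prediction-based estimate from Step 3 misses most of the effect; the residual adjustment in Step 4 recovers it from the coded subsample, which is the essence of combining machine predictions with a human-coded correction.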