
    Design and implementation of a pedagogic intervention using writing analytics

    © 2017 Asia-Pacific Society for Computers in Education. All rights reserved. Academic writing is a key skill for higher education students, but one that is often challenging to learn. A promising approach to helping students develop this skill is the use of automated tools that provide formative feedback on writing. However, such tools are not widely adopted unless they are useful for students' discipline-related writing and embedded in the curriculum. This recognition motivates an increased emphasis in the field on aligning learning analytics applications with learning design, so that analytics-driven feedback is congruent with the pedagogy and assessment regime. This paper describes the design, implementation, and evaluation of a pedagogic intervention developed for law students to make use of an automated Academic Writing Analytics tool (AWA) to improve their academic writing. In exemplifying this pedagogically aligned learning analytics intervention, we describe the development of a learning analytics platform to support the pedagogic design, illustrating its potential through example analyses of data derived from the task.

    Critical perspectives on writing analytics

    Writing Analytics focuses on the measurement and analysis of written texts, in their educational contexts, for the purpose of understanding writing processes and products and improving the teaching and learning of writing. This workshop adopts a critical, holistic perspective in which the definitions of "the system" and "success" are not restricted to IR metrics such as precision and recall, but encompass the many wider issues that aid or obstruct analytics adoption in educational settings, such as theoretical and pedagogical grounding, usability, user experience, stakeholder design engagement, practitioner development, organizational infrastructure, policy, and ethics.

    A Study on the Effectiveness of Automated Essay Marking in the Context of a Blended Learning Course Design

    This paper reports on a study undertaken at a Chinese university to investigate the effectiveness of an online automated essay marking system in the context of a Blended Learning course design. Two groups of undergraduate learners studying English were required to write essays as part of their normal course. One group had their essays marked by an online automated essay marking and feedback system; the second, control group had their essays marked by a tutor who provided feedback in the normal way. Their essay scores and attitudes to the essay writing tasks were compared. It was found that learners were not disadvantaged by the automated essay marking system. Their mean performance was better (p<0.01) than that of the tutor-marked control group for seven of the essays and showed no difference for three essays. In no case did the tutor-marked group score higher than the automated system. Correlations indicated that both groups improved significantly in performance (p<0.05) over the duration of the course and that there was a significant relationship between essay scores for the two groups (p<0.01). Attitudes to the automated system as compared to the tutor-marked system were more complex: there was a significant difference between the attitudes of those classified as low and high performers (p<0.05). In the discussion these findings are placed in a Blended Learning context.
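
    As a hedged illustration of the analyses reported above, the following sketch runs a between-group comparison of essay scores and a correlation between the groups' per-essay means on simulated data; the scores, group sizes, and test choices are invented, not the study's.

    # Hypothetical illustration only (not the study's data): the kind of tests the
    # abstract reports -- a between-group comparison of essay scores and a
    # correlation between the groups' per-essay mean scores.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Stand-in scores for one essay, marked by the automated system vs. a tutor
    auto_scores = rng.normal(72, 8, size=40)
    tutor_scores = rng.normal(68, 8, size=40)
    t_stat, p_value = stats.ttest_ind(auto_scores, tutor_scores)
    print(f"group comparison: t = {t_stat:.2f}, p = {p_value:.4f}")

    # Stand-in mean scores per essay (ten essays) for each group
    auto_means = rng.normal(70, 5, size=10)
    tutor_means = auto_means + rng.normal(0, 3, size=10)
    r, p_corr = stats.pearsonr(auto_means, tutor_means)
    print(f"between-group correlation: r = {r:.2f}, p = {p_corr:.4f}")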

    Testing academic literacy in reading and writing for university admissions

    A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Master of Arts by Research. Currently, university entrance decisions are heavily reliant on further education qualifications and language proficiency tests, with little focus on the academic literacy skills that are required to succeed at university. This thesis attempts to define what academic literacy skills are and to what extent they correlate with three measures of university success. To answer these two research questions, I first investigated what academic literacy skills are through a survey of the literature, university study skills websites and existing academic literacy tests, and from these results drew up a checklist for academic literacy test validation. I then attempted to validate a new academic literacy test through a mixed methods study: first by calculating the correlations between performance in this test and university grades, self-assessment and tutor assessment, then through a case study approach to investigate these relationships in more detail. My tentative findings are that, within the humanities and social sciences, the academic literacy test is likely to correlate strongly with university grades, both in the overall results and in two of the four marking criteria: coherence and cohesion, and engagement with sources, with some possibility of correlation in the argument criterion. The fourth criterion – academic language use – did not correlate, but this may be an effect of this particular participant sample rather than the test itself. I also suggest two areas that may be difficult to elicit under timed exam conditions: appropriate source use when sources are provided, and synthesis of ideas across two or more given sources.
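
    To illustrate the correlational step of the validation described above, here is a minimal hypothetical sketch that correlates per-criterion test scores with university grades; the criterion names come from the abstract, but the data and the choice of Spearman correlation are assumptions rather than the thesis's actual method.

    # Hypothetical sketch: correlating academic literacy test criteria with
    # university grades. Data are simulated; the thesis's scoring and statistics
    # may differ.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 25                               # stand-in number of participants
    grades = rng.normal(60, 10, size=n)  # stand-in university grades

    criteria = {
        "coherence and cohesion": grades + rng.normal(0, 6, size=n),
        "engagement with sources": grades + rng.normal(0, 6, size=n),
        "argument": grades + rng.normal(0, 12, size=n),
        "academic language use": rng.normal(60, 10, size=n),  # unrelated by construction
    }

    for name, scores in criteria.items():
        rho, p = stats.spearmanr(scores, grades)
        print(f"{name:25s} rho = {rho:+.2f}, p = {p:.3f}")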

    A Focus on Content: The Use of Rubrics in Peer Review to Guide Students and Instructors

    Students who are solving open-ended problems would benefit from formative assessment, i.e., from receiving helpful feedback and from having an instructor who is informed about their level of performance. Open-ended problems challenge existing assessment techniques. For example, such problems may have reasonable alternative solutions, or conflicting objectives. Analyses of open-ended problems are often presented as free-form text, since they require arguments and justifications for one solution over others, and students may differ in how they frame the problems according to their knowledge, beliefs and attitudes. This dissertation investigates how peer review may be used for formative assessment. Computer-Supported Peer Review in Education, a technology whose use is growing, has been shown to provide accurate summative assessment of student work, and peer feedback can indeed be helpful to students. A peer review process depends on the rubric that students use to assess and give feedback to each other. However, it is unclear how a rubric should be structured to produce feedback that is helpful to the student and, at the same time, to yield information that can be summarized for the instructor. The dissertation reports a study in which students wrote individual analyses of an open-ended legal problem, and then exchanged feedback using Comrade, a web application for peer review. The study compared two conditions: some students used a rubric that was relevant to legal argument in general (the domain-relevant rubric), while others used a rubric that addressed the conceptual issues embedded in the open-ended problem (the problem-specific rubric). While both rubric types yield peer ratings of student work that approximate the instructor's scores, feedback elicited by the domain-relevant rubric was redundant across its dimensions. In contrast, peer ratings elicited by the problem-specific rubric distinguished among its dimensions. Hierarchical Bayesian models showed that ratings from both rubrics can be fit by pooling information across students, but only problem-specific ratings are fit better given information about distinct rubric dimensions.
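
    To make the modelling idea concrete, the sketch below fits a hierarchical (partial-pooling) model of peer ratings with student and rubric-dimension effects in PyMC on simulated data; it illustrates the general technique, not the dissertation's actual models or data.

    # Hypothetical sketch of partial pooling: each peer rating is modelled as a
    # grand mean plus a per-student effect and a per-rubric-dimension effect, with
    # the effects shrunk toward zero by hierarchical priors.
    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(0)
    n_students, n_dims = 30, 4
    student = np.repeat(np.arange(n_students), n_dims)       # rated student index
    dim = np.tile(np.arange(n_dims), n_students)              # rubric dimension index
    ratings = rng.normal(3.0, 0.8, size=n_students * n_dims)  # stand-in ratings (1-5)

    with pm.Model() as model:
        mu = pm.Normal("mu", mu=3.0, sigma=1.0)        # grand mean rating
        sigma_s = pm.HalfNormal("sigma_s", sigma=1.0)  # spread of student effects
        sigma_d = pm.HalfNormal("sigma_d", sigma=1.0)  # spread of dimension effects
        a_student = pm.Normal("a_student", mu=0.0, sigma=sigma_s, shape=n_students)
        b_dim = pm.Normal("b_dim", mu=0.0, sigma=sigma_d, shape=n_dims)
        sigma = pm.HalfNormal("sigma", sigma=1.0)      # residual noise
        pm.Normal("obs", mu=mu + a_student[student] + b_dim[dim],
                  sigma=sigma, observed=ratings)
        idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

    Comparing a variant of this model without the b_dim term against the full model (for example, by held-out predictive fit) mirrors the abstract's question of whether dimension-level information improves the fit.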

    Machine Scoring of Student Essays: Truth and Consequences

    The current trend toward machine-scoring of student work, Ericsson and Haswell argue, has created an emerging issue with implications for higher education across the disciplines, but with particular importance for those in English departments and in administration. The academic community has been silent on the issue—some would say excluded from it—while the commercial entities who develop essay-scoring software have been very active. Machine Scoring of Student Essays is the first volume to seriously consider the educational mechanisms and consequences of this trend, and it offers important discussions from some of the leading scholars in writing assessment.

    Are You Being Rhetorical? A Description of Rhetorical Move Annotation Tools and Open Corpus of Sample Machine-Annotated Rhetorical Moves

    Writing analytics has emerged as a sub-field of learning analytics, with applications including the provision of formative feedback to students in developing their writing capacities. Rhetorical markers in writing have become a key feature in this feedback, with a number of tools being developed across research and teaching contexts. However, there is no shared corpus of texts annotated by these tools, nor is it clear how the tool annotations compare. Thus, resources are scarce for comparing tools for both tool development and pedagogic purposes. In this paper, we conduct such a comparison and introduce a sample corpus of texts representative of the particular genres, a subset of which has been annotated using three rhetorical analysis tools (one of which has two versions). This paper aims to provide both a description of the tools and a shared dataset in order to support extensions of existing analyses and tool design in support of writing skill development. We intend the description of these tools, which share a focus on rhetorical structures, alongside the corpus, to be a preliminary step enabling further research with regard to both tool development and tool interaction.
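
    A simple, hypothetical way to compare annotations from two such tools on a shared corpus is sentence-level agreement; the sketch below uses Cohen's kappa over invented labels and is not taken from the paper's actual comparison.

    # Hypothetical sketch: agreement between sentence-level rhetorical-move labels
    # from two tools on the same texts. The label set and data are invented; real
    # tools may use different tagsets and output formats.
    from sklearn.metrics import cohen_kappa_score, confusion_matrix

    tool_a = ["Background", "None", "Contrast", "Emphasis", "None", "Background"]
    tool_b = ["Background", "None", "Contrast", "None", "None", "Background"]

    labels = sorted(set(tool_a) | set(tool_b))
    print(f"Cohen's kappa between tools: {cohen_kappa_score(tool_a, tool_b):.2f}")
    print(confusion_matrix(tool_a, tool_b, labels=labels))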