    Product and Process in Toefl iBT Independent and Integrated Writing Tasks: A Validation Study

    This study compared the writing performance (writing products and writing processes) elicited by the TOEFL iBT integrated writing task (writing from source texts) and the TOEFL iBT independent writing task (writing from a prompt only). The aim was to determine whether writing performance varies with task type, essay score, and the academic experience of test takers, thereby clarifying the link between the scores and the underlying writing abilities being assessed. The data for the quantitative textual analysis of written products were provided by Educational Testing Service (ETS) and consisted of scored integrated and independent essays produced by 240 test takers. Coh-Metrix, an automated text analysis tool, was used to analyze the linguistic features of the 480 essays. Statistical analysis revealed that the linguistic features of the essays varied with task type and essay score; however, the academic experience of the test takers had no significant effect on most of the linguistic features investigated. To analyze the writing process, 20 English as a second language students participated in think-aloud writing sessions using the same tasks as in the textual analysis. The writing processes of the 20 participants were coded for individual writing behaviors and compared across the two tasks, and the identified behaviors were also examined in relation to essay scores and the participants' academic experience. Results indicated that writing behaviors varied with task type but, in general, not with essay scores or academic experience. The study therefore provides empirical evidence that the two tasks elicit different writing performance, justifying their concurrent use on a single test. It also validates the scoring rubrics used to evaluate the writing performance and clarifies the meaning of the scores. Implications of the study are also discussed.
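
    As a concrete illustration of the paired design described above, the sketch below compares one Coh-Metrix index across the two task types, assuming the indices have been exported to a CSV with one row per essay. The file name and column names (writer_id, task, lexical_diversity) are hypothetical stand-ins for whatever schema the actual export uses.

```python
# Paired comparison of a single linguistic index across task types.
# Assumes a hypothetical CSV export of Coh-Metrix indices, one row per essay.
import pandas as pd
from scipy import stats

df = pd.read_csv("cohmetrix_indices.csv")  # hypothetical file name

# Line up each writer's integrated and independent essays so the comparison
# is paired (each of the 240 test takers completed both tasks).
wide = df.pivot(index="writer_id", columns="task", values="lexical_diversity")

t, p = stats.ttest_rel(wide["integrated"], wide["independent"])
print(f"lexical_diversity: t = {t:.2f}, p = {p:.4f}")
```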

    An exploratory study into automated précis grading

    Automated writing evaluation (AWE) is a popular research field, but the main focus has been on evaluating argumentative essays. In this paper, we consider a different genre, namely précis texts. A précis is a written text that provides a coherent summary of the main points of a spoken or written text. We present a corpus of English précis texts, each of which received a grade assigned by a highly experienced English language teacher and was subsequently annotated following an exhaustive error typology. With this corpus we trained a machine learning model that relies on a number of linguistic, automatic summarization, and AWE features. Our results reveal that this model is able to predict the grade of précis texts with only a moderate error margin.
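
    The abstract does not specify the feature set, so the sketch below only illustrates the general shape of such a grading model: a regressor trained on per-text feature vectors and evaluated by its error margin. The feature columns here are invented placeholders, not the paper's features.

```python
# Grade prediction from per-text features; feature names are illustrative.
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

data = pd.read_csv("precis_features.csv")  # hypothetical feature table
X = data[["error_count", "summary_overlap", "type_token_ratio"]]
y = data["teacher_grade"]

# sklearn reports negated MAE; flip the sign for readability.
mae = -cross_val_score(Ridge(), X, y,
                       scoring="neg_mean_absolute_error", cv=5).mean()
print(f"cross-validated mean absolute error: {mae:.2f} grade points")
```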

    Blog posts and traditional assignments by first- and second-language writers

    This study investigates differences in the language and discourse characteristics of course blogs and traditional academic submissions produced in English by native (L1) and advanced second language (L2) writers. One hundred and fifty-two texts generated by 38 graduate students within the context of the same Master’s level course were analysed using Coh-Metrix indices at the surface code, textbase, and situation model levels. The two text types differed in their lexical sophistication, syntactic complexity, use of cohesion, and agency. Overall, the traditional course assignments were more formal, lexically sophisticated, and syntactically complex, while the blog posts contained more semantic and situational redundancy, resulting in higher readability, and communicated a clearer sense of agency. There were also reliable differences between the textual artefacts generated by the L1 and L2 writers, one of which was the more traditionally impersonal academic style of the L2 texts. Although no interaction was observed between the two independent variables in the Coh-Metrix analyses, an additional analysis of human ratings showed that the blog posts were rated lower on use of language than the traditional assignments for the L2, but not the L1, writers. Limitations of the computational text analysis and pedagogical implications of the findings are considered.
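
    Since the study crosses two factors (text type and writer group) and reports no interaction between them, a natural analysis is a two-way ANOVA per Coh-Metrix index. The sketch below shows that test for one index; the CSV layout and column names are assumed, not taken from the study.

```python
# Two-way ANOVA (text type x writer group) on one Coh-Metrix index.
# Column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("texts_cohmetrix.csv")

# The interaction term tests whether text-type differences depend on L1/L2 status.
model = smf.ols("syntactic_complexity ~ C(text_type) * C(writer_group)",
                data=df).fit()
print(anova_lm(model, typ=2))
```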

    Applications of Text Analysis Tools for Spoken Response Grading

    Sentiment and Sentence Similarity as Predictors of Integrated and Independent L2 Writing Performance

    This study used sentiment and sentence similarity analyses, two Natural Language Processing techniques, to examine whether and how well they could predict L2 writing performance under integrated and independent task conditions. The data sources were an integrated L2 writing corpus of 185 literary analysis essays and an independent L2 writing corpus of 500 argumentative essays, both compiled in higher education contexts. Both essay groups were scored between 0 and 100. Two Python libraries, TextBlob and SpaCy, were used to generate sentiment and sentence similarity data. Using sentiment (polarity and subjectivity) and sentence similarity variables, regression models were built and 95% prediction intervals were compared for the integrated and independent corpora. The results showed that integrated L2 writing performance could be predicted by subjectivity and sentence similarity, whereas only subjectivity predicted independent L2 writing performance. The prediction interval of subjectivity for the independent writing model was narrower than the corresponding interval for integrated writing. The results show that sentiment and sentence similarity algorithms can generate complementary data to improve more complex multivariate models of L2 writing performance.
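
    The study names TextBlob and SpaCy as its tooling, so the feature extraction can be sketched fairly directly. The exact preprocessing and the operational definition of sentence similarity are not given in the abstract; the version below uses mean similarity of adjacent sentences as one plausible reading.

```python
# Sentiment (polarity, subjectivity) via TextBlob and sentence similarity
# via spaCy word vectors; the similarity definition is an assumption.
import spacy
from textblob import TextBlob

nlp = spacy.load("en_core_web_md")  # the md model ships with word vectors

def essay_features(text):
    blob = TextBlob(text)
    sents = list(nlp(text).sents)
    # Mean similarity of adjacent sentence pairs as the similarity score.
    sims = [a.similarity(b) for a, b in zip(sents, sents[1:])]
    return {
        "polarity": blob.sentiment.polarity,          # -1 (negative) to 1
        "subjectivity": blob.sentiment.subjectivity,  # 0 (objective) to 1
        "sentence_similarity": sum(sims) / len(sims) if sims else 0.0,
    }

print(essay_features("The novel is bleak. Its bleakness mirrors the era."))
```

    Features like these would then enter the regression models from which the study's 95% prediction intervals are computed and compared across the two corpora.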

    ReaderBench goes Online: A Comprehension-Centered Framework for Educational Purposes

    In this paper we introduce the online version of our ReaderBench framework, which includes multilingual, comprehension-centered web services designed to address a wide range of individual and collaborative learning scenarios. First, students can be engaged in reading course material and then eliciting their understanding of it; the reading strategies component provides an in-depth perspective on comprehension processes. Second, students can write an essay or a summary; the automated essay grading component gives them access to more than 200 textual complexity indices covering lexical, syntactic, semantic, and discourse structure measurements. Third, students can discuss in a chat or a forum; the Computer Supported Collaborative Learning (CSCL) component provides in-depth conversation analysis by evaluating each member’s involvement in the CSCL environment. Finally, the sentiment analysis, semantic models, and topic mining components provide a clearer perspective on learners’ points of view and underlying interests.
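
    The components are described as web services, so a client would presumably interact with them over HTTP. The snippet below only illustrates that shape of interaction; the URL and the request/response fields are hypothetical placeholders, not ReaderBench's actual API (consult the framework's documentation for the real routes).

```python
# Hypothetical client call to a textual-complexity web service of the kind
# described above; the endpoint and payload fields are placeholders.
import requests

payload = {
    "text": "Students summarize the chapter in their own words.",
    "language": "en",  # the framework is multilingual
}
resp = requests.post("https://example.org/api/complexity", json=payload)
resp.raise_for_status()
for index, value in resp.json().items():  # e.g. lexical/syntactic indices
    print(index, value)
```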

    A Corpus-Based Analysis of Cohesion in L2 Writing by Undergraduates in Ecuador

    To examine the nature of cohesion in L2 writing, the present study addressed three research questions: (1) What types of cohesive relations occur in L2 writing at the sentence, paragraph, and whole-text levels? (2) What is the relationship between lexico-grammatical cohesion features and teachers’ judgements of writing quality? (3) Do the expectations of cohesion suggested by the CEFR match what is found in student writing? To answer these questions, cohesion was analysed in a corpus of 240 essays and 240 emails from college-level students learning English as a foreign language in Ecuador. Each text included a score, i.e. the teacher’s judgement of writing quality, aligned to the upper-intermediate (B2) level of the Common European Framework of Reference for learning, teaching, and assessing English as a foreign language. The analysis considered the lexical and grammatical items L2 students use to build relationships of meaning within sentences, across paragraphs, and over the entire text. Utilising Natural Language Processing tools (e.g., TAACO, TextInspector, NVivo), the analysis focused on determining which cohesion features (e.g., word repetition/overlap, semantic similarity, connective words) predicted the teachers’ judgements of writing quality in the collected essays and emails. The findings indicate that L2 writing is characterised by word overlap and synonyms at the paragraph level and, to a lesser degree, by cohesion between sentences and across the entire text (e.g., connective words). Whilst these cohesion features predicted the teachers’ scores both positively and negatively, a cautious interpretation of these findings is required, as many factors beyond cohesion are likely to have influenced the allocation of scores in L2 writing.
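
    To make the feature types concrete, TAACO-style cohesion indices can be roughly approximated in plain Python. The sketch below computes two toy versions (adjacent-sentence word overlap and connective frequency); real TAACO indices are more elaborate, and the connective list here is a tiny illustrative sample.

```python
# Two simplified cohesion features: adjacent-sentence lexical overlap and
# connective rate. Illustrative only; not TAACO's actual computations.
import re

CONNECTIVES = {"however", "therefore", "moreover", "because", "although"}

def cohesion_features(text):
    sents = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    tokens = [set(re.findall(r"[a-z']+", s.lower())) for s in sents]
    # Jaccard overlap between each pair of adjacent sentences.
    overlaps = [len(a & b) / max(len(a | b), 1)
                for a, b in zip(tokens, tokens[1:])]
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "adjacent_overlap": sum(overlaps) / len(overlaps) if overlaps else 0.0,
        "connective_rate": sum(w in CONNECTIVES for w in words) / max(len(words), 1),
    }

print(cohesion_features("Cohesion matters. However, cohesion alone is not quality."))
```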