A systemic functional perspective on automated writing evaluation: formative feedback on causal discourse
Constructing explanations is a key communicative function in academic literacy; several disciplines, including science, are dominated by causal explanations (Mohan & Slater, 2004; Slater, 2004; Wellington & Osborne, 2001). For academic success, students need to learn to write well about causes and effects with their instructors’ support, which makes formative assessment of causal discourse necessary (Slater & Mohan, 2010). However, manual evaluation of causal discourse is time-consuming and impractical for writing instructors, so automated evaluation of causal discourse, a capability that current automated writing evaluation (AWE) systems lack, is required. Addressing these needs, this dissertation aimed to develop an automated causal discourse evaluation tool (ACDET) and to empirically evaluate learners’ causal discourse development with ACDET in academic writing classes.
ACDET was developed using three approaches: a functional linguistic approach, a hybrid natural language processing approach combining rule-based and statistical methods, and a pedagogical approach. The linguistic approach identified causal discourse features through the analysis of a small corpus of texts about the causes and effects of economic events. ACDET detects seven types of causal discourse features and generates formative feedback based on them, including causal conjunctions, causal adverbs, causal prepositions, causal verbs, causal adjectives, and causal nouns. The natural language processing approach enabled the assignment of part-of-speech tags to sentences and words and the creation of hand-coded rules for detecting causal discourse features. The pedagogical approach determined ACDET’s feedback features and was informed by the theoretical perspectives of the Interaction Hypothesis and Systemic Functional Linguistics, as well as by findings of research on causal discourse development.
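The rule-based side of such a hybrid approach can be illustrated with a much-simplified sketch. The lexicon entries, category assignments, and matching logic below are invented for illustration and are not ACDET’s actual rules; in particular, the sketch collapses the part-of-speech-tagging step into a hand-coded lexicon keyed by surface form.

```python
# Illustrative sketch only (not ACDET's actual rules): a tiny rule-based
# detector that matches a hand-coded lexicon of causal markers, grouped
# into the feature categories the abstract names.
CAUSAL_LEXICON = {
    "because": "causal_conjunction",
    "since": "causal_conjunction",
    "consequently": "causal_adverb",
    "therefore": "causal_adverb",
    "due to": "causal_preposition",
    "because of": "causal_preposition",
    "cause": "causal_verb",
    "lead to": "causal_verb",
    "responsible": "causal_adjective",
    "effect": "causal_noun",
    "consequence": "causal_noun",
}

def detect_causal_features(sentence: str) -> list:
    """Return (marker, category) pairs found in a sentence.

    Two-word markers are matched first so that 'because of' is not
    mistaken for the bare conjunction 'because'.
    """
    tokens = sentence.lower().replace(",", "").replace(".", "").split()
    found = []
    i = 0
    while i < len(tokens):
        bigram = " ".join(tokens[i:i + 2])
        if bigram in CAUSAL_LEXICON:
            found.append((bigram, CAUSAL_LEXICON[bigram]))
            i += 2
        elif tokens[i] in CAUSAL_LEXICON:
            found.append((tokens[i], CAUSAL_LEXICON[tokens[i]]))
            i += 1
        else:
            i += 1
    return found

print(detect_causal_features(
    "Inflation rose because of supply shocks, and therefore demand fell."))
# → [('because of', 'causal_preposition'), ('therefore', 'causal_adverb')]
```

A real system would disambiguate markers by part-of-speech tag (e.g., "cause" as noun vs. verb) rather than by surface form alone, which is where the statistical tagging component comes in.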
Causal discourse development with ACDET was empirically evaluated through a qualitative study in which four research questions investigated two criteria of a computer-assisted language learning evaluation framework: language learning potential (i.e., focus on causal discourse form, interactional modifications, and causal discourse development) and focus on causal meaning. The participants were 32 English-as-a-second-language learners enrolled in two academic writing classes. Data consisted of pre- and post-tests, ACDET’s text-level feedback reports, cause-and-effect assignment drafts, screen-capture recordings, semi-structured interviews, and questionnaires.
The findings indicate the language learning potential of ACDET: the tool drew learners’ attention to causal discourse form and created opportunities for interactional modifications, but it resulted in only limited causal discourse development. The findings also reveal that ACDET drew learners’ attention to causal meaning.
This study is an important attempt in the field of AWE to analyze meaning in written discourse automatically and to provide causal-discourse-specific feedback. Notably, the empirical evaluation of ACDET was based on process-oriented data revealing how students used the tool in class. The findings have important implications for the refinement of ACDET, the development of AWE systems, and research on causal discourse development.
A telecollaborative approach to foster students' critical thinking skills
This study reports on a telecollaborative approach to fostering students' critical thinking skills, specifically helping them gain knowledge about a different educational culture and develop a critical perspective on their own educational culture at the university. The study examines the extent to which participation in telecollaboration enabled students to complete a critical thinking task, students' overall impressions of the telecollaboration, and the factors that affected the perceived success of their telecollaborative learning experiences. Undergraduate students taking a Critical Thinking course at a university in Turkey (n=53) telecollaborated with undergraduate students at a university in the USA for three weeks. They were given a critical thinking task in which they were asked (a) to develop discussion questions that would elicit from their US partners the information they needed for their arguments, (b) to exchange information with their partners, (c) to compare their education with US education and analyze their own education from a critical-thinking perspective, (d) to develop three written arguments based on the telecollaboratively exchanged information as their final product, and (e) to reflect on the whole telecollaborative learning process. According to the analysis of their written argument grades and survey responses, telecollaboration provided students with an effective medium for completing the critical thinking task, although some students reported experiencing problems. Suggestions are offered for improving learning experiences in future telecollaborative implementations.
Automated Error Detection for Developing Grammar Proficiency of ESL Learners
Thanks to natural language processing technologies, computer programs are actively being used not only for holistic scoring but also for formative evaluation of writing. CyWrite is one such program under development. It is built upon Second Language Acquisition theories and aims to assist ESL learners in higher education by providing effective formative feedback that facilitates autonomous learning and the improvement of their writing skills. In this study, we focus on CyWrite’s capacity to detect grammatical errors in student writing. We specifically report on (1) the computational and pedagogical approaches to developing the tool with respect to students’ grammatical accuracy, and (2) the performance of our grammatical analyzer. We evaluated CyWrite on a corpus of essays written by ESL undergraduate students with regard to four types of grammatical errors: quantifiers, subject-verb agreement, articles, and run-on sentences. We compared CyWrite’s performance at detecting these errors with that of a well-known commercially available AWE tool, Criterion. Our findings demonstrated better performance metrics for our tool compared to Criterion, and a deeper analysis of false positives and false negatives shed light on how CyWrite’s performance can be improved.
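Comparisons of error detectors in terms of false positives and false negatives typically rest on precision and recall. A minimal sketch of these standard metrics follows; the counts used in the example are made up, not figures from the study.

```python
# Standard precision/recall/F1 computation from true-positive (tp),
# false-positive (fp), and false-negative (fn) counts.
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Precision penalizes false positives (flagging correct text as
    erroneous); recall penalizes false negatives (missing real errors)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical counts for one error type (e.g., subject-verb agreement):
p, r, f = precision_recall_f1(tp=40, fp=10, fn=20)
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")
# → precision=0.80 recall=0.67 f1=0.73
```

For a feedback tool, precision is often weighted more heavily than recall, since false alarms can erode learners' trust in the feedback.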
Combined deployable keystroke logging and eyetracking for investigating L2 writing fluency
Although fluency is an important sub-construct of language proficiency, it has not received as much attention in L2 writing research as complexity and accuracy have, partly due to the lack of methodological approaches for analyzing large datasets of writing-process data. This article presents a method of time-aligned keystroke logging and eye tracking and reports an empirical study investigating L2 writing fluency through this method. Twenty-four undergraduate students at a private university in Turkey performed two writing tasks delivered through a web text editor with embedded keystroke logging and eye-tracking capabilities. Linear mixed-effects models were fit to predict indices of pausing and reading behaviors based on language status (L1 vs. L2) and linguistic context factors. Findings revealed differences between pausing and eye-fixation behavior in L1 and L2 writing processes. The paper concludes by discussing the affordances of the proposed method from theoretical and practical standpoints.
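Pausing indices of the kind such models predict are typically derived from inter-keystroke intervals in the log. The sketch below is illustrative only, not the paper's actual pipeline, and the 2-second pause threshold is a common convention in keystroke-logging research rather than a value taken from the study.

```python
# Illustrative sketch: deriving two simple pausing indices from a
# timestamped keystroke log (onset times in milliseconds).
def pause_indices(timestamps_ms: list, threshold_ms: int = 2000):
    """Return (pause_count, mean_pause_ms), where a 'pause' is any
    inter-keystroke interval at or above the threshold."""
    intervals = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    pauses = [iv for iv in intervals if iv >= threshold_ms]
    mean_pause = sum(pauses) / len(pauses) if pauses else 0.0
    return len(pauses), mean_pause

# Hypothetical log for a short burst of typing:
log = [0, 150, 320, 2900, 3050, 7100, 7240]
print(pause_indices(log))  # → (2, 3315.0)
```

Per-writer indices like these can then serve as the response variables in linear mixed-effects models, with language status and linguistic context entered as fixed effects and writers (and tasks) as random effects.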
The CyWrite article above is published as Feng, H.-H.*, Saricaoglu, A.*, & Chukharev-Hudilainen, E. (2016). Automated error detection for developing grammar proficiency of ESL learners. CALICO Journal, 33(1), 49–70. DOI: 10.1558/cj.v33i1.26507. Posted with permission.