100 research outputs found

    A Dependency Annotation Scheme to Extract Syntactic Features in Indonesian Sentences

    In languages with fixed word order, syntactic information is useful for solving natural language processing (NLP) problems. In languages such as Indonesian, however, which has a relatively free word order, the usefulness of syntactic information has yet to be determined. In this study, a dependency annotation scheme for extracting syntactic features from a sentence is proposed. The scheme adapts the Stanford typed dependency (SD) annotation scheme to cope with phenomena of the Indonesian language such as ellipses, clitics, and non-verb clauses. This adapted scheme is then extended in response to the inability to avoid certain ambiguities in assigning heads and relations. The accuracy of the two annotation schemes is compared, and the usefulness of the extended scheme is assessed using syntactic features extracted from dependency-annotated sentences in a preposition error correction task. The experimental results indicate that the extended annotation scheme improved the accuracy of a dependency parser, and the error correction task demonstrates that training data with syntactic features yield better corrections than training data without them, thus lending a positive answer to the research question.
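To make the idea of "syntactic features from a dependency parse" concrete, here is a minimal sketch (not the paper's code) of extracting features for a preposition from a dependency-annotated sentence. The tuple layout and the relation labels ("case", "obl") follow common dependency conventions and are assumptions, not the paper's exact label set.

```python
# Tokens are (id, form, head_id, relation) tuples; head_id 0 means ROOT.

def preposition_features(tokens, prep_id):
    """Return head/dependent features for the token at prep_id."""
    by_id = {t[0]: t for t in tokens}
    _, form, head_id, rel = by_id[prep_id]
    head = by_id.get(head_id)
    features = {
        "prep": form.lower(),
        "relation": rel,
        "head_form": head[1].lower() if head else "ROOT",
    }
    # The word the head itself attaches to often disambiguates
    # preposition choice (e.g. verb vs. noun attachment).
    if head:
        grand = by_id.get(head[2])
        features["grandhead_form"] = grand[1].lower() if grand else "ROOT"
    return features

# Toy sentence: "She arrived at school", with "at" attached to "school".
sent = [
    (1, "She", 2, "nsubj"),
    (2, "arrived", 0, "root"),
    (3, "at", 4, "case"),
    (4, "school", 2, "obl"),
]
print(preposition_features(sent, 3))
```

A preposition error corrector can then treat `head_form` and `grandhead_form` as categorical features when predicting the correct preposition.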

    Detecting grammatical errors with treebank-induced, probabilistic parsers

    Today's grammar checkers often use hand-crafted rule systems that define acceptable language. The development of such rule systems is labour-intensive and has to be repeated for each language. At the same time, grammars automatically induced from syntactically annotated corpora (treebanks) are successfully employed in other applications, for example text understanding and machine translation. At first glance, treebank-induced grammars seem to be unsuitable for grammar checking as they massively over-generate and fail to reject ungrammatical input due to their high robustness. We present three new methods for judging the grammaticality of a sentence with probabilistic, treebank-induced grammars, demonstrating that such grammars can be successfully applied to automatically judge the grammaticality of an input string. Our best-performing method exploits the differences between parse results for grammars trained on grammatical and ungrammatical treebanks. The second approach builds an estimator of the probability of the most likely parse using grammatical training data that has previously been parsed and annotated with parse probabilities. If the estimated probability of an input sentence (whose grammaticality is to be judged by the system) is higher by a certain amount than the actual parse probability, the sentence is flagged as ungrammatical. The third approach extracts discriminative parse tree fragments in the form of CFG rules from parsed grammatical and ungrammatical corpora and trains a binary classifier to distinguish grammatical from ungrammatical sentences. The three approaches are evaluated on a large test set of grammatical and ungrammatical sentences. The ungrammatical test set is generated automatically by inserting common grammatical errors into the British National Corpus. 
The results are compared to two traditional approaches: one that uses a hand-crafted, discriminative grammar (the XLE ParGram English LFG) and one based on part-of-speech n-grams. In addition, the baseline methods and the new methods are combined in a machine-learning-based framework, yielding further improvements.
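The decision rule of the second approach can be sketched in a few lines; the margin value and the log-probabilities below are illustrative assumptions, not figures from the thesis.

```python
# Second approach, sketched: an estimator (trained on parsed grammatical
# text) predicts the parse log-probability a sentence of this shape
# "should" receive; if that estimate exceeds the parser's actual
# log-probability by more than a margin, the sentence is flagged.

def flag_ungrammatical(estimated_logprob, actual_logprob, margin=5.0):
    """Flag the sentence if it parses much worse than expected."""
    return (estimated_logprob - actual_logprob) > margin

# Hypothetical numbers: the estimator predicts -42.0, but the parser
# can only reach -55.3 on this input, so the sentence is flagged.
print(flag_ungrammatical(-42.0, -55.3))
# Within the margin: not flagged.
print(flag_ungrammatical(-42.0, -44.1))
```

The margin absorbs the estimator's own error: only sentences that parse substantially worse than comparable grammatical ones are rejected, which keeps the over-generating treebank grammar usable for grammaticality judgements.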

    AN ANALYSIS ON GRAMMATICAL ERRORS MADE BY THE SECOND GRADE STUDENTS IN WRITING DESCRIPTIVE TEXT OF SMP NEGERI 12 KOTA TEGAL IN ACADEMIC YEAR 2019/2020

    ANEBA, SHARL. LA. 2021. 1614500030. An Analysis on Grammatical Errors Made by the Second Grade Students of SMP Negeri 12 Kota Tegal in Academic Year 2019/2020. Research Project. English Education, Teacher Training and Education Faculty, Pancasakti University Tegal. First Advisor: Yuvita, M.Pd.; Second Advisor: Nur Aflahatun, M.Pd. Keywords: writing, grammatical errors, descriptive text. Writing is a skill that must be mastered by students. Having mastered it, students are able to communicate with others through several kinds of genre-based writing such as descriptive, recount, narrative, procedure, and report texts. They need to consider the grammatical components of English that differ from the Indonesian language. This study was carried out to find the types and sources of grammatical errors in descriptive text writing made by the second grade students of SMP Negeri 12 Tegal in academic year 2019/2020. The subjects of this research are the second grade students of SMP Negeri 12 Tegal; from a sample of 30 students, the researcher took 20 students' answers. The method used was a case study, a form of qualitative research. The data were presented as a descriptive analysis, and the error analysis procedure followed Dulay's theory. The school's English results, based on its UNBK ranking, are categorized as less favourable, which motivated an interest in the English language skills of its second grade students, especially their writing skills. The research found 98 errors in the students' descriptive text writing. These consist of false analogy (13 errors, or 14.94%), misanalysis (25 errors, or 28.73%), incomplete rule application (7 errors, or 8.04%), exploiting redundancy (11 errors, or 12.64%), overlooking co-occurrence restrictions (11 errors, or 12.64%), and overgeneralization (12 errors, or 13.79%). Based on these totals, the researcher found that misanalysis is the most frequent error type made by the students.

    The role of feedback in the processes and outcomes of academic writing in English as a foreign language at intermediate and advanced levels

    Providing feedback on students’ texts is one of the essential components of teaching second language writing. However, whether and to what extent students benefit from feedback has been an issue of considerable debate in the literature. While many researchers have stressed its importance, others have expressed doubts about its effectiveness. Notwithstanding these continuing and well-established debates, instructors consider feedback a worthwhile pedagogical practice for second language learning. Based on this premise, I conducted three experimental studies to investigate the role of written feedback in Myanmar and Hungarian tertiary EFL classrooms. Additionally, I studied syntactic features and language-related error patterns in Hungarian and Myanmar students’ writing. This was done to understand how students with different writing proficiency acted upon teacher and automated feedback. The first study examined the efficacy of feedback on Myanmar students’ writing over a 13-week semester and how automated feedback provided by Grammarly could be integrated into writing instruction as an assistance tool for writing teachers. Results from pre- and post-tests demonstrated that students’ writing performance improved along the lines of four assessment criteria: task achievement, coherence and cohesion, grammatical range and accuracy, and lexical range and accuracy. Further results from a written feedback analysis revealed that the free version of Grammarly provided feedback on lower-level writing issues such as articles and prepositions, whereas teacher feedback covered both lower- and higher-level writing concerns. These findings suggested a potential for integrating automated feedback into writing instruction. As limited attention had been given to how feedback influences aspects of writing development beyond accuracy, the second study examined how feedback influences the syntactic complexity of Myanmar students’ essays.
Results from paired sample t-tests revealed no significant differences in the syntactic complexity of students’ writing, whether the comparison was made between initial and revised texts or between pre- and post-tests. These findings suggested that feedback on students’ writing does not lead them to write less structurally complex texts, despite not resulting in syntactic complexity gains. The syntactic complexity of students’ revised texts varied among high-, mid-, and low-achieving students. These variations could be attributed to proficiency levels, writing prompts, genre differences, and feedback sources. The rationale for conducting the third study was based on the theoretical orientation that differential success in learners’ gains from feedback largely depends on their engagement with the feedback rather than on the feedback itself. Along these lines, I examined Hungarian students’ behavioural engagement (i.e., students’ uptake or revisions prompted by written feedback) with teacher and automated feedback in an EFL writing course. In addition to the engagement with form-focused feedback examined in the first study, I considered meaning-focused feedback, as feedback in a writing course typically covers both linguistic and rhetorical aspects of writing. The results showed differences in feedback focus (the teacher provided both form- and meaning-focused feedback) with unexpected outcomes: students’ uptake of feedback resulted in moderate to low levels of engagement. Participants incorporated more form-focused feedback than meaning-focused feedback into their revisions. These findings contribute to our understanding of students’ engagement with writing tasks, levels of trust, and the possible impact of students’ language proficiency on their engagement with feedback.
Following the finding that Myanmar and Hungarian students responded differently to feedback on their writing, I designed a follow-up study to compare syntactic features of their writing as indices of their English writing proficiency. In addition, I examined language-related errors in their texts to capture differences in the error patterns of the two groups. Results from paired sample t-tests showed that most syntactic complexity indices distinguished the essays produced by the two groups: length of production units, sentence complexity, and subordination indices. Similarly, statistically significant differences were found in language-related error patterns: errors were more prevalent in Myanmar students’ essays. The implications for research and pedagogical practices in EFL writing classes are discussed with reference to the rationale for each study.
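Two of the kinds of indices mentioned above (length of production units, subordination) can be illustrated with a rough sketch. Studies of this kind typically use dedicated tools such as a syntactic complexity analyzer; the word-counting heuristics and the subordinator list below are illustrative assumptions only.

```python
import re

# Crude stand-in for a subordination measure: count a few common
# English subordinators per sentence. A real analyzer parses clauses.
SUBORDINATORS = {"because", "although", "when", "while", "that", "which", "if"}

def complexity_indices(text):
    """Return mean sentence length and subordinators per sentence."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    sub_count = sum(1 for w in words if w.lower() in SUBORDINATORS)
    return {
        "mean_sentence_length": len(words) / len(sentences),
        "subordinators_per_sentence": sub_count / len(sentences),
    }

sample = "I stayed home because it rained. The rain stopped when night fell."
print(complexity_indices(sample))
```

Group comparisons like those in the study would then run significance tests over these per-essay index values.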

    AN INVESTIGATION OF STUDENTS' EXPERIENCES WITH A WEB-BASED, DATA-DRIVEN WRITING ASSISTANCE ENVIRONMENT FOR IMPROVING KOREAN EFL WRITERS' ACCURACY WITH ENGLISH GRAMMAR AND VOCABULARY

    Computer-assisted language learning (CALL) has played an increasingly important role in writing instruction and research. While research has been conducted on English as a second language (ESL) learners and the benefits of using web-based writing assistance programs in writing instruction, insufficient research has been done on English as a foreign language (EFL) students. This study is an empirical investigation of students' experiences with a web-based, data-driven writing assistance environment (e4writing) designed by the researcher to help Korean EFL writers with their grammar and vocabulary. This study investigated Korean university students' perceived difficulties with English grammar and vocabulary as they wrote in English. It also explored their perceptions of e4writing as used in a writing course to enhance English grammar and vocabulary. The study investigated 12 participants' perceptions and "academic profiles" (learning styles, confidence, motivation, and other factors) while they were enrolled in a 16-week course called Teaching Methods for English Composition. To gain a more specific and personal view, the study also included detailed case studies of four of the participants. The major sources of data for the analyses include interviews, reflective journals, questionnaires, samples of the students' writing before and after their use of e4writing, and the researcher's reflective notes. The study revealed that most of the students had difficulty with grammar and vocabulary in English writing. They perceived e4writing positively, as it provided individualized help with their problems in grammar and lexis. Overall, the students showed improvement in accuracy from the pretest to the posttest, and observations suggested that e4writing was probably related to this improvement; however, strong claims about e4writing as a cause of improvement cannot be made without a control group.
The students felt e4writing was more beneficial for improving grammatical accuracy than for vocabulary accuracy. The students recommended that some features of e4writing be written in Korean to help students understand the grammar and vocabulary explanations.

    Corrective Feedback in the EFL Classroom: Grammar Checker vs. Teacher’s Feedback.

    The aim of this doctoral thesis is to compare the feedback provided by the teacher with that obtained from the software called Grammar Checker on grammatical errors in the written production of students of English as a foreign language. Traditionally, feedback has been considered one of the three theoretical conditions for language learning (along with input and output), and, for this reason, extensive research has been carried out on who should provide it, when, and at what level of explicitness. However, there are far fewer studies that analyse the use of e-feedback programs as a complement or alternative to the feedback offered by the teacher. Participants in our study were divided into two experimental groups and one control group, and three grammatical aspects that are usually susceptible to error among students of English at B2 level were examined: prepositions, articles, and the simple past-present/past perfect dichotomy. All participants had to write four essays. The first experimental group received feedback from the teacher, and the second received it through the Grammar Checker program. The control group did not get feedback on the grammatical aspects under analysis but on other linguistic forms not studied. The results obtained point, first of all, to the fact that the software failed to mark grammatical errors in some cases. This means that students were unable to improve their written output in terms of linguistic accuracy after receiving feedback from the program. In contrast, students who received feedback from the teacher did improve, although the difference was not significant. Second, the two experimental groups outperformed the control group in the use of the grammatical forms under analysis. Thirdly, regardless of the feedback offered, both groups showed long-term improvement in the use of the grammatical aspects, and finally, no differences in attitude towards the feedback received or its impact on the results were found in either experimental group.
Our results open up new lines of research into corrective feedback in the English as a foreign language classroom. On the one hand, more studies are needed that drive the improvement of electronic feedback programs, making them more accurate and effective in detecting errors. On the other hand, software such as Grammar Checker can complement the daily practice of the foreign language teacher, helping in the first instance to correct common and recurring mistakes, all the more so since our research has shown that attitudes towards this type of electronic feedback are positive and that it does not imply an intrusion into the classroom, thus helping in the acquisition of the English language. Programa de Doctorat en Llengües Aplicades, Literatura i Traducci

    Supporting Collocation Learning

    Collocations are of great importance for second language learners. Knowledge of them plays a key role in producing language accurately and fluently. But such knowledge is difficult to acquire, simply because there is so much of it. Collocation resources for learners are limited. Printed dictionaries are restricted in size, and only provide rudimentary search and retrieval options. Free online resources are rare, and learners find the language data they offer hard to interpret. Online collocation exercises are inadequate and scattered, making it difficult to acquire collocations in a systematic way. This thesis makes two claims: (1) corpus data can be presented in different ways to facilitate effective collocation learning, and (2) a computer system can be constructed to help learners systematically strengthen and enhance their collocation knowledge. To investigate the first claim, an enormous Web-derived corpus was processed, filtered, and organized into three searchable digital library collections that support different aspects of collocation learning. Each of these constitutes a vast concordance whose entries are presented in ways that help students use collocations more effectively in their writing. To provide extended context, concordance data is linked to illustrative sample sentences, both on the live Web and in the British National Corpus. Two evaluations were conducted, both of which suggest that these collections can and do help improve student writing. For the second claim, a system was built that automatically identifies collocations in texts that teachers or students provide, using natural language processing techniques. Students study, collect and store collocations of interest while reading. Teachers construct collocation exercises to consolidate what students have learned and amplify their knowledge. The system was evaluated with teachers and students in classroom settings, and positive outcomes were demonstrated. 
We believe that the deployment of computer-based collocation learning systems is an exciting development that will transform language learning.
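Automatic collocation identification of the kind described above is commonly scored with pointwise mutual information (PMI) over adjacent word pairs. Here is a minimal sketch; the thesis uses a large Web-derived corpus and richer NLP processing, so the tiny corpus below is an illustrative assumption.

```python
import math
from collections import Counter

def score_bigrams(tokens):
    """PMI score for each adjacent word pair in a token sequence."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scores = {}
    for (w1, w2), c in bigrams.items():
        p_pair = c / (n - 1)                       # observed pair probability
        p1, p2 = unigrams[w1] / n, unigrams[w2] / n
        scores[(w1, w2)] = math.log2(p_pair / (p1 * p2))
    return scores

tokens = ("strong tea is good and strong tea is cheap "
          "while heavy rain is bad").split()
scores = score_bigrams(tokens)
# "strong tea" recurs, so it outscores a chance pair like "is good".
print(scores[("strong", "tea")] > scores[("is", "good")])
```

Pairs scoring above a threshold are candidate collocations; a real system would add frequency cut-offs and part-of-speech filtering before presenting them to learners.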