Automated Detection of Usage Errors in non-native English Writing
We investigate the use of a novelty detection algorithm for identifying inappropriate word combinations in raw English text. We employ an unsupervised detection algorithm based on one-class support vector machines (OC-SVMs) and extract sentences containing word sequences whose frequency of appearance is significantly low in native English writing. Combined with n-gram language models and document categorization techniques, the OC-SVM classifier assigns given sentences to one of two groups: sentences containing errors and those without errors. Accuracies are 79.30% with the bigram model, 86.63% with the trigram model, and 34.34% with the four-gram model.
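The core idea, flagging sentences that contain word sequences whose frequency in native writing is unusually low, can be sketched in plain Python with raw trigram counts standing in for the OC-SVM decision function; the toy corpus and threshold below are invented for illustration:

```python
from collections import Counter

def train_trigram_counts(native_sentences):
    """Count word trigrams in a reference (native-writing) corpus."""
    counts = Counter()
    for sent in native_sentences:
        toks = sent.lower().split()
        for i in range(len(toks) - 2):
            counts[tuple(toks[i:i + 3])] += 1
    return counts

def flag_sentence(sentence, counts, min_freq=1):
    """Flag a sentence if any of its trigrams is rarer than min_freq in the
    native corpus -- a crude frequency stand-in for the OC-SVM decision."""
    toks = sentence.lower().split()
    rare = [tuple(toks[i:i + 3]) for i in range(len(toks) - 2)
            if counts[tuple(toks[i:i + 3])] < min_freq]
    return len(rare) > 0, rare

native = [
    "i am interested in this topic",
    "she is interested in the result",
]
counts = train_trigram_counts(native)
flagged, rare = flag_sentence("i am interesting in this topic", counts)
print(flagged, rare)
```

A real OC-SVM would learn a boundary around the native n-gram distribution rather than apply a hard count threshold, but the flagged output is analogous: sentences whose word sequences fall outside native usage.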
Parsing Argumentation Structures in Persuasive Essays
In this article, we present a novel approach for parsing argumentation
structures. We identify argument components using sequence labeling at the
token level and apply a new joint model for detecting argumentation structures.
The proposed model globally optimizes argument component types and
argumentative relations using integer linear programming. We show that our
model considerably improves the performance of base classifiers and
significantly outperforms challenging heuristic baselines. Moreover, we
introduce a novel corpus of persuasive essays annotated with argumentation
structures. We show that our annotation scheme and annotation guidelines
successfully guide human annotators to substantial agreement. This corpus and
the annotation guidelines are freely available for ensuring reproducibility and
to encourage future research in computational argumentation.
Comment: Under review in Computational Linguistics. First submission: 26 October 2015. Revised submission: 15 July 201
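The joint decoding that the ILP performs can be illustrated with a toy brute-force search: choose a type for every argument component and an optional outgoing relation, enforce simple structural constraints, and maximize the summed base-classifier scores. The labels, scores, and constraints below are invented for illustration and are not the paper's exact formulation:

```python
from itertools import product

TYPES = ["MajorClaim", "Claim", "Premise"]

# Hypothetical base-classifier scores for three argument components.
type_scores = [
    {"MajorClaim": 0.7, "Claim": 0.2, "Premise": 0.1},  # component 0
    {"MajorClaim": 0.1, "Claim": 0.6, "Premise": 0.3},  # component 1
    {"MajorClaim": 0.1, "Claim": 0.3, "Premise": 0.6},  # component 2
]
# Hypothetical score for a directed "supports" relation from i to j.
rel_scores = {(1, 0): 0.5, (2, 1): 0.6, (2, 0): 0.2,
              (0, 1): 0.1, (1, 2): 0.1, (0, 2): 0.1}

def decode(type_scores, rel_scores):
    """Exhaustively search joint (type, relation) assignments under two toy
    constraints: major claims have no outgoing relation, and premises must
    support some component. An ILP solves the same objective efficiently."""
    n = len(type_scores)
    best, best_score = None, float("-inf")
    for types in product(TYPES, repeat=n):
        # Each component picks at most one outgoing target (None = no relation).
        for targets in product([None] + list(range(n)), repeat=n):
            ok, score = True, 0.0
            for i, (t, j) in enumerate(zip(types, targets)):
                score += type_scores[i][t]
                if t == "MajorClaim" and j is not None:
                    ok = False  # constraint: no outgoing edge from a major claim
                if t == "Premise" and j is None:
                    ok = False  # constraint: a premise must support something
                if j is not None:
                    if j == i:
                        ok = False  # no self-relations
                    else:
                        score += rel_scores.get((i, j), 0.0)
            if ok and score > best_score:
                best, best_score = (types, targets), score
    return best, best_score

print(decode(type_scores, rel_scores))
```

The exhaustive search is exponential in the number of components; the point of the integer linear program is to optimize exactly this kind of joint objective with constraints at scale.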
Modelling text meta-properties in automated text scoring for non-native English writing
Automated text scoring (ATS) is the task of automatically scoring a text based on some given grading criteria. This thesis focuses on ATS in the context of free-text writing exams aimed at learners of English as a foreign language (EFL). The benefit of an ATS system is primarily to provide instant and consistent feedback to language learners, and service reliability also forms a crucial part of an ATS system. Based on previous work, we investigated only partially explored meta-properties in text and integrated them into a machine learning based ATS model across multiple datasets:
In most previous work, the proposed models implicitly assume that texts produced by learners in an exam are written independently. However, this is not true for exams in which learners are required to compose multiple texts. We hence explicitly informed our model which texts were written by the same learner, which boosts model performance in most cases.
We took three intra-exam properties, namely prompt, genre, and task, as a starting point, and showed that explicitly modelling these properties via frustratingly easy domain adaptation (FEDA) can positively affect model performance in some cases. Furthermore, modelling multiple intra-exam properties together is better than modelling any single property individually, or no property at all, in four out of five test sets.
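FEDA (Daumé III's frustratingly easy domain adaptation) amounts to duplicating every feature into a shared copy plus a domain-specific copy, letting the learner decide per feature whether it transfers across domains. A minimal sketch over dictionary features, with the domain string (here a hypothetical prompt identifier) chosen purely for illustration:

```python
def feda_augment(features, domain):
    """Frustratingly easy domain adaptation: keep a shared copy of every
    feature plus a copy namespaced by the domain (e.g. the exam prompt)."""
    out = {}
    for name, value in features.items():
        out["shared:" + name] = value
        out[domain + ":" + name] = value
    return out

x = {"avg_word_len": 4.2, "n_sentences": 12}
print(feda_augment(x, "prompt=environment"))
```

A downstream linear model trained on the augmented features can then put weight on the shared copy when a feature behaves the same across prompts, and on the domain copy when it does not.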
We studied how to utilise and combine learners' responses from multiple writing exams, and proposed a new variant of the transfer-learning ATS model which mitigates the drawbacks of previous work. This variant first builds a ranking model across multiple datasets via FEDA, and the ranking score of each text predicted by the ranking model is used as an extra feature in the baseline model. This variant gives an improvement over the baseline model on the development sets in terms of root-mean-square error. Furthermore, the transfer-learning model utilising multiple datasets, tuned on each development set, is always better than the baseline model on the corresponding test set.
We found that different datasets favour different meta-properties. We therefore combined all the models, each looking at different meta-properties, using ensemble learning. Compared to the baseline model, the combined model gives a statistically significant improvement on all the test sets in terms of root-mean-square error, based on a permutation test.
The Institute for Automated Language Teaching and Assessment
A survey on author profiling, deception, and irony detection for the Arabic language
The possibility of inferring people's traits from what they write is a field of growing interest named author profiling. Inferring a user's gender, age, native language, or language variety, or even detecting when the user lies, simply by analyzing her texts opens a wide range of possibilities from the point of view of security. In this paper, we review the state of the art on some of the main author profiling problems, as well as deception and irony detection, with a particular focus on the Arabic language.
Qatar National Research Fund, Grant/Award Number: NPRP 9-175-1-033
Rosso, P.; Rangel-Pardo, F. M.; Hernández-Farías, D. I.; Cagnina, L.; Zaghouani, W.; Charfi, A. (2018). A survey on author profiling, deception, and irony detection for the Arabic language. Language and Linguistics Compass, 12(4), 1-20. https://doi.org/10.1111/lnc3.12275
PersoNER: Persian named-entity recognition
Named-entity recognition (NER) is still a challenging task for languages with low digital resources. The main difficulties arise from the scarcity of annotated corpora and the consequent problematic training of an effective NER pipeline. To bridge this gap, in this paper we target the Persian language, which is spoken by a population of over a hundred million people worldwide. We first present and provide ArmanPersoNERCorpus, the first manually annotated Persian NER corpus. Then, we introduce PersoNER, an NER pipeline for Persian that leverages a word embedding and a sequential max-margin classifier. The experimental results show that the proposed approach is capable of achieving interesting MUC7 and CoNLL scores while outperforming two alternatives based on a CRF and a recurrent neural network.
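As a rough illustration of the max-margin idea over word embeddings (a toy stand-in, not the PersoNER pipeline itself), a multi-class margin perceptron fits in a few lines of plain Python; the two-dimensional "embeddings" and the tag set are invented:

```python
def train_margin_perceptron(data, labels, dim, epochs=20, margin=1.0, lr=0.1):
    """Toy multi-class max-margin trainer over dense token vectors: update
    whenever the gold label does not beat the runner-up by the margin."""
    w = {y: [0.0] * dim for y in labels}
    for _ in range(epochs):
        for x, y in data:
            scores = {c: sum(wi * xi for wi, xi in zip(w[c], x)) for c in labels}
            runner_up = max((c for c in labels if c != y), key=lambda c: scores[c])
            if scores[y] - scores[runner_up] < margin:
                w[y] = [wi + lr * xi for wi, xi in zip(w[y], x)]
                w[runner_up] = [wi - lr * xi for wi, xi in zip(w[runner_up], x)]
    return w

def predict(w, x):
    """Return the label whose weight vector scores the token vector highest."""
    return max(w, key=lambda c: sum(wi * xi for wi, xi in zip(w[c], x)))

# Two-dimensional stand-ins for embeddings of person-name vs. non-entity tokens.
data = [([1.0, 0.2], "PER"), ([0.9, 0.1], "PER"),
        ([0.1, 1.0], "O"), ([0.2, 0.9], "O")]
w = train_margin_perceptron(data, ["PER", "O"], dim=2)
print(predict(w, [0.95, 0.15]), predict(w, [0.1, 0.95]))
```

A sequential tagger additionally scores tag transitions and decodes whole sentences; the per-token margin update above is only the core building block.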
MoBiL: A hybrid feature set for Automatic Human Translation quality assessment
In this paper we introduce MoBiL, a hybrid Monolingual, Bilingual and Language modelling feature set, together with a feature selection and evaluation framework. The set includes translation quality indicators that can be used to automatically predict the quality of human translations in terms of content adequacy and language fluency. We compare MoBiL with the QuEst baseline set by using both in classifiers trained with support vector machine and relevance vector machine learning algorithms on the same data set. We also report an experiment on feature selection to select fewer but more informative features from MoBiL. Our experiments show that classifiers trained on our feature set perform consistently better in predicting both adequacy and fluency than classifiers trained on the baseline feature set. MoBiL also performs well when used with both support vector machine and relevance vector machine algorithms.
Designing, implementing, and evaluating an automated writing evaluation tool for improving EFL graduate students’ abstract writing: a case in Taiwan
Writing English research article (RA) abstracts is a difficult but mandatory task for Taiwanese engineering graduate students (Feng, 2013). Understanding the current situation and needs of Taiwanese engineering graduate students, this dissertation aimed to develop and evaluate an automated writing evaluation (AWE) tool to assist their research article (RA) abstract writing in English by following a Design-Based Research (DBR) approach as the methodological framework. DBR was chosen because it strives to solve real-world problems through multiple iterations of development and building on results from each iteration to advance the project.
Six design iterations were undertaken to develop and to evaluate the AWE tool in this dissertation, including (1) corpus compilation of engineering RAs, (2) genre analysis of engineering abstracts, (3) machine learning of move classification in abstracts, (4) analysis of lexical bundles used to express moves, (5) analysis of the choice of verb categories associated with moves, and finally, (6) AWE tool development based on previous findings, classroom implementation, and evaluation of the AWE tool following Chapelle’s (2001) computer-assisted language learning (CALL) framework.
To begin with, I collected a corpus of 480 engineering RAs (Corpus-480) to extract appropriate linguistic properties as pedagogical materials to be implemented in the AWE tool. A sub-corpus (Corpus-72) was compiled from 72 RAs randomly chosen from Corpus-480 for manual and automated analyses. Next, to seek the best descriptive framework for the structure of engineering RA abstracts, two move schemata were compared: (1) IMRD (Introduction, Methodology, Results, and Discussion) and (2) CARS (Create-A-Research-Space; Swales, 1990). Abstracts in Corpus-72 were annotated, and the two schemata were evaluated according to three quantitative metrics devised specifically for this comparison.
Applying a statistical natural language processing (StatNLP) approach, a Support Vector Machine (SVM) was trained for automated move classification in abstracts, using formulaic language in engineering RA sections as linguistic features. In addition, four-word lexical bundles and verb categories were identified from Corpus-480 and Corpus-72, respectively: four-word lexical bundles associated with moves in abstracts were extracted automatically, and verb categories (i.e., tense, aspect, and voice) in the moves of abstracts were identified using CyWrite::Analyzer, a hybrid (statistical and rule-based) NLP tool.
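The lexical-bundle step, extracting recurrent four-word sequences from a corpus, can be sketched with a simple n-gram counter; the toy corpus and frequency threshold below are illustrative only:

```python
from collections import Counter

def four_word_bundles(sentences, min_count=2):
    """Extract four-word lexical bundles: contiguous 4-grams that recur
    across a corpus at least min_count times."""
    counts = Counter()
    for sent in sentences:
        toks = sent.lower().split()
        for i in range(len(toks) - 3):
            counts[" ".join(toks[i:i + 4])] += 1
    return [(b, c) for b, c in counts.most_common() if c >= min_count]

corpus = [
    "the results show that the method is effective",
    "the results show that accuracy improves",
    "we conclude that the method is effective",
]
print(four_word_bundles(corpus))
```

Corpus studies typically normalise these counts per million words and require a bundle to appear across several different texts; both refinements are straightforward additions to the counter above.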
Finally, the AWE tool was developed, based on the findings from the previous iterations, and implemented in an English-as-a-foreign-language (EFL) classroom setting. Through analyzing students’ drafts before and after using the tool, and responses to a questionnaire and a semi-structured interview, the AWE tool was evaluated based on Chapelle’s (2001) CALL evaluation framework. The findings showed that students attempted to improve their abstracts by adding, deleting, or changing the sequences of their sentences, lexical bundles, and verb categories in their abstracts. Their attitudes toward the effectiveness and appropriateness of the tool were quite positive. Overall, the AWE tool drew students’ attention to the use of lexical bundles and verb categories to achieve the communicative purposes of each move in their abstracts.
In conclusion, this dissertation started from Taiwanese engineering students' need to improve their English abstract writing and attempted to develop and evaluate an AWE tool to assist them. Following DBR, the findings from this dissertation are discussed with a view to improving the next generation of AWE tools. With these iterations in place, future studies can focus on developing pedagogical materials from genre-based analysis in different disciplines to fulfill learners' needs.