10 research outputs found

    An exploratory research on grammar checking of Bangla sentences using statistical language models

    N-gram language models are popular and extensively used statistical methods for solving various natural language processing problems, including grammar checking. Smoothing is one of the most effective techniques used in building a language model to deal with the data sparsity problem, and Kneser-Ney is one of the most prominent and successful smoothing techniques for language modelling. In our previous work, we presented a Witten-Bell smoothing based language modelling technique for checking the grammatical correctness of Bangla sentences, which showed promising results and outperformed previous methods. In this work, we propose an improved method that uses a Kneser-Ney smoothed n-gram language model for grammar checking, and we perform a comparative performance analysis of the Kneser-Ney and Witten-Bell smoothing techniques for this task. We also provide an improved technique for calculating the optimum threshold, which further enhances the results. Our experimental results show that Kneser-Ney outperforms Witten-Bell as a smoothing technique when used with n-gram LMs for checking the grammatical correctness of Bangla sentences.
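
    The following is a minimal sketch, not the authors' implementation, of threshold-based grammar checking with smoothed n-gram language models, using NLTK's KneserNeyInterpolated and WittenBellInterpolated classes; the toy corpus, the trigram order and the threshold value are illustrative assumptions.

        # Sketch: score a sentence with Kneser-Ney and Witten-Bell trigram LMs and
        # flag it as ungrammatical when its perplexity exceeds a tuned threshold.
        from nltk.lm import KneserNeyInterpolated, WittenBellInterpolated
        from nltk.lm.preprocessing import padded_everygram_pipeline
        from nltk.util import ngrams, pad_sequence

        ORDER = 3
        train_sents = [                  # toy stand-in for a corpus of grammatical sentences
            ["ami", "bhat", "khai"],
            ["se", "boi", "pore"],
        ]

        def build_lm(model_cls, sents, order=ORDER):
            train, vocab = padded_everygram_pipeline(order, sents)
            lm = model_cls(order)
            lm.fit(train, vocab)
            return lm

        def sentence_perplexity(lm, tokens, order=ORDER):
            padded = list(pad_sequence(tokens, order, pad_left=True, pad_right=True,
                                       left_pad_symbol="<s>", right_pad_symbol="</s>"))
            return lm.perplexity(list(ngrams(padded, order)))

        THRESHOLD = 1e4                  # hypothetical optimum threshold, tuned on held-out data
        test = ["ami", "bhat", "khai"]
        for name, cls in [("Kneser-Ney", KneserNeyInterpolated), ("Witten-Bell", WittenBellInterpolated)]:
            ppl = sentence_perplexity(build_lm(cls, train_sents), test)
            print(name, "perplexity:", ppl, "->", "OK" if ppl < THRESHOLD else "flagged")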

    Mii *eai leat gal vuollánan – Vi *ha neimen ikke gitt opp

    Machine learning is the dominating paradigm in natural language processing nowadays. It requires vast amounts of manually annotated or synthetically generated text data. In the GiellaLT infrastructure, on the other hand, we have worked with rule-based methods, where the linguists have full control over the development of the tools. In this article we expose the myth that machine learning is cheaper than a rule-based approach by showing how much work lies behind data generation, whether via corpus annotation or via creating tools that automatically mark up the corpus. Earlier we have shown that the correction of grammatical errors, in particular compound errors, benefits from hybrid methods. Agreement errors, on the other hand, depend to a higher degree on the larger grammatical context. Our experiments show that machine learning methods for this error type, even when supplemented by rule-based methods generating massive amounts of data, cannot compete with the state-of-the-art rule-based approach.
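
    As a minimal, hypothetical illustration of the rule-based synthetic data generation the abstract alludes to (not GiellaLT code), the sketch below corrupts agreement in correct sentences to produce error/correction pairs for training a machine-learning corrector; the toy lexicon and substitution rule are invented for the example.

        # Sketch: derive synthetic agreement-error training pairs from a correct corpus.
        AGREEMENT_SWAPS = {"has": "have", "is": "are", "was": "were"}   # toy stand-in for a morphological transducer

        def corrupt_agreement(tokens):
            """Return (corrupted, reference) if a swap applies, else None."""
            for i, tok in enumerate(tokens):
                if tok in AGREEMENT_SWAPS:
                    corrupted = tokens.copy()
                    corrupted[i] = AGREEMENT_SWAPS[tok]
                    return corrupted, tokens
            return None

        correct_corpus = [
            ["the", "dog", "has", "a", "bone"],
            ["she", "is", "here"],
        ]
        pairs = [p for p in map(corrupt_agreement, correct_corpus) if p]
        for err, ref in pairs:
            print(" ".join(err), "->", " ".join(ref))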

    Mii *eai leat gal vuollánan -- Vi *ha neimen ikke gitt opp: En hybrid grammatikkontroll for å rette kongruensfeil

    Machine learning is the dominating paradigm in natural language processing nowadays. It requires vast amounts of manually annotated or synthetically generated text data. In the GiellaLT infrastructure, on the other hand, we have worked with rule-based methods, where the linguists have full control over the development of the tools. In this article we expose the myth that machine learning is cheaper than a rule-based approach by showing how much work lies behind data generation, whether via corpus annotation or via creating tools that automatically mark up the corpus. Earlier we have shown that the correction of grammatical errors, in particular compound errors, benefits from hybrid methods. Agreement errors, on the other hand, depend to a higher degree on the larger grammatical context. Our experiments show that machine learning methods for this error type, even when supplemented by rule-based methods generating massive amounts of data, cannot compete with the state-of-the-art rule-based approach. Machine learning techniques that make no use of linguistic expertise dominate language technology nowadays. This requires a large amount of data to be annotated manually in advance. In the GiellaLT infrastructure, by contrast, we have worked with rule-based methods, where the linguist controls how the tools work. The choice of method is not only due to technical reasons: growing knowledge of Sámi grammar, quality assurance and controllability (the tools do what they are supposed to do, also by human standards) lie behind the preference for working rule-based. In this article we attempt to expose the myth that machine learning is cheaper than rule-based methods. Nevertheless, we believe machine learning methods can be useful where we want broader coverage of error correction. We show that machine learning models with access to only small amounts of data (here, for small languages) depend on good rule-based tools as a substitute for manual annotation.

    Kesksete lausekomponentide järjestus õppijakeeles: arvutianalüüsi katse

    Erroreak automatikoki detektatzeko tekniken azterlana eta euskararentzako aplikazioak

    In this article, we study the techniques used for detecting errors in Natural Language Processing (NLP). We classify the techniques according to their approach (symbolic or empirical) and then describe them in depth. Following that, we describe the systems we have developed for detecting syntactic errors in Basque, using the detection technique as the criterion for classifying those systems and illustrating each of them with examples.
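
    As a toy illustration of the two approaches the article classifies (not code from the article), the sketch below contrasts a symbolic check, encoded as an explicit hand-written rule, with an empirical check that flags word bigrams unattested in a reference corpus; the reference text, the rule and the count threshold are invented for the example.

        # Sketch: symbolic (rule-based) vs. empirical (corpus-frequency) error detection.
        from collections import Counter
        from itertools import pairwise   # Python 3.10+

        reference = "the cat sat on the mat the dog sat on the rug".split()
        bigram_counts = Counter(pairwise(reference))

        def symbolic_check(tokens):
            """Toy rule: the article 'a' must not precede a word ending in 's'."""
            return [(i, "a + plural") for i, (w1, w2) in enumerate(pairwise(tokens))
                    if w1 == "a" and w2.endswith("s")]

        def empirical_check(tokens, min_count=1):
            """Flag bigrams seen fewer than min_count times in the reference corpus."""
            return [(i, f"rare bigram: {w1} {w2}") for i, (w1, w2) in enumerate(pairwise(tokens))
                    if bigram_counts[(w1, w2)] < min_count]

        sentence = "a dogs sat on rug the".split()
        print(symbolic_check(sentence))
        print(empirical_check(sentence))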

    NODALIDA 2005 – Proceedings of the 15th NODALIDA Conference

    DEVELOPING A GRAMMAR CHECKER FOR SWEDISH

    A grammar checker for Swedish, launched on the market as Grammatifix, was developed at Lingsoft in 1997-1999. This paper first gives a brief background of grammar checking projects for the Nordic languages, with an emphasis on Swedish. Then the concept and definition of a grammar checker in general are discussed, followed by an overview of the starting points and limitations that Lingsoft had in setting up the Grammatifix development project. After this, the initial product development process is described, leading to an overview of the error types presently covered by Grammatifix. The error treatment scheme in Grammatifix is presented, with a focus on its relationship with the error detection rules. Finally, the error types included in Grammatifix are compared to those of two other well-known projects, namely SCARRIE and Granska.