
    Computational Approaches to Measuring the Similarity of Short Contexts: A Review of Applications and Methods

    Measuring the similarity of short written contexts is a fundamental problem in Natural Language Processing. This article provides a unifying framework by which short context problems can be categorized both by their intended application and proposed solution. The goal is to show that various problems and methodologies that appear quite different on the surface are in fact very closely related. The axes by which these categorizations are made include the format of the contexts (headed versus headless), the way in which the contexts are to be measured (first-order versus second-order similarity), and the information used to represent the features in the contexts (micro versus macro views). The unifying thread that binds together many short context applications and methods is the fact that similarity decisions must be made between contexts that share few (if any) words in common.
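    The first-order/second-order distinction above can be sketched in a few lines. This is an illustrative example only (the toy co-occurrence table and function names are not from the article): first-order similarity compares the words of two contexts directly, while second-order similarity compares the co-occurrence profiles of their words, so contexts sharing no words can still match.

```python
# Sketch: first-order vs. second-order similarity of two short contexts.
# All names and the toy co-occurrence table are illustrative assumptions.
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two sparse vectors (dicts of feature -> weight)."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def first_order(ctx1, ctx2):
    """First-order: contexts are similar only if they share words directly."""
    return cosine(Counter(ctx1.split()), Counter(ctx2.split()))

def second_order(ctx1, ctx2, cooc):
    """Second-order: represent each context by summing the co-occurrence
    vectors of its words, so contexts with no shared words can still match."""
    def profile(ctx):
        total = Counter()
        for w in ctx.split():
            total.update(cooc.get(w, {}))
        return total
    return cosine(profile(ctx1), profile(ctx2))

# Toy co-occurrence table (would normally be derived from a large corpus).
cooc = {
    "physician": {"hospital": 2, "patient": 3},
    "doctor":    {"hospital": 2, "patient": 2},
}
```

Here `first_order("physician", "doctor")` is 0 (no shared words), while `second_order("physician", "doctor", cooc)` is high, because both words co-occur with the same neighbours.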

    Closing the loop: assisting archival appraisal and information retrieval in one sweep

    In this article, we examine the similarities between the concept of appraisal, a process that takes place within archives, and the concept of relevance judgement, a process fundamental to the evaluation of information retrieval systems. More specifically, we revisit selection criteria proposed as a result of archival research and of work within the digital curation communities, and compare them to relevance criteria as discussed in information retrieval's literature-based discovery. We illustrate how closely these criteria relate to each other and discuss how understanding the relationships between these disciplines could form a basis for proposing automated selection for archival processes and for initiating multi-objective learning in information retrieval.

    Who wrote this scientific text?

    The IEEE bibliographic database contains a number of proven duplications with an indication of the original paper(s) copied. This corpus is used to test a method for detecting hidden intertextuality (commonly called "plagiarism"). The intertextual distance, combined with a sliding window and various classification techniques, identifies these duplications with a very low risk of error. These experiments also show that several factors blur the identity of the scientific author, including variable group authorship and the high levels of intertextuality accepted, and sometimes desired, in scientific papers on the same topic.
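    The combination of intertextual distance and a sliding window can be sketched as follows. This is not the paper's implementation: it uses one common formulation of intertextual distance (Labbé-style, scaling the longer text's frequencies to the shorter text's length) and scans the suspect text window by window so that locally copied passages stand out as low-distance regions.

```python
# Illustrative sketch: Labbe-style intertextual distance over a sliding window.
# Function names, window sizes, and the distance variant are assumptions.
from collections import Counter

def intertextual_distance(tokens_a, tokens_b):
    """Distance in [0, 1]; 0 means identical vocabulary usage.
    The longer text's frequencies are scaled down to the shorter text's length."""
    if len(tokens_a) > len(tokens_b):
        tokens_a, tokens_b = tokens_b, tokens_a
    fa, fb = Counter(tokens_a), Counter(tokens_b)
    na, nb = len(tokens_a), len(tokens_b)
    scale = na / nb
    vocab = set(fa) | set(fb)
    diff = sum(abs(fa[w] - fb[w] * scale) for w in vocab)
    return diff / (2 * na)

def sliding_scan(suspect, source, window=50, step=25):
    """Compare each window of the suspect text against the source;
    low-distance windows are candidate duplications."""
    hits = []
    for start in range(0, max(1, len(suspect) - window + 1), step):
        win = suspect[start:start + window]
        hits.append((start, intertextual_distance(win, source)))
    return hits
```

A window whose distance drops near 0 marks a passage whose vocabulary usage closely tracks the source, which is the signal the classification step would then act on.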

    Detecting translingual plagiarism and the backlash against translation plagiarists

    Plagiarism detection methods have improved significantly over the last decades, and as a result of the advanced research conducted by computational and mostly forensic linguists, simple and sophisticated textual borrowing strategies can now be identified more easily. In particular, simple text comparison algorithms developed by computational linguists allow literal, word-for-word plagiarism (i.e. where identical strings of text are reused across different documents) to be easily detected (semi-)automatically (e.g. Turnitin or SafeAssign), although these methods tend to perform less well when the borrowing is obfuscated by introducing edits to the original text. In this case, more sophisticated linguistic techniques, such as an analysis of lexical overlap (Johnson, 1997), are required to detect the borrowing. However, these have limited applicability in cases of translingual plagiarism, where a text is translated and borrowed without acknowledgment from an original in another language. Considering that (a) traditionally non-professional translation (e.g. literal or free machine translation) is the method used to plagiarise; (b) the plagiarist usually edits the text for grammar and syntax, especially when machine-translated; and (c) lexical items are those that tend to be translated more correctly, and carried over to the derivative text, this paper proposes a method for translingual plagiarism detection that is grounded on translation and interlanguage theories (Selinker, 1972; Bassnett and Lefevere, 1998), as well as on the principle of linguistic uniqueness (Coulthard, 2004). Empirical evidence from the CorRUPT corpus (Corpus of Reused and Plagiarised Texts), a corpus of real academic and non-academic texts that were investigated and accused of plagiarising originals in other languages, is used to illustrate the applicability of the methodology proposed for translingual plagiarism detection. Finally, applications of the method as an investigative tool in forensic contexts are discussed.
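    The translate-then-compare idea behind translingual detection can be sketched minimally. This is not the paper's method: the word-for-word `translate` function below is a stand-in for a real machine translation system, the toy lexicon is invented, and lexical (Jaccard) overlap stands in for the more sophisticated linguistic analysis the paper describes. It only illustrates assumption (c) above, that lexical items survive translation and so remain comparable.

```python
# Hypothetical sketch of translate-then-compare translingual detection.
# The lexicon and all function names are illustrative stand-ins.
toy_lexicon = {"casa": "house", "branca": "white", "grande": "big"}

def translate(tokens, lexicon):
    """Word-for-word translation; unknown words pass through unchanged."""
    return [lexicon.get(t, t) for t in tokens]

def lexical_overlap(tokens_a, tokens_b):
    """Jaccard overlap of vocabularies, a crude proxy for shared lexis."""
    a, b = set(tokens_a), set(tokens_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def translingual_score(suspect_tokens, source_tokens, lexicon):
    """Translate the source into the suspect's language, then measure
    how much lexis the suspect shares with the translated source."""
    return lexical_overlap(suspect_tokens, translate(source_tokens, lexicon))
```

A high score flags the suspect text for the closer forensic analysis (interlanguage features, linguistic uniqueness) that the paper argues should follow.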

    The Imitation Game: Detecting Human and AI-Generated Texts in the Era of Large Language Models

    The potential of artificial intelligence (AI)-based large language models (LLMs) holds considerable promise in revolutionizing education, research, and practice. However, distinguishing between human-written and AI-generated text has become a significant task. This paper presents a comparative study, introducing a novel dataset of human-written and LLM-generated texts in different genres: essays, stories, poetry, and Python code. We employ several machine learning models to classify the texts. Results demonstrate the efficacy of these models in discerning between human and AI-generated text, despite the dataset's limited sample size. However, the task becomes more challenging when classifying GPT-generated text, particularly in story writing. The results indicate that the models exhibit superior performance in binary classification tasks, such as distinguishing human-generated text from a specific LLM, compared to the more complex multiclass tasks that involve discerning among human-generated text and multiple LLMs. Our findings provide useful implications for AI text detection, while our dataset paves the way for future research in this evolving area.
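    The binary human-vs-LLM setup can be sketched with a tiny pure-Python classifier. The paper's actual models and features are not specified here: this nearest-centroid, bag-of-words classifier is only an illustrative stand-in for the machine learning models the abstract mentions, and all training texts would come from a labelled dataset like the one the paper introduces.

```python
# Hypothetical sketch: nearest-centroid binary classifier for human vs. LLM text.
# Features (bag-of-words) and the classifier choice are assumptions.
from collections import Counter

def featurize(text):
    """Lowercased bag-of-words counts as a crude stylistic fingerprint."""
    return Counter(text.lower().split())

def centroid(texts):
    """Average word-frequency profile of a class of texts."""
    total = Counter()
    for t in texts:
        total.update(featurize(t))
    n = len(texts)
    return {w: c / n for w, c in total.items()}

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def classify(text, human_texts, llm_texts):
    """Label the text by whichever class centroid it is closer to."""
    f = featurize(text)
    sim_h = cosine(f, centroid(human_texts))
    sim_l = cosine(f, centroid(llm_texts))
    return "human" if sim_h >= sim_l else "llm"
```

Extending this to the multiclass setting (human plus several distinct LLMs) means one centroid per class, which is exactly where the abstract reports performance dropping.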