
    Measuring Code Similarity in Large-scaled Code Corpora

    Source code similarity measurement is a fundamental technique in software engineering research. Techniques to measure code similarity have been invented and applied to various research areas such as code clone detection, finding bug fixes, and software plagiarism detection. We perform an evaluation of 30 similarity analysers for source code. The results show that specialised tools, including clone and plagiarism detectors, outperform general techniques such as string matching when their parameters are properly tuned. Although these specialised tools can handle code similarity in local code bases, they fail to locate similar code artefacts in large-scaled corpora. This is increasingly important given the rising amount of online code artefacts. We propose a scalable search system specifically designed for source code. It lays a foundation for discovering online code reuse, large-scale code clone detection, finding usage examples, detecting software plagiarism, and finding software licensing conflicts. Our proposed code search framework is a hybrid of information retrieval and code clone detection techniques. The framework locates similar code artefacts instantly. The search is based not only on textual similarity but also on syntactic and structural similarity, and it is resilient to the incomplete code fragments normally found on the Internet.
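
    Token-based comparison is the common core of many such tools: normalise the source into a token stream and compare overlapping token n-grams. The following is a minimal sketch of that idea only, assuming a crude regular-expression tokeniser and Jaccard similarity; the function names and the choice of n are illustrative and are not the framework proposed above.

    import re

    def tokenize(code):
        # Crude lexer: identifiers, numbers, and single operator/punctuation characters.
        return re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", code)

    def ngrams(tokens, n=4):
        # Set of sliding windows of n consecutive tokens.
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    def code_similarity(code_a, code_b, n=4):
        # Jaccard similarity over token n-grams; 1.0 means identical token streams.
        a, b = ngrams(tokenize(code_a), n), ngrams(tokenize(code_b), n)
        return len(a & b) / len(a | b) if (a or b) else 0.0

    print(code_similarity("int add(int a, int b) { return a + b; }",
                          "int sum(int x, int y) { return x + y; }"))

    Renaming identifiers lowers the score of such a purely textual measure, which is exactly the gap that the syntactic and structural similarity described above is meant to close.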

    Normalized Web Distance and Word Similarity

    There is a great deal of work in cognitive psychology, linguistics, and computer science, about using word (or phrase) frequencies in context in text corpora to develop measures for word similarity or word association, going back to at least the 1960s. The goal of this chapter is to introduce the normalized web distance (NWD) method to determine similarity between words and phrases. It is a general way to tap the amorphous low-grade knowledge available for free on the Internet, typed in by local users aiming at personal gratification of diverse objectives, and yet globally achieving what is effectively the largest semantic electronic database in the world. Moreover, this database is available for all by using any search engine that can return aggregate page-count estimates for a large range of search-queries. In the paper introducing the NWD it was called the `normalized Google distance (NGD)', but since Google doesn't allow computer searches anymore, we opt for the more neutral and descriptive NWD.
    Comment: LaTeX, 20 pages, 7 figures, to appear in: Handbook of Natural Language Processing, Second Edition, Nitin Indurkhya and Fred J. Damerau (Eds.), CRC Press, Taylor and Francis Group, Boca Raton, FL, 2010, ISBN 978-142008592
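
    The NWD is computed from aggregate page counts alone: with f(x) and f(y) the number of pages containing each term, f(x, y) the number containing both, and N the size of the index, the standard definition is NWD(x, y) = (max{log f(x), log f(y)} - log f(x, y)) / (log N - min{log f(x), log f(y)}). A minimal sketch of that formula in code, with invented counts purely for illustration:

    from math import log

    def nwd(f_x, f_y, f_xy, N):
        # Normalized Web Distance from aggregate page-count estimates:
        # f_x, f_y -- pages containing each term, f_xy -- pages containing both,
        # N -- total number of pages indexed by the search engine.
        lx, ly, lxy = log(f_x), log(f_y), log(f_xy)
        return (max(lx, ly) - lxy) / (log(N) - min(lx, ly))

    # Hypothetical counts: terms that co-occur frequently get a small distance.
    print(nwd(f_x=8.0e8, f_y=6.0e8, f_xy=3.0e8, N=5.0e10))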

    The Document Similarity Network: A Novel Technique for Visualizing Relationships in Text Corpora

    With the abundance of written information available online, it is useful to be able to automatically synthesize and extract meaningful information from text corpora. We present a unique method for visualizing relationships between documents in a text corpus. By using Latent Dirichlet Allocation to extract topics from the corpus, we create a graph whose nodes represent individual documents and whose edge weights indicate the distance between topic distributions in documents. These edge lengths are then scaled using multidimensional scaling techniques, such that more similar documents are clustered together. Applying this method to several datasets, we demonstrate that these graphs are useful in visually representing high-dimensional document clustering in topic-space.
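
    A compact sketch of that pipeline under stated assumptions: scikit-learn's LDA for topic extraction, Jensen-Shannon distance between per-document topic mixtures as the edge weight, and metric MDS on the precomputed distances for the layout. The toy documents, the number of topics, and the specific distance are illustrative choices, not necessarily those of the paper.

    import numpy as np
    from scipy.spatial.distance import jensenshannon
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.manifold import MDS

    docs = [
        "stocks rose as markets rallied on strong earnings",
        "the team won the match in extra time",
        "investors sold shares after the weak earnings report",
        "the striker scored twice and the team won again",
    ]

    # Per-document topic distributions from LDA over bag-of-words counts.
    counts = CountVectorizer().fit_transform(docs)
    theta = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

    # Pairwise distances between topic distributions serve as edge weights.
    n = len(docs)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            dist[i, j] = jensenshannon(theta[i], theta[j])

    # Multidimensional scaling places documents with similar topic mixtures close together in 2-D.
    layout = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    print(layout.fit_transform(dist))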

    Substitution-based Semantic Change Detection using Contextual Embeddings

    Measuring semantic change has thus far remained a task where methods using contextual embeddings have struggled to improve upon simpler techniques relying only on static word vectors. Moreover, many of the previously proposed approaches suffer from downsides related to scalability and ease of interpretation. We present a simplified approach to measuring semantic change using contextual embeddings, relying only on the most probable substitutes for masked terms. Not only is this approach directly interpretable; it is also far more efficient in terms of storage, achieves superior average performance across the most frequently cited datasets for this task, and allows for more nuanced investigation of change than is possible with static word vectors.
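
    A minimal sketch of the substitution idea, assuming a Hugging Face fill-mask pipeline: mask each occurrence of a target word, collect the model's most probable substitutes in each time period, and compare the resulting substitute sets. The model name, the example sentences, and the set-overlap comparison are illustrative stand-ins, not the scoring used in the paper.

    from collections import Counter
    from transformers import pipeline

    # Masked language model used to generate substitutes (model choice is an assumption).
    fill = pipeline("fill-mask", model="bert-base-uncased")

    def substitute_counts(sentences, target, top_k=10):
        # Mask the target word in each sentence and tally the model's top substitutes.
        counts = Counter()
        for s in sentences:
            masked = s.replace(target, fill.tokenizer.mask_token, 1)
            for pred in fill(masked, top_k=top_k):
                counts[pred["token_str"].strip()] += 1
        return counts

    def substitute_overlap(counts_a, counts_b):
        # Jaccard overlap of the substitute vocabularies from two time periods;
        # a low overlap suggests the word is being used in different senses.
        a, b = set(counts_a), set(counts_b)
        return len(a & b) / len(a | b) if (a or b) else 0.0

    old_uses = substitute_counts(["the monk retired to his cell to pray"], "cell")
    new_uses = substitute_counts(["she checked her cell for new messages"], "cell")
    print(1.0 - substitute_overlap(old_uses, new_uses))  # larger value = more apparent change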