240,219 research outputs found

    Semantic model-driven development of web service architectures.

    Building service-based architectures has become a major area of interest since the advent of Web services, and modelling these architectures is a central activity. Model-driven development is a recent approach to developing software systems based on the idea of making models the central artefacts for design representation, analysis, and code generation. We propose an ontology-based engineering methodology for semantic model-driven composition and transformation of Web service architectures. Ontology technology, as a logic-based knowledge representation and reasoning framework, can meet the need for sharable and reusable semantic models and descriptions in service engineering. Based on modelling, composition and code generation techniques for service architectures, our approach provides a methodological framework for ontology-based semantic service architectures.
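
    As a rough illustration of the idea, a service model expressed as ontology triples can drive simple code generation. The sketch below, in Python with rdflib, is a minimal interpretation under assumed names: the svc: namespace, the Service/Operation vocabulary, and the emitted stub format are all hypothetical, not the paper's actual methodology.

        # A toy service ontology and a code-generation pass over it.
        # Namespace and vocabulary are hypothetical.
        from rdflib import Graph, Literal, Namespace, RDF

        SVC = Namespace("http://example.org/service#")

        g = Graph()
        g.bind("svc", SVC)

        # Model a Web service and one of its operations as ontology individuals.
        g.add((SVC.OrderService, RDF.type, SVC.Service))
        g.add((SVC.placeOrder, RDF.type, SVC.Operation))
        g.add((SVC.OrderService, SVC.hasOperation, SVC.placeOrder))
        g.add((SVC.placeOrder, SVC.hasInput, Literal("orderId:int")))
        g.add((SVC.placeOrder, SVC.hasOutput, Literal("confirmation:str")))

        # Code generation: query the model and emit a service interface stub.
        for service in g.subjects(RDF.type, SVC.Service):
            name = service.split("#")[-1]
            print(f"class {name}:")
            for op in g.objects(service, SVC.hasOperation):
                op_name = op.split("#")[-1]
                args = ", ".join(str(i).split(":")[0]
                                 for i in g.objects(op, SVC.hasInput))
                print(f"    def {op_name}(self, {args}): ...")

    Because the same graph is a logical knowledge base, the model can also be checked and queried by reasoners, which is what makes such service descriptions sharable and reusable.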

    On the Effect of Semantically Enriched Context Models on Software Modularization

    Many of the existing approaches for program comprehension rely on the linguistic information found in source code, such as identifier names and comments. Semantic clustering is one such technique for modularization of a system that relies on the informal semantics of the program, encoded in the vocabulary used in the source code. Treating the source code as a plain collection of tokens, however, loses the semantic information embedded within the identifiers. We overcome this problem by introducing context models for source code identifiers to obtain a semantic kernel, which can be used both to derive the topics that run through the system and to cluster its modules. In the first model, we abstract an identifier to its type representation and build on this notion of context to construct a contextual vector representation of the source code. The second notion of context is defined by the flow of data between identifiers: a module is represented as a dependency graph whose nodes correspond to identifiers and whose edges represent the data dependencies between pairs of identifiers. We have applied our approach to 10 medium-sized open source Java projects and show that introducing contexts for identifiers improves the quality of the modularization of the software systems. Both context models give results superior to the plain vector representation of documents; in some cases, the authoritativeness of decompositions is improved by 67%. Furthermore, a more detailed evaluation of our approach on JEdit, an open source editor, demonstrates that topics inferred from the contextual representations are more meaningful than those inferred from the plain representation of the documents. Introducing a context model for source code identifiers paves the way for building tools that support developers in program comprehension tasks such as application and domain concept location, software modularization and topic analysis.
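
    To make the first (type-based) context model concrete, the sketch below treats each module as a bag of identifier:type context tokens, builds a semantic kernel from pairwise similarities, and clusters on it. It is a minimal sketch in Python with scikit-learn; the toy modules and the clustering configuration are assumptions of the example, not the paper's pipeline.

        # Type-based contexts: each identifier abstracted to identifier:type.
        from sklearn.cluster import AgglomerativeClustering
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # Hypothetical modules, already reduced to identifier:type tokens.
        modules = {
            "OrderDao":    "order:Order id:long session:Session query:Query",
            "UserDao":     "user:User id:long session:Session query:Query",
            "InvoiceView": "invoice:Invoice panel:JPanel label:JLabel",
        }

        names = list(modules)
        vectors = TfidfVectorizer(
            lowercase=False, token_pattern=r"\S+"
        ).fit_transform(modules.values())

        # The semantic kernel: pairwise similarity between module vectors.
        kernel = cosine_similarity(vectors)

        # Cluster modules on the kernel-induced distance (1 - similarity).
        labels = AgglomerativeClustering(
            n_clusters=2, metric="precomputed", linkage="average",
        ).fit_predict(1 - kernel)

        for name, label in zip(names, labels):
            print(f"{name}: cluster {label}")

    Because the two Dao modules share type contexts (Session, Query), they land in one cluster even when their identifier names differ, which is the point of abstracting identifiers to their types.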

    Semantic code search and analysis

    Thesis (M.S.)--School of Computing and Engineering, University of Missouri--Kansas City, 2014. Thesis advisor: Yugyung Lee. Title from PDF of title page, viewed on July 28, 2014. Includes vita and bibliographical references (pages 33-35).
    As open source software repositories have grown enormously, high-quality source code has become widely available. Greater access to open source software also raises software quality and reduces the overhead of software development. However, most of the available search engines are limited to lexical or code-based searches and do not take into account the semantics that underlie the source code. Thus, object-oriented (OO) principles, such as inheritance and composition, cannot be efficiently utilized for code search or analysis. This thesis proposes a novel approach for searching source code using semantics and structure, allowing users to analyze software systems in terms of code similarity. For this purpose, a semantic measurement, called CoSim, was designed based on OO programming models including Package, Class, Method and Interface. We extracted source code from open source repositories such as GitHub and converted it into a Resource Description Framework (RDF) model. Using the measurement, we queried the source code with the SPARQL query language and analyzed the systems. We carried out a pilot study as a preliminary evaluation of seven versions of Apache Hadoop in terms of their similarities, and compared the search outputs from our system with those of GitHub Code Search. Our search engine provided more comprehensive and relevant information than GitHub does, and the proposed CoSim measurement precisely reflected significant evolutionary properties of the systems in the similarity comparison of the Hadoop software systems.
    Contents: Abstract -- Illustrations -- Tables -- Introduction -- Background and related work -- Semantic code search and analysis model -- Semantic code search and analysis implementation -- Results and evaluation -- Conclusion and future work -- References.
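
    The core mechanism, source code lifted into RDF and queried structurally, can be sketched briefly. The snippet below uses rdflib in Python; the code: vocabulary (code:Class, code:extends, code:hasMethod) is a hypothetical stand-in for the thesis's actual OO model, not its published schema.

        # Code structure as RDF triples, queried with SPARQL.
        from rdflib import Graph, Namespace, RDF

        CODE = Namespace("http://example.org/code#")

        g = Graph()
        g.bind("code", CODE)

        # Facts extracted from a (hypothetical) Java codebase.
        g.add((CODE.ArrayList, RDF.type, CODE.Class))
        g.add((CODE.AbstractList, RDF.type, CODE.Class))
        g.add((CODE.ArrayList, CODE.extends, CODE.AbstractList))
        g.add((CODE.ArrayList, CODE.hasMethod, CODE.add))
        g.add((CODE.AbstractList, CODE.hasMethod, CODE.indexOf))

        # A structural question lexical search cannot express directly:
        # which methods does ArrayList offer, directly or via inheritance?
        query = """
        PREFIX code: <http://example.org/code#>
        SELECT ?m WHERE {
            { code:ArrayList code:hasMethod ?m }
            UNION
            { code:ArrayList code:extends+ ?s . ?s code:hasMethod ?m }
        }
        """
        for row in g.query(query):
            print(row.m)

    The transitive property path (code:extends+) is what lets the query exploit inheritance, which is exactly the kind of OO principle a lexical search engine cannot use.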

    A Semantic Analysis Method for Scientific and Engineering Code

    This paper develops a procedure to statically analyze aspects of the meaning, or semantics, of scientific and engineering code. The analysis involves adding semantic declarations to a user's code and parsing this semantic knowledge together with the original code using multiple expert parsers. These semantic parsers are designed to recognize formulae from different disciplines, including physical and mathematical formulae and geometrical position in a numerical scheme. In practice, a user would submit code with semantic declarations of primitive variables to the analysis procedure, and its semantic parsers would automatically recognize and document some static semantic concepts and locate some program semantic errors. A prototype implementation of this analysis procedure is demonstrated. Further, the relationship between the fundamental algebraic manipulations of equations and the parsing of expressions is explained. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.
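
    The flavour of the approach, declaring the physical meaning of primitive variables and letting a checker flag inconsistent operations, can be sketched with a toy dimensional checker in Python. The Quantity class and the declarations below are illustrative assumptions, not the paper's expert parsers.

        # Variables carry physical dimensions; mismatched additions are
        # flagged as semantic errors.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Quantity:
            value: float
            dims: tuple  # exponents of (length, time, mass)

            def __add__(self, other):
                # Addition is only meaningful between like dimensions.
                if self.dims != other.dims:
                    raise TypeError(f"semantic error: {self.dims} + {other.dims}")
                return Quantity(self.value + other.value, self.dims)

            def __mul__(self, other):
                return Quantity(self.value * other.value,
                                tuple(a + b for a, b in zip(self.dims, other.dims)))

        # Semantic declarations of primitive variables.
        length = Quantity(3.0, (1, 0, 0))    # metres
        duration = Quantity(2.0, (0, 1, 0))  # seconds

        print(length + length)  # consistent: fine
        try:
            length + duration   # length plus time: dimensionally wrong
        except TypeError as err:
            print(err)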

    Probing Semantic Grounding in Language Models of Code with Representational Similarity Analysis

    Representational Similarity Analysis is a method from cognitive neuroscience which helps in comparing representations from two different sources of data. In this paper, we propose using Representational Similarity Analysis to probe the semantic grounding in language models of code. We probe representations from the CodeBERT model for semantic grounding using data from the IBM CodeNet dataset. Through our experiments, we show that current pre-training methods do not induce semantic grounding in language models of code, and instead focus on optimizing form-based patterns. We also show that even a small amount of fine-tuning on semantically relevant tasks increases the semantic grounding in CodeBERT significantly. Our ablations on the input modality to the CodeBERT model show that using bimodal inputs (code and natural language) rather than unimodal inputs (only code) gives better semantic grounding and sample efficiency during semantic fine-tuning. Finally, our experiments with semantic perturbations in code reveal that CodeBERT is able to robustly distinguish between semantically correct and incorrect code.
    Comment: Under review at ADMA 202
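
    A minimal version of the probing method itself is easy to state: compute a representational dissimilarity matrix (RDM) over the same items for each source of representations, then rank-correlate the two RDMs. The sketch below uses NumPy and SciPy; the random embeddings stand in for CodeBERT activations and for ground-truth semantic features, which are assumptions of the example.

        # Representational Similarity Analysis between two embedding spaces.
        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)
        n_items = 20  # e.g. 20 code snippets

        model_reprs = rng.normal(size=(n_items, 768))    # stand-in for CodeBERT
        semantic_reprs = rng.normal(size=(n_items, 32))  # stand-in for semantics

        # RDM: condensed vector of pairwise 1 - Pearson-correlation distances.
        rdm_model = pdist(model_reprs, metric="correlation")
        rdm_semantic = pdist(semantic_reprs, metric="correlation")

        # RSA score: rank correlation between the two RDMs; a high rho means
        # the model's representational geometry tracks the semantic structure.
        rho, p = spearmanr(rdm_model, rdm_semantic)
        print(f"RSA (Spearman rho) = {rho:.3f}, p = {p:.3g}")

    Because RSA compares geometries rather than raw vectors, the two spaces need not share a dimensionality, which is what makes it suitable for comparing model layers against external semantic features.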

    An approach to source-code plagiarism detection and investigation using latent semantic analysis

    This thesis looks at three aspects of source-code plagiarism: creating a definition of source-code plagiarism; describing the findings gathered from investigating the Latent Semantic Analysis information retrieval algorithm for source-code similarity detection; and proposing and evaluating a new algorithm that combines Latent Semantic Analysis with plagiarism detection tools. A recent review of the literature revealed that there is no commonly agreed definition of what constitutes source-code plagiarism in the context of student assignments. The thesis first analyses the findings from a survey carried out to gain insight into the perspectives of UK Higher Education academics who teach programming on computing courses; based on the survey findings, a detailed definition of source-code plagiarism is proposed. Secondly, the thesis investigates the application of an information retrieval technique, Latent Semantic Analysis, to derive semantic information from source-code files. Several parameters drive the effectiveness of Latent Semantic Analysis, and its performance in retrieving similar source-code files is evaluated under various parameter settings. Finally, an algorithm for combining Latent Semantic Analysis with plagiarism detection tools is proposed, and a tool is created and evaluated. The proposed tool, PlaGate, is a hybrid model that integrates Latent Semantic Analysis with plagiarism detection tools in order to enhance plagiarism detection. In addition, PlaGate has a facility for investigating the importance of source-code fragments with regard to their contribution towards proving plagiarism. PlaGate provides graphical output that indicates clusters of suspicious files and source-code fragments.
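
    The LSA step at the heart of the detection pipeline can be sketched compactly: build a term-by-file matrix from the source files, project it into a low-rank latent space with SVD, and compare files by cosine similarity there. The sketch below uses scikit-learn in Python; the toy submissions and the choice of two latent dimensions are assumptions of the example, not PlaGate's configuration.

        # LSA-based source-code similarity for flagging plagiarism candidates.
        from sklearn.decomposition import TruncatedSVD
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        submissions = {
            "alice.java": "int total = 0; for (int i = 0; i < n; i++) total += grades[i];",
            "bob.java":   "int sum = 0; for (int j = 0; j < n; j++) sum += grades[j];",
            "carol.java": "String name = reader.readLine(); map.put(name, address);",
        }

        # Term-by-file matrix; identifiers and keywords are the 'terms'.
        tfidf = TfidfVectorizer(token_pattern=r"[A-Za-z_]\w*").fit_transform(
            submissions.values())

        # Project into a low-dimensional latent semantic space.
        latent = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

        # Pairwise similarity in latent space; values near 1 flag candidates.
        sims = cosine_similarity(latent)
        names = list(submissions)
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                print(f"{names[i]} ~ {names[j]}: {sims[i][j]:.2f}")

    The latent projection is what lets alice.java and bob.java score as near-duplicates despite renamed identifiers, the classic disguise that exact-match tools miss.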