243,088 research outputs found

    Simplifying Deep-Learning-Based Model for Code Search

    To accelerate software development, developers frequently search for and reuse existing code snippets from large-scale codebases such as GitHub. Over the years, researchers have proposed many information retrieval (IR) based models for code search, which match keywords in a query against code text; however, these models fail to bridge the semantic gap between query and code. To address this challenge, Gu et al. proposed a deep-learning-based model named DeepCS. It jointly embeds method code and natural language descriptions into a shared vector space, where methods related to a natural language query are retrieved according to their vector similarities. However, DeepCS's working process is complicated and time-consuming. To overcome this issue, we propose a simplified model, CodeMatcher, that leverages IR techniques while retaining many of DeepCS's features. In essence, CodeMatcher keeps the query keywords in their original order, performs a fuzzy search over the name and body strings of methods, and returns the best-matched methods, i.e., those matching the longest sequence of query keywords. We verified its effectiveness on a large-scale codebase of about 41k repositories. Experimental results showed that CodeMatcher outperforms DeepCS by 97% in terms of MRR (a widely used accuracy measure for code search) and is over 66 times faster. Moreover, compared with the state-of-the-art IR-based model CodeHow, CodeMatcher improves MRR by 73%. We also observed that fusing the advantages of IR-based and deep-learning-based models is promising, because they naturally complement each other, and that improving the quality of method naming helps code search, since method names play an important role in connecting queries and code.
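
    As a rough illustration of the retrieval logic described in the abstract, the sketch below keeps the query keywords in their original order, fuzzy-matches them against method names and bodies, and ranks methods by the longest in-order sequence of matched keywords. The function and field names (code_matcher, name, body) and the substring-based matching are illustrative assumptions, not CodeMatcher's actual implementation.

```python
import re

def longest_ordered_match(keywords, text):
    """Count how many query keywords appear in `text` as substrings,
    in their original order, stopping at the first miss."""
    pos, count = 0, 0
    for kw in keywords:
        idx = text.find(kw, pos)
        if idx == -1:
            break  # keep only the longest matched prefix of the query
        pos = idx + len(kw)
        count += 1
    return count

def code_matcher(query, methods):
    """Hypothetical CodeMatcher-style retrieval: keywords in original
    order, fuzzy search over method name and body, longer matched
    keyword sequences rank higher."""
    keywords = [w.lower() for w in re.findall(r"\w+", query)]
    scored = []
    for m in methods:  # each m is a dict: {"name": ..., "body": ...}
        text = (m["name"] + " " + m["body"]).lower()
        score = longest_ordered_match(keywords, text)
        if score > 0:
            scored.append((score, m))
    # best-matched methods first
    return [m for score, m in sorted(scored, key=lambda s: -s[0])]
```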

    CodeTF: One-stop Transformer Library for State-of-the-art Code LLM

    Code intelligence plays a key role in transforming modern software engineering. Recently, deep-learning-based models, especially Transformer-based large language models (LLMs), have demonstrated remarkable potential in tackling code intelligence tasks by leveraging massive open-source code data and programming language features. However, the development and deployment of such models often require expertise in both machine learning and software engineering, creating a barrier to model adoption. In this paper, we present CodeTF, an open-source Transformer-based library for state-of-the-art Code LLMs and code intelligence. Following the principles of modular design and extensibility, we design CodeTF with a unified interface that enables rapid access and development across different types of models, datasets, and tasks. Our library supports a collection of pretrained Code LLMs and popular code benchmarks, including a standardized interface to train and serve code LLMs efficiently, as well as data features such as language-specific parsers and utility functions for extracting code attributes. In this paper, we describe the design principles, the architecture, and the key modules and components, and compare CodeTF with other related libraries. Finally, we hope CodeTF can bridge the gap between machine learning/generative AI and software engineering, providing a comprehensive open-source solution for developers, researchers, and practitioners. Comment: Ongoing work - Draft Preview.
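
    The unified-interface idea can be sketched as below, using the Hugging Face transformers library as a stand-in backend; CodeTF's real API differs, and the CodePipeline class and checkpoint name here are illustrative assumptions rather than the library's documented calls.

```python
# Sketch of a "one object hides all model plumbing" interface, in the
# spirit of CodeTF's design goals. Not CodeTF's actual API.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

class CodePipeline:
    """Wraps tokenizer and model behind a single predict() call, so
    users need no ML-framework expertise to run a Code LLM."""
    def __init__(self, checkpoint: str):
        self.tokenizer = AutoTokenizer.from_pretrained(checkpoint)
        self.model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

    def predict(self, source_code: str, max_new_tokens: int = 64) -> str:
        inputs = self.tokenizer(source_code, return_tensors="pt",
                                truncation=True)
        out = self.model.generate(**inputs, max_new_tokens=max_new_tokens)
        return self.tokenizer.decode(out[0], skip_special_tokens=True)

# Usage: the same interface regardless of which Code LLM is loaded.
pipe = CodePipeline("Salesforce/codet5-base-multi-sum")  # assumed checkpoint
print(pipe.predict("def add(a, b):\n    return a + b"))
```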

    Auto-Detection of Programming Code Vulnerabilities With Natural Language Processing

    Security vulnerabilities in source code are traditionally detected manually by software developers because there are no effective auto-detection tools. Current vulnerability detection tools require great human effort, and their results are flawed in many ways. Deep learning models could be a solution to this problem for two reasons: (1) deep learning models are relatively accurate at text classification and text summarization for source code, and (2) once deployed on cloud servers, deep-learning-based auto-detection can be far more efficient than human effort. We therefore developed two Natural Language Processing (NLP) models: the first is a text-classification model that takes source code as input and outputs a classification of the input's security vulnerability; the second is a text-to-text model that takes source code as input and outputs a fully machine-generated summary of the input's security vulnerability. Our evaluation shows that both models achieve impressive results.
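
    A minimal sketch of the classification half of such an approach, assuming a CodeBERT encoder with a vulnerability-label head; the checkpoint, label set, and example snippet are illustrative, and the paper's actual models and training data are not specified here. The classification head is freshly initialized, so predictions are meaningful only after fine-tuning on labeled vulnerable code.

```python
# Hedged sketch: classify a source-code snippet into an assumed set of
# vulnerability classes with a Transformer encoder.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["safe", "sql_injection", "buffer_overflow"]  # assumed classes

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/codebert-base", num_labels=len(LABELS))

def classify(snippet: str) -> str:
    """Return the predicted vulnerability class for a code snippet.
    (Untrained head: fine-tune on labeled code before relying on it.)"""
    inputs = tokenizer(snippet, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(classify('query = "SELECT * FROM users WHERE id=" + user_input'))
```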

    Artificial neural network model for automatic code generation in graphical interface applications

    Introduction: Currently, the software development industry is living through a golden age thanks to progress in machine learning, a branch of AI techniques. These advances have allowed tasks once considered exclusively human to be solved by computer. However, the complexity and broad scope of new projects that must be developed using programming languages have slowed project delivery times and affected company productivity. Objective: This research presents the methodology for constructing a recurrent neural network model that automatically generates source code for graphical user interfaces in the Python programming language. Method: A dataset of natural-language descriptions of graphical interfaces programmed in Python is constructed, and a deep neural network model is built on it to generate source code automatically. Results: The trained model achieves loss and perplexity values of 1.57 and 4.82, respectively, in the validation stage, while avoiding overfitting during training. Conclusions: A neural network model is trained to process natural-language requests for graphical interfaces and automatically generate Python source code that can be executed by the Python interpreter.
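
    The reported perplexity is consistent with the reported loss, since perplexity is the exponential of the cross-entropy loss: exp(1.57) ≈ 4.81. The sketch below shows the general shape of such a recurrent sequence-to-sequence model in PyTorch; the vocabulary sizes, dimensions, and toy batch are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch: an LSTM encoder-decoder that reads a natural-language
# request and emits Python source tokens, trained with cross-entropy.
import torch
import torch.nn as nn

class Seq2SeqRNN(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=128, hid=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True)
        self.decoder = nn.LSTM(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, tgt_vocab)

    def forward(self, src, tgt):
        _, state = self.encoder(self.src_emb(src))       # encode NL request
        dec, _ = self.decoder(self.tgt_emb(tgt), state)  # teacher forcing
        return self.out(dec)                             # logits over code tokens

model = Seq2SeqRNN(src_vocab=5000, tgt_vocab=8000)
src = torch.randint(0, 5000, (2, 12))   # toy natural-language token ids
tgt = torch.randint(0, 8000, (2, 30))   # toy Python-code token ids
logits = model(src, tgt[:, :-1])
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 8000), tgt[:, 1:].reshape(-1))
perplexity = torch.exp(loss)            # perplexity = exp(cross-entropy)
print(float(loss), float(perplexity))
```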

    Changeset-based Retrieval of Source Code Artifacts for Bug Localization

    Modern software development is extremely collaborative and agile, with unprecedented speed and scale of activity. Popular trends like continuous delivery and continuous deployment aim at building, fixing, and releasing software with greater speed and frequency. Bug localization, which aims to automatically link bug reports to the relevant software artifacts, has the potential to improve developer efficiency by reducing the time spent on debugging and examining code. To date, this problem has been addressed primarily by applying information retrieval techniques to static code elements, which are intrinsically unable to reflect how software evolves over time. Furthermore, because prior approaches frequently rely on exact term matching to measure the relatedness between a bug report and a software artifact, they are susceptible to the lexical gap that exists between natural and programming language. This thesis explores using software changes (i.e., changesets), instead of static code elements, as the primary data unit for constructing an information retrieval model for bug localization. Changesets, which represent the differences between two consecutive versions of the source code, provide a natural representation of a software change and make it possible to capture both the semantics of the source code and the semantics of the code modification. To bridge the lexical gap between source code and natural language, this thesis investigates topic modeling and deep learning architectures that create semantically rich data representations, with the goal of identifying latent connections between bug reports and source code. To show the feasibility of the proposed approaches, this thesis also investigates practical aspects of using a bug localization tool, such as retrieval delay and training data availability. The results indicate that the proposed techniques effectively leverage historical data about bugs and their related source code components to improve retrieval accuracy, especially for bug reports expressed in natural language with little to no explicit code references. Further improvement in accuracy is observed when the size of the training dataset is increased through the data augmentation and data balancing strategies proposed in this thesis, although the magnitude of the improvement varies with the model architecture. In terms of retrieval delay, the results indicate that the proposed deep learning architecture significantly outperforms prior work and scales up with respect to search space size.
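
    The retrieval core can be illustrated with a simple baseline that ranks changesets against a bug report by textual similarity; a TF-IDF/cosine model stands in here for the thesis's topic-modeling and deep-learning encoders, and the toy changesets and bug report are invented for illustration.

```python
# Hedged sketch: rank changesets (commit diffs) by similarity to a bug
# report, a stand-in for the thesis's richer semantic representations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

changesets = [  # diff text of each commit (assumed toy corpus)
    "fix null pointer in login handler when session token expired",
    "refactor rendering pipeline, extract shader compilation",
    "add retry logic to network client on connection timeout",
]
bug_report = "app crashes with NullPointerException after session times out"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(changesets + [bug_report])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Most relevant changesets first; the files they touch would then be
# surfaced to the developer as likely bug locations.
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.3f}  {changesets[idx]}")
```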

    Studying the Usage of Text-To-Text Transfer Transformer to Support Code-Related Tasks

    Deep learning (DL) techniques are gaining more and more attention in the software engineering community. They have been used to support several code-related tasks, such as automatic bug fixing and code comment generation. Recent studies in the Natural Language Processing (NLP) field have shown that the Text-To-Text Transfer Transformer (T5) architecture can achieve state-of-the-art performance on a variety of NLP tasks. The basic idea behind T5 is to first pre-train a model on a large and generic dataset using a self-supervised task (e.g., filling in masked words in sentences). Once the model is pre-trained, it is fine-tuned on smaller, specialized datasets, each related to a specific task (e.g., language translation, sentence classification). In this paper, we empirically investigate how the T5 model performs when pre-trained and fine-tuned to support code-related tasks. We pre-train a T5 model on a dataset composed of natural language English text and source code. Then, we fine-tune this model by reusing the datasets from four previous works that used DL techniques to: (i) fix bugs, (ii) inject code mutants, (iii) generate assert statements, and (iv) generate code comments. We compare the performance of this single model with the results reported in the four original papers proposing DL-based solutions for those tasks. We show that our T5 model, exploiting additional data for the self-supervised pre-training phase, can achieve performance improvements over the four baselines. Comment: Accepted to the 43rd International Conference on Software Engineering (ICSE 2021).
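
    A minimal sketch of the fine-tuning step described above, using the Hugging Face implementation of T5; the "fix bug:" task prefix and the toy buggy/fixed pair are illustrative assumptions, not the paper's actual data or setup.

```python
# Hedged sketch: one fine-tuning step of T5 on a (buggy, fixed) pair,
# then generation through the same seq2seq interface.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One toy example standing in for a whole fine-tuning dataset.
source = "fix bug: if (a = b) { return true; }"   # assumed task prefix
target = "if (a == b) { return true; }"

inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

model.train()
loss = model(input_ids=inputs.input_ids,
             attention_mask=inputs.attention_mask,
             labels=labels).loss    # seq2seq cross-entropy
loss.backward()
optimizer.step()

# After fine-tuning, inference uses the same model via generate().
model.eval()
out = model.generate(**tokenizer("fix bug: while (x > 0) x += 1;",
                                 return_tensors="pt"), max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```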