Analysis and Detection of Information Types of Open Source Software Issue Discussions
Most modern Issue Tracking Systems (ITSs) for open source software (OSS)
projects allow users to add comments to issues. Over time, these comments
accumulate into discussion threads embedded with rich information about the
software project, which can potentially satisfy the diverse needs of OSS
stakeholders. However, discovering and retrieving relevant information from the
discussion threads is a challenging task, especially when the discussions are
lengthy and the number of issues in ITSs is vast. In this paper, we address
this challenge by identifying the information types presented in OSS issue
discussions. Through qualitative content analysis of 15 complex issue threads
across three projects hosted on GitHub, we uncovered 16 information types and
created a labeled corpus containing 4656 sentences. Our investigation of
supervised, automated classification techniques indicated that, when prior
knowledge about the issue is available, Random Forest can effectively detect
most sentence types using conversational features such as the sentence length
and its position. When classifying sentences from new issues, Logistic
Regression can yield satisfactory performance using textual features for
certain information types, while falling short on others. Our work represents a
nontrivial first step towards tools and techniques for identifying and
obtaining the rich information recorded in the ITSs to support various software
engineering activities and to satisfy the diverse needs of OSS stakeholders.
Comment: 41st ACM/IEEE International Conference on Software Engineering (ICSE 2019)
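To make the two classification setups concrete, here is a minimal sketch (not from the paper; the toy sentences, labels, and exact feature choices are assumptions) pairing a Random Forest with conversational features and a Logistic Regression with tf-idf textual features, using scikit-learn:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for the paper's labeled corpus: (sentence, thread position, label).
sentences = ["Thanks, I will try that tonight.",
             "The stack trace points to the parser module."]
positions = [0.9, 0.1]                 # relative position of the sentence in its thread
labels = ["social", "bug-report"]      # illustrative labels, not the paper's 16-type taxonomy

# Setup 1: conversational features (sentence length, position) + Random Forest,
# usable when prior knowledge about the issue thread is available.
conv_features = [[len(s.split()), p] for s, p in zip(sentences, positions)]
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(conv_features, labels)

# Setup 2: textual (tf-idf) features + Logistic Regression,
# for classifying sentences from previously unseen issues.
vec = TfidfVectorizer()
lr = LogisticRegression(max_iter=1000).fit(vec.fit_transform(sentences), labels)

print(rf.predict([[6, 0.5]]))                                   # classify by length/position
print(lr.predict(vec.transform(["I can reproduce this on Linux."])))  # classify by text
```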
A Large-Scale Study of Modern Code Review and Security in Open Source Projects.
Towards understanding the challenges faced by machine learning software developers and enabling automated solutions
Modern software systems increasingly include machine learning (ML) as an integral component. However, we do not yet understand the difficulties software developers face when learning about ML libraries and using them within their systems. To fill that gap, this thesis reports on a detailed manual examination of 3,243 highly-rated Q&A posts related to ten ML libraries, namely Tensorflow, Keras, scikit-learn, Weka, Caffe, Theano, MLlib, Torch, Mahout, and H2O, on Stack Overflow, a popular online technical Q&A forum. Our findings reveal an urgent need for software engineering (SE) research in this area. The second part of the thesis focuses on the characteristics of Deep Neural Network (DNN) bugs. We study 2,716 high-quality posts from Stack Overflow and 500 bug-fix commits from GitHub about five popular deep learning libraries, namely Caffe, Keras, Tensorflow, Theano, and Torch, to understand the types of bugs, their root causes and impacts, the bug-prone stages of the deep learning pipeline, and whether common anti-patterns appear in this buggy software. Our findings imply that repairing software that uses DNNs is one unmistakable SE need where automated tools could be beneficial; however, we do not fully understand the challenges of repairing DNNs or the patterns used when repairing them manually. The third part of this thesis therefore presents a comprehensive study of bug-fix patterns to address these questions. We studied 415 repairs from Stack Overflow and 555 repairs from GitHub for the same five deep learning libraries to understand repair challenges and bug-repair patterns. Our key findings reveal that DNN bug-fix patterns are distinctive compared to traditional bug-fix patterns, and that the most common patterns are fixing data dimensions and neural network connectivity. Finally, we propose an automatic technique to detect ML Application Programming Interface (API) misuses. We started with an empirical study to understand ML API misuses; it shows that ML API misuse is prevalent and distinct from non-ML API misuse. Inspired by these findings, we contribute Amimla (Api Misuse In Machine Learning Apis), an approach and a tool for ML API misuse detection. Amimla relies on several technical innovations: an abstract representation of ML pipelines for use in misuse detection, an abstract representation of neural networks for deep-learning-related APIs, a representation strategy for constraints on ML APIs, and a misuse detection strategy for both single- and multi-API misuses. Our experimental evaluation shows that Amimla achieves a high average accuracy of ∼80% on two benchmarks of misuses from Stack Overflow and GitHub.
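As a deliberately tiny illustration of constraint-based API misuse detection (this is not the thesis's Amimla tool; the single rule, class name, and example snippet are all assumptions), the sketch below flags one classic misuse, calling predict() on a model before fit(), using a plain AST walk:

```python
import ast

EXAMPLE = """\
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
print(clf.predict(X))   # misuse: model queried before fitting
clf.fit(X, y)
"""

class FitBeforePredict(ast.NodeVisitor):
    """Flag .predict() calls on names not yet seen in a .fit() call."""

    def __init__(self):
        self.fitted, self.misuses = set(), []

    def visit_Call(self, node):
        func = node.func
        if isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
            name, method = func.value.id, func.attr
            if method == "fit":
                self.fitted.add(name)           # receiver is now considered fitted
            elif method == "predict" and name not in self.fitted:
                self.misuses.append(node.lineno)  # queried before any fit()
        self.generic_visit(node)

checker = FitBeforePredict()
checker.visit(ast.parse(EXAMPLE))
print(checker.misuses)  # -> [3]
```

A real detector would additionally track aliasing and interprocedural flow; this sketch only shows the shape of encoding an API constraint as a traversal rule.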
Automatic Static Bug Detection for Machine Learning Libraries: Are We There Yet?
Automatic detection of software bugs is a critical task in software security.
Many static tools that can help detect bugs have been proposed. However, these
static bug detectors have mainly been evaluated on general software projects,
which calls into question their practical effectiveness and usefulness for machine learning
libraries. In this paper, we address this question by analyzing five popular
and widely used static bug detectors, i.e., Flawfinder, RATS, Cppcheck,
Facebook Infer, and Clang static analyzer on a curated dataset of software bugs
gathered from four popular machine learning libraries including Mlpack, MXNet,
PyTorch, and TensorFlow with a total of 410 known bugs. Our research provides a
categorization of these tools' capabilities to better understand the strengths
and weaknesses of the tools for detecting software bugs in machine learning
libraries. Overall, our study shows that static bug detectors find a negligible
share of all bugs, accounting for 6 of the 410 bugs (∼1.5%); Flawfinder and RATS are
the most effective static checkers for finding software bugs in machine learning
libraries. Based on our observations, we further identify and discuss
opportunities to make the tools more effective and practical.
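For readers who want to reproduce the tool-running step, a minimal sketch follows. It assumes Flawfinder and Cppcheck are installed and that src_dir points at a local checkout of one studied library (the path is hypothetical); it only counts raw diagnostic lines, since matching diagnostics against the 410 known bugs is the manual part of the study:

```python
import subprocess

src_dir = "mlpack/src"  # hypothetical local checkout of one studied library

# Two of the five detectors analyzed in the paper, with basic real flags.
detectors = {
    "flawfinder": ["flawfinder", src_dir],
    "cppcheck": ["cppcheck", "--enable=all", src_dir],
}

for name, cmd in detectors.items():
    result = subprocess.run(cmd, capture_output=True, text=True)
    # Both tools emit roughly one diagnostic per line; cppcheck writes its
    # findings to stderr, so count both streams as a rough tally.
    n_lines = len(result.stdout.splitlines()) + len(result.stderr.splitlines())
    print(f"{name}: {n_lines} output lines (each must be triaged by hand)")
```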
Demystifying Dependency Bugs in Deep Learning Stack
Deep learning (DL) applications, built upon a heterogeneous and complex DL
stack (e.g., Nvidia GPU, Linux, CUDA driver, Python runtime, and TensorFlow),
are subject to software and hardware dependencies across the DL stack. One
challenge in dependency management across the entire engineering lifecycle is
posed by the asynchronous and radical evolution and the complex version
constraints among dependencies. Developers may introduce dependency bugs (DBs)
in selecting, using and maintaining dependencies. However, the characteristics
of DBs in the DL stack are still under-investigated, hindering practical solutions
to dependency management in the DL stack. To bridge this gap, this paper presents
the first comprehensive study to characterize symptoms, root causes and fix
patterns of DBs across the whole DL stack with 446 DBs collected from
Stack Overflow posts and GitHub issues. For each DB, we first investigate the
symptom as well as the lifecycle stage and dependency where the symptom is
exposed. Then, we analyze the root cause as well as the lifecycle stage and
dependency where the root cause is introduced. Finally, we explore the fix
pattern and the knowledge sources that are used to fix it. Our findings from
this study shed light on practical implications for dependency management.
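One symptom class the paper studies, conflicting version constraints across layers of the stack, can be sketched with the packaging library. The package names and specifier sets below are invented for illustration and are not taken from the paper's 446 collected DBs:

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Hypothetical constraints that two layers of a DL stack place on numpy.
constraints = {
    "tensorflow": SpecifierSet(">=1.19.2,<1.24"),
    "my_plugin": SpecifierSet(">=1.24"),
}

candidate = Version("1.23.5")
for pkg, spec in constraints.items():
    verdict = "satisfies" if candidate in spec else "violates"
    print(f"numpy {candidate} {verdict} {pkg}'s constraint ({spec})")

# No single numpy version can satisfy both specifier sets at once: the kind
# of dependency bug that surfaces only at install or import time.
```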