
    Understanding Eye Gaze Patterns in Code Comprehension

    Program comprehension is a sub-field of software engineering that seeks to understand how developers understand programs. Comprehension acts as a starting point for many software engineering tasks such as bug fixing, refactoring, and feature creation. This dissertation presents a series of empirical studies of how developers comprehend software in realistic settings. The unique aspect of this work is the use of eye-tracking equipment to gather fine-grained information about what developers look at in software artifacts while they perform realistic tasks in an environment familiar to them, namely a context including both an Integrated Development Environment (Eclipse or Visual Studio) and a web browser (Google Chrome). The iTrace eye-tracking infrastructure is used for certain studies on large code files, as it is able to handle page scrolling and context switching. The first study is a classroom-based study of how students actively trained in the classroom understand grouped units of C++ code. Results indicate that students made many transitions between lines that were closer together, and were attracted most to if statements and, to a lesser extent, assignment code. The second study examines how developers use Stack Overflow page elements to build summaries of open-source project code. Results indicate that participants focused more heavily on the question and answer text, and on the embedded code, than on the title, question tags, or votes. The third study presents a larger code summarization study using different information contexts: Stack Overflow, bug repositories, and source code. Results show that participants tended to visit up to two codebase files in either the combined or isolated codebase session, but visited more bug report pages, and spent longer on new Stack Overflow pages they visited, when given either of these two treatments in isolation. In the combined session, time spent on the one or two codebase files they viewed dominated the session time. Information learned from tracking developers' gaze in these studies can form the foundation for developer behavior models, which we hope can later inform recommendations for actions one might take to achieve workflow goals in these settings. Advisor: Bonita Sharif
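    To make the transition analysis concrete, the sketch below counts how far apart consecutively fixated source lines are, given time-ordered fixations; the fixation record format is a hypothetical simplification, not the actual iTrace output schema.

```python
# Minimal sketch: measure the distance between consecutively fixated lines.
# The input format (dicts with a 'line' key, ordered by time) is hypothetical.
from collections import Counter

def transition_distances(fixations):
    """Count line-to-line transition distances in a time-ordered fixation list."""
    lines = [f["line"] for f in fixations]
    return Counter(abs(b - a) for a, b in zip(lines, lines[1:]) if a != b)

fixations = [{"line": 10}, {"line": 11}, {"line": 25}, {"line": 11}]
print(transition_distances(fixations))  # Counter({14: 2, 1: 1})
```

    A distribution skewed toward small distances would reflect the study's finding that students mostly transitioned between nearby lines.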

    AN EYE TRACKING REPLICATION STUDY OF A RANDOMIZED CONTROLLED TRIAL ON THE EFFECTS OF EMBEDDED COMPUTER LANGUAGE SWITCHING

    The use of multiple programming languages (polyglot programming) is common practice in modern software development. However, little is known about how the use of these different languages affects developer productivity. The study presented in this thesis replicates a randomized controlled trial that investigates the use of multiple languages in the context of database programming tasks. Participants in our study were given coding tasks written in Java and one of three SQL-like embedded languages: plain SQL in strings, Java methods only, or a hybrid embedded language that was more similar to Java. In addition to recording the online questionnaire responses and the participants' solutions to the tasks, the participants' eye movements were also recorded using an eye tracker. Eye tracking as a method for software development studies has grown in recent years and allows for finer-grained information about how developers complete programming tasks. Eye-tracking data was collected from 31 participants (from both academia and industry) for each of the six programming tasks they completed. Unlike the original study, we were unable to find a significant effect on productivity due to the language used or to whether the participant was a native English speaker. However, we did find the same effect of participant experience on programming productivity, which indicates that more experienced programmers are able to complete polyglot programming tasks more efficiently. We also found that all participants looked at the sample code the same percentage of the time for a given task, regardless of their experience or the language variant they were given. Top-level navigation behavior also remained largely unchanged across experience levels and language variants. We found that professionals performed more transitions between the Java code and method parameters than their novice counterparts. Overall, we found that the level of polyglot programming did not have as significant an effect as the task itself. The high-level strategy that participants employed appeared similar regardless of the language variant they were given. Adviser: Bonita Sharif

    Representational Learning Approach for Predicting Developer Expertise Using Eye Movements

    The thesis analyzes an existing eye-tracking dataset collected while software developers were solving bug-fixing tasks in an open-source system. The analysis is performed using a representational learning approach, namely a multi-layer perceptron (MLP). The novel aspect of the analysis is the introduction of a new feature engineering method based on the eye-tracking data, which is then used to predict developer expertise on the data. The dataset used in this thesis is inherently more complex because it was collected in a very dynamic environment, i.e., the Eclipse IDE using an eye-tracking plugin, iTrace. Previous work in this area worked only with short code snippets that do not represent how developers usually program in a realistic setting. A comparative analysis between representational learning and non-representational learning (Support Vector Machine, Naive Bayes, Decision Tree, and Random Forest) is also presented. The results are obtained from an extensive set of experiments (with an 80/20 training and testing split), which show that representational learning (MLP) works well on our dataset, reporting on average 30% higher accuracy across all tasks. Furthermore, a state-of-the-art method for feature engineering is proposed to extract features from the eye-tracking data. The average accuracy on all the tasks is 93.4%, with a recall of 78.8% and an F1 score of 81.6%. We discuss the implications of these results for the future of automated prediction of developer expertise. Adviser: Bonita Sharif
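    A minimal sketch of the reported comparison, assuming synthetic stand-in data (the thesis's actual eye-movement features and expertise labels are not reproduced here):

```python
# Compare an MLP (representational learning) against the four classical
# learners on an 80/20 split. make_classification is a synthetic stand-in
# for the real eye-movement feature vectors and expertise labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "MLP": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
    "SVM": SVC(random_state=0),
    "NaiveBayes": GaussianNB(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} "
          f"recall={recall_score(y_te, pred):.3f} f1={f1_score(y_te, pred):.3f}")
```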

    Towards Next Generation Bug Tracking Systems

    Although bug tracking systems are fundamental to supporting virtually any software development process, they are currently suboptimal for supporting the needs and complexities of large communities. This dissertation first presents a study showing empirical evidence that the traditional interface used by current bug tracking systems invites much noise (unreliable, unuseful, and disorganized information) into the ecosystem. We find that noise comes not only from low-quality contributions posted by inexperienced users or from conflicts that naturally arise in such ecosystems, but also from the difficulty of fitting the complex bug resolution process and its knowledge into the linear sequence of comments that current bug tracking systems use to collect and organize information. Since productivity in bug tracking systems relies on bug reports with accessible and reliable information, contributors are left struggling to work on and make sense of the dumps of data submitted to bug reports, which in turn impacts productivity. Next generation bug tracking systems should be more than a tool for exchanging unstructured textual comments. They should be an ecosystem tailored for collaborative knowledge building, leveraging the power of the masses to collect reliable and useful information about bugs and providing mechanisms and incentives to verify the validity of such information as well as mechanisms to organize it, thus facilitating comprehension and reasoning. To bring bug tracking systems toward this vision, we present three orthogonal approaches aimed at increasing the usefulness and reliability of contributions and at organizing information to improve understanding and reasoning. To improve the usefulness and reliability of contributions, we propose the addition of game mechanisms to bug tracking systems, with the objective of motivating contributors to post higher-quality content. Through an empirical investigation of Stack Overflow, we evaluate the effects of such mechanisms in a collaborative software development ecosystem and map out a promising approach to using game mechanisms in bug tracking systems. To improve data organization, we propose two complementary approaches. The first is an automated approach that creates bug report summaries, making reading and working with bug reports easier by highlighting the portions of bug reports that expert developers would focus on if reading the bug report in a hurry. The second is a fundamental change to how data is collected and organized, eliminating comments as the main component of bug reports. Instead of comments, users contribute informational posts about bug diagnostics or solutions and can attach contextual comments to each of the different diagnostic or solution posts. Our evaluations with real bug tracking system users find that they consider the bug report summaries very useful in facilitating common bug tracking system tasks, such as finding duplicate bug reports. In addition, users found that organizing content through diagnostic and solution posts significantly facilitates reasoning about and searching for relevant information. Finally, we present future directions of work, investigating how next generation bug tracking systems could combine the three approaches, such that each benefits from and builds upon the results of the others.
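    As a rough illustration of the summarization idea only (the dissertation's approach is informed by what expert developers actually attend to, which this does not model), a frequency-based extractive summarizer over a bug report might look like this:

```python
# Toy extractive summarizer: rank bug-report sentences by average word
# frequency and keep the top k. A stand-in for the expert-informed approach.
import re
from collections import Counter

def summarize(report: str, k: int = 2):
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", report.strip()) if s]
    freq = Counter(re.findall(r"[a-z]+", report.lower()))
    def score(sentence):
        words = re.findall(r"[a-z]+", sentence.lower())
        return sum(freq[w] for w in words) / max(len(words), 1)
    return sorted(sentences, key=score, reverse=True)[:k]

report = ("The app crashes on startup. The crash occurs when the config file "
          "is missing. Deleting the config file reproduces the crash. "
          "I am using version 2.1 on Ubuntu.")
print(summarize(report))
```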

    Assessing Comment Quality in Object-Oriented Languages

    Previous studies have shown that high-quality code comments support developers in software maintenance and program comprehension tasks. However, the semi-structured nature of comments, the several conventions for writing comments, and the lack of quality assessment tools for all aspects of comments make comment evaluation and maintenance a non-trivial problem. To understand what specifies a high-quality comment and to build effective assessment tools, our thesis emphasizes acquiring a multi-perspective view of comments, approached by analyzing (1) the academic support for comment quality assessment, (2) developer commenting practices across languages, and (3) developer concerns about comments. Our findings regarding the academic support for assessing comment quality showed that researchers have focused primarily on Java in the last decade, even though the trend of using polyglot environments in software projects is increasing. Similarly, the trend of analyzing specific types of code comments (method comments or inline comments) is increasing, but studies rarely analyze class comments. We found 21 quality attributes that researchers consider when assessing comment quality, and manual assessment is still the most commonly used technique for assessing various quality attributes. Our analysis of developer commenting practices showed that developers embed a mixed level of detail in class comments, ranging from high-level class overviews to low-level implementation details, across programming languages. They follow style guidelines regarding what information to write in class comments but violate the structure and syntax guidelines. They primarily face problems locating relevant guidelines for writing consistent and informative comments, verifying the adherence of their comments to the guidelines, and evaluating the overall state of comment quality. To help researchers and developers build comment quality assessment tools, we contribute: (i) a systematic literature review (SLR) of ten years (2010–2020) of research on assessing comment quality, (ii) a taxonomy of quality attributes used to assess comment quality, (iii) an empirically validated taxonomy of class comment information types from three programming languages, (iv) a multi-programming-language approach to automatically identify comment information types, (v) an empirically validated taxonomy of comment convention-related questions and recommendations from various Q&A forums, and (vi) a tool to gather discussions from multiple developer sources, such as Stack Overflow and mailing lists. Our contributions provide various kinds of empirical evidence of developers' interest in reducing effort in the software documentation process, of the limited support developers get in automatically assessing comment quality, and of the challenges they face in writing high-quality comments. This work lays the foundation for future effective comment quality assessment tools and techniques.
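    Contribution (iv) can be pictured with a small sketch: a classifier that maps comment sentences to information types. The categories and training examples below are invented for illustration; this is not the thesis's actual pipeline.

```python
# Illustrative sketch: classify comment sentences into information types.
# Labels such as "summary", "usage", "warning" are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "Represents a user session.",            # summary
    "Call close() when finished.",           # usage
    "Do not use from multiple threads.",     # warning
    "Holds the parsed configuration tree.",  # summary
]
labels = ["summary", "usage", "warning", "summary"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(comments, labels)
print(clf.predict(["Invoke start() before reading."]))
```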


    Identifying reusable knowledge in developer instant messaging communication.

    Context and background: Software engineering is a complex and knowledge-intensive activity. Required knowledge (e.g., about technologies, frameworks, and design decisions) changes fast, and the knowledge needs of those who design, code, test, and maintain software constantly evolve. On the other hand, software developers use a wide range of processes, practices, and tools in which they explicitly and implicitly “produce” and capture different types of knowledge. Problem: Software developers use instant messaging tools (e.g., Slack, Microsoft Teams, and Gitter) to discuss development-related problems, share experiences, and collaborate in projects. This communication takes place in chat rooms that accumulate potentially relevant knowledge to be reused by other developers. Therefore, in this research we analyze whether there is reusable knowledge in developer instant messaging communication by exploring (a) which instant messaging platforms can be a source of reusable knowledge, and (b) the software engineering themes that represent the main discussions of developers in instant messaging communication. We also analyze how this reusable knowledge can be identified with the use of topic modeling (a natural language processing technique to discover abstract topics in text) by (c) surveying the literature on how topic modeling has been applied in software engineering research, and (d) evaluating how topic models perform with developer instant messages. Method: First, we conducted a Field Study through an exploratory case study and a reflexive thematic analysis to check whether there is reusable knowledge in developer instant messaging communication, and if so, what this knowledge (the main themes discussed) is. Then, we conducted a Sample Study to explore how reusable knowledge in developer instant messaging communication can be identified; in this study, we applied a literature survey and software repository mining (i.e., short-text topic modeling). Findings and contributions: We (a) developed a comparison framework for instant messaging tools, (b) identified a map of the main themes discussed in chat rooms of an instant messaging tool (Gitter, a platform used by software developers), (c) provided a comprehensive literature review that offers insights and references on the use of topic modeling in software engineering, and (d) provided an evaluation of the performance of topic models applied to developer instant messages, based on topic coherence metrics and human judgment of topic quality.
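    As a minimal sketch of the Sample Study's repository-mining step, the snippet below fits a topic model on a handful of invented chat messages and scores it with a coherence metric; the thesis evaluates short-text topic models specifically, for which plain LDA is only a generic stand-in.

```python
# Fit a topic model on short developer chat messages (invented here) and
# score it with a topic-coherence metric. Plain LDA stands in for the
# short-text topic models the thesis actually evaluates.
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel

messages = [
    "how do I configure the webpack dev server proxy",
    "the gradle build fails after the dependency upgrade",
    "proxy settings live in the webpack config file",
    "try clearing the gradle cache and rebuilding",
]
texts = [m.lower().split() for m in messages]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, random_state=0)
# u_mass coherence is computed from the corpus itself; c_v is another option.
coherence = CoherenceModel(model=lda, corpus=corpus, dictionary=dictionary,
                           coherence="u_mass")
print(coherence.get_coherence())
```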

    Large Language Models for Software Engineering: A Systematic Literature Review

    Large Language Models (LLMs) have significantly impacted numerous domains, notably including Software Engineering (SE). Nevertheless, a well-rounded understanding of the applications, effects, and possible limitations of LLMs within SE is still in its early stages. To bridge this gap, our systematic literature review takes a deep dive into the intersection of LLMs and SE, with a particular focus on understanding how LLMs can be exploited in SE to optimize processes and outcomes. Through a comprehensive review approach, we collect and analyze a total of 229 research papers from 2017 to 2023 to answer four key research questions (RQs). In RQ1, we categorize and provide a comparative analysis of the different LLMs that have been employed in SE tasks, laying out their distinctive features and uses. For RQ2, we detail the methods involved in data collection, preprocessing, and application in this realm, shedding light on the critical role of robust, well-curated datasets for successful LLM implementation. RQ3 examines the specific SE tasks where LLMs have shown remarkable success, illuminating their practical contributions to the field. Finally, RQ4 investigates the strategies employed to optimize and evaluate the performance of LLMs in SE, as well as common techniques for prompt optimization. Armed with insights drawn from addressing these RQs, we sketch a picture of the current state of the art, pinpointing trends, identifying gaps in existing research, and flagging promising areas for future study.

    Automatic Prediction of Rejected Edits in Stack Overflow

    The content quality of shared knowledge in Stack Overflow (SO) is crucial to supporting software developers with their programming problems. Thus, SO allows its users to suggest edits to improve the quality of a post (i.e., a question or an answer). However, existing research shows that many suggested edits in SO are rejected due to undesired content or formats or to violations of the edit guidelines. Such a scenario frustrates or demotivates users who would like to make good-quality edits. Therefore, our research focuses on assisting SO users by offering them suggestions on how to improve their editing of posts. First, we manually investigate 764 (382 questions + 382 answers) edits rejected by rollbacks and produce a catalog of 19 rejection reasons. Second, we extract 15 text- and user-based features to capture those rejection reasons. Third, we develop four machine learning models using those features. Our best-performing model can predict rejected edits with 69.1% precision, 71.2% recall, 70.1% F1-score, and 69.8% overall accuracy. Fourth, we introduce an online tool named EditEx that works with the SO edit system. EditEx can assist users while editing posts by suggesting the potential causes of rejections. We recruited 20 participants to assess the effectiveness of EditEx. Half of the participants (the treatment group) used EditEx, and the other half (the control group) used the SO standard edit system to edit posts. According to our experiment, EditEx can help the SO standard edit system prevent 49% of rejected edits, including the commonly rejected ones, and it can prevent 12% of rejections even in free-form regular edits. The treatment group found the potential rejection reasons identified by EditEx influential. Furthermore, the median workload of suggesting edits using EditEx is half that of the SO edit system. (Accepted for publication in the Empirical Software Engineering (EMSE) journal.)
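    To give a flavor of the feature extraction step, the sketch below computes a few text-based features of a suggested edit; these are illustrative stand-ins, not the paper's exact 15 features.

```python
# Illustrative text-based features of a suggested edit (stand-ins for the
# paper's 15 text- and user-based features).
import re

def edit_features(old_body: str, new_body: str) -> dict:
    def code_lines(text):  # SO renders 4-space-indented lines as code
        return sum(line.startswith("    ") for line in text.splitlines())
    def links(text):
        return len(re.findall(r"https?://", text))
    return {
        "chars_changed": abs(len(new_body) - len(old_body)),
        "code_lines_added": code_lines(new_body) - code_lines(old_body),
        "links_added": links(new_body) - links(old_body),
        "shouting_words": len(re.findall(r"\b[A-Z]{3,}\b", new_body)),
    }

print(edit_features("Use a loop.",
                    "Use a loop.\n    for x in xs: f(x)\nSee https://example.com."))
```

    Features like these, together with user-based ones (e.g., editor reputation), would then feed the machine learning models described above.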

    Improving Software Dependability through Documentation Analysis

    Software documentation contains critical information that describes a system's functionality and requirements. Documentation exists in several forms, including code comments, test plans, manual pages, and user manuals. The lack of documentation in existing software systems is an issue that impacts software maintainability and programmer productivity. Since some code bases contain a large amount of documentation, we want to leverage this existing documentation to improve software dependability. Specifically, we utilize documentation to help detect software bugs and repair corrupted files, which can reduce the number of software errors and failures and improve a system's reliability (e.g., continuity of correct service). We also generate documentation (e.g., code comments) automatically to help developers understand source code, which helps improve a system's maintainability (e.g., its ability to undergo repairs and modifications). In this thesis, we analyze software documentation and propose two branches of work, focusing on three types of documentation: manual pages, code comments, and user manuals. The first branch of work focuses on documentation analysis, because documentation contains valuable information that describes the behavior of a program. We automatically extract constraints from documentation and apply them in a dynamic-analysis symbolic execution tool to find bugs in the target software, and we manually extract constraints from documentation and apply them in a structured-file parsing application to repair corrupted PDF files. The second branch of work focuses on automatic code comment generation to improve software documentation. For documentation analysis, we propose and implement DASE and DocRepair. DASE leverages automatically extracted constraints from documentation to improve a dynamic-analysis symbolic execution tool: it guides symbolic execution to focus testing on execution paths that exercise a program's core functionalities, using constraints learned from the documentation. We evaluated DASE on 88 programs from five mature real-world software suites to detect software bugs. DASE detects 12 previously unknown bugs that symbolic execution would fail to detect when given no input constraints, 6 of which have been confirmed by the developers. For DocRepair, we conduct an empirical study to understand and repair corrupted PDF files. We create the first dataset of 319 corrupted PDF files and conduct an empirical study on 119 real-world corrupted PDF files to identify the common types of file corruption. Based on the results of this study, we propose a technique called DocRepair, whose repair algorithm includes seven repair operators that utilize manually extracted constraints from documentation to repair corrupted files. We evaluate DocRepair against three common PDF repair tools. Among the 1,827 corrupted files collected from two corpora of PDF files, DocRepair successfully repairs 354 files, compared to 508, 41, and 84 for Mutool, PDFtk, and GhostScript, respectively. We also propose DocRepair+, a technique that combines multiple repair tools and successfully repairs 751 files. Where documentation is lacking, DASE and DocRepair+ would not work; we therefore propose automated documentation generation to address the issue. We propose and implement CloCom+, which generates code comments by mining both existing software repositories on GitHub and a question-and-answer site, Stack Overflow. CloCom+ generated 442 unique comments for 16 Java projects. Although CloCom+ improves on previous automatic comment generation work (SumSlice), the quality (evaluated on completeness, conciseness, expressiveness, and usefulness) and yield (number of generated comments) are still rather low, which makes the technique not yet ready for real-world usage. In the future, it may be possible to combine the two proposed branches of work (documentation analysis and documentation generation) to further improve software dependability; for example, one could extract constraints from automatically generated documentation (e.g., code comments).
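    As a toy illustration of the documentation-analysis idea behind DASE, the sketch below pulls valid command-line flags out of a man-page-like OPTIONS section; such constraints could then narrow the input space a symbolic-execution tool explores. The man-page text and pattern are invented for illustration, not DASE's actual extraction rules.

```python
# Toy illustration: extract command-line options from a man-page-like
# OPTIONS section. The man-page text and regex are invented examples.
import re

MAN_PAGE = """
OPTIONS
       -n, --lines=NUM   print the first NUM lines
       -q, --quiet       never print headers
"""

def extract_options(man_text: str):
    # Match short (-x) and long (--xxx) flags not embedded in other words.
    return sorted(set(re.findall(r"(?<!\w)(--?[\w-]+)", man_text)))

print(extract_options(MAN_PAGE))  # ['--lines', '--quiet', '-n', '-q']
```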