    Improving Developer Profiling and Ranking to Enhance Bug Report Assignment

    Bug assignment plays a critical role in the bug fixing process. However, bug assignment can be a burden for projects receiving a large number of bug reports. If a bug is assigned to a developer who lacks sufficient expertise to address it appropriately, the software project can be adversely impacted in terms of quality, developer hours, and aggregate cost. An automated strategy that provides a list of developers ranked by suitability, based on their development history and the development history of the project, can help teams identify the appropriate developer for a bug report more quickly and more accurately, potentially increasing productivity. To automate the process of assigning bug reports to the appropriate developer, several studies have employed an approach that combines natural language processing and information retrieval techniques to extract two categories of features: one targeting developers who have fixed similar bugs before, and one targeting developers who have worked on source files similar to the description of the bug. Because developers document their changes in commit messages, commit logs represent another rich resource for profiling developer expertise, as the language used in commit messages typically matches the language used in bug reports more closely. In this study, we have replicated the approach presented in [32], which applies a learning-to-rank technique to rank appropriate developers for each bug report. Additionally, we have extended the study by proposing an additional set of features that profile a developer through their commit logs and through the API project descriptions referenced in their code changes. Furthermore, we explore the appropriateness of a joint recommendation approach employing a learning-to-rank technique and an ordinal regression technique. To evaluate our model, we have considered more than 10,000 bug reports with their appropriate assignees. The experimental results demonstrate the effectiveness of our model, in comparison with state-of-the-art methods, in recommending developers for open bug reports.
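
    As an illustration of the similarity features such approaches build on, the hedged sketch below ranks developers by the cosine similarity between a bug report and a TF-IDF profile of each developer's commit messages. It assumes scikit-learn is available; the developer names, messages, and single-feature scoring are illustrative, not the study's full learning-to-rank model.

```python
# Minimal sketch: rank developers for a bug report by cosine similarity
# between the report text and each developer's commit-message profile.
# Developer names and texts are illustrative, not from the study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_developers(bug_report: str,
                    commit_logs: dict[str, list[str]]) -> list[tuple[str, float]]:
    """commit_logs maps developer -> list of their commit messages."""
    devs = list(commit_logs)
    profiles = [" ".join(msgs) for msgs in commit_logs.values()]
    vec = TfidfVectorizer(stop_words="english")
    matrix = vec.fit_transform(profiles + [bug_report])
    # Last row is the bug report; compare it against every developer profile.
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return sorted(zip(devs, scores), key=lambda p: p[1], reverse=True)

print(rank_developers(
    "NullPointerException when saving user preferences",
    {"alice": ["fix NPE in preferences dialog", "refactor settings store"],
     "bob":   ["update build scripts", "bump dependency versions"]}))
```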

    Automated Testing and Bug Reproduction of Android Apps

    The large demand for mobile devices creates significant concerns about the quality of mobile applications (apps). The corresponding increase in app complexity has made app testing and maintenance activities more challenging. During the development phase, developers need to test an app in order to guarantee its quality before releasing it to the market. During the deployment phase, developers rely heavily on bug reports to reproduce failures reported by users. Because of the rapid release cycle of apps and limited human resources, it is difficult for developers to manually construct test cases or to diagnose failures from a large number of bug reports. However, existing automated test case generation techniques are ineffective at exploring the events most likely to quickly improve code coverage and fault detection capability. In addition, no existing technique can reproduce failures directly from bug reports. This dissertation provides a framework that employs artificial intelligence (AI) techniques to improve the testing and debugging of mobile apps. Specifically, the testing approach employs a Q-network that learns a behavior model from a set of existing apps; the learned model can then be used to explore and generate tests for new apps. The framework captures fine-grained details of GUI events (e.g., the number of times an event has been visited, the text on widgets) and uses them as features fed into a deep neural network, which acts as the agent guiding app exploration. The debugging approach focuses on automatically reproducing crashes from bug reports for mobile apps. It uses a combination of natural language processing (NLP), deep learning, and dynamic GUI exploration to synthesize event sequences with the goal of reproducing the reported crash.
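
    The sketch below illustrates the reinforcement-learning idea behind the testing approach in simplified tabular form; the dissertation itself uses a deep Q-network over fine-grained event features. The hyperparameters and reward convention here are assumptions for illustration.

```python
# Simplified sketch of Q-learning-guided GUI exploration. The dissertation
# uses a deep Q-network; this tabular version only illustrates the update
# rule and epsilon-greedy event selection.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2   # assumed hyperparameters
q = defaultdict(float)                   # (state, event) -> estimated value

def choose_event(state, events):
    if random.random() < EPSILON:        # explore a random GUI event
        return random.choice(events)
    return max(events, key=lambda e: q[(state, e)])  # exploit best-known event

def update(state, event, reward, next_state, next_events):
    # reward could be, e.g., newly covered code since the last event
    best_next = max((q[(next_state, e)] for e in next_events), default=0.0)
    q[(state, event)] += ALPHA * (reward + GAMMA * best_next - q[(state, event)])
```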

    Security assessment of open source third-parties applications

    Free and Open Source Software (FOSS) components are ubiquitous in both proprietary and open source applications. In this dissertation we discuss the challenges that large software vendors face when they must integrate FOSS components into their software supply chain and maintain them. Each time a vulnerability is disclosed in a FOSS component, a software vendor must decide whether to update the component, patch the application itself, or do nothing because the vulnerability does not apply to the deployed version, which may be old enough not to be vulnerable. This is particularly challenging for enterprise software vendors that consume thousands of FOSS components and offer more than a decade of support and security fixes for applications that include these components. First, we design a framework for performing security vulnerability experiments, in particular for testing known exploits for publicly disclosed vulnerabilities against different versions and software configurations. Second, we provide an automatic screening test for quickly identifying the versions of a FOSS component likely affected by a newly disclosed vulnerability: a novel method that scans the entire repository of a FOSS component in a matter of minutes. We show that our screening test scales to large open source projects. Finally, to facilitate the global security maintenance of a large portfolio of FOSS components, we discuss various characteristics of FOSS components and their potential impact on the security maintenance effort, and empirically identify the key drivers.
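
    A hedged sketch of the screening idea follows: treat the lines deleted by the fixing commit as evidence of the vulnerability, and flag a release tag as likely affected if that code is still present. The actual method in the dissertation is more elaborate; the helper names and the git-based heuristic shown are illustrative assumptions.

```python
# Hedged sketch of a version screening test: flag a release as likely
# vulnerable if it still contains the code removed by the fixing commit.
import subprocess

def deleted_lines(repo: str, fix_commit: str) -> set[str]:
    """Lines removed by the vulnerability fix (the 'vulnerable evidence')."""
    diff = subprocess.run(["git", "-C", repo, "show", fix_commit],
                          capture_output=True, text=True, check=True).stdout
    return {l[1:].strip() for l in diff.splitlines()
            if l.startswith("-") and not l.startswith("---") and l[1:].strip()}

def likely_vulnerable(repo: str, tag: str, evidence: set[str], path: str) -> bool:
    """True if every 'vulnerable' line is still present in the tagged file."""
    src = subprocess.run(["git", "-C", repo, "show", f"{tag}:{path}"],
                         capture_output=True, text=True).stdout
    lines = {l.strip() for l in src.splitlines()}
    return bool(evidence) and evidence <= lines
```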

    Identification and analysis of chunks in software projects

    Most software systems undergo continuous change in different phases of their lifecycle, such as development or maintenance. Ideally, such changes should correspond to a system's modular design. However, some changes span more than one component, resulting in discrepancies between design and implementation. In such cases, making a change to one component requires changes to other components, increasing the time and effort needed to change a software system as it evolves. This thesis investigates: 1) an approach to observe how components change together by identifying tightly coupled changes known as chunks, 2) whether there are any trends in how chunks evolve over time, and 3) whether chunks can help identify design issues in a software system. In this work, a family of algorithms is proposed to identify independently changing chunks from change data obtained by mining the version history repositories of three large software systems: Moodle, Eclipse, and Company-X. A comprehensive analysis of the characteristics of the resulting chunks is conducted. In addition, the evolution of chunks is studied with respect to size (the number of files within a chunk) and the percentage of changes crossing a chunk. Lastly, a pragmatic interpretation of the results is presented to identify where code refactoring or system redesign is necessary. The findings of this work show that the percentage correlation of a chunk decreases as the number of inter-component or subsystem couplings increases. We also observed no association between chunk size and percentage correlation. Identifying chunks that merge supports a better understanding of the inconsistencies between how a system is designed for change and how it is actually changed, and helps identify areas of a system that require refactoring or redesign. Additionally, identifying stable chunks can provide insights into how the size and percentage correlation of the corresponding empirical components change over time.
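
    To make the chunk idea concrete, the sketch below builds a co-change graph from commit data and takes its connected components as candidate chunks. The input format, co-change threshold, and use of networkx are assumptions for illustration, not the thesis's exact family of algorithms.

```python
# Illustrative sketch of identifying tightly coupled change "chunks":
# files that repeatedly change together form components of a co-change graph.
from itertools import combinations
from collections import Counter
import networkx as nx

def find_chunks(commits: list[set[str]], min_cochanges: int = 3) -> list[set[str]]:
    """commits: each element is the set of files touched by one commit."""
    pair_counts = Counter()
    for files in commits:
        # Count every pair of files that changed in the same commit.
        pair_counts.update(combinations(sorted(files), 2))
    g = nx.Graph()
    g.add_edges_from((a, b) for (a, b), n in pair_counts.items()
                     if n >= min_cochanges)
    # Each connected component of frequently co-changing files is a chunk.
    return [set(c) for c in nx.connected_components(g)]
```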

    The Software Vulnerability Ecosystem: Software Development In The Context Of Adversarial Behavior

    Software vulnerabilities are the root cause of many computer system security failures. This dissertation addresses software vulnerabilities in the context of a software lifecycle, with a particular focus on three stages: (1) improving software quality during development; (2) pre-release bug discovery and repair; and (3) revising software as vulnerabilities are found. The question I pose regarding software quality during development is whether long-standing software engineering principles and practices, such as code reuse, help or hurt with respect to vulnerabilities. Using a novel data-driven analysis of large databases of vulnerabilities, I show the surprising result that software quality and software security are distinct. Most notably, the analysis uncovered a counterintuitive phenomenon: newly introduced software enjoys a period with no vulnerability discoveries, and this “Honeymoon Effect” (a term I coined) is well explained by the unfamiliarity of the code to malicious actors. An important consequence for code reuse, intended to raise software quality, is that the protection inherent in the delayed discovery of vulnerabilities in new code is reduced. The second question I pose concerns the predictive power of this effect. My experimental design exploited a large-scale open source software system, Mozilla Firefox, in which two development methodologies are pursued in parallel, making methodology the sole variable in outcomes. Comparing the methodologies using a novel synthesis of data from vulnerability databases, the results suggest that the rapid-release cycles used in agile software development (in which new software is introduced frequently) have a vulnerability discovery rate equivalent to that of conventional development. Finally, I pose the question of the relationship between the intrinsic security of software, stemming from design and development, and the ecosystem into which the software is embedded and in which it operates. I use the early development lifecycle to examine this question, and again use vulnerability data as the means of answering it. In a purely intrinsic model, defect discovery rates should decrease as software matures, making vulnerabilities increasingly rare. The data, which show that vulnerability rates increase after a delay, contradict this. Software security must therefore be modeled with extrinsic factors included, thus comprising an ecosystem.
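
    As a toy illustration of how a honeymoon period can be quantified, the sketch below computes the days from each version's release to its first vulnerability disclosure; all dates are made up, and the dissertation's actual analysis is substantially richer.

```python
# Hedged sketch of measuring a "honeymoon" period: days from a version's
# release to its first vulnerability disclosure. All data are invented.
from datetime import date
from statistics import median

def honeymoon_days(release: date, cve_dates: list[date]) -> int | None:
    """Days until the first disclosure at or after the release, if any."""
    after = [d for d in cve_dates if d >= release]
    return (min(after) - release).days if after else None

releases = {  # version -> (release date, disclosure dates) -- made-up data
    "1.0": (date(2010, 1, 5), [date(2010, 9, 1), date(2011, 2, 3)]),
    "2.0": (date(2011, 6, 1), [date(2011, 8, 15)]),
}
periods = [honeymoon_days(r, ds) for r, ds in releases.values()]
print("median honeymoon (days):", median(p for p in periods if p is not None))
```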

    Quantifying the security risk of discovering and exploiting software vulnerabilities

    Most attacks on computer systems and networks are enabled by vulnerabilities in software. Assessing the security risk associated with those vulnerabilities is important. Risk models such as the Common Vulnerability Scoring System (CVSS), the Open Web Application Security Project (OWASP) model, and the Common Weakness Scoring System (CWSS) have been used to qualitatively assess the security risk presented by a vulnerability. CVSS metrics are the de facto standard, and they need to be independently evaluated. In this dissertation, we propose a quantitative approach that uses actual data, mathematical and statistical modeling, data analysis, and measurement. We have introduced a novel vulnerability discovery model, the Folded model, that estimates the risk of vulnerability discovery based on the number of residual vulnerabilities in a given piece of software. In addition to estimating the risk of vulnerability discovery for a whole system, this dissertation introduces a novel metric, termed time to vulnerability discovery, to assess the risk of an individual vulnerability's discovery. We have also proposed a novel vulnerability exploitability risk measure, termed Structural Severity, based on software properties, namely attack entry points, vulnerability location, the presence of dangerous system calls, and reachability analysis. In addition to measurement, this dissertation also proposes predicting vulnerability exploitability risk using internal software metrics. We have further proposed two approaches for evaluating the CVSS Base metrics. Using the availability of exploits, we first evaluated the performance of the CVSS Exploitability factor and compared its performance to the Microsoft (MS) rating system. The results showed that the exploitability metrics of CVSS and MS have a high false positive rate. This finding motivated us to conduct further investigation. To that end, we have introduced vulnerability reward programs (VRPs) as a novel ground truth for evaluating CVSS Base scores. The results show that the notable lack of exploits for high severity vulnerabilities may be the result of prioritized fixing of vulnerabilities.
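
    The sketch below illustrates one of the evaluation steps in spirit: measuring the false positive rate of an exploitability metric against exploit availability as ground truth. The score cutoff and record fields are assumptions for illustration, not the dissertation's exact procedure.

```python
# Minimal sketch of evaluating an exploitability metric against exploit
# availability as ground truth: a "positive" is a vulnerability the metric
# rates as exploitable; it is false if no exploit ever appeared.
def false_positive_rate(vulns: list[dict]) -> float:
    predicted = [v for v in vulns if v["exploitability_score"] >= 2.8]  # assumed cutoff
    fps = [v for v in predicted if not v["exploit_exists"]]
    return len(fps) / len(predicted) if predicted else 0.0

sample = [  # made-up records, not real CVE data
    {"cve": "CVE-X-1", "exploitability_score": 3.9, "exploit_exists": False},
    {"cve": "CVE-X-2", "exploitability_score": 3.9, "exploit_exists": True},
    {"cve": "CVE-X-3", "exploitability_score": 1.8, "exploit_exists": False},
]
print(f"FPR = {false_positive_rate(sample):.2f}")
```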

    Supporting Development Decisions with Software Analytics

    Software practitioners make technical and business decisions based on the understanding they have of their software systems. This understanding is grounded in their own experiences, but it can be augmented by studying various kinds of development artifacts, including source code, bug reports, version control meta-data, test cases, usage logs, etc. Unfortunately, the information contained in these artifacts is typically not organized in a way that is immediately useful to developers’ everyday decision-making needs. To handle the large volumes of data, many practitioners and researchers have turned to analytics — that is, the use of analysis, data, and systematic reasoning for making decisions. The thesis of this dissertation is that by applying software analytics to various development tasks and activities, we can give software practitioners better insight into their processes, systems, products, and users, helping them make more informed, data-driven decisions. While quantitative analytics can help project managers understand the big picture of their systems, plan for the future, and monitor trends, qualitative analytics can enable developers to perform their daily tasks and activities more quickly by helping them better manage high volumes of information. To support this thesis, we provide three examples of employing software analytics. First, we show how analysis of real-world usage data can be used to assess users’ dynamic behaviour and the adoption trends of a software system, revealing valuable information about how software systems are used in practice. Second, we have created a lifecycle model that synthesizes knowledge from software development artifacts, such as reported issues, source code, discussions, community contributions, etc. Lifecycle models capture the dynamic nature of how various development artifacts change over time in an annotated graphical form that can be easily understood and communicated. We demonstrate how lifecycle models can be generated and present industrial case studies in which we apply these models to assess the code review process of three different projects. Third, we present a developer-centric approach to issue tracking that aims to reduce information overload and improve developers’ situational awareness. Our approach is motivated by a grounded theory study of developer interviews, which suggests that customized views of a project’s repositories, tailored to developer-specific tasks, can help developers better track their progress and understand the technical context of their working environments. We have created a model of the kinds of information elements that developers feel are essential to completing their daily tasks, and from this model we have developed a prototype tool organized around developer-specific customized dashboards. The results of these three studies show that software analytics can inform evidence-based decisions related to user adoption of a software project and code review processes, and can improve developers’ awareness of their daily tasks and activities.
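
    As a small illustration of one ingredient of a lifecycle model, the sketch below counts how issues move between states over time; the state names and input format are invented for the example, and real lifecycle models capture much more.

```python
# Illustrative sketch: counting how development artifacts (here, issues)
# move between states, one building block of a lifecycle model.
from collections import Counter

def transition_counts(histories: list[list[str]]) -> Counter:
    """histories: per-issue sequence of states, e.g. ['new','triaged','fixed']."""
    counts = Counter()
    for states in histories:
        # Each adjacent pair of states is one observed transition.
        counts.update(zip(states, states[1:]))
    return counts

print(transition_counts([
    ["new", "triaged", "in-review", "fixed"],
    ["new", "triaged", "fixed", "reopened", "fixed"],
]))
```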

    What Developers Want and Need from Program Analysis: An Empirical Study

    Program analysis has been a rich and fruitful field of research for many decades, and countless high quality program analysis tools have been produced by academia. Though there are some well-known examples of tools that have found their way into routine use by practitioners, a common challenge faced by researchers is knowing how to achieve broad and lasting adoption of their tools. In an effort to understand what makes a program analyzer most attractive to developers, we mounted a multi-method investigation at Microsoft. Through interviews and surveys of developers, as well as analysis of defect data, we provide insights and answers to four high-level research questions that can help researchers design program analyzers that meet the needs of software developers. First, we explore what barriers hinder the adoption of program analyzers, such as poorly expressed warning messages. Second, we shed light on what functionality developers want from analyzers, including the types of code issues that developers care about. Next, we answer what non-functional characteristics an analyzer should have to be widely used, how the analyzer should fit into the development process, and how its results should be reported. Finally, we investigate defects in one of Microsoft's flagship software services to understand which types of code issues are most important to minimize, potentially through program analysis.

    Improvements to Test Case Prioritisation considering Efficiency and Effectiveness on Real Faults

    Despite the best efforts of programmers and component manufacturers, software does not always work perfectly. To guard against this, developers write test suites that execute parts of the code and compare the expected result with the actual result. Over time, test suites become expensive to run for every change, which has led to optimisation techniques such as test case prioritisation. Test case prioritisation reorders the test cases within a test suite with the goal of revealing faults as soon as possible. A large body of research has indicated that prioritised test suites can reveal faults faster, but, owing to a lack of real-fault repositories available for research, prior evaluations have often been conducted on artificial faults. This thesis investigates whether the use of artificial faults represents a threat to the validity of previous studies, and proposes new strategies for test case prioritisation that increase its effectiveness on real faults. It begins with an empirical evaluation of existing test case prioritisation strategies on real and artificial faults, which establishes that artificial faults provide unreliable results for real faults. The study found four occasions on which a strategy for test case prioritisation would be considered no better than the baseline when using one fault type, but a significant improvement over the baseline when using the other. Moreover, this evaluation reveals that existing test case prioritisation strategies perform poorly on real faults, with no strategy significantly outperforming the baseline. Given the need to improve test case prioritisation strategies for real faults, this thesis then considers other techniques that have been shown to be effective on real faults. One such technique is defect prediction, which estimates how likely a class is to contain a fault. This thesis proposes a test case prioritisation strategy, called G-Clef, that leverages defect prediction estimates to reorder test suites. While the evaluation of G-Clef indicates that it outperforms existing test case prioritisation strategies, the average predicted location of a faulty class is 13% of all classes in a system, which shows potential for improvement. Finally, this thesis conducts an investigative study of whether sentiments expressed in commit messages could be used to improve the defect prediction element of G-Clef. Throughout the course of this PhD, I created Kanonizo, an open-source tool for performing test case prioritisation on Java programs. All of the experiments and strategies in this thesis were implemented in Kanonizo.
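
    In the spirit of G-Clef, the hedged sketch below orders tests so that those covering the classes predicted most likely to be faulty run first. The additive scoring rule and data layout are illustrative assumptions, not G-Clef's actual algorithm as implemented in Kanonizo.

```python
# Hedged sketch of defect-prediction-guided test case prioritisation:
# run first the tests that cover the classes most likely to be faulty.
def prioritise(tests: dict[str, set[str]],
               defect_prob: dict[str, float]) -> list[str]:
    """tests: test name -> classes it covers; defect_prob: class -> P(faulty)."""
    def score(test: str) -> float:
        # Sum predicted fault probabilities over the classes this test covers.
        return sum(defect_prob.get(cls, 0.0) for cls in tests[test])
    return sorted(tests, key=score, reverse=True)

order = prioritise(
    {"testLogin": {"Auth", "Session"}, "testReport": {"Report"}},
    {"Auth": 0.7, "Session": 0.2, "Report": 0.4})
print(order)  # ['testLogin', 'testReport']
```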

    New Approaches to Software Security Metrics and Measurements

    Meaningful metrics and methods for measuring software security would greatly improve the security of software ecosystems. Such means would make security an observable attribute, helping users make informed choices and allowing vendors to ‘charge’ for it—thus providing strong incentives for more security investment. This dissertation presents three empirical measurement studies introducing new approaches to measuring aspects of software security, focusing on Free/Libre and Open Source Software (FLOSS). First, to revisit the fundamental question of whether software is maturing over time, we study the vulnerability rate of packages in stable releases of the Debian GNU/Linux software distribution. Measuring the vulnerability rate through the lens of Debian stable: (a) provides a natural time frame to test for maturing behavior, (b) reduces noise and bias in the data (only CVEs with a Debian Security Advisory), and (c) provides a best-case assessment of maturity (as the Debian release cycle is rather conservative). Overall, our results do not support the hypothesis that software in Debian is maturing over time, suggesting that vulnerability finding-and-fixing does not scale and more effort should be invested in significantly reducing the introduction rate of vulnerabilities, e.g. via ‘security by design’ approaches like memory-safe programming languages. Second, to gain insights beyond the number of reported vulnerabilities, we study how long vulnerabilities remain in the code of popular FLOSS projects (i.e. their lifetimes). We provide the first, to the best of our knowledge, method for automatically estimating the mean lifetime of a set of vulnerabilities based on information in vulnerability-fixing commits. Using this method, we study the lifetimes of ~6 000 CVEs in 11 popular FLOSS projects. Among a number of findings, we identify two quantities of particular interest for software security metrics: (a) the spread between mean vulnerability lifetime and mean code age at the time of fix, and (b) the rate of change of that spread. Third, to gain insights into the important human aspect of the vulnerability finding process, we study the characteristics of vulnerability reporters for 4 popular FLOSS projects. We provide the first, to the best of our knowledge, method to create a large dataset of vulnerability reporters (>2 000 reporters for >4 500 CVEs) by combining information from a number of publicly available online sources. We proceed to analyze the dataset and identify a number of quantities that, suitably combined, can provide indications regarding the health of a project’s vulnerability finding ecosystem. Overall, we show that measurement studies carefully designed to target crucial aspects of the software security ecosystem can provide valuable insights and indications regarding the ‘quality of security’ of software. However, the road to good security metrics is still long. New approaches covering other important aspects of the process are needed, and the approaches introduced in this dissertation should be further developed and improved.
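
    The sketch below illustrates the lifetime-estimation idea: date the lines modified by a fixing commit via git blame on the fix's parent, and take the lifetime as the gap between introduction and fix. It is a simplification of the dissertation's method; the paths, line numbers, and averaging rule are illustrative.

```python
# Hedged sketch of estimating a vulnerability's lifetime from its fixing
# commit: take the age of the lines the fix modified, via `git blame` on
# the parent of the fix. A simplification of the dissertation's heuristics.
import subprocess
from statistics import mean

def line_ages(repo: str, fix_commit: str, path: str, lines: list[int]) -> list[int]:
    """Unix timestamps when each given line was last introduced, pre-fix."""
    ages = []
    for n in lines:
        out = subprocess.run(
            ["git", "-C", repo, "blame", "--line-porcelain",
             "-L", f"{n},{n}", f"{fix_commit}^", "--", path],
            capture_output=True, text=True, check=True).stdout
        # Porcelain output carries a committer-time field per blamed line.
        ts = next(l.split()[1] for l in out.splitlines()
                  if l.startswith("committer-time "))
        ages.append(int(ts))
    return ages

def lifetime_days(fix_time: int, introduced_times: list[int]) -> float:
    """Mean gap, in days, between line introduction and the fix."""
    return (fix_time - mean(introduced_times)) / 86400
```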