
    Rediscovery Datasets: Connecting Duplicate Reports

    The same defect can be rediscovered by multiple clients, causing unplanned outages and reduced customer satisfaction. In the case of popular open source software, a high volume of defects is reported on a regular basis. A large number of these reports are actually duplicates, or rediscoveries, of each other. Researchers have analyzed the factors related to the content of duplicate defect reports in the past. However, some other potentially important factors, such as the inter-relationships among duplicate defect reports, are not readily available in defect tracking systems such as Bugzilla. This information may speed up bug fixing, enable efficient triaging, improve customer profiles, etc. In this paper, we present three defect rediscovery datasets mined from Bugzilla. The datasets capture data for three groups of open source software projects: Apache, Eclipse, and KDE. They contain information about approximately 914 thousand defect reports over a period of 18 years (1999-2017) and capture the inter-relationships among duplicate defects. We believe that sharing these data with the community will help researchers and practitioners better understand the nature of defect rediscovery and enhance the analysis of defect reports.
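The inter-relationships the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration (report IDs and the pair representation are invented, not taken from the datasets): chains of Bugzilla duplicate links (A dup-of B, B dup-of C) are grouped into rediscovery clusters with union-find.

```python
from collections import defaultdict

def find(parent, x):
    # Path-compressing find: follow parents up to the cluster root.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def rediscovery_clusters(dup_pairs):
    """dup_pairs: iterable of (report_id, duplicate_of_id) tuples."""
    parent = {}
    for a, b in dup_pairs:
        parent.setdefault(a, a)
        parent.setdefault(b, b)
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[ra] = rb  # merge the two duplicate clusters
    groups = defaultdict(set)
    for x in parent:
        groups[find(parent, x)].add(x)
    # Only groups with more than one report represent rediscoveries.
    return [g for g in groups.values() if len(g) > 1]

clusters = rediscovery_clusters([(101, 100), (102, 100), (103, 102), (200, 201)])
```

Transitive duplicates such as 103 end up in the same cluster as the original report 100, which is exactly the kind of inter-relationship a flat list of pairwise duplicate marks does not show directly.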

    Models, Techniques, and Metrics for Managing Risk in Software Engineering

    The field of Software Engineering (SE) is the study of systematic and quantifiable approaches to software development, operation, and maintenance. This thesis presents a set of scalable and easily implemented techniques for quantifying and mitigating risks associated with the SE process. The thesis comprises six papers corresponding to SE knowledge areas such as software requirements, testing, and management. The techniques for risk management are drawn from stochastic modeling and operational research. The first two papers relate to software testing and maintenance. The first paper describes and validates a novel iterative-unfolding technique for filtering a set of execution traces relevant to a specific task. The second paper analyzes and validates the applicability of some entropy measures to the trace classification described in the previous paper. The techniques in these two papers can speed up problem determination of defects encountered by customers, leading to improved organizational response, increased customer satisfaction, and eased resource constraints. The third and fourth papers are applicable to maintenance, overall software quality, and SE management. The third paper uses Extreme Value Theory and Queuing Theory tools to derive and validate metrics based on defect rediscovery data. The metrics can aid the allocation of resources to service and maintenance teams, highlight gaps in quality assurance processes, and help assess the risk of using a given software product. The fourth paper characterizes and validates a technique for automatic selection and prioritization of a minimal set of customers for profiling. The minimal set is obtained using Binary Integer Programming and prioritized using a greedy heuristic. Profiling the resulting customer set leads to enhanced comprehension of user behaviour, improved test specifications, and clearer quality assurance policies, hence reducing the risks associated with unsatisfactory product quality. The fifth and sixth papers pertain to software requirements. The fifth paper both models the relation between requirements and their underlying assumptions and measures the risk associated with failure of the assumptions using Boolean networks and stochastic modeling. The sixth paper models the risk associated with injection of requirements late in the development cycle with the help of stochastic processes.
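The customer-selection step described in the fourth paper can be illustrated with a sketch. Note the caveats: the thesis obtains the minimal set with an exact Binary Integer Programming solve, whereas this stand-in uses the classic greedy set-cover approximation (whose pick order doubles as the prioritization), and all customer names and feature sets below are invented.

```python
def select_customers(usage):
    """usage: dict mapping customer -> set of features that customer exercises.
    Returns customers in greedy priority order until all features are covered."""
    uncovered = set().union(*usage.values())
    chosen = []
    while uncovered:
        # Pick the customer covering the most still-uncovered features.
        best = max(usage, key=lambda c: len(usage[c] & uncovered))
        if not usage[best] & uncovered:
            break  # remaining features are exercised by no customer
        chosen.append(best)
        uncovered -= usage[best]
    return chosen

order = select_customers({
    "acme":    {"login", "export", "api"},
    "globex":  {"login", "reports"},
    "initech": {"reports", "api", "billing"},
})
```

Here two customers suffice to cover all five features, so "globex" is never profiled; the greedy order also gives the prioritization, with the broadest-coverage customer first.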

    Best Practices for Test Driven Development

    In his award-winning book, Test-driven Development By Example, Kent Beck wrote, "Clean code that works...is the goal of Test-driven Development (TDD)." TDD is a style of software development that begins with the creation of tests and then makes use of short, iterative development cycles until all test requirements are fulfilled. In order to provide the reader with sufficient background to understand the concepts discussed, this thesis begins by presenting a detailed description of this style of development. TDD is then contrasted with other popular styles, with a focus on highlighting the many benefits this style offers over the others. This thesis then offers the reader a series of concrete and practical best practices that can be used in conjunction with TDD. It is the hope of the author that these lessons learned will help those considering the adoption of this style of development to avoid a number of pitfalls.
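The test-first cycle described above can be made concrete with a small, invented example (the `slugify` function and its tests are illustrative only, not drawn from the thesis or the book): the test is authored first and fails (red), then the minimal production code makes it pass (green), and refactoring repeats the cycle.

```python
import unittest

def slugify(title):
    # Production code written only after the failing tests below existed.
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # Red phase: these tests were authored before slugify() was implemented.
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Test Driven Development"),
                         "test-driven-development")

    def test_mixed_case_is_lowered(self):
        self.assertEqual(slugify("Clean Code"), "clean-code")

# Green phase: run the suite; a refactor step would then repeat the cycle.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The discipline lies in the ordering, not the tooling: no production code is written until a failing test demands it, which keeps each iteration short and its scope explicit.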

    An Empirical Study on the Role of Requirement Engineering in Agile Method and Its Impact on Quality

    Agile methods are characterized as flexible and easily adaptable. The need to keep up with multiple high-priority projects and shorter time-to-market demands could explain their increasing popularity. It also raises the concern of whether or not the use of these methods jeopardizes quality. Since Agile methods allow for changes throughout the process, they also create opportunities for software quality to be impacted at any time. This thesis examines the process of requirement engineering as performed with an Agile method in terms of its similarities and differences to requirement engineering as performed with the more traditional Waterfall method. It compares both approaches from a software quality perspective using a case study of 16 software projects. The main contribution of this work is to bring empirical evidence from real-life cases that illustrates how Agile methods significantly impact software quality, including the potential for a larger number of defects due to poor non-functional requirements elicitation.

    Corporate governance and inequality: The impact of financialization and shareholder value

    Copyright © 2017 by Emerald Group Publishing Limited. Purpose - The purpose of this chapter is to analyse how, in recent years, it has been rediscovered that extreme inequality is returning to advanced economies and has become widespread. What is at issue are the causes of this inequality. It is becoming clear that the wider population, particularly in the Anglo-American economies, has not shared in the growing wealth of the countries concerned, and that the majority of this wealth is being transferred on a continuous and systemic basis to the very rich. Corporate governance and the pursuit of shareholder value, it is argued, have become a major driver of inequality. Methodology/approach - The current statistical evidence produced by leading authorities, including the US Federal Reserve, the World Economic Forum, Credit Suisse, and Oxfam, is examined. The policy of shareholder value and the mechanisms by which distributions from business take place are investigated from a critical perspective. Findings - While the Anglo-American economies are seeing a return to the extremes of inequality last witnessed in the 19th century, the causes of this inequality are changing. In the 19th century, great fortunes were often inherited, or derived by entrepreneurs from the ownership and control of productive assets. By the late 20th century, as Atkinson, Piketty and Saez (2011) and others have highlighted, the sustained and rapid inflation in top income shares has made a significant contribution to the accelerating rate of income and wealth inequality. Research implications - The intensification of inequality in advanced industrial economies, despite the consistent work of Atkinson and others, was largely neglected until the recent research of Piketty, which has attracted international attention. It is now widely acknowledged that inequality is a serious issue; however, the contemporary causes of inequality remain largely unexplored. Practical/social implications - The significance of inequality, now that it is recognized, demands policy and practical interventions. However, the capacity, or even willingness, to intervene is lacking. Further analysis of the debilitating consequences of inequality for the efficiency and stability of economies and societies may encourage a more robust approach, yet the resolve to end extreme inequality is not present. Originality/value - While the analysis of inequality itself has not been neglected, this chapter represents a pioneering effort to relate the shareholder value orientation now dominant in corporate governance to the intensification of inequality.

    Enhancing the test and evaluation process: implementing agile development, test automation, and model-based systems engineering concepts

    2020 Fall. Includes bibliographical references. With the growing complexity of modern systems, traditional testing methods are falling short. Test documentation suites used to verify the software for these types of large, complex systems can become bloated and unclear, leading to extremely long execution times and confusing, unmanageable test procedures. Additionally, the complexity of these systems can prevent the rapid understanding of complicated system concepts and behaviors, which is a necessary part of keeping up with the demands of modern testing efforts. Opportunities for optimization and innovation exist within the Test and Evaluation (T&E) domain, evidenced by the emergence of automated testing frameworks and iterative testing methodologies. Further opportunities lie with the directed expansion and application of related concepts such as Model-Based Systems Engineering (MBSE). This dissertation documents the development and implementation of three methods of enhancing the T&E field when applied to a real-world project. First, the development methodology of the system was transitioned from Waterfall to Agile, providing a more responsive approach when creating new features. Second, the Test Automation Framework (TAF) was developed, enabling the automatic execution of test procedures. Third, a method of test documentation using the Systems Modeling Language (SysML) was created, adopting concepts from MBSE to standardize the planning and analysis of test procedures. This dissertation provides the results of applying the three concepts to the development process of an airborne Electronic Warfare Management System (EWMS), which interfaces with onboard and offboard aircraft systems to receive and process the threat environment, providing the pilot or crew with a response solution for the protection of the aircraft. This system is representative of a traditional, long-term aerospace project that has been constantly upgraded over its lifetime. Over a two-year period, this new process produced a number of qualitative and quantitative results, including improvements in the quality and organization of the test documentation suite, reductions in the minimum time needed to execute the test procedures, earlier identification of defects, and an increase in the overall quality of the system under test. The application of these concepts generated many lessons learned, which are also provided. Transitioning a project's development methodology, modernizing the test approach, and introducing a new system of test documentation may provide significant benefits to the development of a system, but these types of process changes must be weighed against the needs of the project. This dissertation provides details of the effort to improve the effectiveness of the T&E process on an example project, as a framework for possible implementation on similar systems.