
    Improving the evaluation of change requests using past cases

    Requirements changes are inevitable in any software project and are among the leading causes of project failure. Hence, we propose an intelligent approach that facilitates the risk analysis of a change request by providing information about past cases found in similar change requests, the solutions adopted in them, and a support tool. The proposed approach uses case-based reasoning to retrieve previous cases similar to the current one, and association rules to detect patterns in the dataset and estimate the probability of the risks associated with a change request. We prepared a case study to validate the proposal, analyzing the most frequent challenges in change management and considering how the approach can solve or mitigate them. Results show that the proposed approach successfully assists decision-making, predicts potential risks, and suggests coherent solutions to the user. We developed a support tool to evaluate the approach in practice with experts and obtained four different outcomes; only a small set of cases failed to provide relevant results to the user. The use of case-based reasoning and association rules has proven advantageous in change management, despite the validity threats posed by the small number of test cases and experts involved.
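    The two techniques named in the abstract can be illustrated with a small sketch. All attribute names, data and the similarity measure below are invented for illustration; the abstract does not specify the actual case representation. The idea is that case-based reasoning retrieves the past change requests most similar to the current one, while the confidence of an association rule serves as an estimate of risk probability.

```python
def similarity(a, b):
    """Fraction of matching attributes between two change requests (toy measure)."""
    keys = a.keys() & b.keys()
    return sum(a[k] == b[k] for k in keys) / len(keys)

def retrieve(case_base, query, k=2):
    """CBR retrieval: return the k past cases most similar to the query."""
    return sorted(case_base, key=lambda c: similarity(c, query), reverse=True)[:k]

def rule_confidence(cases, antecedent, risk):
    """Confidence of the rule antecedent -> risk, read as a risk probability."""
    matching = [c for c in cases if all(c.get(k) == v for k, v in antecedent.items())]
    if not matching:
        return 0.0
    return sum(c.get("risk") == risk for c in matching) / len(matching)

# Hypothetical past change requests with the risk that materialized.
past = [
    {"type": "ui", "size": "small", "phase": "late", "risk": "delay"},
    {"type": "db", "size": "large", "phase": "late", "risk": "defect"},
    {"type": "db", "size": "large", "phase": "early", "risk": "defect"},
    {"type": "ui", "size": "small", "phase": "early", "risk": "none"},
]
query = {"type": "db", "size": "large", "phase": "late"}

nearest = retrieve(past, query)
p_defect = rule_confidence(past, {"type": "db", "size": "large"}, "defect")
```

    Here the nearest retrieved case carries the risk "defect", and the rule (type=db, size=large) → defect has confidence 1.0, which such an approach would report as the estimated risk probability alongside the solutions adopted in the retrieved cases.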

    Lessons Learned from Applying Social Network Analysis on an Industrial Free/Libre/Open Source Software Ecosystem

    Many software projects are no longer developed in-house by a single organization. Instead, we are in a new age where software is developed by a networked community of individuals and organizations that base their relations to each other on mutual interest. Paradoxically, recent research suggests that software can even be jointly developed by rival firms. For instance, the mobile-device makers Apple and Samsung kept collaborating in open source projects while fighting expensive patent wars in court. Taking a case study approach, we explore how rival firms collaborate in the open source arena, employing a multi-method design that combines qualitative analysis of archival data (QA) with mining software repositories (MSR) and Social Network Analysis (SNA). By exploring collaborative processes within the OpenStack ecosystem, our research contributes to Software Engineering research by examining the role of groups, sub-communities and business models within a highly networked open source ecosystem. Surprisingly, the results indicate that competition for the same revenue model (i.e., operating conflicting business models) does not necessarily affect collaboration within the ecosystem. Moreover, when detecting the different sub-communities of the OpenStack community, we found that the expected social tendency of developers to work with developers from the same firm (i.e., homophily) did not hold within the OpenStack ecosystem. Furthermore, by addressing a novel, complex and unexplored open source case, this research also contributes to the management literature on coopetition strategy and high-tech entrepreneurship with a rich description of how heterogeneous actors within a highly networked ecosystem (involving individuals, startups, established firms and public organizations) jointly develop a complex infrastructure for big data in the open source arena. Accepted by the Journal of Internet Services and Applications (JISA).
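    The homophily finding can be made concrete with a toy sketch. The E-I index below is a standard SNA measure (external ties minus internal ties, over all ties); the developer names, firm affiliations and collaboration edges are invented, not taken from the OpenStack data.

```python
# Hypothetical developer-to-firm affiliations and collaboration edges
# (an edge means two developers co-modified the same artifacts).
firm = {"a": "X", "b": "X", "c": "Y", "d": "Y", "e": "Z"}
edges = [("a", "c"), ("a", "d"), ("b", "e"), ("c", "d"), ("b", "c")]

# Internal = same-firm ties, external = cross-firm ties.
internal = sum(firm[u] == firm[v] for u, v in edges)
external = len(edges) - internal

# E-I index ranges from -1 (pure homophily: all ties within firms)
# to +1 (pure heterophily: all ties cross firm boundaries).
ei_index = (external - internal) / len(edges)
```

    A positive index (here 0.6) means most collaboration crosses firm boundaries, i.e. the homophily expectation fails, which is the pattern the study reports for OpenStack.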

    Empirical Assessment of the Impact of Using Automatic Static Analysis on Code Quality

    Automatic static analysis (ASA) tools analyze source or compiled code looking for violations of recommended programming practices (called issues) that might cause faults or degrade some dimensions of software quality. Antonio Vetrò focused his PhD on studying how applying ASA impacts software quality, taking as its reference point the quality dimensions specified by the ISO/IEC 25010 standard. His epistemological approach is that of empirical software engineering. During his three-year PhD he conducted experiments and case studies in three main areas: Functionality/Reliability, Performance and Maintainability. He empirically showed that specific ASA issues had an impact on these quality characteristics in the contexts under study: removing them from the code thus resulted in a quality improvement. Vetrò has also investigated and proposed new research directions for this field: using ASA to improve software energy efficiency and to detect problems deriving from the interaction of multiple languages. The contribution is enriched with a final recommendation of a generalized process for researchers and practitioners with a twofold goal: improve software quality through ASA and create a body of knowledge on the impact of using ASA on specific software quality dimensions, based on empirical evidence. This thesis represents a first step towards this goal.
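    The kind of empirical link described, between occurrences of a specific ASA issue and a quality characteristic, can be sketched as a simple correlation analysis. The per-module counts below are invented, and a real study would also control for module size and draw fault data from an issue tracker.

```python
def pearson(xs, ys):
    """Hand-rolled Pearson correlation to keep the sketch dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

issues = [0, 2, 3, 5, 8]   # count of one ASA issue type per module (hypothetical)
faults = [1, 1, 2, 4, 6]   # faults observed per module (hypothetical)
r = pearson(issues, faults)
```

    A strong positive correlation is the sort of evidence that would motivate removing that issue type; the empirical studies summarized above go further by checking that quality actually improves after removal.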

    Supporting the grow-and-prune model for evolving software product lines

    Software Product Lines (SPLs) aim at supporting the development of a whole family of software products through systematic reuse of shared assets. To this end, SPL development is separated into two interrelated processes: (1) domain engineering (DE), where the scope and variability of the system are defined and reusable core assets are developed; and (2) application engineering (AE), where products are derived by selecting core assets and resolving variability. Evolution in SPLs is considered more challenging than in traditional systems, as both core assets and products need to co-evolve. The so-called grow-and-prune model has shown great flexibility in incrementally evolving an SPL: the products are first allowed to grow, and the product functionalities deemed useful are later pruned by refactoring and merging them back into the reusable SPL core-asset base. This Thesis aims at supporting the grow-and-prune model in both initiating and enacting the pruning. Initiating the pruning requires SPL engineers to conduct customization analysis, i.e. analyzing how products have changed the core assets. Customization analysis aims at identifying interesting product customizations to be ported to the core-asset base. However, existing tools do not fulfill engineers' needs for conducting this practice. To address this issue, this Thesis elaborates on SPL engineers' needs when conducting customization analysis and proposes a data-warehouse approach to support them in the analysis. Once the interesting customizations have been identified, the pruning needs to be enacted. This means that product code needs to be ported to the core-asset realm, while products are upgraded with the newer functionalities and bug fixes available in newer core-asset releases. Synchronizing both parties through sync paths is therefore required. However, state-of-the-art tools are not tailored to SPL sync paths, which hinders synchronizing core assets and products. To address this issue, this Thesis proposes to leverage existing Version Control Systems (i.e. git/GitHub) to provide sync operations as first-class constructs.
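    The customization analysis step can be illustrated with a toy model. The data model below (core assets and products as file-to-version maps) is invented; the Thesis proposes a data-warehouse approach rather than this in-memory sketch. The point is that customizations adopted by several products are prime candidates for pruning back into the core-asset base.

```python
from collections import Counter

# Hypothetical core-asset base and two derived products that have
# customized some of the assets they reuse.
core = {"logger.c": "v1", "auth.c": "v1", "ui.c": "v1"}
products = {
    "prodA": {"logger.c": "v1+retry", "auth.c": "v1", "ui.c": "v1+dark"},
    "prodB": {"logger.c": "v1+retry", "auth.c": "v1", "ui.c": "v1"},
}

# Count each distinct (asset, customized version) pair across products.
customizations = Counter(
    (asset, version)
    for prod in products.values()
    for asset, version in prod.items()
    if version != core[asset]
)

# Customizations shared by more than one product are pruning candidates.
candidates = [c for c, n in customizations.items() if n > 1]
```

    Here the retry-enabled logger, adopted by both products, surfaces as the candidate to merge back into the core-asset base, while the single-product dark-mode UI tweak stays product-specific.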

    Search-Based Test Generation Targeting Non-Functional Quality Attributes of Android Apps

    Mobile apps form a major proportion of the software marketplace, and it is crucial to ensure that they meet both functional and non-functional quality thresholds. Automated test input generation can reduce the cost of the testing process. However, existing Android test generation approaches focus on code coverage and cannot be customized to a tester's diverse goals, in particular quality attributes such as resource use. We propose a flexible multi-objective search-based test generation framework for interface testing of Android apps, STGFA-SMOG. This framework allows testers to target a variety of fitness functions corresponding to different software quality attributes, code coverage, and other test case properties. We find that STGFA-SMOG outperforms random test generation in exposing potential quality issues and triggering crashes. Our study also offers insights on how different combinations of fitness functions affect test generation for Android apps.
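    The multi-objective selection such a framework implies can be sketched with plain Pareto dominance. The objectives and numbers below are hypothetical (coverage to maximise, energy consumption to minimise); STGFA-SMOG's actual fitness functions are not specified here.

```python
def dominates(a, b):
    """a dominates b: no worse on every objective and strictly better on one.
    Objectives per candidate suite: (coverage to maximise, energy to minimise)."""
    cov_a, en_a = a
    cov_b, en_b = b
    return (cov_a >= cov_b and en_a <= en_b) and (cov_a > cov_b or en_a < en_b)

def pareto_front(points):
    """Keep the candidates not dominated by any other candidate."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical generated test suites: (statement coverage, energy in joules).
suites = [(0.90, 12.0), (0.85, 8.0), (0.70, 15.0), (0.95, 20.0)]
front = pareto_front(suites)
```

    A single-objective search would keep only the highest-coverage suite; the Pareto front instead also preserves the cheaper 0.85-coverage suite as a legitimate trade-off, which is what lets testers target resource use alongside coverage.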

    Automated Software Transplantation

    Automated program repair has excited researchers for more than a decade, yet it has still to find full-scale deployment in industry. We report our experience with SAPFIX: the first deployment of automated end-to-end fault fixing, from test case design through to deployed repairs in production code. We have used SAPFIX at Facebook to repair 6 production systems, each consisting of tens of millions of lines of code, which are collectively used by hundreds of millions of people worldwide. In its first three months of operation, SAPFIX produced 55 repair candidates for 57 crashes reported to it, of which 27 were deemed correct by developers and 14 were landed into production automatically by SAPFIX. SAPFIX has thus demonstrated the potential of the search-based repair research agenda by deploying, to hundreds of millions of users worldwide, software systems that have been automatically tested and repaired. Automated software transplantation (autotransplantation) is a form of automated software engineering in which we use search-based software engineering to automatically move a functionality of interest from a ‘donor’ program that implements it into a ‘host’ program that lacks it. Autotransplantation is a kind of automated program repair in which we repair the ‘host’ program by augmenting it with the missing functionality. Automated software transplantation would open many exciting avenues for software development: suppose we could autotransplant code from one system into another, entirely unrelated, system, potentially written in a different programming language. Being able to do so might greatly enhance software engineering practice while reducing its costs. Automated software transplantation comes in two flavors: monolingual, when the languages of the host and donor programs are the same, and multilingual, when the languages differ.
    This thesis introduces a theory of automated software transplantation and two algorithms, implemented in two tools, that achieve it: µSCALPEL for monolingual software transplantation and τSCALPEL for multilingual software transplantation. Leveraging lightweight annotation, program analysis identifies an organ (the interesting behavior to transplant); testing validates that the organ exhibits the desired behavior during its extraction and after its implantation into a host. We report encouraging results: in 14 of 17 monolingual transplantation experiments involving 6 donors and 4 hosts (popular real-world systems), we successfully autotransplanted 6 new functionalities; and in 10 out of 10 multilingual transplantation experiments involving 10 donors and 10 hosts (popular real-world systems written in 4 different programming languages), we successfully autotransplanted 10 new functionalities. That is, we passed all the test suites that validate the new functionality's behaviour and confirm that the initial program behaviour is preserved; additionally, we manually checked the behaviour exercised by the organ. Autotransplantation is also very useful: in just 26 hours of computation time we successfully autotransplanted the H.264 video encoding functionality from the x264 system into the VLC media player, a task that the developers of VLC had been doing manually for 12 years. We autotransplanted call graph generation and indentation for C programs into Kate (a popular KDE-based text editor used as an IDE by many C developers), two features missing from Kate but requested by its users. Autotransplantation is also efficient: the total runtime across 15 monolingual transplants is five and a half hours; the total runtime across 10 multilingual transplants is 33 hours.
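    The validation criterion described, that an organ must exhibit the desired behaviour after implantation while the host's original behaviour is preserved, can be sketched as two test suites that must both pass. Everything below (the capability-dict host, the toy suites) is a hypothetical stand-in for real test execution.

```python
def validate_transplant(regression_tests, acceptance_tests, host):
    """Accept a transplant only if the host's regression suite still passes
    (original behaviour preserved) and the acceptance suite for the new
    functionality passes too (organ behaves as desired)."""
    regression_ok = all(test(host) for test in regression_tests)
    acceptance_ok = all(test(host) for test in acceptance_tests)
    return regression_ok and acceptance_ok

# Toy post-transplant host modeled as a set of capabilities; the "organ"
# is meant to add H.264 encoding to a player that could only play.
host = {"play": True, "encode_h264": True}
regression = [lambda h: h.get("play", False)]         # pre-existing behaviour
acceptance = [lambda h: h.get("encode_h264", False)]  # transplanted behaviour

ok = validate_transplant(regression, acceptance, host)
```

    A host that kept its original behaviour but never gained the organ's functionality would fail the acceptance suite and be rejected, mirroring the dual check reported in the experiments above.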

    Cybersecurity: mapping the ethical terrain

    This edited collection examines the ethical trade-offs involved in cybersecurity: between security and privacy; between individual rights and the good of a society; and between the types of burdens placed on particular groups in order to protect others. From the foreword: Governments and society are increasingly reliant on cyber systems. Yet the more reliant we are upon cyber systems, the more vulnerable we are to serious harm should these systems be attacked or used in an attack. This problem of reliance and vulnerability is driving a concern with securing cyberspace. For example, a ‘cybersecurity’ team now forms part of the US Secret Service. Its job is to respond to cyber-attacks in specific environments such as elevators in a building that hosts politically vulnerable individuals, for example state representatives. Cybersecurity aims to protect cyber-infrastructure from cyber-attacks; the concerning aspect of this threat is the serious harm that damage to cyber-infrastructure can inflict on resources and people. Threats to cybersecurity might simply target information and communication systems: a distributed denial of service (DDoS) attack on a government website does not damage the site directly, but prevents its normal use by stifling the ability of users to connect to it. Alternatively, cyber-attacks might disrupt physical devices or resources, as the Stuxnet virus did when it caused the malfunction and destruction of Iranian nuclear centrifuges. Cyber-attacks might also enhance activities that are enabled through cyberspace, such as the use of online media by extremists to recruit members and promote radicalisation. Cyber-attacks are diverse; as a result, cybersecurity requires a comparable diversity of approaches.
    Cyber-attacks can have powerful impacts on people’s lives, and so, in liberal democratic societies at least, governments have a duty to ensure cybersecurity in order to protect the inhabitants within their own jurisdiction and, arguably, the people of other nations. But, as recent events following the revelations of Edward Snowden have demonstrated, there is a risk that the governmental pursuit of cybersecurity might overstep the mark and subvert fundamental privacy rights. Popular comment on these episodes advocates transparency of government processes, yet given that cybersecurity risks represent major challenges to national security, it is unlikely that simple transparency will suffice. Managing the risks of cybersecurity involves trade-offs: between security and privacy; between individual rights and the good of a society; and between the types of burdens placed on particular groups in order to protect others. These trade-offs are often ethical trade-offs, involving questions of how we act, what values we should aim to promote, and what means of anticipating and responding to the risks are reasonably, and publicly, justifiable. This Occasional Paper (prepared for the National Security College) provides a brief conceptual analysis of cybersecurity, demonstrates the relevance of ethics to cybersecurity, and outlines various ways to approach ethical decision-making when responding to cyber-attacks.