
    A Denotational Interprocedural Program Slicer

    Get PDF
    This paper extends a previously developed intraprocedural denotational program slicer to handle procedures. Using the denotational approach, slices can be defined in terms of the abstract syntax of the object language, without the need for a control flow graph or similar intermediate structure. The algorithm presented here correctly handles the interplay between function and procedure calls, side-effects, and short-circuit expression evaluation. The ability to deal with these features is required in reverse engineering of legacy systems, where code often contains side-effects
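    A minimal sketch may help make the denotational idea concrete. The following is a hypothetical illustration, not the paper's algorithm or object language: a backward slicer defined compositionally on an abstract syntax tree, with no control flow graph, where the slice of a compound statement is assembled from the slices of its parts.

```python
# Hypothetical sketch: compositional backward slicing on an AST.
# The statement forms (Assign, Seq, If) are illustrative only.
from dataclasses import dataclass

@dataclass
class Assign:            # target := expr, with the variables expr reads
    target: str
    uses: frozenset

@dataclass
class Seq:               # first ; second
    first: object
    second: object

@dataclass
class If:                # if cond then then_branch else else_branch
    cond_uses: frozenset
    then_branch: object
    else_branch: object

def slice_stmt(stmt, relevant):
    """Return (kept, relevant_before): the sliced statement (None if
    dropped) and the variables relevant before it, computed backward."""
    if isinstance(stmt, Assign):
        if stmt.target in relevant:
            return stmt, (relevant - {stmt.target}) | stmt.uses
        return None, relevant                         # irrelevant assignment
    if isinstance(stmt, Seq):                         # slice second, then first
        kept2, rel = slice_stmt(stmt.second, relevant)
        kept1, rel = slice_stmt(stmt.first, rel)
        if kept1 and kept2:
            return Seq(kept1, kept2), rel
        return (kept1 or kept2), rel
    if isinstance(stmt, If):
        kept_t, rel_t = slice_stmt(stmt.then_branch, relevant)
        kept_f, rel_f = slice_stmt(stmt.else_branch, relevant)
        if kept_t is None and kept_f is None:
            return None, relevant                     # neither branch matters
        return If(stmt.cond_uses, kept_t, kept_f), rel_t | rel_f | stmt.cond_uses
    raise TypeError(f"unknown statement: {stmt!r}")

# Slicing on z keeps the assignments to a and z; only x is needed on entry.
prog = Seq(Assign("a", frozenset({"x"})),
           Seq(Assign("b", frozenset({"y"})),
               Assign("z", frozenset({"a"}))))
kept, needed = slice_stmt(prog, frozenset({"z"}))     # needed == {"x"}
```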

    Leveraging Evolutionary Changes for Software Process Quality

    Full text link
    Real-world software applications must constantly evolve to remain relevant. This evolution occurs when developing new applications or adapting existing ones to meet new requirements, make corrections, or incorporate future functionality. Traditional methods of software quality control involve software quality models and continuous code inspection tools. These measures focus on directly assessing the quality of the software. However, there is a strong correlation and causation between the quality of the development process and the resulting software product. Therefore, improving the development process indirectly improves the software product, too. To achieve this, effective learning from past processes is necessary, often embraced through post-mortem organizational learning. While qualitative evaluation of large artifacts is common, smaller quantitative changes captured by application lifecycle management are often overlooked. In addition to software metrics, these smaller changes can reveal complex phenomena related to project culture and management. Leveraging these changes can help detect and address such complex issues. Software evolution was previously measured by the size of changes, but the lack of consensus on a reliable and versatile quantification method prevents its use as a dependable metric. Different size classifications fail to reliably describe the nature of evolution. While application lifecycle management data is rich, identifying which artifacts can model detrimental managerial practices remains uncertain. Approaches such as simulation modeling, discrete-event simulation, or Bayesian networks have only limited ability to exploit continuous-time process models of such phenomena. Even worse, the accessibility and mechanistic insight of such gray- or black-box models are typically very low. To address these challenges, we suggest leveraging objectively [...]
    Comment: Ph.D. Thesis without appended papers, 102 pages
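    The "size of changes" measurement the thesis finds insufficient is easy to reproduce, which makes the point concrete. Below is a hypothetical sketch (assuming a local Git repository and the `git` CLI) that builds a per-commit change-size series from `git log --numstat`; such raw counts are exactly the kind of one-dimensional quantification that fails to describe the nature of evolution.

```python
# Hypothetical sketch: per-commit change size (lines added + deleted)
# from a Git repository, via `git log --numstat`.
import subprocess
from collections import defaultdict

def change_sizes(repo_path):
    """Map each commit hash to its total churn in lines."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--numstat", "--format=@%H"],
        capture_output=True, text=True, check=True).stdout
    sizes, current = defaultdict(int), None
    for line in out.splitlines():
        if line.startswith("@"):                    # commit header line
            current = line[1:]
        elif line.strip():                          # "added<TAB>deleted<TAB>path"
            added, deleted, _path = line.split("\t", 2)
            if added.isdigit() and deleted.isdigit():   # skip binary files ("-")
                sizes[current] += int(added) + int(deleted)
    return sizes

# Usage: sizes = change_sizes("."); print(sorted(sizes.values())[-10:])
```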

    A document based traceability model for test management

    Get PDF
    Software testing has become more complicated with the emergence of distributed networks, real-time environments, third-party software enablers, and the need to test systems at multiple integration levels. These scenarios have created more concern over the quality of software testing. The quality of software has been deteriorating due to inefficient and ineffective testing activities. One of the main flaws is the ineffective use of test management to manage software documentation. Within documentation, it is difficult to detect and trace bugs across related documents, which makes traceability the major concern. Various studies have been conducted on test management; however, very few have focused on document traceability, in particular to support tracing error propagation through documentation. The objective of this thesis is to develop a new traceability model that integrates software engineering documents to support test management. The artefacts refer to requirements, design, source code, test descriptions and test results. The proposed model tackles software traceability in both forward and backward propagation by implementing multi-bidirectional pointers. This platform enables the test manager to navigate and capture a set of related artefacts to support the test management process. A new prototype was developed to facilitate observation of software traceability across all related artefacts over the entire documentation lifecycle. The proposed model was then applied to a case study of a finished software development project with a complete set of software documents, called the On-Board Automobile (OBA). The proposed model was evaluated qualitatively and quantitatively using feature analysis, precision and recall, and expert validation. The evaluation results showed that the proposed model and its prototype were justified and significant in supporting test management
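    The multi-bidirectional pointer idea can be sketched as a graph with paired forward and backward edges. The following is a hypothetical simplification of the model (artefact names and link structure are illustrative): every trace link is recorded in both directions, so a test manager can navigate forward from a requirement to its test results, or backward from a failed test result to the requirement it covers.

```python
# Hypothetical sketch: bidirectional traceability links between software
# engineering artefacts, with transitive forward/backward navigation.
from collections import defaultdict

class TraceabilityGraph:
    def __init__(self):
        self.forward = defaultdict(set)     # e.g. requirement -> design
        self.backward = defaultdict(set)    # inverse pointer for each link

    def link(self, src, dst):
        """Record a trace link in both directions."""
        self.forward[src].add(dst)
        self.backward[dst].add(src)

    def trace(self, artefact, direction="forward"):
        """Transitively collect every artefact reachable from `artefact`."""
        edges = self.forward if direction == "forward" else self.backward
        seen, stack = set(), [artefact]
        while stack:
            for nxt in edges[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

# Usage: a failed test result is traced backward to the requirement.
g = TraceabilityGraph()
g.link("REQ-1", "DES-1"); g.link("DES-1", "SRC-1")
g.link("SRC-1", "TD-1");  g.link("TD-1", "TR-1")
assert g.trace("TR-1", "backward") == {"TD-1", "SRC-1", "DES-1", "REQ-1"}
```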

    The relationship between search based software engineering and predictive modeling

    Full text link
    Search Based Software Engineering (SBSE) is an approach to software engineering in which search based optimization algorithms are used to identify optimal or near optimal solutions and to yield insight. SBSE techniques can cater for multiple, possibly competing objectives and/or constraints, and for applications where the potential solution space is large and complex. This paper provides a brief overview of SBSE, explaining some of the ways in which it has already been applied to the construction of predictive models. There is a mutually beneficial relationship between predictive models and SBSE. The paper sets out eleven open problem areas for Search Based Predictive Modeling and describes how predictive models also have a role to play in improving SBSE
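    As a concrete, hypothetical instance of the SBSE recipe applied to predictive modeling (not an example from the paper): a simple hill climber searches the space of feature subsets for a predictive model, where in practice the fitness function would be the model's cross-validated accuracy. A toy fitness with a competing size objective stands in for model training here.

```python
# Hypothetical sketch: search-based feature selection for a predictive
# model, using a single-flip hill climber over feature-subset bitmasks.
import random

def hill_climb(n_features, fitness, iters=1000, seed=0):
    rng = random.Random(seed)
    best = [rng.random() < 0.5 for _ in range(n_features)]
    best_fit = fitness(best)
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(n_features)] ^= True    # flip one feature in/out
        cand_fit = fitness(cand)
        if cand_fit >= best_fit:                   # accept non-worsening moves
            best, best_fit = cand, cand_fit
    return best, best_fit

# Toy fitness: reward overlap with a hidden "useful" feature set while
# penalizing subset size -- two competing objectives, as in SBSE.
USEFUL = {0, 3, 7}
def fitness(bits):
    chosen = {i for i, b in enumerate(bits) if b}
    return len(chosen & USEFUL) - 0.1 * len(chosen)

subset, score = hill_climb(10, fitness)            # tends to recover {0, 3, 7}
```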

    The evolution of the laws of software evolution. A discussion based on a systematic literature review

    Get PDF
    After more than 40 years of life, software evolution should be considered a mature field. However, despite such a long history, many research questions still remain open, and controversial studies about the validity of the laws of software evolution are common. During the first part of these 40 years the laws themselves evolved to adapt to changes in both the research and the software industry environments. This process of adaptation to new paradigms, standards, and practices stopped about 15 years ago, when the laws were revised for the last time. However, most of the controversial studies have appeared during this latter period. Based on a systematic and comprehensive literature review, in this paper we describe how and when the laws, and the software evolution field, evolved. We also address the current state of affairs regarding the validity of the laws, how they are perceived by the research community, and the developments and challenges that are likely to occur in the coming years

    Personalized First Issue Recommender for Newcomers in Open Source Projects

    Full text link
    Many open source projects provide good first issues (GFIs) to attract and retain newcomers. Although several automated GFI recommenders have been proposed, existing recommenders are limited to recommending generic GFIs without considering differences between individual newcomers. However, we observe mismatches between generic GFIs and the diverse background of newcomers, resulting in failed attempts, discouraged onboarding, and delayed issue resolution. To address this problem, we assume that personalized first issues (PFIs) for newcomers could help reduce the mismatches. To justify the assumption, we empirically analyze 37 newcomers and their first issues resolved across multiple projects. We find that the first issues resolved by the same newcomer share similarities in task type, programming language, and project domain. These findings underscore the need for a PFI recommender to improve over state-of-the-art approaches. For that purpose, we identify features that influence newcomers' personalized selection of first issues by analyzing the relationship between possible features of the newcomers and the characteristics of the newcomers' chosen first issues. We find that the expertise preference, OSS experience, activeness, and sentiment of newcomers drive their personalized choice of the first issues. Based on these findings, we propose a Personalized First Issue Recommender (PFIRec), which employs LambdaMART to rank candidate issues for a given newcomer by leveraging the identified influential features. We evaluate PFIRec using a dataset of 68,858 issues from 100 GitHub projects. The evaluation results show that PFIRec outperforms existing first issue recommenders, potentially doubling the probability that the top recommended issue is suitable for a specific newcomer and reducing one-third of a newcomer's unsuccessful attempts to identify suitable first issues, in the median.
    Comment: The 38th IEEE/ACM International Conference on Automated Software Engineering (ASE 2023)
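    The ranking step can be sketched as scoring candidate issues against a newcomer's profile. Below is a hypothetical, heavily simplified stand-in for the paper's approach: the feature names mirror the kinds of signals the study identifies (language match, domain match, task-type preference, activeness), but the hand-set linear weights merely take the place of the trained LambdaMART model.

```python
# Hypothetical sketch: rank candidate issues for a newcomer by a linear
# score over match features; a learned LambdaMART ranker would replace
# the hand-set WEIGHTS in the real system.
def features(newcomer, issue):
    return [
        len(set(newcomer["languages"]) & set(issue["languages"])),    # language overlap
        1.0 if issue["domain"] in newcomer["domains"] else 0.0,       # domain match
        1.0 if issue["task_type"] in newcomer["task_types"] else 0.0, # task-type match
        newcomer["activeness"],                                       # recent activity level
    ]

WEIGHTS = [0.4, 0.3, 0.2, 0.1]                     # placeholder for learned weights

def rank_issues(newcomer, issues):
    score = lambda issue: sum(w * f for w, f in zip(WEIGHTS, features(newcomer, issue)))
    return sorted(issues, key=score, reverse=True)

newcomer = {"languages": ["python"], "domains": ["ml"],
            "task_types": ["bug"], "activeness": 0.8}
issues = [{"id": 1, "languages": ["c++"], "domain": "db", "task_type": "feature"},
          {"id": 2, "languages": ["python"], "domain": "ml", "task_type": "bug"}]
top = rank_issues(newcomer, issues)[0]             # issue 2 ranks first
```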

    Characterizing and Diagnosing Architectural Degeneration of Software Systems from Defect Perspective

    Get PDF
    The architecture of a software system is known to degrade as the system evolves over time due to change upon change, a phenomenon that is termed architectural degeneration. Previous research has focused largely on structural deviations of an architecture from its baseline. However, another angle from which to observe architectural degeneration is software defects, especially those that are architecturally related. Such an angle has not been scientifically explored until now. Here, we ask two relevant questions: (1) What do defects indicate about architectural degeneration? and (2) How can architectural degeneration be diagnosed from the defect perspective? To answer question (1), we conducted an exploratory case study analyzing defect data over six releases of a large legacy system (of size approximately 20 million source lines of code and age over 20 years). The relevant defects here are those that span multiple components in the system, called multiple-component defects (MCDs). This case study found that MCDs require more changes to fix and are more persistent across development phases and releases than other types of defects. To answer question (2), we developed an approach from the defect perspective, called Diagnosing Architectural Degeneration (DAD), and validated it in another, confirmatory, case study involving three releases of a commercial system (of size over 1.5 million source lines of code and age over 13 years). This case study found that components of the system tend to have a persistent impact on architectural degeneration over releases. In particular, the impact of a few components is substantially greater than that of the other components. These results are new, and they add to the current knowledge on architectural degeneration. The key conclusions from these results are: (i) analysis of MCDs is a viable approach to characterizing architectural degeneration; and (ii) a method such as DAD can be developed for diagnosing architectural degeneration
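    The characterization step lends itself to a short sketch. The following is a hypothetical simplification (data shapes are illustrative, and DAD itself involves more than this): defects are flagged as MCDs when their fixed files map to more than one component, and components are then ranked by how often they participate in MCDs.

```python
# Hypothetical sketch: detect multiple-component defects (MCDs) and rank
# components by MCD participation, given defect-to-files and
# file-to-component mappings.
from collections import Counter

def diagnose(defects, file_component):
    """Return (mcd_ids, per-component MCD participation counts)."""
    mcds, counts = [], Counter()
    for defect_id, files in defects.items():
        comps = {file_component[f] for f in files if f in file_component}
        if len(comps) > 1:                 # fix spans multiple components
            mcds.append(defect_id)
            counts.update(comps)
    return mcds, counts

file_component = {"a/x.c": "A", "a/y.c": "A", "b/z.c": "B"}
defects = {"D1": ["a/x.c", "b/z.c"],       # spans components A and B -> MCD
           "D2": ["a/y.c"]}                # single-component defect
mcds, counts = diagnose(defects, file_component)
# mcds == ["D1"]; counts ranks A and B as MCD participants
```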

    A framework for COTS software evaluation and selection for COTS mismatch handling and non-functional requirements

    Get PDF
    The decision to purchase Commercial Off-The-Shelf (COTS) software needs systematic guidelines so that the appropriate COTS software can be selected to provide a viable and effective solution for organizations. However, existing COTS software evaluation and selection frameworks focus more on functional aspects and do not give adequate attention to accommodating the mismatches between user requirements and COTS software specifications, or to integrating the non-functional requirements of COTS software. Studies have identified that these two criteria are important in COTS software evaluation and selection. Therefore, this study aims to develop a new framework for COTS software evaluation and selection that focuses on handling COTS software mismatches and integrating the non-functional requirements. The study is conducted using a mixed-mode methodology involving surveys and interviews, in four main phases: a survey and interviews of 63 organizations to identify COTS software evaluation criteria; development of a COTS software evaluation and selection framework using Evaluation Theory; development of a new decision making technique that integrates the Analytical Hierarchy Process and Gap Analysis to handle COTS software mismatches; and validation of the practicality and reliability of the proposed COTS software Evaluation and Selection Framework (COTS-ESF) using expert review, case studies and yardstick validation. This study has developed the COTS-ESF, which consists of five categories of evaluation criteria: Quality, Domain, Architecture, Operational Environment and Vendor Reputation. It also provides a decision making technique and a complete process for performing the evaluation and selection of COTS software. The results of this study show that the evaluated aspects of the framework are feasible and demonstrate its potential and practicality for application in real environments. The contribution of this study straddles both the research and practical perspectives of software evaluation, improving decision making and providing systematic guidelines for handling issues in purchasing viable COTS software
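    The combined decision making technique can be sketched in a few lines. The following is a hypothetical simplification (criteria, scales, and the eigenvector approximation are illustrative): AHP-style weights are derived from a pairwise comparison matrix via the geometric-mean approximation, and each COTS candidate is then scored with a gap penalty for every criterion where the provided capability falls short of the requirement.

```python
# Hypothetical sketch: AHP weights (geometric-mean approximation of the
# principal eigenvector) combined with a gap-penalized score per candidate.
import math

def ahp_weights(pairwise):
    """pairwise[i][j]: preference of criterion i over j (Saaty 1-9 scale)."""
    gm = [math.prod(row) ** (1 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

def gap_score(weights, required, provided, scale=5.0):
    """Score a candidate; each criterion loses credit for its shortfall."""
    score = 0.0
    for w, req, got in zip(weights, required, provided):
        gap = max(0.0, req - got) / scale      # the mismatch to be handled
        score += w * (1.0 - gap)
    return score

# Three illustrative criteria, e.g. Quality, Architecture, Vendor Reputation.
pairwise = [[1, 3, 5],
            [1/3, 1, 2],
            [1/5, 1/2, 1]]
w = ahp_weights(pairwise)
a = gap_score(w, required=[4, 5, 3], provided=[4, 3, 4])   # falls short on Architecture
b = gap_score(w, required=[4, 5, 3], provided=[5, 5, 2])   # falls short on Vendor Reputation
best = "A" if a > b else "B"
```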

    Rapid Parallelization by Collaboration

    Get PDF
    The widespread adoption of Chip Multiprocessors has renewed the emphasis on the use of parallelism to improve performance. The present and growing diversity in hardware architectures and software environments, however, continues to pose difficulties in the effective use of parallelism, thus delaying a quick and smooth transition to the concurrency era. In this document, we describe the research being conducted at the Computer Science Department at Columbia University on a system called COMPASS that aims to simplify this transition by providing advice to programmers considering parallelizing their code. The advice proffered to the programmer is based on the wisdom collected from programmers who have already parallelized some code. The utility of COMPASS rests not only on its ability to collect this wisdom unobtrusively, but also on its ability to automatically seek, find and synthesize it into advice that is tailored to the code the user is considering parallelizing and to the environment in which the optimized program will execute. COMPASS provides a platform and an extensible framework for sharing human expertise about code parallelization -- widely and on diverse hardware and software. By leveraging the "Wisdom of Crowds" model, which has been conjectured to scale exponentially and which has successfully worked for Wikis, COMPASS aims to enable rapid parallelization of code and thus continue to extend the benefits of Moore's law scaling to science and society
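    One plausible reading of the advice-synthesis step is a similarity lookup over collected wisdom. The sketch below is purely hypothetical (the schema and feature tags are invented, not COMPASS's): past parallelization experiences are tagged with code and environment features, and the entries that best match a new program are retrieved by Jaccard overlap.

```python
# Hypothetical sketch: retrieve parallelization advice whose recorded
# code/environment features best match the program at hand.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

WISDOM = [  # illustrative entries contributed by past parallelizers
    {"features": {"dense-loop", "no-deps", "x86"},
     "advice": "independent iterations: a parallel-for construct applies"},
    {"features": {"reduction", "float", "x86"},
     "advice": "use a reduction; beware floating-point reassociation"},
    {"features": {"pointer-chasing", "linked-list"},
     "advice": "latency-bound traversal: restructure data before parallelizing"},
]

def advise(code_features, top_k=2):
    ranked = sorted(WISDOM, reverse=True,
                    key=lambda w: jaccard(w["features"], code_features))
    return [w["advice"] for w in ranked[:top_k]]

suggestions = advise({"dense-loop", "no-deps", "arm"})   # best match first
```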