
    The dance of classes: a stochastic model for software structure evolution

    In this study, we investigate software structure evolution and growth. We represent software structure by means of a generic macro-topology called Little House, which models the dependencies among classes of object-oriented software systems. We then define a stochastic model to predict the way software architectures evolve. The model estimates how the classes of object-oriented programs become connected to one another as the systems evolve. To define the model, we analyzed data from 81 versions of six Java-based projects, examining each pair of sequential versions of each project to identify a pattern of software structure evolution based on Little House. To evaluate the model, we performed two experiments: one with the data used to derive the model, and another with data from a total of 35 releases of four open-source Java projects. In both experiments, the proposed model showed a very low error rate. The evaluation suggests the model is able to predict how a software structure will evolve.
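    The abstract does not give the model's parameters or the exact definition of the Little House topology, so the sketch below only illustrates the kind of estimation step involved: given the class dependency graphs of two consecutive versions, count how newly added classes attach to pre-existing classes and turn the counts into empirical probabilities. The graph representation and the toy data are hypothetical, not taken from the study.

        from collections import Counter

        def attachment_probabilities(old_deps, new_deps):
            """Estimate how newly added classes connect to an existing system.

            old_deps, new_deps: dict mapping a class name to the set of class
            names it depends on, for two consecutive versions. Returns the
            empirical probability that a new class depends on 0, 1, 2, ...
            pre-existing classes (a stand-in for the richer categories of the
            Little House topology used in the paper).
            """
            old_classes = set(old_deps)
            added = set(new_deps) - old_classes
            counts = Counter(len(new_deps[c] & old_classes) for c in added)
            total = sum(counts.values()) or 1
            return {k: v / total for k, v in sorted(counts.items())}

        # Toy example with two tiny consecutive versions of a system.
        v1 = {"A": {"B"}, "B": set()}
        v2 = {"A": {"B"}, "B": set(), "C": {"A", "B"}, "D": {"C"}}
        print(attachment_probabilities(v1, v2))  # {0: 0.5, 2: 0.5}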

    Web Framework Points: an Effort Estimation Methodology for Web Application Development

    Software effort estimation is one of the most critical components of a successful software project: completing the project on time and within budget is the classic challenge for all project managers. However, predictions made by project managers about their projects are often inexact: software projects need, on average, 30-40% more effort than estimated. Research on software development effort and cost estimation has been abundant and diversified since the late 1970s, and the topic is still very much alive, as shown by the numerous works in the literature. During these three years of research activity, I had the opportunity to study in depth and experiment with some of the main software effort estimation methodologies in the literature. In particular, I focused my research on Web effort estimation. As stated by many authors, the existing models for classic software applications are not well suited to measuring the effort of Web applications, which, like traditional software projects, are unfortunately not exempt from cost and time overruns. Initially, I compared the effectiveness of Albrecht's classic Function Points (FP) and Reifer's Web Objects (WO) metrics in estimating development effort for Web applications, in the context of an Italian software company. I tested these metrics on a dataset of 24 projects provided by the company between 2003 and 2010, comparing the estimates with the actual effort of each completed project using the MRE (Magnitude of Relative Error) measure. The experimental results showed a high estimation error when using the WO metric, which proved more effective than the FP metric in only two cases. In the context of this first work, it became evident that effort estimation depends not only on functional size measures: other factors must be considered, such as model accuracy and challenges specific to Web applications, although functional size remains the input that most influences the final results. For this reason, I revised the WO methodology into the RWO methodology. I applied RWO to the same dataset of projects, comparing the results to those obtained with the FP and WO methods. The experimental results showed that the RWO method achieved effort predictions comparable to, and in 4 cases better than, those of the FP method. Motivated by the dominant use of Content Management Frameworks (CMFs) in Web application development and by the inadequacy of the RWO method when used with the latest Web application development tools, I finally focused my research on a new effort estimation methodology for Web applications developed with a CMF: the Web CMF Objects methodology. In this methodology, new key elements for analysis and planning are identified, which describe every important step in the development of a Web application using a CMF. Following the RWO approach, the estimated effort of a Web project is the sum of all elements, each weighted by its own complexity. I tested the whole methodology on 9 projects provided by three different Italian software companies, comparing the effort estimate to the actual, final effort of each project, in man-days. I then compared the estimates obtained with the Web CMF Objects methodology to those obtained with the three companies' own effort estimation methodologies, getting excellent results: a Pred(0.25) value of 100% for the Web CMF Objects methodology. Recently, I completed the presentation and assessment of the Web CMF Objects methodology, upgrading its cost model for effort estimation and renaming it the Web Framework Points methodology. I tested the updated methodology on 19 projects provided by three software companies, getting good results: a Pred(0.25) value of 79%. The aim of my research is to help reduce the estimation error in software projects developed with Content Management Frameworks, with the purpose of making the Web Framework Points methodology a useful tool for software companies.
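    Since the evaluation relies on MRE and Pred(0.25), it may help to recall how these standard measures are computed: MRE = |actual - estimated| / actual for each project, and Pred(0.25) is the fraction of projects whose MRE does not exceed 0.25. A minimal sketch follows; the project figures are made up for illustration and are not taken from the thesis datasets.

        def mre(actual, estimated):
            """Magnitude of Relative Error for a single project."""
            return abs(actual - estimated) / actual

        def pred(level, actuals, estimates):
            """Fraction of projects whose MRE is within the given level."""
            errors = [mre(a, e) for a, e in zip(actuals, estimates)]
            return sum(err <= level for err in errors) / len(errors)

        # Hypothetical efforts in man-days (illustrative only).
        actual_effort = [120, 80, 200, 55]
        estimated_effort = [110, 95, 140, 60]
        print(pred(0.25, actual_effort, estimated_effort))  # 0.75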

    Exploiting natural language structures in software informal documentation

    Communication means, such as issue trackers, mailing lists, Q&A forums, and app reviews, are premier means of collaboration among developers, and between developers and end-users. Analyzing such sources of information is crucial to build recommenders for developers, for example to suggest experts, re-document source code, or transform user feedback into maintenance and evolution strategies. To ease this analysis, in previous work we proposed DECA (Development Emails Content Analyzer), a tool based on natural language parsing that classifies fragments of development emails according to their purpose with high precision. However, DECA has to be trained through a manual tagging of relevant patterns, which is often effort-intensive, error-prone, and requires specific expertise in natural language parsing. In this paper, we first show, with a study involving Master's and Ph.D. students, the extent to which producing rules for identifying such patterns requires effort, depending on the nature and complexity of the patterns. Then, we propose an approach, named NEON (Nlp-based softwarE dOcumentation aNalyzer), that automatically mines such rules, minimizing the manual effort. We assess the performance of NEON in the analysis and classification of mobile app reviews, developer discussions, and issues. NEON simplifies the pattern identification and rule definition processes, saving more than 70% of the time otherwise spent performing these activities manually. Results also show that NEON-generated rules are close to the manually identified ones, achieving comparable recall.
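    NEON's actual rules operate on natural language parse trees and are mined automatically, which is not reproduced here. As a rough illustration of the underlying idea, classifying sentence fragments by intent with hand-written patterns, the sketch below uses plain regular expressions instead of parse-tree patterns; the categories and expressions are hypothetical.

        import re

        # Hypothetical intent categories with hand-written surface patterns;
        # NEON mines analogous rules over dependency parse trees automatically.
        RULES = {
            "feature_request": re.compile(r"\b(would be (nice|great)|please add|could you add)\b", re.I),
            "problem_report": re.compile(r"\b(crash(es|ed)?|does ?n[o']t work|throws? an? exception)\b", re.I),
            "question": re.compile(r"\b(how (do|can) i|what is the)\b", re.I),
        }

        def classify_fragment(text):
            """Return the intents whose pattern matches the given fragment."""
            return [label for label, pattern in RULES.items() if pattern.search(text)]

        print(classify_fragment("It would be great if you could add dark mode."))
        # ['feature_request']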

    On negative results when using sentiment analysis tools for software engineering research

    Recent years have seen increasing attention to social aspects of software engineering, including studies of the emotions and sentiments experienced and expressed by software developers. Most of these studies reuse existing sentiment analysis tools such as SentiStrength and NLTK. However, these tools have been trained on product reviews and movie reviews and, therefore, their results might not be applicable in the software engineering domain. In this paper, we study whether sentiment analysis tools agree with the sentiment recognized by human evaluators (as reported in an earlier study) as well as with each other. Furthermore, we evaluate the impact of the choice of sentiment analysis tool on software engineering studies by conducting a simple study of differences in issue resolution times for positive, negative, and neutral texts. We repeat the study for seven datasets (issue trackers and Stack Overflow questions) and different sentiment analysis tools and observe that the disagreement between the tools can lead to diverging conclusions. Finally, we perform two replications of previously published studies and observe that their results cannot be confirmed when a different sentiment analysis tool is used.
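    The paper's tools and datasets are not reproduced here, but its core measurement, how often two sentiment analysis tools assign the same label to the same text, can be sketched as follows. The labels are invented for illustration and the statistic is plain percent agreement rather than the exact metrics used in the paper.

        from collections import Counter

        def percent_agreement(labels_a, labels_b):
            """Share of texts for which two tools assign the same sentiment label."""
            assert len(labels_a) == len(labels_b)
            matches = sum(a == b for a, b in zip(labels_a, labels_b))
            return matches / len(labels_a)

        # Hypothetical per-text labels from two tools on the same six texts.
        tool_a = ["positive", "negative", "neutral", "negative", "neutral", "positive"]
        tool_b = ["positive", "neutral", "neutral", "negative", "positive", "positive"]

        print(percent_agreement(tool_a, tool_b))  # 0.666...
        print(Counter(zip(tool_a, tool_b)))       # counts of (tool_a, tool_b) label pairs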

    Assessing and improving quality of QVTo model transformations

    We investigate quality improvement in QVT Operational Mappings (QVTo) model transformations, one of the languages defined in the OMG standard on model-to-model transformations. Two research questions are addressed. First, how can we assess the quality of QVTo model transformations? Second, how can we develop higher-quality QVTo transformations? To address the first question, we take a bottom-up approach, starting with a broad exploratory study including QVTo expert interviews, a review of existing material, and introspection. We then formalize QVTo transformation quality into a QVTo quality model, which is validated through a survey of a broader group of QVTo developers. We find that although many quality properties recognized as important for QVTo have counterparts in general-purpose languages, a number of them are specific to QVTo or to model transformation languages. To address the second research question, we leverage the quality model to identify developer support tooling for QVTo. We then implement and evaluate one of the tools, namely a code test coverage tool; in designing the tool, code coverage criteria for QVTo model transformations are also identified. The primary contributions of this paper are a QVTo quality model relevant to QVTo practitioners and an open-source code coverage tool already usable by QVTo transformation developers. Secondary contributions are a bottom-up approach to building a quality model, a validation approach that leverages developer perceptions to evaluate quality properties, code test coverage criteria for QVTo, and numerous directions for future research and tooling related to QVTo quality.
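    The paper's coverage criteria are specific to QVTo and are not detailed in the abstract. As a rough illustration of the general idea behind any coverage tool, the sketch below computes a coverage ratio from the set of transformation elements (for example, mappings) declared in a transformation and the subset actually executed by a test run. The element names and data are hypothetical.

        def coverage(all_elements, executed_elements):
            """Fraction of transformation elements exercised by the tests, plus the uncovered ones."""
            all_elements = set(all_elements)
            covered = all_elements & set(executed_elements)
            return len(covered) / len(all_elements), all_elements - covered

        # Hypothetical QVTo mappings declared in a transformation vs. those
        # actually executed while running a test suite.
        declared = {"Block2Task", "Port2Channel", "Connector2Link", "Param2Attr"}
        executed = {"Block2Task", "Port2Channel"}

        ratio, uncovered = coverage(declared, executed)
        print(ratio)      # 0.5
        print(uncovered)  # {'Connector2Link', 'Param2Attr'}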

    A software development methodology for solo software developers: leveraging the product quality of independent developers

    Software security for agile methods, particularly for those designed for individual developers, is still a major concern. With most software products deployed over the Internet, security, as a key component of software quality, has become a major problem. In addressing this problem, this research proposes a solo software development methodology (SSDM) that uses as few resources as possible while conforming to best practices for delivering secure, high-quality software products. Agile methods have excelled at delivering timely, quality software. At the same time, research also shows that most agile methods do not address the problem of security in the developed software. A metasynthesis of SSDMs conducted in this thesis confirmed the lack of practices that promote security in the developed software product. On the other hand, some researchers have demonstrated the feasibility of incorporating existing lightweight security practices into agile methods. This research uses Design Science Research (DSR) to build, demonstrate and evaluate a lightweight SSDM. Using an algorithm adapted for the purpose, the research systematically integrates lightweight security and quality practices to produce an agile secure-solo software development methodology (Secure-SSDM). A multiple-case study in academic and industry settings is conducted to demonstrate and evaluate the utility of the methodology. This demonstration and evaluation indicate the applicability of the methodology to building high-quality and secure software products. A theoretical evaluation of the agility of the Secure-SSDM using the four-dimensional analytical tool (4-DAT) shows satisfactory compliance of the methodology with agile principles. The main contributions of this thesis are: the Secure-SSDM itself, which entails a description of the concepts, modelling languages, stages, tasks, tools and techniques; the generation of a quality theory on practices that promote quality in a solo software development environment; and the adaptation of Keramati and Mirian-Hosseinabadi's algorithm for the purpose of integrating quality and security practices. This research is of value to researchers, as it introduces the security component of software quality into a solo software development environment, prompting further research in the area. For software developers, the research provides a lightweight methodology that builds quality and security into the product using minimum resources.
    School of Computing, D. Phil. (Computer Science)

    Process software simulation model of Lean-Kanban Approach

    Software process simulation is important for reducing errors, supporting risk analysis, and improving software quality. In recent years, the Lean-Kanban approach has been widely applied in software practice, including software development and maintenance. The Lean-Kanban approach minimizes the Work-In-Progress (WIP), that is, the number of items the team works on at any given time. It has been demonstrated that such an approach can help to improve software maintenance and development processes in industrial environments. The goal of the simulation model itself is to increase understanding and to support planning decisions for this kind of project. Considering the threats to validity of the study, the accuracy and reliability of the simulation model could be shown, and the model implementation allows deriving hypotheses on the impact of distributions on parameters such as throughput. In this thesis, we describe our simulation studies, which show that the Lean-Kanban approach can indeed help to reduce the average time needed to complete maintenance or development issues. The simulation model can reproduce existing maintenance and development processes that do not use a WIP limit, as well as processes that adopt one. We performed several case studies using real data collected from different projects. The results confirm that the WIP-limited process advocated by the Lean-Kanban approach can increase the efficiency of software maintenance and development, as reported in previous industrial practice.
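    The thesis' simulation model is not reproduced in the abstract; the deliberately simplified, deterministic sketch below only illustrates the mechanism it studies: a WIP limit capping how many items are in progress at once. Item sizes, team capacity and the WIP limit are hypothetical.

        def simulate(work_items, daily_capacity, wip_limit=None):
            """Return the completion day of each item, given effort sizes in work units.

            Each day the team's capacity is split evenly among the items currently
            in progress; wip_limit caps how many items may be in progress at once
            (None means no limit, i.e. everything is started immediately).
            """
            remaining = list(work_items)          # effort left per item
            done_day = [None] * len(work_items)
            day = 0
            while any(r > 0 for r in remaining):
                day += 1
                active = [i for i, r in enumerate(remaining) if r > 0]
                if wip_limit is not None:
                    active = active[:wip_limit]   # pull policy: oldest items first
                share = daily_capacity / len(active)
                for i in active:
                    remaining[i] = max(0.0, remaining[i] - share)
                    if remaining[i] == 0 and done_day[i] is None:
                        done_day[i] = day
            return done_day

        items = [8, 8, 8, 8]                      # effort units per issue
        print(simulate(items, daily_capacity=4))               # no WIP limit: [8, 8, 8, 8]
        print(simulate(items, daily_capacity=4, wip_limit=2))  # WIP limit 2: [4, 4, 8, 8]

    With this toy setup the WIP-limited run completes individual issues earlier on average (days 4, 4, 8, 8 versus 8, 8, 8, 8), which is the qualitative effect the thesis investigates with far more realistic stochastic inputs.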
