
    Generalization of an integrated cost model and extensions to COTS, PLE and TTM

    Several cost models exist for estimating the costs of software reuse, for example COCOMO II and COCOTS. Chmiel's model [Chmiel 2000] is a generalization of these cost models. It differs from the others in that decisions are organized into four levels and reuse projects are treated from a long-term perspective. Each level corresponds to one engineering cycle and is a decision-making process based on the calculation of NPV, ROI, and other economic indices. Reuse investment decisions are made at different levels, from the corporate level down to the programming level.

    Chmiel's model does not cover Commercial-Off-The-Shelf (COTS) components, Product Line Engineering (PLE), or the benefits of shortened Time-To-Market (TTM), and it can only deal with internal reuse, since it assumes all reusable components are built from scratch in house. For example, the extra effort caused by the use of COTS (assessment, tailoring, glue code, and COTS volatility) is not covered, nor are the benefits of TTM, such as improved business performance.

    By extending Chmiel's model, the new model is applied to Component-Based Software Engineering (CBSE), COTS reuse systems, and PLE. In addition, this study attempts to quantify the benefits of shortened TTM and develops a TTM submodel to cover this issue.

    This study also addresses the analysis and optimization of corporate Return On Investment (ROI). The rationale is to optimize (maximize) corporate ROI under the condition that all other ROIs are positive. By designing an algorithm and applying it to the data, this study shows how to maximize corporate ROI.

    The model is supported by a tool based on its rationale, intended for corporate management and development engineers. The tool has a user-friendly interface that allows users to input values for the related parameters, and it produces a detailed report covering the costs and benefits, final Net Present Value (NPV), and ROI of each cycle.
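    The decision logic described above, per-cycle NPV and ROI calculations with corporate ROI maximized subject to all lower-level ROIs staying positive, can be illustrated with a small sketch. The Python below is a hypothetical illustration only: the discount rate, payoff rates, budget options, and the brute-force search are placeholders, not values or algorithms from Chmiel's model or this dissertation.

```python
from itertools import product

def npv(cash_flows, rate):
    """Net Present Value of yearly cash flows arriving in years 1..n."""
    return sum(cf / (1.0 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

def roi(benefit, cost):
    """Return On Investment: net gain relative to cost."""
    return (benefit - cost) / cost

def level_outcome(budget, payoff_rate, rate=0.10, years=4):
    """Hypothetical level economics: investing `budget` yields a yearly
    benefit of budget * payoff_rate for `years` years, discounted to NPV."""
    return budget, npv([budget * payoff_rate] * years, rate)

def maximize_corporate_roi(payoff_rates, candidate_budgets):
    """Brute-force search over budget splits: discard any split where some
    level's ROI is not positive, return the split with the best corporate ROI."""
    best_split, best_roi = None, float("-inf")
    for split in product(candidate_budgets, repeat=len(payoff_rates)):
        outcomes = [level_outcome(b, p) for b, p in zip(split, payoff_rates)]
        if any(roi(ben, cost) <= 0 for cost, ben in outcomes):
            continue  # the model requires every level's ROI to be positive
        corporate = roi(sum(b for _, b in outcomes), sum(c for c, _ in outcomes))
        if corporate > best_roi:
            best_split, best_roi = split, corporate
    return best_split, best_roi

# Hypothetical payoff rates for three decision levels and budget options (k$).
split, corporate_roi = maximize_corporate_roi([0.45, 0.40, 0.35], [50, 100, 200])
print(split, round(corporate_roi, 2))
```

    With these placeholder numbers the search allocates the largest budget to the level with the highest payoff rate, since corporate ROI is the cost-weighted average of the level ROIs; the constraint check is what distinguishes this from unconstrained NPV maximization.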

    Improving Software Development Process and Product Management with Software Project Telemetry

    Software development is slow, expensive and error prone, often resulting in products with a large number of defects which cause serious problems in usability, reliability, and performance. To combat this problem, software measurement provides a systematic and empirically-guided approach to control and improve software development processes and final products. However, due to the high cost associated with "metrics collection" and difficulties in "metrics decision-making," measurement is not widely adopted by software organizations. This dissertation proposes a novel metrics-based program called "software project telemetry" to address these problems. It uses software sensors to collect metrics automatically and unobtrusively. It employs a domain-specific language to represent telemetry trends in software product and process metrics. Project management and process improvement decisions are made by detecting changes in telemetry trends and comparing trends between different periods of the same project. Software project telemetry avoids many problems inherent in traditional metrics models, such as the need to accumulate a historical project database and ensure that the historical data remain comparable to current and future projects. The claim of this dissertation is that software project telemetry provides an effective approach to (1) automated metrics collection and analysis, and (2) in-process, empirically-guided software development process problem detection and diagnosis. Two empirical studies were carried out to evaluate the claim: one in software engineering classes, and the other in the Collaborative Software Development Lab. The results suggested that software project telemetry had acceptably low metrics collection and analysis overhead, and that it provided decision-making value at least in the exploratory context of the two studies.
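    The core mechanism here, reducing sensor-collected metrics to trends and comparing trends between different periods of the same project, can be sketched in a few lines. The Python below is a minimal illustration assuming evenly spaced samples and a least-squares slope as the trend statistic; the metric name, sample data, and tolerance are hypothetical, and the actual system expresses such analyses in its domain-specific telemetry language rather than ad-hoc code.

```python
def slope(samples):
    """Least-squares slope of evenly spaced samples (change per time step)."""
    n = len(samples)
    mean_x = (n - 1) / 2
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def compare_periods(metric, previous, current, tolerance=0.05):
    """Flag a telemetry trend change between two periods of one project."""
    prev_slope, curr_slope = slope(previous), slope(current)
    if curr_slope > prev_slope + tolerance:
        return f"{metric}: trend rising faster than before; investigate"
    if curr_slope < prev_slope - tolerance:
        return f"{metric}: trend falling; a recent process change may have helped"
    return f"{metric}: trend stable across periods"

# Hypothetical weekly open-defect counts for two consecutive periods.
print(compare_periods("open_defects", [12, 13, 15, 18], [18, 17, 15, 12]))
```

    Because decisions rest on within-project trend comparisons, no historical project database is needed, which is the comparability problem the approach is designed to avoid.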

    July 19, 2003 (Pages 3467-3590)


    Categories and Subject Descriptors

    Most risk analysis tools and techniques require the user to enter a good deal of information before they can provide useful diagnoses. In this paper, we describe an approach that enables the user to obtain a COTS glue code integration risk analysis with no inputs other than the set of glue code cost drivers the user submits to get a glue code integration effort estimate with the COnstructive COTS integration cost estimation (COCOTS) tool. The risk assessment approach is built on a knowledge base of 24 risk identification rules and a 3-level risk probability weighting scheme obtained from an expert Delphi analysis. Each risk rule is defined as one critical combination of two COCOTS cost drivers that may cause an undesired outcome if both are rated at their worst-case values. The 3-level nonlinear risk weighting scheme represents the relative probability of the risk occurring with respect to the individual cost driver ratings in the input. Further, to determine the relative risk impact, we use the productivity range of each cost driver in the risky combination to reflect the cost consequence of the risk occurring. We also develop a prototype called COCOTS Risk Analyzer to automate our risk assessment method. The evaluation of our approach shows that it does an effective job of estimating the relative risk levels of both small USC e-services and large industry COTS-based applications.
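    The rule scheme described above, critical pairs of cost drivers triggered near their worst-case ratings, a 3-level nonlinear probability weight, and productivity ranges as the cost-impact factor, can be sketched as follows. The driver names, rules, weights, and productivity ranges in this Python sketch are hypothetical placeholders, not the 24 calibrated rules or Delphi-derived weights of the actual COCOTS Risk Analyzer.

```python
# Ratings ordered from best case to worst case (a simplification: real
# COCOTS drivers differ in which end of the scale is the worst case).
RATING_SCALE = ["very_low", "low", "nominal", "high", "very_high"]

# 3-level nonlinear probability weights, keyed by how many rating steps a
# driver sits away from its worst case (0 = at the worst-case rating).
PROB_WEIGHT = {0: 4, 1: 2, 2: 1}

# Hypothetical risk rules: each names a critical pair of cost drivers.
RISK_RULES = [("cots_maturity", "interface_complexity"),
              ("personnel_capability", "cots_experience")]

# Hypothetical productivity ranges (cost consequence of each driver).
PRODUCTIVITY_RANGE = {"cots_maturity": 1.4, "interface_complexity": 1.3,
                      "personnel_capability": 1.5, "cots_experience": 1.2}

def steps_from_worst(rating):
    """Distance, in rating steps, from the worst-case end of the scale."""
    return len(RATING_SCALE) - 1 - RATING_SCALE.index(rating)

def glue_code_risk(ratings):
    """Sum each triggered rule's probability weight times its cost impact."""
    total = 0.0
    for a, b in RISK_RULES:
        da, db = steps_from_worst(ratings[a]), steps_from_worst(ratings[b])
        if da not in PROB_WEIGHT or db not in PROB_WEIGHT:
            continue  # both drivers must be near worst case to trigger a rule
        total += (PROB_WEIGHT[da] * PROB_WEIGHT[db] *
                  PRODUCTIVITY_RANGE[a] * PRODUCTIVITY_RANGE[b])
    return total

print(glue_code_risk({"cots_maturity": "very_high",
                      "interface_complexity": "high",
                      "personnel_capability": "nominal",
                      "cots_experience": "very_low"}))
```

    The key design point is that the same cost driver ratings already entered for the effort estimate drive the entire risk calculation, so the risk analysis requires no additional user input.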