    The State of Software Measurement Practice: Results of 2006 Survey

    In February 2006, the Software Engineering Measurement and Analysis Initiative at the Carnegie Mellon Software Engineering Institute (SEI) conducted the first in a series of yearly studies to gauge the state of the practice in software measurement. A structured, self-administered survey consisting of 17 questions was distributed to a random sample of software practitioners who had contacted the SEI during 2004 and 2005. The results, presented in this technical report, indicate (1) which measurement definition and implementation approaches are being adopted and used by the community, (2) the most prevalent types of measures used by organizations that develop or acquire software, and (3) which behaviors prevent the effective use of measurement (so that these barriers can be addressed). In addition, when the studies are conducted periodically, the results can indicate trends over time.

    Designing an Effective Survey

    A survey can characterize the knowledge, attitudes, and behaviors of a large group of people through the study of a subset of them. To protect the validity of conclusions drawn from a survey, however, certain procedures must be followed throughout the process of designing, developing, and distributing the survey questionnaire. Software and systems engineering organizations use surveys extensively to gain insight into complex issues, assist with problem solving, and support effective decision making. This document presents a seven-stage, end-to-end process for conducting a survey.

    Army Strategic Software Improvement Program (ASSIP) Survey of Army Acquisition Program Management

    This report analyzes a survey that the Software Engineering Institute conducted on behalf of the Army Strategic Software Improvement Program (ASSIP). The survey was directed to Army program managers (PMs) and covered four areas of the acquisition system: the acquirer's environment, the developer's environment, communication between the acquirer and developer, and external factors that could affect the acquisition system. The study aimed to discover how PMs perceived major acquisition-related problem areas and to provide preliminary data on which to base future data-gathering activities. Although the survey results were not conclusive, they indicated that requirements management was a primary area of concern among respondents.

    A Data Specification for Software Project Performance Measures: Results of a Collaboration on Performance Measurement

    This document contains a proposed set of defined software project performance measures and influence factors that can be used by software development projects so that valid comparisons can be made between completed projects. These terms and definitions were developed using a collaborative, consensus-based approach involving the Software Engineering Institute's Software Engineering Process Management program along with service provider and industry experts in the area of software project performance measurement. This document will be updated over time as feedback is obtained about its use.

    Measurement and Analysis Infrastructure Diagnostic, Version 1.0: Method Definition Document

    This document is a guidebook for conducting a Measurement and Analysis Infrastructure Diagnostic (MAID) evaluation. The MAID method is a criterion-based evaluation method that is used to assess the quality of an organization's data and the information generated from that data. The method is organized into four phases: (1) Collaborative Planning, (2) Artifact Evaluation, (3) On-site Evaluation, and (4) Report Results. Using the MAID evaluation criteria as a guide, a MAID team systematically studies and evaluates an organization's measurement and analysis practices by examining the organization's data and observing how the data is manipulated during its lifecycle, from the collection of base measures to the information provided to decision makers. The outcome of a MAID evaluation is a detailed report of an organization's strengths and weaknesses in measurement and analysis.

    Measuring Systems Interoperability: Challenges and Opportunities

    Despite laudable case-by-case efforts, no method currently exists for tracking interoperability on a comprehensive or systematic basis. This technical note presents best practices for measuring systems interoperability and for assisting military planners in the acquisition, development, and implementation of interoperable command, control, communications, computers, and intelligence (C4I) systems. The Levels of Systems Interoperability (LISI) Model, although immature, provides a structured and systematic approach for assessing and measuring interoperability throughout the systems life cycle. In addition to exploring the many complex issues surrounding the state of interoperability for military applications, the note presents next steps for developing a deeper understanding of interoperability and recommends measures that will promote systems interoperability.

    Can You Trust Your Data? Establishing the Need for a Measurement and Analysis Infrastructure Diagnostic

    An organization's measurement and analysis infrastructure directly impacts the quality of the decisions made by people at all organizational levels. Ensuring information quality is a challenge for most organizations, partly because they might not be fully aware of their own data quality levels. Without this information, they cannot know the full business impact of poor or unknown data quality or determine how to begin improving their data. This report describes common errors in measurement and analysis and the need for a criterion-based assessment method that will allow organizations to evaluate key characteristics of their measurement programs.

    TSP Performance and Capability Evaluation (PACE): Customer Guide

    The Team Software Process (TSP) Performance and Capability Evaluation (PACE) process provides an objective way to evaluate software development organizations using data collected during TSP projects. This guide describes the evaluation process and lists the steps organizations and programs must complete to earn a TSP-PACE certification. It also describes how data gathered during the evaluation is used to generate a five-dimensional profile summarizing the results of the evaluation.

    An Investigation of Techniques for Detecting Data Anomalies in Earned Value Management Data

    Organizations rely on valid data to make informed decisions. When data integrity is compromised, the veracity of the decision-making process is likewise threatened. Detecting data anomalies and defects is an important step in understanding and improving data quality. The study described in this report investigated statistical anomaly detection techniques for identifying potential errors associated with the accuracy of quantitative earned value management (EVM) data values reported by government contractors to the Department of Defense. This research demonstrated the effectiveness of various statistical techniques for discovering quantitative data anomalies. The following tests were found to be effective when used for EVM variables that represent cumulative values: Grubbs' test, Rosner test, box plot, autoregressive integrated moving average (ARIMA), and the control chart for individuals. For variables related to contract values, the moving range control chart, moving range technique, ARIMA, and Tukey box plot were equally effective for identifying anomalies in the data. One or more of these techniques could be used to evaluate data at the point of entry to prevent data errors from being embedded and then propagated in downstream analyses. A number of recommendations regarding future work in this area are proposed in this report.
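
    The report names these tests without showing how they are applied. As a minimal sketch (not the report's method), the Python fragment below applies one of them, Grubbs' test, to the period-to-period increments of a hypothetical cumulative EVM series such as cumulative BCWP; the data values, variable names, and the choice to difference the series before testing are illustrative assumptions.

        # Hypothetical illustration of Grubbs' test for a single outlier,
        # applied to increments of a cumulative EVM series.
        import numpy as np
        from scipy import stats

        def grubbs_test(x, alpha=0.05):
            """Return (index, value, flagged) for the point farthest from the mean."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            dev = np.abs(x - x.mean())
            idx = int(np.argmax(dev))
            g = dev[idx] / x.std(ddof=1)                      # Grubbs' statistic
            # Two-sided critical value from the t distribution with n-2 degrees of freedom
            t_crit = stats.t.ppf(1 - alpha / (2 * n), n - 2)
            g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t_crit**2 / (n - 2 + t_crit**2))
            return idx, x[idx], g > g_crit

        # Cumulative values should grow roughly steadily, so test the
        # increments rather than the trending cumulative series itself.
        bcwp_cum = np.array([110, 235, 350, 480, 610, 1320, 1450, 1580])  # hypothetical
        idx, val, flagged = grubbs_test(np.diff(bcwp_cum))
        if flagged:
            print(f"Reporting period {idx + 1}: increment {val:.0f} is a potential anomaly")

    Grubbs' test assumes approximately normal data and screens for one outlier at a time; in practice it would be reapplied after removing each flagged point, and some of the other techniques the report found effective (Rosner test, ARIMA, control charts) are better suited to multiple or serially correlated anomalies.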