
    Definition And Validation Of A Software Metric Based On Workload

    Software "size" metrics play an important role in the field of measurement in software engineering. Size metrics help to quantify and estimate productivity, overall cost, progress, and process improvement. This thesis defines a size metric based on the "workload" of the programming staff. In this context, workload is simply the total amount of code worked by the programming staff: the code added, modified, and deleted in implementing the requirements for a version of a software product. The term "code" includes source lines and comment lines as well as the data files and script files required for a complete implementation of the system requirements. The new metric, the Worked Lines of Code (WLOC) metric, was compared to other size metrics that are already well established in the software industry. Simple correlation analyses were applied to data sets generated from four historical versions of a software project to compare the new metric to Source Lines of Code, Function Point Count, and Halstead Token Count. The main objectives of this study were to define the new metric and compare it to a number of popular and established software metrics. Using software analysis tools from various vendors, size numbers were generated for four historical versions of a substantial application program from industry; in particular, data was generated for Source Lines of Code (SLOC), Enhancement Function Point Count, and Halstead Token Count. The data for the new metric were collected from the same four versions using a count utility designed and implemented to determine the lines of code added, modified, and deleted. The correlation study indicated a strong relationship between the new metric and Function Point Count, and weak relationships with Source Lines of Code and Halstead Token Count. Based on the data collected, the new metric was deemed a valid size measurement for software projects.
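    As a rough, hedged illustration of the counting idea behind WLOC, the sketch below derives a worked-lines count for one file from two of its versions using a plain line diff. The function name and the diff-based counting rule are assumptions for illustration; the thesis's actual count utility is not reproduced here.

```python
# Hedged sketch: counting lines added, modified, and deleted between two
# versions of a file, in the spirit of the Worked Lines of Code (WLOC) idea.
# The counting rule (based on difflib opcodes) is an assumption, not the
# thesis's actual count utility.
import difflib

def worked_lines(old_lines, new_lines):
    """Return the number of lines added, modified, or deleted."""
    worked = 0
    matcher = difflib.SequenceMatcher(a=old_lines, b=new_lines)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "insert":
            worked += j2 - j1                    # lines added
        elif tag == "delete":
            worked += i2 - i1                    # lines deleted
        elif tag == "replace":
            worked += max(i2 - i1, j2 - j1)      # lines modified
    return worked

old_version = ["def f(x):", "    return x + 1", ""]
new_version = ["def f(x):", "    return x + 2", "", "def g(x):", "    return x * 2"]
print(worked_lines(old_version, new_version))    # one modified line plus two added lines
```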

    Should I Bug You? Identifying Domain Experts in Software Projects Using Code Complexity Metrics

    In any sufficiently complex software system there are experts who have a deeper understanding of parts of the system than others. However, it is not always clear who these experts are and which particular parts of the system they can provide help with. We propose a framework to elicit the expertise of developers and recommend experts by analyzing complexity measures over time. Furthermore, teams can detect those parts of the software for which no, or only a few, experts currently exist and take preventive actions to keep the collective code knowledge and ownership high. We employed the developed approach at a medium-sized company. The results were evaluated with a survey comparing the perceived and the computed expertise of developers. We show that aggregated code metrics can be used to identify experts for different software components. The identified experts were rated as acceptable candidates by developers in over 90% of all cases.
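    The abstract does not spell out the aggregation itself, so the following is only a minimal sketch of the general idea under stated assumptions: accumulate a complexity-based contribution score per developer and per component from the change history, then rank developers within each component. The input format, names, and scoring rule are illustrative, not the paper's framework.

```python
# Minimal, assumption-laden sketch: rank likely experts per component by
# accumulating complexity-related change contributions over time. The input
# format and scoring rule are illustrative, not the paper's actual framework.
from collections import defaultdict

def rank_experts(changes):
    """changes: iterable of (author, component, complexity_delta) tuples."""
    scores = defaultdict(lambda: defaultdict(float))
    for author, component, complexity_delta in changes:
        scores[component][author] += abs(complexity_delta)
    return {
        component: sorted(by_author.items(), key=lambda kv: kv[1], reverse=True)
        for component, by_author in scores.items()
    }

history = [
    ("alice", "parser", 12.0),
    ("bob", "parser", 3.5),
    ("alice", "ui", 1.0),
    ("carol", "ui", 7.0),
]
print(rank_experts(history))  # highest-scoring developers first per component
```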

    Geant4 Maintainability Assessed with Respect to Software Engineering References

    We report a methodology developed to quantitatively assess the maintainability of Geant4 with respect to software engineering references. The level of maintainability is determined by combining a set of metric values whose references are documented in the literature.
    Comment: 5 pages, 2 figures, 4 tables, IEEE NSS/MIC 201
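    As one purely illustrative reading of "combining a set of metric values whose references are documented in the literature", the sketch below scores a component by the fraction of its metric values that fall within reference limits. The metric names and limits are placeholders, not the references used in the Geant4 assessment.

```python
# Illustrative sketch only: summarize maintainability as the fraction of metric
# values that stay within literature-based reference limits. The metric names
# and limits below are placeholders, not the references used for Geant4.
REFERENCE_LIMITS = {
    "cyclomatic_complexity": 10,     # illustrative upper bound per method
    "coupling_between_objects": 14,
    "lines_per_method": 50,
}

def maintainability_score(measured):
    """Fraction of metrics whose measured value does not exceed its limit."""
    within = sum(
        1 for name, limit in REFERENCE_LIMITS.items()
        if measured.get(name, float("inf")) <= limit
    )
    return within / len(REFERENCE_LIMITS)

print(maintainability_score({
    "cyclomatic_complexity": 7,
    "coupling_between_objects": 20,
    "lines_per_method": 35,
}))  # 2 of 3 metrics within their limits
```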

    Data quality: Some comments on the NASA software defect datasets

    Background: Self-evidently, empirical analyses rely upon the quality of their data. Likewise, replications rely upon accurate reporting and using the same rather than similar versions of datasets. In recent years, there has been much interest in using machine learners to classify software modules into defect-prone and not defect-prone categories. The publicly available NASA datasets have been extensively used as part of this research. Objective: This short note investigates the extent to which published analyses based on the NASA defect datasets are meaningful and comparable. Method: We analyze the five studies published in the IEEE Transactions on Software Engineering since 2007 that have utilized these datasets and compare the two versions of the datasets currently in use. Results: We find important differences between the two versions of the datasets, implausible values in one dataset, and generally insufficient detail documented on dataset preprocessing. Conclusions: It is recommended that researchers 1) indicate the provenance of the datasets they use, 2) report any preprocessing in sufficient detail to enable meaningful replication, and 3) invest effort in understanding the data prior to applying machine learners.
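    In the spirit of recommendation 3 (understanding the data before applying machine learners), the sketch below shows the kind of sanity checks one might run. The column names and specific checks are assumptions for illustration; they are not the note's analysis or the actual NASA dataset schema.

```python
# Hedged sketch of basic data sanity checks before training defect predictors.
# Column names ("loc", "defective") are assumed for illustration and do not
# reflect the actual NASA dataset schema or the note's analysis.
import pandas as pd

def quality_report(df):
    report = {
        "duplicate_rows": int(df.duplicated().sum()),
        "rows_with_missing_values": int(df.isna().any(axis=1).sum()),
    }
    if "loc" in df.columns:
        # A module with zero or negative lines of code is implausible.
        report["implausible_loc"] = int((df["loc"] <= 0).sum())
    return report

modules = pd.DataFrame({
    "loc": [120, 0, 45, 45],
    "defective": [1, 0, 0, 0],
})
print(quality_report(modules))
```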

    Using Neural Networks In Software Metrics

    Software metrics provide effective methods for characterizing software. Metrics have traditionally been composed through the definition of an equation, but this approach is limited by the requirement that all the interrelationships among the parameters be fully understood. Deriving a polynomial that provides the desired characteristics is a substantial challenge. In this paper, instead of using conventional methods for obtaining software metrics, we try to use a neural network for that purpose. Experiments performed in the past on two widely known metrics, McCabe and Halstead, indicate that this approach is feasible.
    Keywords: neural networks, software metrics, Halstead, McCabe
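    To make the idea concrete, the hedged sketch below trains a small neural network to approximate a metric-like score from simple per-module features instead of evaluating a closed-form equation. The features, target, and network size are synthetic placeholders, not the paper's experimental setup.

```python
# Minimal sketch of learning a metric with a neural network rather than a
# closed-form equation. Features, target, and network size are synthetic
# placeholders, not the setup used in the paper's experiments.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Toy per-module features: [branch_count, operator_count, operand_count]
X = rng.integers(1, 50, size=(200, 3)).astype(float)
# Toy target loosely shaped like a complexity score.
y = X[:, 0] + 0.1 * X[:, 1] * np.log(X[:, 2] + 1)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict([[12.0, 30.0, 18.0]]))  # learned "metric" for a new module
```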

    A neural net-based approach to software metrics

    Software metrics provide an effective method for characterizing software. Metrics have traditionally been composed through the definition of an equation; this approach is limited by the requirement that all the interrelationships among the parameters be fully understood. This paper explores an alternative, neural network approach to modeling metrics. Experiments performed on two widely accepted metrics, McCabe and Halstead, indicate that the approach is sound, thus serving as the groundwork for further exploration into the analysis and design of software metrics.
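    For context on the equation-based baseline mentioned here, Halstead's volume is one such closed-form metric: V = N log2(n), where N is the total number of operator and operand occurrences and n is the number of distinct operators and operands. The sketch below computes it from token counts; the example counts are made up.

```python
# The kind of closed-form, equation-based metric the neural approach is
# contrasted with: Halstead's volume V = N * log2(n), where N is the total
# token count and n is the vocabulary size. The example counts are made up.
import math

def halstead_volume(n1, n2, N1, N2):
    """n1/n2: distinct operators/operands; N1/N2: total operators/operands."""
    vocabulary = n1 + n2     # n
    length = N1 + N2         # N
    return length * math.log2(vocabulary)

print(halstead_volume(n1=10, n2=15, N1=40, N2=55))  # volume of a toy module
```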

    Using neural networks in software repositories

    Two topics are covered. The first is an exploration of the use of neural network techniques to improve the effectiveness of retrieval in software repositories. The second relates to a series of experiments conducted to evaluate the feasibility of using adaptive neural networks as a means of deriving (or, more specifically, learning) measures on software. Taken together, these two efforts illuminate a very promising mechanism for supporting software infrastructures - one based upon a flexible and responsive technology.