    An empirical investigation into metrics for object-oriented software

    Object-oriented methods have increased in popularity over the last decade and are now the norm for software development in many application areas. Many claims were made for the superiority of object-oriented methods over more traditional methods, and these claims have largely been accepted, or at least not questioned, by the software community; this uncritical acceptance was the motivation for this thesis. One way of capturing information about software is the use of software metrics. However, if we are to have faith in the information, we must be satisfied that these metrics do indeed tell us what we need to know. This is not easy when the software characteristics we are interested in are intangible and cannot be precisely defined. This thesis considers the attempts made over the last three decades to measure software and to make predictions regarding maintainability and effort. It examines traditional software metrics and considers their failings in the light of calls for better standards of validation in terms of measurement theory and empirical study; from this review, five lessons were derived. The relatively new area of metrics for object-oriented systems is then examined to determine whether suggestions for improvement have been widely heeded. The thesis uses an industrial case study and an experiment to examine one feature of object-orientation, inheritance, and its effect on aspects of maintainability, namely the number of defects and the time to implement a change. The case study is also used to demonstrate that it is possible to obtain early, simple, and useful local prediction systems for important attributes such as system size and defects, using readily available measures rather than predefined and possibly time-consuming metrics, which may suffer from poor definition, invalidity, or an inability to predict or capture anything of real use. The thesis concludes that there is empirical evidence to support a hypothesis linking inheritance to an increased incidence of defects and increased maintenance effort, and that more empirical studies are needed to test this hypothesis. This suggests that we should treat claims regarding the benefits of object-orientation for maintenance with some caution. The thesis also concludes that the ability to produce accurate local metrics with little effort gives us an acceptable substitute for the large predefined metrics suites, with their attendant problems.
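
    The thesis treats inheritance as the feature under study. Purely as an illustration, and not the thesis's own tooling, a minimal Python sketch of one standard inheritance measure, depth of inheritance tree (DIT), shows the kind of metric involved; the class hierarchy below is invented.

        # Sketch: depth of inheritance tree (DIT) for Python classes.
        def dit(cls) -> int:
            """Longest path from cls up to, but not counting, object."""
            parents = [b for b in cls.__bases__ if b is not object]
            return 0 if not parents else 1 + max(dit(b) for b in parents)

        class Document: ...              # hypothetical example hierarchy
        class Report(Document): ...
        class AuditReport(Report): ...

        for c in (Document, Report, AuditReport):
            print(c.__name__, dit(c))    # Document 0, Report 1, AuditReport 2

    Deeper values of such a measure identify exactly the inheritance structures whose defect and maintenance-effort impact the case study and experiment set out to test.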

    The development and application of composite complexity models and a relative complexity metric in a software maintenance environment

    A great deal of effort is now being devoted to the study, analysis, prediction, and minimization of expected software maintenance cost, long before software is delivered to users or customers. It has been estimated that, on average, the effort spent on software maintenance is as costly as that spent on all other software activities combined. Software design methods should be the starting point for alleviating the problems of software maintenance complexity and high costs. Two aspects of maintenance deserve attention: (1) protocols for locating and rectifying defects, and for ensuring that no new defects are introduced during the development phase of the software process; and (2) protocols for modification, enhancement, and upgrading. This article focuses primarily on the second aspect: the development of protocols to help increase the quality and reduce the costs associated with modifications, enhancements, and upgrades of existing software. The study developed parsimonious models and a relative complexity metric for measuring software complexity, which were used to rank the modules in the system relative to one another. Some success was achieved in using the models and the relative metric to identify maintenance-prone modules.
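
    The abstract does not give the model's exact form. As a hedged sketch of one common construction of a relative complexity score, the raw metrics of each module can be standardized across the system and averaged into a single ranking value; the module names, metrics, and figures below are invented.

        from statistics import mean, pstdev

        # Hypothetical raw metrics per module: (LOC, cyclomatic complexity, fan-out).
        modules = {
            "parser.c": (1200, 85, 14),
            "report.c": ( 300, 12,  3),
            "sched.c":  ( 800, 60, 20),
        }

        def zscores(values):
            mu, sigma = mean(values), pstdev(values)
            return [(v - mu) / sigma if sigma else 0.0 for v in values]

        # Standardize each metric across modules, then average the z-scores
        # into one relative complexity value per module.
        z_by_metric = [zscores(col) for col in zip(*modules.values())]
        relative = {name: mean(z[i] for z in z_by_metric)
                    for i, name in enumerate(modules)}

        # Rank modules from most to least maintenance-prone.
        for name, score in sorted(relative.items(), key=lambda kv: -kv[1]):
            print(f"{name:10s} {score:+.2f}")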

    Software metrics for monitoring software engineering projects

    As part of the undergraduate course offered by Edith Cowan University, the Department of Computer Science runs (as part of a year's study) a software engineering group project. The project was divided into two units, Software Engineering 1 and Software Engineering 2. In Software Engineering 1, students were given the group project, for which they had to complete and submit the Functional Requirement and Detail System Design documentation. In Software Engineering 2, students proceeded to the implementation, testing, and documentation of the software. The software was then submitted for assessment and presented to the client. To aid the students with the development of the software, the department had adopted EXECOM's APT methodology as its standard guideline. The students were divided into groups of 4 to 5, each group working on the same problem, and a staff adviser was assigned to each project group. This research exercise had two objectives. The first was to ascertain whether there is a need to improve the final-year software engineering project for future students by enhancing any aspect that may be regarded as deficient. The second was to ascertain the factors that have the most impact on the quality of the delivered software, which was measured using a variety of software metrics. Measurement of software has mostly been ignored until recently, or used without a true understanding of its purpose, so a subsidiary objective was to gain an understanding of the worth of software measurement in the student environment. One of the conclusions derived from the study suggests that teams that spent more time on software design and testing tended to produce better-quality software with fewer defects. The study also showed that adherence to the APT methodology led to projects being on schedule and to general team satisfaction with the project management. One of the recommendations made to the project co-ordinator was that staff advisers should have sufficient knowledge of the software engineering process.
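
    The abstract reports the effort-quality relationship without showing the analysis. A minimal sketch of the kind of correlation such a study might compute, with invented per-team figures, is:

        from statistics import correlation  # Python 3.10+

        # Hypothetical per-team data: hours spent on design and testing,
        # and defects found in the delivered software.
        effort  = [40, 55, 30, 70, 45]
        defects = [12,  8, 15,  5, 10]

        # A strongly negative coefficient would match the reported finding
        # that more design/testing effort went with fewer defects.
        print(f"Pearson r = {correlation(effort, defects):+.2f}")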

    The FORTRAN static source code analyzer program (SAP) user's guide, revision 1

    The FORTRAN Static Source Code Analyzer Program (SAP) User's Guide (Revision 1) is presented. SAP is a software tool designed to assist Software Engineering Laboratory (SEL) personnel in conducting studies of FORTRAN programs. SAP scans FORTRAN source code and produces reports presenting statistics and measures of the statements and structures that make up a module. This document is a revision of the previous SAP user's guide, Computer Sciences Corporation document CSC/TM-78/6045. SAP Revision 1 is the result of program modifications to provide several new reports, additional complexity analysis, and recognition of all statements described in the FORTRAN 77 standard. This document provides instructions for operating SAP and contains information useful in interpreting SAP output.
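
    SAP's actual reports and recognized statement set are specified in the guide itself. Purely as a toy analogue of what a static statement counter does, a short Python sketch that tallies FORTRAN statement keywords might look like this; the keyword list is abbreviated and the source fragment is invented.

        import re
        from collections import Counter

        # Toy stand-in for a statement-counting pass; a real tool such as
        # SAP parses the FORTRAN grammar rather than matching keywords.
        KEYWORDS = ("IF", "DO", "GOTO", "CALL", "RETURN", "READ", "WRITE")

        source = """\
              DO 10 I = 1, N
              IF (A(I) .GT. AMAX) AMAX = A(I)
           10 CONTINUE
              WRITE (6, 100) AMAX
              RETURN
        """

        counts = Counter()
        for line in source.splitlines():
            if line.lstrip()[:1].upper() == "C" and line[:1] == "C":
                continue                      # full-line comment card
            body = line[6:].upper()           # skip label/continuation columns
            for kw in KEYWORDS:
                if re.search(rf"\b{kw}\b", body):
                    counts[kw] += 1

        print(dict(counts))  # e.g. {'DO': 1, 'IF': 1, 'WRITE': 1, 'RETURN': 1}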

    A Cognitive Process Approach to Interpreting Performance on the Booklet Category Test and the Wisconsin Card Sorting Test

    Modified administration techniques that relied on patient verbalization of reasoning on each item were devised. For the WCST, verbalized scores correlated highly with conventional scores; however, the patterns of age, education, and IQ covariates for each scoring condition were very different, raising questions about what such verbalized scores measured. Further research based upon a prospective research design was suggested to address this question. Factor analysis of WCST scores for each scoring condition resulted in almost identical three-factor solutions in each case: (a) ineffective, perseverative responding; (b) nonperseverative number errors; and (c) Maintaining Set. A three-part hierarchy of response determinants for the CT was utilized, consisting of (a) concrete perceptual attributes; (b) cognitive organization of perceptual attributes into abstract patterns; and (c) relating abstract patterns to the corresponding number responses. Decision trees were devised to prescribe a set of rules for coding each score; this approach yielded adequate test-retest reliability for recoding responses. Sets of variables for each subtest were factor analyzed, with a second-order factor analysis of all factors from each subtest, in order to determine whether common cognitive process scores on each subtest described cognitive process scores on the other subtests. Results revealed similar factor solutions for each subtest, but subtest-specific factors were not predictive of similar factor scores on other subtests, except for Subtests V and VI, which are based upon the same principle. Factors related to Maintaining Set predicted most of the variance in subtest error scores. Factor scores related to Determinant Shifting were predictive of error scores to a much lesser degree than Maintaining Set factor scores. Determinant Shifting factor scores appeared to be independent of Maintaining Set factor scores and also showed much more independence from age, education, and IQ covariates. The relationship between CT and WCST factor scores was slightly lower than the relationship between CT error scores and WCST summary scores. Suggestions for further research were discussed.

    A software testing estimation and process control model

    The control of the testing process and the estimation of the resource required to perform testing are key to delivering a software product of target quality on budget. This thesis explores the use of testing to remove errors, the part that metrics and models play in this process, and an original method for improving the quality of a software product. The thesis investigates the possibility of using software metrics to estimate the testing resource required to deliver a product of target quality into deployment, and also to determine, during the testing phases, the correct point in time to proceed to the next testing phase in the life-cycle. Along with the metrics Clear ratio, Churn, Error rate halving, Severity shift, and Faults per week, a new metric, 'Earliest Visibility' (EV), is defined and used to control the testing process. EV is constructed upon the link between the point at which an error is made within development and the point at which it is subsequently found during testing. To increase the effectiveness of testing and reduce costs while maintaining quality, the model operates by targeting each test phase at the errors linked to that phase and by enabling each test phase to build upon the previous one. EV also provides a measure of testing effectiveness and of the fault introduction rate by development phase. The resource estimation model is based on the gradual refinement of an estimate, which is updated after each development phase as more reliable data becomes available. Used in conjunction with the process control model, which ensures that the correct testing phase is in operation, the estimation model has accurate data for each testing phase as input. The proposed model and metrics have been developed and tested on a large-scale (4 million LOC) industrial telecommunications product written in C and C++ running in a Unix environment. It should be possible to extend this work to suit other environments and other development life-cycles.
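
    EV's precise definition is given in the thesis. As a hedged sketch of the bookkeeping underneath such a metric, each fault can be tabulated by the phase in which it was introduced and the phase in which testing found it, from which a per-phase introduction rate and a crude detection-lag figure follow; the phases and fault records below are invented.

        from collections import Counter

        PHASES = ["design", "code", "unit test", "system test", "acceptance"]

        # Hypothetical fault records: (phase introduced, phase found).
        faults = [
            ("design", "system test"), ("code", "unit test"),
            ("code", "unit test"), ("design", "acceptance"),
            ("code", "system test"),
        ]

        # Fault introduction rate by development phase, and where
        # testing first made each fault visible.
        print("introduced:", dict(Counter(p for p, _ in faults)))
        print("found:     ", dict(Counter(f for _, f in faults)))

        # A test phase is more effective when it catches faults close to
        # where they were introduced; measure the phase lag per fault.
        lag = [PHASES.index(f) - PHASES.index(p) for p, f in faults]
        print("mean phase lag:", sum(lag) / len(lag))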

    Quality modelling and metrics of Web-based information systems

    In recent years, the World Wide Web has become a major platform for software applications. Web-based information systems are involved in many areas of everyday life, such as education, entertainment, business, manufacturing, and communication. As Web-based systems are usually distributed, multimedia, interactive, and cooperative, and as their production processes usually follow ad-hoc approaches, the quality of Web-based systems has become a major concern. Existing quality models and metrics do not fully satisfy the needs of quality management of Web-based systems. This study applied and adapted software quality engineering methods and principles to address two issues: a quality modelling method for the derivation of quality models of Web-based information systems; and the development, implementation, and validation of quality metrics for key quality attributes of Web-based information systems, namely navigability and timeliness. The quality modelling method proposed in this study has the following strengths: it is more objective and rigorous than existing approaches; quality analysis can be conducted early in the system life cycle, on the design; and it is easy to use and can provide insight into how the design of a system may be improved. Results of case studies demonstrated that the quality modelling method is applicable and practical, and that practitioners can use it to develop their own quality models. This study is among the first comprehensive attempts to develop quality measurement for Web-based information systems. First, it identified the relationship between website structural complexity and navigability; quality metrics for navigability were defined, investigated, and implemented, and empirical studies were conducted to evaluate them. Second, the study investigated website timeliness and attempted to find direct and indirect measures for this quality attribute; empirical studies validating these metrics were also conducted. The study also suggests four areas of future research that may be fruitful.
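
    The thesis's navigability metrics are defined over website structural complexity; their exact form is not given in the abstract. As a rough, invented illustration, one simple structural proxy is the mean shortest click-path from the home page over the site's link graph:

        from collections import deque

        # Hypothetical site link graph: page -> pages it links to.
        links = {
            "home":     ["products", "about"],
            "products": ["item-a", "item-b"],
            "about":    ["contact"],
            "item-a":   [], "item-b": [], "contact": [],
        }

        def click_depths(graph, start="home"):
            """BFS shortest click-path length from start to each page."""
            depth, queue = {start: 0}, deque([start])
            while queue:
                page = queue.popleft()
                for nxt in graph[page]:
                    if nxt not in depth:
                        depth[nxt] = depth[page] + 1
                        queue.append(nxt)
            return depth

        d = click_depths(links)
        # Crude navigability proxy: a lower mean depth means pages are
        # reachable in fewer clicks from the home page.
        print("mean click depth:", sum(d.values()) / len(d))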

    Developing interpretable models with optimized set reduction for identifying high risk software components

    Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and the schedule is tight. One therefore needs to be able to differentiate low- from high-fault-frequency components so that testing and verification effort can be concentrated where it is needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents the Optimized Set Reduction (OSR) approach for constructing such models, intended to fulfil specific software engineering needs. Our approach to classification is to measure the software system and build multivariate stochastic models for predicting high-risk system components. We present experimental results obtained by classifying Ada components into two classes: likely or not likely to generate faults during system and acceptance test. We also evaluate the accuracy of the model and the insights it provides into the error-making process.
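
    OSR itself extracts patterns, conjunctions of predicates over metric ranges, whose subsets of components have a reliably skewed class distribution; the full algorithm is beyond the abstract. A simplified stand-in that scores single-metric threshold predicates by the class entropy of the subset they select conveys the idea; the components, metrics, and thresholds below are invented.

        import math

        # Hypothetical Ada components: ({metric: value}, is_fault_prone).
        data = [
            ({"loc": 900, "calls": 30}, True),
            ({"loc": 150, "calls":  4}, False),
            ({"loc": 700, "calls": 25}, True),
            ({"loc": 600, "calls":  8}, False),
            ({"loc": 500, "calls": 18}, True),
            ({"loc": 100, "calls":  2}, False),
        ]

        def entropy(pos: int, n: int) -> float:
            """Shannon entropy of the class split within a subset."""
            if n == 0 or pos in (0, n):
                return 0.0
            p = pos / n
            return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

        # Score each candidate predicate "metric >= threshold" by the
        # entropy of the subset it selects: low entropy with a high
        # fault-prone proportion marks a useful pattern.
        for metric, thr in [("loc", 400), ("calls", 10)]:
            subset = [fault_prone for m, fault_prone in data if m[metric] >= thr]
            pos = sum(subset)
            print(f"{metric} >= {thr}: n={len(subset)}, "
                  f"P(fault-prone) = {pos/len(subset):.2f}, "
                  f"H = {entropy(pos, len(subset)):.2f}")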