9 research outputs found
A comparison of counting methods for software science and cyclomatic complexity
This paper reviews McCabe's cyclomatic complexity and Halstead's software science laws, and discusses studies in the current literature relating these metrics to software faults. The studies are reproduced using data obtained from a large software project developed in a major electronics firm. Problems that arise when deriving the metrics are discussed, and the effect of computing the metrics in different ways is investigated.
This study shows that varying the counting method makes no significant difference to the bug-to-metric relationship. A strong relationship is shown to exist between Halstead's laws, McCabe's cyclomatic complexity, lines of code, and the number of bugs reported in the project.
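The two families of metrics the paper compares can be sketched briefly. The following minimal Python sketch is illustrative only (it is not the paper's counting tool, and the token counts in the example are invented): McCabe's V(G) = E - N + 2P over the control-flow graph, and Halstead's volume V = N log2(n) from operator/operand counts.

```python
import math

def cyclomatic_complexity(edges: int, nodes: int, components: int = 1) -> int:
    """McCabe's V(G) = E - N + 2P for a control-flow graph."""
    return edges - nodes + 2 * components

def halstead_volume(n1: int, n2: int, N1: int, N2: int) -> float:
    """Halstead volume V = N * log2(n), where N = N1 + N2 total occurrences
    and n = n1 + n2 distinct operators plus distinct operands."""
    return (N1 + N2) * math.log2(n1 + n2)

# Hypothetical function: 9 CFG edges, 8 nodes, one component;
# 10 distinct operators, 8 distinct operands, 25 and 20 total occurrences.
print(cyclomatic_complexity(9, 8))                 # 3
print(round(halstead_volume(10, 8, 25, 20), 1))    # 187.6
```

The "counting method" question the paper studies is precisely what feeds n1, n2, N1, N2 here: different tokenisation rules give different volumes for the same program.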
Complexity-based classification of software modules
Software plays a major role in many organizations, and organizational success depends partly on the quality of the software used. In recent years, many researchers have recognized that statistical classification techniques are well suited to developing software quality prediction models. Various statistical software quality models, using complexity metrics as early indicators of software quality, have been proposed in the past. At a high level, the problem of software categorization is to classify software modules as fault-prone or non-fault-prone. The focus of this thesis is twofold. The first aim is to study selected classification techniques, including unsupervised and supervised learning algorithms widely used for software categorization. The second is to explore a new unsupervised learning model employing Bayesian and deterministic approaches. In addition, we evaluate and experimentally compare these approaches using a real data set. Our experimental results show that different algorithms lead to different, statistically significant results.
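To make the categorization task concrete, here is a toy two-cluster sketch in pure Python. This is my illustration of the general idea (unsupervised grouping of modules by a complexity score, labelling the higher-centre cluster fault-prone), not the thesis's Bayesian or deterministic model, and the complexity values are invented:

```python
def two_means(values, iters=20):
    """Tiny 1-D k-means with k=2, centres seeded at the extremes."""
    c = [min(values), max(values)]
    for _ in range(iters):
        groups = [[], []]
        for v in values:
            # index 1 (True) when v is closer to the second centre
            groups[abs(v - c[0]) > abs(v - c[1])].append(v)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return c

# Hypothetical per-module complexity scores
complexities = [3, 4, 5, 4, 21, 25, 19, 6, 22]
centres = sorted(two_means(complexities))
labels = ["fault-prone" if abs(v - centres[1]) < abs(v - centres[0]) else "ok"
          for v in complexities]
print(labels)
```

A supervised counterpart would instead fit a classifier to modules with known fault labels; the thesis's point is that the choice of algorithm materially changes the resulting classification.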
A software testing estimation and process control model
The control of the testing process, and estimation of the resource required to perform testing, is key to delivering a software product of target quality on budget. This thesis explores the use of testing to remove errors, the part that metrics and models play in this process, and considers an original method for improving the quality of a software product. The thesis investigates the possibility of using software metrics to estimate the testing resource required to deliver a product of target quality into deployment, and also to determine, during the testing phases, the correct point in time to proceed to the next testing phase in the life-cycle. Along with the metrics Clear ratio, Churn, Error rate halving, Severity shift, and Faults per week, a new metric, 'Earliest Visibility' (EV), is defined and used to control the testing process. EV is constructed upon the link between the point at which an error is made during development and the point at which it is subsequently found during testing. To increase the effectiveness of testing and reduce costs, whilst maintaining quality, the model operates by targeting each test phase at the errors linked to that phase and by allowing each test phase to build upon the previous one. EV also provides a measure of testing effectiveness and of the fault introduction rate by development phase. The resource estimation model is based on the gradual refinement of an estimate, which is updated after each development phase as more reliable data become available. Used in conjunction with the process control model, which ensures the correct testing phase is in operation, the estimation model has accurate data for each testing phase as input. The proposed model and metrics have been developed and tested on a large-scale (4 million LOC) industrial telecommunications product written in C and C++ running within a Unix environment. It should be possible to extend this work to suit other environments and other development life-cycles.
System architecture metrics: an evaluation
The research described in this dissertation is a study of the application of measurement, or metrics, to software engineering. This is not in itself a new idea; the concept of measuring software was first mooted close on twenty years ago. However, examination of what is a considerable body of metrics work reveals that incorporating measurement into software engineering is rather less straightforward than one might presuppose, and despite the advancing years there is still a lack of maturity.
The thesis commences with a dissection of three of the most popular metrics, namely Halstead's software science, McCabe's cyclomatic complexity, and Henry and Kafura's information flow - all of which might be regarded as having achieved classic status. Despite their popularity, these metrics are all flawed in at least three respects. First and foremost, in each case it is unclear exactly what is being measured; instead there is a preponderance of such metaphysical terms as complexity and quality. Second, each metric is theoretically doubtful in that it exhibits anomalous behaviour. Third, much of the claimed empirical support for each metric is spurious, arising from poor experimental design and inappropriate statistical analysis. It is argued that these problems are not mere misfortune but the inevitable consequence of the ad hoc and unstructured approach of much metrics research: in particular, the scant regard paid to the role of underlying models.
This research seeks to address these problems by proposing a systematic method for the development and evaluation of software metrics. The method is a goal-directed combination of formal modelling techniques and empirical evaluation. The method is applied to the problem of developing metrics to evaluate software designs - from the perspective of a software engineer wishing to minimise implementation difficulties, faults and future maintenance problems. It highlights a number of weaknesses within the original model. These are tackled in a second, more sophisticated model which is multidimensional: that is, it combines, in this case, two metrics. Both the theoretical and empirical analyses show this model to have utility in its ability to identify hard-to-implement and unreliable aspects of software designs. It is concluded that this method goes some way towards introducing a little more rigour into the development, evaluation and evolution of metrics for the software engineer.
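Of the three classic metrics dissected above, Henry and Kafura's information flow is the least often shown in code. A minimal sketch of the usually quoted formulation, complexity = length x (fan-in x fan-out)^2, follows; the example numbers are invented, and real tools differ on how "length" and the flow counts are defined - which is exactly the definitional vagueness the thesis criticises:

```python
def information_flow(length: int, fan_in: int, fan_out: int) -> int:
    """Henry and Kafura's metric in its commonly quoted form:
    length * (fan_in * fan_out) ** 2."""
    return length * (fan_in * fan_out) ** 2

# Hypothetical 100-line procedure with 3 incoming and 2 outgoing flows:
print(information_flow(100, 3, 2))   # 3600
```

Note the anomalous behaviour the thesis alludes to: any procedure with zero fan-in or fan-out scores 0 regardless of its length.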
An Empirical investigation into metrics for object-oriented software
Object-oriented methods have increased in popularity over the last decade and are now the norm for software development in many application areas. Many claims were made for the superiority of object-oriented methods over more traditional methods, and these claims have largely been accepted, or at least not questioned, by the software community. Such was the motivation for this thesis. One way of capturing information about software is the use of software metrics. However, if we are to have faith in the information, we must be satisfied that these metrics do indeed tell us what we need to know. This is not easy when the software characteristics we are interested in are intangible and cannot be precisely defined. This thesis considers the attempts made over the last three decades to measure software and to make predictions regarding maintainability and effort. It examines traditional software metrics and considers their failings in the light of calls for better standards of validation in terms of measurement theory and empirical study. From this, five lessons were derived. The relatively new area of metrics for object-oriented systems is examined to determine whether suggestions for improvement have been widely heeded.
The thesis uses an industrial case study and an experiment to examine one feature of object-orientation, inheritance, and its effect on aspects of maintainability, namely the number of defects and the time to implement a change. The case study is also used to demonstrate that it is possible to obtain early, simple and useful local prediction systems for important attributes such as system size and defects, using readily available measures rather than predefined and possibly time-consuming metrics which may suffer from poor definition, invalidity, or an inability to predict or capture anything of real use. The thesis concludes that there is empirical evidence to support a hypothesis linking inheritance with an increased incidence of defects and increased maintenance effort, and that more empirical studies are needed in order to test the hypothesis. This suggests that we should treat claims regarding the benefits of object-orientation for maintenance with some caution. The thesis also concludes that, with the ability to produce accurate local metrics with little effort, we have an acceptable substitute for the large predefined metrics suites with their attendant problems.
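The inheritance feature studied above is commonly quantified as depth of inheritance. As a small illustration (my sketch, not the thesis's measurement procedure), the depth of a class in a single-inheritance Python hierarchy can be read straight off the method resolution order; note that under multiple inheritance the MRO length is no longer a simple path depth:

```python
class Base: pass
class Middle(Base): pass
class Leaf(Middle): pass

def inheritance_depth(cls) -> int:
    """Number of ancestors up to and including object,
    i.e. the MRO length minus the class itself."""
    return len(cls.__mro__) - 1

print(inheritance_depth(Base))   # 1
print(inheritance_depth(Leaf))   # 3
```

The thesis's hypothesis is that deeper classes such as Leaf tend to attract more defects and cost more to change than shallow ones such as Base.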
A trade-off model between cost and reliability during the design phase of software development
PhD thesis. This work proposes a method for estimating the development cost of a software system with a modular structure, taking into account the target level of reliability for that system. The required reliability of each individual module is set in order to meet the overall required reliability of the system. Consequently, the individual cost estimates for each module and the overall cost of the software system are linked to the overall required reliability.
Cost estimation is carried out during the early design phase, that is, well in advance of any detailed development. Where a satisfactory compromise between cost and reliability is feasible, this will enable a project manager to plan the allocation of resources to the implementation and testing phases so that the estimated total system cost does not exceed the project budget and the estimated system reliability matches the required target.
The line of argument developed here is that the operational reliability of a software module can be linked to the effort spent during the testing phase. That is, a higher level of desired reliability will require more testing effort and will therefore cost more. A method is developed which enables us to estimate the cost of development based on an estimate of the number of faults to be found and fixed in order to achieve the required reliability, using data obtained from the requirements specification together with historical data.
Using Markov analysis, a method is proposed for allocating an appropriate reliability requirement to each module of a modular software system. A formula to calculate an estimate of the overall system reliability is established. Using this formula, a procedure to allocate the reliability requirement for each module is derived via a minimisation process which takes into account the stipulated overall required level of reliability. This procedure allows us to construct scenarios relating cost to the overall required reliability.
The foremost application of the outcome of this work is to establish a basis for a trade-off model between cost and reliability during the design phase of the development of a modular software system. The proposed model is easy to understand and suitable for use by a project manager.
CNPq - Conselho Nacional de Pesquisa, Brazil
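The allocation idea can be illustrated with the simplest possible baseline. This sketch is not the thesis's Markov-based minimisation (which weights modules by expected usage); it only shows the underlying relationship for modules executed in series, where system reliability is the product of module reliabilities, so an equal allocation gives each of n modules R_i = R_target^(1/n):

```python
def equal_allocation(r_target: float, n_modules: int) -> float:
    """Per-module reliability requirement under an equal split,
    assuming a series system: R_system = product of module R_i."""
    return r_target ** (1.0 / n_modules)

# Hypothetical system target of 0.99 spread over 5 modules
r_i = equal_allocation(0.99, 5)
print(round(r_i, 5))        # 0.99799 per module
print(round(r_i ** 5, 5))   # 0.99, recovering the system target
```

A Markov-style allocation would replace the uniform exponent with per-module weights derived from the transition probabilities between modules, making heavily used modules carry stricter requirements.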
Software test and evaluation study phase I and II : survey and analysis
Issued as Final report, Project no. G-36-661 (continues G-36-636; includes A-2568)
Quantitative Estimates of Debugging Requirements
A major portion of the problems associated with software development might be blamed on the lack of appropriate tools to aid in the planning and testing phases of software projects. As one step towards solving this problem, this paper presents a model to estimate the number of bugs remaining in a system at the beginning of the testing and integration phases of development. The model, based on software science metrics, was tested using data currently available in the literature. Extensions to the model are also presented which can be used to obtain such estimates as the expected amounts of personnel and computer time required for project validation. Copyright © 1979 by The Institute of Electrical and Electronics Engineers, Inc.
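The style of estimate such software-science models build on can be sketched in a few lines. One commonly quoted form is Halstead's delivered-bug estimate B = V / 3000, where V is program volume; the constant is Halstead's empirical value, and the operator/operand counts below are invented for illustration (this is not the paper's own model):

```python
import math

def estimated_bugs(n1: int, n2: int, N1: int, N2: int, k: float = 3000.0) -> float:
    """Halstead-style remaining-bug estimate B = V / k, with
    volume V = (N1 + N2) * log2(n1 + n2)."""
    volume = (N1 + N2) * math.log2(n1 + n2)
    return volume / k

# Hypothetical module: 20 distinct operators, 30 distinct operands,
# 200 and 300 total occurrences respectively.
print(round(estimated_bugs(20, 30, 200, 300), 2))   # 0.94
```

Summing such per-module estimates over a system gives the kind of pre-testing bug count the paper uses to plan validation effort.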