31 research outputs found
Analyzing Halstead's counting rules in COBOL
Call number: LD2668 .R4 CMSC 1988 H86. Master of Science, Computing and Information Science.
Quality metrics in software engineering
In the first part of this study, software metrics are classified into three categories: primitive, abstract and structured. A comparative and analytical study of metrics from these categories was performed to provide software developers, users and management with a correct and consistent evaluation of a representative sample of the software metrics available in the literature. The analysis and comparison were intended both to assist software developers, users and management in selecting quality metrics suited to their specific software quality requirements, and to examine the various definitions used to calculate these metrics.
In the second part of this study, an approach towards attaining software quality is developed. This approach is intended to help everyone concerned with the evaluation of software quality in the earlier stages of software system development. The approach is intended to be uniform, consistent, unambiguous and comprehensive, and to make the concept of software quality more meaningful and visible. It will help developers both to understand the concepts of software quality and to apply and control them according to the expectations of users, management, customers, etc. The clear definitions provided for the software quality terms should help to prevent misinterpretation, and they will also serve as a touchstone against which new ideas can be tested.
Software reliability: Repetitive run experimentation and modeling
A software experiment conducted with repetitive run sampling is reported. Independently generated input data were used to verify that interfailure times are very nearly exponentially distributed, to obtain good estimates of the failure rates of individual errors, and to demonstrate how widely those rates vary. This fact invalidates many of the popular software reliability models now in use. The logarithm of the failure rate was nearly linear as a function of the number of errors corrected. A new model of software reliability is proposed that incorporates these observations.
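The two statistical ingredients of the abstract — exponentially distributed interfailure times and a log failure rate that is linear in the number of errors corrected — can be illustrated with a small simulation. This is a sketch under assumed parameters (`lam0`, `beta` are hypothetical, not values from the study): we generate exponential interfailure samples at a geometrically decaying rate, estimate each rate by maximum likelihood, and fit a line to the log estimates.

```python
import math
import random

random.seed(0)
lam0, beta = 5.0, 0.3  # hypothetical initial failure rate and decay per correction

# True rate after k corrections: lambda_k = lam0 * exp(-beta * k),
# i.e. log(lambda_k) is linear in k, as the experiment observed.
rates = [lam0 * math.exp(-beta * k) for k in range(20)]

# Estimate each rate from exponential interfailure-time samples.
estimates = []
for lam in rates:
    samples = [random.expovariate(lam) for _ in range(500)]
    estimates.append(1.0 / (sum(samples) / len(samples)))  # MLE of exponential rate

# Least-squares slope of log(rate) against k; it should recover about -beta.
xs = list(range(len(estimates)))
ys = [math.log(r) for r in estimates]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
print(round(slope, 2))  # close to -0.3
```

A straight-line fit on the log scale is exactly the diagnostic that distinguishes this model from constant-rate reliability models.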
An empirical investigation of Halstead's software length formula
Call number: LD2668 .R4 CMSC 1988 H83. Master of Science, Computing and Information Science.
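The length formula investigated in this thesis is Halstead's estimator, which predicts program length from vocabulary alone: N̂ = n₁ log₂ n₁ + n₂ log₂ n₂, where n₁ and n₂ are the counts of distinct operators and distinct operands. A minimal sketch (the operator/operand counts below are illustrative, not taken from the study):

```python
import math

def halstead_length(n1: int, n2: int) -> float:
    """Halstead's estimated length: N_hat = n1*log2(n1) + n2*log2(n2).

    n1: number of distinct operators; n2: number of distinct operands.
    """
    return n1 * math.log2(n1) + n2 * math.log2(n2)

# Example: a program with 10 distinct operators and 20 distinct operands.
print(round(halstead_length(10, 20), 1))
```

Empirical studies such as this one compare N̂ against the actual observed length N (total operator plus operand occurrences) across a corpus of programs.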
The safety of industrially-based controllers incorporating software
This thesis is concerned with the safety of industrial controllers which incorporate software. Software safety is compared with software reliability as a means of discussing the special concerns of safety. Definitions are given for the terms hazard, risk, danger and safe; a relationship between these terms is proposed and the philosophy of safety is discussed. A formal definition of software safety is given. The factors influencing the development of software are examined. The subjectivity of safety is discussed in the context of safety measurement being a conjoint measurement. Methods of assessing the risk resulting from the use of software are described, along with a discussion of the impracticability of using state transition diagrams to isolate catastrophic failure conditions. Categories of danger are discussed and three categories are advanced. The structuring of software for safety is discussed, and the principle of using safety modules and integrity locks is proposed. In discussing the reasons for errors remaining in the software after testing, two methods of measurement are suggested: Plexus and the Fallibility Index. The need to declare variables is discussed.
An experiment involving 119 volunteers was conducted to examine the influence of the length of variable names on their correct usage. It was found that variables with a character length of 7 have a better probability of correct interpretation than others.
The methods of assessing safety are discussed and the measurements proposed were applied to a commercially available product in the form of a Software Safety Audit.
It is concluded that some aspects of the safety of controllers incorporating software can be quantified, and that further research is needed.
System architecture metrics: an evaluation
The research described in this dissertation is a study of the application of measurement, or metrics, to software engineering. This is not in itself a new idea; the concept of measuring software was first mooted close on twenty years ago. However, examination of what is a considerable body of metrics work reveals that incorporating measurement into software engineering is rather less straightforward than one might presuppose, and despite the advancing years there is still a lack of maturity.
The thesis commences with a dissection of three of the most popular metrics, namely Halstead's software science, McCabe's cyclomatic complexity, and Henry and Kafura's information flow, all of which might be regarded as having achieved classic status. Despite their popularity these metrics are all flawed in at least three respects. First and foremost, in each case it is unclear exactly what is being measured; instead there is a preponderance of such metaphysical terms as complexity and quality. Second, each metric is theoretically doubtful in that it exhibits anomalous behaviour. Third, much of the claimed empirical support for each metric is spurious, arising from poor experimental design and inappropriate statistical analysis. It is argued that these problems are not misfortune but the inevitable consequence of the ad hoc and unstructured approach of much metrics research, in particular the scant regard paid to the role of underlying models.
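Of the three classic metrics named above, McCabe's cyclomatic complexity is the easiest to make concrete: V(G) = E − N + 2P over the control-flow graph, equivalently one plus the number of branch points. A minimal sketch of the decision-point formulation for Python source (the set of counted node types is a simplifying assumption; published counting rules vary, which is part of the thesis's critique):

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """V(G) = 1 + number of decision points in the parsed source."""
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.While, ast.For, ast.IfExp,
                             ast.ExceptHandler)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):  # each extra and/or operand branches
            decisions += len(node.values) - 1
    return decisions + 1

code = """
def classify(n):
    if n < 0:
        return 'negative'
    elif n == 0:
        return 'zero'
    for _ in range(n):
        pass
    return 'positive'
"""
print(cyclomatic_complexity(code))  # if + elif + for -> 3 decisions -> 4
```

That two tools can legitimately disagree on what counts as a decision point illustrates the definitional ambiguity the thesis objects to.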
This research seeks to address these problems by proposing a systematic method for the development and evaluation of software metrics. The method is a goal-directed combination of formal modelling techniques and empirical evaluation. The method is applied to the problem of developing metrics to evaluate software designs, from the perspective of a software engineer wishing to minimise implementation difficulties, faults and future maintenance problems. It highlights a number of weaknesses within the original model. These are tackled in a second, more sophisticated model which is multidimensional, that is, it combines, in this case, two metrics. Both the theoretical and empirical analysis show this model to have utility in its ability to identify hard-to-implement and unreliable aspects of software designs. It is concluded that this method goes some way towards introducing a little more rigour into the development, evaluation and evolution of metrics for the software engineer.
A software classification scheme
Reusing code is one approach to software reusability. Code is the end product of the software lifecycle. It is delivered in a low-level representation that is difficult to reuse unless an almost perfect match exists between available features and required specifications. There is a need to organize large inventories of software such that reusable code is easy to locate and exchange. The relative success in the reuse of code fragments reported by some software factories is due in part to their capacity to encapsulate domain-specific functions and create specialized libraries of components classified by these locally standardized functions.

A general software classification scheme that organizes reusability-related attributes and common functions from different domains is proposed as a partial solution to the software reusability problem. For the problem of selecting from similar, potentially reusable components, a partial solution based on evaluation of common characteristics is also proposed. A library system is presented that integrates the proposed classification scheme with an evaluation mechanism based on inherent component attributes, programming language characteristics and reuser experience.

The fundamental contribution of this dissertation is a formal treatment of a faceted scheme for software classification, leading to better understanding of reusability at the code level. This approach has been prototyped in a library system for the semi-automatic classification of software components. Analyses were performed to evaluate the classification scheme. The results show the potential of the scheme in organizing collections of code fragments, in improving retrieval, and in simplifying the classification process. Tests of the evaluation mechanism showed positive correlation with evaluations conducted by potential reusers.
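The core idea of a faceted scheme is that each component is described by a value along every facet, and retrieval is conjunction over facet values. A minimal sketch — the facet names, components and values below are illustrative, loosely in the style of faceted schemes for software, not the dissertation's actual scheme:

```python
# Hypothetical facets: what a component does, what it operates on, and where.
FACETS = ("function", "object", "medium")

# A toy library: (facet description, component name) pairs.
library = [
    ({"function": "sort",   "object": "array", "medium": "memory"}, "qsort.c"),
    ({"function": "sort",   "object": "file",  "medium": "disk"},   "extsort.c"),
    ({"function": "search", "object": "array", "medium": "memory"}, "bsearch.c"),
]

def retrieve(**query):
    """Return components whose description matches every facet in the query."""
    return [name for desc, name in library
            if all(desc.get(f) == v for f, v in query.items())]

print(retrieve(function="sort"))                 # ['qsort.c', 'extsort.c']
print(retrieve(function="sort", medium="disk"))  # ['extsort.c']
```

Because facets are orthogonal, a partially specified query naturally retrieves the set of near matches from which a reuser then selects, which is where the dissertation's evaluation mechanism takes over.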
Empirical Study of the Relationship Between Static Software Complexity Metrics and Dynamic Measurements of Pascal and C Programs
Over the past 10 to 15 years, several studies showing relationships among static complexity metrics have been performed. These include the number of lines of code, McCabe's cyclomatic complexity, Halstead's Software Science metrics, control flow metrics, and information flow metrics. Other studies have examined the relationships between static metrics and effort, clarity, productivity, quality, faults, and reliability. However, there have been very few studies that explore the relationship between static complexity metrics and dynamic measurements of programs. This exploratory, empirical study examines that relationship. The issues considered in this work include data collection procedures, the development of a counting strategy, the analysis of the static and dynamic measurements collected, and the examination of the significance between pairs of these measurements. A goal is to arrive at possible hypotheses to be tested in future, more extensive, controlled experiments. The results of this study show that there are significant correlations between some of the static and dynamic measurements.
Computing and Information Science
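The correlation analysis at the heart of such a study reduces to computing a correlation coefficient over paired (static, dynamic) measurements, one pair per program. A minimal Pearson sketch with fabricated, clearly hypothetical numbers (not data from the study):

```python
import math

# Hypothetical paired measurements for eight programs: a static metric
# (e.g. cyclomatic complexity) and a dynamic one (e.g. branches executed).
static  = [3, 7, 4, 12, 6, 9, 2, 15]
dynamic = [5, 11, 6, 20, 9, 14, 4, 23]

def pearson(xs, ys):
    """Pearson's r: covariance normalised by the two standard deviations."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    cov = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - xbar) ** 2 for x in xs))
    sy = math.sqrt(sum((y - ybar) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson(static, dynamic), 3))  # near 1 for this near-linear sample
```

In practice a study like this would also test the significance of each r (e.g. against a t distribution with n − 2 degrees of freedom) before claiming a relationship.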