Quality modelling and metrics of Web-based information systems
In recent years, the World Wide Web has become a major platform for software
applications. Web-based information systems are now involved in many areas of
everyday life, such as education, entertainment, business, manufacturing and
communication. Because Web-based systems are usually distributed, multimedia,
interactive and cooperative, and their production processes usually follow ad-hoc
approaches, the quality of Web-based systems has become a major concern.
Existing quality models and metrics do not fully satisfy the needs of quality
management of Web-based systems. This study has applied and adapted software
quality engineering methods and principles to address two issues: a quality
modelling method for deriving quality models of Web-based information systems;
and the development, implementation and validation of quality metrics for key quality
attributes of Web-based information systems, namely navigability and
timeliness.
The quality modelling method proposed in this study has the following strengths. It is
more objective and rigorous than existing approaches. The quality analysis can be
conducted at an early stage of the system life cycle, on the basis of the design. It is
easy to use and can provide insight into how the design of a system may be improved.
Results of case studies demonstrated that the quality modelling method is applicable
and practical. Practitioners can use the modelling method to develop their own quality models.
This study is amongst the first comprehensive attempts to develop quality
measurement for Web-based information systems. First, it identified the relationship
between website structural complexity and navigability. Quality metrics of
navigability were defined, investigated and implemented. Empirical studies were
conducted to evaluate the metrics. Second, this study investigated website timeliness
and attempted to find direct and indirect measures for the quality attribute. Empirical
studies for validating such metrics were also conducted.
This study also suggests four areas of future research that may be fruitful.
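The abstract does not reproduce the thesis's navigability metrics themselves, but the link between website structural complexity and navigability can be illustrated with a minimal sketch. The function and metric below are our own assumption, not the study's definition: a site is modelled as a link graph, and the average click distance between reachable page pairs serves as one simple structural proxy for navigability.

```python
from collections import deque

def avg_click_distance(site):
    """Average shortest-path length (in clicks) over all ordered pairs of
    reachable pages in a website's link graph -- a toy structural proxy
    for navigability, not the metric defined in the thesis."""
    total, pairs = 0, 0
    for start in site:
        # Breadth-first search from each page to every page reachable from it.
        dist = {start: 0}
        queue = deque([start])
        while queue:
            page = queue.popleft()
            for nxt in site.get(page, []):
                if nxt not in dist:
                    dist[nxt] = dist[page] + 1
                    queue.append(nxt)
        for page, d in dist.items():
            if page != start:
                total += d
                pairs += 1
    return total / pairs if pairs else 0.0

# A linear chain of pages versus a "star" where every page links home:
chain = {"home": ["a"], "a": ["b"], "b": ["c"], "c": []}
star = {"home": ["a", "b", "c"], "a": ["home"], "b": ["home"], "c": ["home"]}
```

On this toy model the star layout yields a lower average click distance than the chain, matching the intuition that flatter structures are easier to navigate.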
Property-based Software Engineering Measurement
Little theory exists in the field of software system measurement.
Concepts such as complexity, coupling, cohesion or even size are very
often subject to interpretation and appear to have inconsistent
definitions in the literature. As a consequence, there is little guidance
provided to the analyst attempting to define proper measures for specific
problems. Many controversies in the literature are simply
misunderstandings and stem from the fact that some people talk about
different measurement concepts under the same label (complexity is the
most common case).
There is a need to define unambiguously the most important
measurement concepts used in the measurement of software products. One way
of doing so is to define precisely what mathematical properties characterize
these concepts, regardless of the specific software artifacts to which these
concepts are applied. Such a mathematical framework could generate a
consensus in the software engineering community and provide a means for
better communication among researchers, better guidelines for analysts, and
better evaluation methods for commercial static analyzers for practitioners.
In this paper, we propose a mathematical framework which is generic,
because it is not specific to any particular software artifact, and
rigorous, because it is based on precise mathematical concepts. This
framework defines several important measurement concepts (size, length,
complexity, cohesion, coupling). It is not intended to be complete or fully
objective; other frameworks could have been proposed and different choices
could have been made. However, we believe that the formalisms and properties
we introduce are convenient and intuitive. In addition, we have reviewed the
literature on this subject and compared it with our work. This framework
contributes constructively to a firmer theoretical ground of software
measurement.
(Also cross-referenced as UMIACS-TR-94-119.)
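The framework characterizes concepts such as size by mathematical properties rather than by any particular artifact. As a hedged illustration only, the sketch below models a system as a set of elements and checks three size-style properties (nonnegativity, null value, and additivity over disjoint modules); the set-based model and the element-count measure are our assumptions, not the paper's formalism.

```python
def size(system):
    """A simple size measure: the number of elements in a system,
    where a system is modelled (our assumption) as a set of elements."""
    return len(system)

# Two disjoint modules of a hypothetical system.
m1 = {"parse", "lex"}
m2 = {"emit", "optimize"}

# Property checks in the spirit of a property-based framework:
assert size(m1) >= 0                        # nonnegativity
assert size(set()) == 0                     # null value: empty system has size 0
assert size(m1 | m2) == size(m1) + size(m2) # additivity for disjoint modules
```

Encoding the properties as executable assertions is one way an analyst could screen a candidate measure before adopting it.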
Software Engineering Laboratory Series: Collected Software Engineering Papers
The Software Engineering Laboratory (SEL) is an organization sponsored by NASA/GSFC and created to investigate the effectiveness of software engineering technologies when applied to the development of application software. The activities, findings, and recommendations of the SEL are recorded in the Software Engineering Laboratory Series, a continuing series of reports that includes this document
An Empirical investigation into metrics for object-oriented software
Object-Oriented methods have increased in popularity over the last decade, and are now the norm for software development in many application areas. Many claims were made for the superiority of object-oriented methods over more traditional methods, and these claims have largely been accepted, or at least not questioned by the software community. Such was the motivation for this thesis. One way of capturing information about software is the use of software metrics. However, if we are to have faith in the information, we must be satisfied that these metrics do indeed tell us what we need to know. This is not easy when
the software characteristics we are interested in are intangible and cannot be precisely defined. This thesis considers the attempts to measure software and to make predictions regarding maintainability and effort over the last three decades. It examines traditional software
metrics and considers their failings in the light of the calls for better standards of validation in terms of measurement theory and empirical study. From this, five lessons were derived. The relatively new area of metrics for object-oriented systems is examined to determine whether suggestions for improvement have been widely heeded.
The thesis uses an industrial case study and an experiment to examine one feature of object-orientation, inheritance, and its effect on aspects of maintainability, namely the number of defects and the time to implement a change. The case study is also used to demonstrate that it is possible to obtain early, simple and useful local prediction systems for important attributes such as system size and defects, using readily available measures rather than predefined and possibly time-consuming metrics, which may suffer from poor definition, invalidity, or an inability to predict or capture anything of real use. The thesis concludes that there is empirical evidence for a hypothesis linking inheritance with an increased incidence of defects and increased maintenance effort, and that more empirical studies are needed in order to test the hypothesis. This suggests that we should treat claims regarding the benefits of object-orientation for maintenance with some caution. The thesis also concludes that, with the ability to produce
accurate local metrics with little effort, we have an acceptable substitute for the large predefined metrics suites with their attendant problems.
Weakest Pre-Condition and Data Flow Testing
Current data flow testing criteria cannot be applied to test array elements for two reasons: 1. The criteria are defined in terms of graph theory, which is insufficiently expressive to investigate array elements. 2. Identifying input data which test a specified array element is an unsolvable problem. We solve the first problem by redefining the criteria without graph theory. We address the second problem with the invention of the wp_du method, which is based on Dijkstra's weakest pre-condition formalism. This method accomplishes the following: Given a program, a def-use pair and a variable (which can be an array element), the method computes a logical expression which characterizes all the input data which test that def-use pair with respect to that variable. Further, for any data flow criterion, this method can be used to construct a logical expression which characterizes all test sets which satisfy that data flow criterion. Although the wp_du method cannot avoid unsolvability, it does confine the presence of unsolvability to the final step in constructing a test set.
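The abstract builds on Dijkstra's weakest pre-condition formalism, whose base case for assignment is wp(x := e, Q) = Q with x replaced by e. The toy below is only a sketch of that substitution rule using naive textual replacement (it would mis-handle variable names that contain one another); it is not the wp_du method itself.

```python
def wp_assign(var, expr, post):
    """Weakest precondition of the assignment `var := expr` with respect
    to postcondition `post`: substitute `expr` for `var` in `post`.
    Toy textual substitution -- unsafe for overlapping variable names."""
    return post.replace(var, f"({expr})")

# wp(x := x + 1, x > 0) is (x + 1) > 0: the inputs from which executing
# the assignment is guaranteed to establish the postcondition.
precond = wp_assign("x", "x + 1", "x > 0")
```

In the abstract's setting, a predicate of this kind (taken over the path from a definition to a use) characterizes exactly the input data that exercise a given def-use pair.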
IDENTIFICATION AND QUANTIFICATION OF VARIABILITY MEASURES AFFECTING CODE REUSABILITY IN OPEN SOURCE ENVIRONMENT
Open source software (OSS) is one of the emerging areas in software engineering, and
is gaining the interest of the software development community. OSS began as a
movement, and for many years software developers contributed to it as a hobby
(for non-commercial purposes). Now, OSS components are being reused in component-based
software development (CBSD) for commercial purposes. More recently, software
engineering researchers have envisioned the use of OSS in software product lines
(SPL), bringing it into a new arena. Being an emerging research area, it demands
exploratory study of the dimensions of this phenomenon. Furthermore, there is a need
to assess the reusability of OSS, which is the focal point of these disciplines
(CBSE, SPL, and OSS). In this research, a mixed-method approach is employed,
specifically a 'partially mixed sequential dominant study'. It involves both a
qualitative phase (interviews) and quantitative phases (survey and experiment). The
qualitative phase involved seven respondents, the sample size of the survey was 396,
and three experiments were conducted. The main contribution of this study is the
results of an exploration of the phenomenon of reuse of OSS in reuse-intensive
software development. The findings include 7 categories and 39 dimensions. One of
the dimensions, factors affecting reusability, was carried into the quantitative
phase (survey and experiment). On the basis of the findings, a proposal for a
reusability attribute model was presented at the class and package levels. Variability
is one of the newly identified attributes of reusability. A comprehensive theoretical
analysis of variability implementation mechanisms was conducted to propose metrics
for its assessment. The reusability attribute model was validated by statistical
analysis of 103 classes and 77 packages. An evolutionary reusability analysis of two
open source software systems was conducted, in which different versions of the
software were analysed for their reusability. The results show a positive correlation
between variability and reusability at the package level and validate the other
identified attributes. The results should be helpful for conducting further studies
in this area.
INFUSE Test Management
This technical report consists of two papers discussing testing technology. INFUSE: Integration Testing with Crowd Control describes the test management facilities provided by the INFUSE change management system. INFUSE partially automates the construction of test harnesses and regression test suites at each level of the integration hierarchy from components available from lower levels. Adequate Testing and Object-Oriented Programming applies the axioms of adequate testing to object-oriented programming languages and examines their implications. Contrary to our original expectations, we discover that in the general case classes must be retested in every context of reuse.
System architecture metrics: an evaluation
The research described in this dissertation is a study of the application of measurement, or metrics, to software engineering. This is not in itself a new idea; the concept of measuring software was first mooted close on twenty years ago. However, examination of what is a considerable body of metrics work reveals that incorporating measurement into software engineering is rather less straightforward than one might presuppose, and despite the advancing years there is still a lack of maturity.
The thesis commences with a dissection of three of the most popular metrics, namely Halstead's software science, McCabe's cyclomatic complexity and Henry and Kafura's information flow, all of which might be regarded as having achieved classic status. Despite their popularity these metrics are all flawed in at least three respects. First and foremost, in each case it is unclear exactly what is being measured; instead there is a preponderance of such metaphysical terms as complexity and quality. Second, each metric is theoretically doubtful in that it exhibits anomalous behaviour. Third, much of the claimed empirical support for each metric is spurious, arising from poor experimental design and inappropriate statistical analysis. It is argued that these problems are not misfortune but the inevitable consequence of the ad hoc and unstructured approach of much metrics research: in particular the scant regard paid to the role of underlying models.
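Of the three classic metrics dissected above, McCabe's cyclomatic complexity is the easiest to state concretely: V(G) = E - N + 2P for a control-flow graph with E edges, N nodes and P connected components. The sketch below is a minimal illustration of that standard formula, with a hypothetical if/else graph as the example.

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's cyclomatic complexity V(G) = E - N + 2P for a
    control-flow graph with E edges, N nodes and P connected components."""
    return edges - nodes + 2 * components

# A straight-line program (entry -> body -> exit): 3 nodes, 2 edges, V(G) = 1.
# A single if/else (entry, then-branch, else-branch, exit): 4 nodes, 4 edges,
# so V(G) = 4 - 4 + 2 = 2 -- one decision adds one to the count.
straight_line = cyclomatic_complexity(2, 3)
if_else = cyclomatic_complexity(4, 4)
```

The dissertation's complaint is visible even here: the formula counts decision structure precisely, yet whether that number measures "complexity" in any meaningful sense is exactly what remains unclear.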
This research seeks to address these problems by proposing a systematic method for the development and evaluation of software metrics. The method is a goal-directed combination of formal modelling techniques and empirical evaluation. The method is applied to the problem of developing metrics to evaluate software designs, from the perspective of a software engineer wishing to minimise implementation difficulties, faults and future maintenance problems. It highlights a number of weaknesses within the original model. These are tackled in a second, more sophisticated model which is multidimensional, that is, it combines, in this case, two metrics. Both the theoretical and empirical analysis show this model to have utility in its ability to identify hard-to-implement and unreliable aspects of software designs. It is concluded that this method goes some way towards introducing a little more rigour into the development, evaluation and evolution of metrics for the software engineer.