
    Analytical and Empirical Evaluation of Software Reuse Metrics

    How much can be saved by using pre-existing (or somewhat modified) software components when developing new software systems? With the increasing adoption of reuse methods and technologies, this question becomes critical. However, directly tracking the actual cost savings due to reuse is difficult. A worthy goal would be to develop a method of measuring the savings indirectly by analyzing the code for reuse of components. The focus of this paper is to evaluate how well several published software reuse metrics measure the "time, money and quality" benefits of software reuse. We conduct this evaluation both analytically and empirically. On the analytic front, we first develop some properties that should arguably hold for any measure of "time, money and quality" benefit due to reuse. We assess several existing software reuse metrics using these properties. Empirically, we constructed a toolset (using GEN++) to gather data on all published reuse metrics from C++ code; then, using productivity and quality data from "nearly replicated" student projects at the University of Maryland, we evaluate the relationship between the known metrics and the process data. The results show that different reuse metrics can be used as predictors of different quality attributes, and suggest possible directions for improving the known measures. (Also cross-referenced as UMIACS-TR-95-82.)
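
    As a rough illustration of the kind of analysis described above, the sketch below computes a simple reuse ratio (reused lines over total delivered lines) per project and rank-correlates it with a quality indicator. The projects, figures, and the choice of Spearman correlation are illustrative assumptions, not data or methods from the paper.

```python
# Illustrative sketch (not the paper's toolset): compute a simple reuse ratio
# per project and rank-correlate it with a quality indicator, analogous to
# relating reuse metrics to process data. All figures below are made up.
from scipy.stats import spearmanr

# Hypothetical per-project measurements: lines of code reused verbatim or
# slightly modified, total delivered lines, and defects found per KLOC.
projects = [
    # (reused_loc, total_loc, defects_per_kloc)
    (4200, 10000, 3.1),
    (1500, 12000, 6.8),
    (8000, 11000, 2.2),
    (300,   9000, 7.5),
]

reuse_ratio = [reused / total for reused, total, _ in projects]
defect_density = [d for _, _, d in projects]

rho, p_value = spearmanr(reuse_ratio, defect_density)
print(f"reuse ratios: {[round(r, 2) for r in reuse_ratio]}")
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```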

    Reusing business models.

    The focus of this paper is on the reuse of business models. It investigates how business models can be reused, how such reuse can be measured and what the consequences are for software development.

    Customization, extension and reuse of outdated hydrogeological software

    Each scientist is specialized in his or her field of research and in the tools that he or she uses during research at a specific site. Thus, he or she is the most suitable person for improving the tools by overcoming their limitations to enable faster and higher-quality analysis. However, most scientists are not software developers. Hence, it is necessary to provide them with an easy approach that enables non-software developers to improve and customize their tools. This paper presents an approach for easily improving and customizing any hydrogeological software. It is the result of experiences with updating several interdisciplinary case studies. The main insights of this approach have been demonstrated using four examples: MIX (FORTRAN-based), BrineMIX (C++-based), EasyQuim and EasyBal (both spreadsheet-based). The improved software has been proven to be a better tool for enhanced analysis by substantially reducing the computation time and the tedious processing of the input and output data files.
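
    The abstract does not give implementation details, but the kind of customization it describes, removing the tedious manual handling of a legacy solver's input and output files, can be sketched as a thin wrapper script. Everything below (binary name, file names, file formats) is a hypothetical illustration, not the actual MIX or BrineMIX interface.

```python
# Hypothetical wrapper around a legacy command-line solver: writes the input
# file, runs the binary, and parses the output, so the scientist no longer
# edits files by hand. Binary name, file names, and format are assumptions.
import subprocess
from pathlib import Path

def run_legacy_solver(samples, workdir="run01", binary="mix_solver"):
    work = Path(workdir)
    work.mkdir(exist_ok=True)

    # Write one whitespace-separated line per sample (illustrative format).
    infile = work / "input.dat"
    infile.write_text("\n".join(" ".join(map(str, s)) for s in samples))

    # Run the solver in the working directory and fail loudly on errors.
    subprocess.run([str(Path(binary).resolve()), infile.name], cwd=work, check=True)

    # Parse the (assumed) two-column numeric output back into Python objects.
    results = []
    for line in (work / "output.dat").read_text().splitlines():
        mixing_ratio, residual = map(float, line.split())
        results.append({"mixing_ratio": mixing_ratio, "residual": residual})
    return results
```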

    A reusable application framework for context-aware mobile patient monitoring systems

    The development of Context-aware Mobile Patient Monitoring Systems (CaMPaMS) using wireless sensors is very complex. To overcome this problem, the Context-aware Mobile Patient Monitoring Framework (CaMPaMF) was introduced as an ideal reuse technique to enhance the overall development quality and reduce the development complexity of CaMPaMS. While a few studies have designed reusable CaMPaMFs, little work has examined how to design and evaluate application frameworks based on multiple reusability aspects and multiple reusability evaluation approaches, and little work integrates the identified domain requirements of CaMPaMS. Therefore, the aim of this research is to design a reusable CaMPaMF for CaMPaMS. To achieve this aim, twelve methods were used: literature search, content analysis, concept matrix, feature modelling, use case assortment, domain expert review, the model-driven architecture approach, static code analysis, the reusability model approach, prototyping, amount-of-reuse calculation, and software expert review. The primary outcome of this research is a reusable CaMPaMF designed and evaluated to capture reusability from different aspects. CaMPaMF includes a domain model validated by consultant physicians as domain experts, an architectural model, a platform-independent model, a platform-specific model validated by software expert review, three CaMPaMS prototypes for monitoring patients with hypertension, epilepsy, or diabetes, and multiple reusability evaluation approaches. This research contributes to the body of software engineering knowledge, particularly in the design and evaluation of reusable application frameworks. Researchers can use the domain model to better understand CaMPaMS domain requirements and extend it with new requirements. Developers can reuse and extend CaMPaMF to develop various CaMPaMS for different diseases. Software companies can also reuse CaMPaMF to reduce the need to consult domain experts and the time required to build CaMPaMS from scratch, thus reducing development cost and time.
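
    The reuse-by-extension that the abstract describes can be sketched as a framework base class that fixes the monitoring workflow while disease-specific monitors supply only the varying parts. All class and method names below are invented for illustration and are not taken from CaMPaMF.

```python
# Illustrative sketch of framework-style reuse (names are invented, not from
# CaMPaMF): the base class fixes the monitoring workflow, and each disease-
# specific monitor only supplies the parts that vary.
from abc import ABC, abstractmethod

class PatientMonitor(ABC):
    """Reusable workflow: read sensors, evaluate context, alert if needed."""

    def run_once(self, sensor_readings, context):
        vitals = self.extract_vitals(sensor_readings)
        if self.is_abnormal(vitals, context):
            self.raise_alert(vitals)

    @abstractmethod
    def extract_vitals(self, sensor_readings): ...

    @abstractmethod
    def is_abnormal(self, vitals, context): ...

    def raise_alert(self, vitals):
        print(f"ALERT: abnormal readings {vitals}")

class HypertensionMonitor(PatientMonitor):
    """Disease-specific extension: only thresholds and parsing are new code."""

    def extract_vitals(self, sensor_readings):
        return {"systolic": sensor_readings["bp_sys"],
                "diastolic": sensor_readings["bp_dia"]}

    def is_abnormal(self, vitals, context):
        # Example rule: relax thresholds slightly while the patient exercises.
        margin = 20 if context.get("activity") == "exercise" else 0
        return vitals["systolic"] > 140 + margin or vitals["diastolic"] > 90 + margin

HypertensionMonitor().run_once({"bp_sys": 155, "bp_dia": 88}, {"activity": "rest"})
```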

    DISTANCE: a framework for software measure construction.

    In this paper we present a framework for software measurement that is specifically suited to satisfy the measurement needs of empirical software engineering research. The framework offers an approach to measurement that builds upon the easily imagined, detected and visualised concepts of similarity and dissimilarity between software entities. These concepts are used both to model the software attributes of interest and to define the corresponding software measures. Central to the framework is a process model that embeds constructive procedures for attribute modelling and measure construction into a goal-oriented approach to empirical software engineering studies. The underlying measurement theoretic principles of our approach ensure the construct validity of the resulting measures. The approach was tested on a popular suite of object-oriented design measures. We further show that our measure construction method compares favourably to related work.
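
    A loose, simplified reading of the distance-based idea can be sketched as follows: model each software entity by a set of properties, define dissimilarity as the size of the symmetric difference of those sets, and measure an attribute of an entity as its distance to a reference entity. This is only an illustrative sketch, not the framework's formal construction.

```python
# Simplified sketch of a distance-based measure (a loose reading of the idea,
# not the paper's formal construction): model each entity by a set of
# properties, define dissimilarity as the size of the symmetric difference,
# and measure an attribute as the distance to a reference entity.

def dissimilarity(entity_a: set, entity_b: set) -> int:
    """Number of properties in which the two entities differ."""
    return len(entity_a ^ entity_b)

# Attribute "interface size" of a class, modelled as the set of its methods;
# the reference entity is the class with no methods at all.
reference = set()
stack = {"push", "pop", "peek", "is_empty"}
queue = {"enqueue", "dequeue", "peek"}

print(dissimilarity(stack, reference))  # 4: interface size of stack
print(dissimilarity(queue, reference))  # 3: interface size of queue
print(dissimilarity(stack, queue))      # 5: how much the two interfaces differ
```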

    Restructuring Object-Oriented Designs Using a Metric-Driven Approach.

    The benefits of object-oriented software are now widely recognized. However, methodologies that are used to develop object-oriented software are still in their infancy. There is a lack of methods to assess the quality of the various components that are derived during the development process. The design of a system is a crucial component derived during the system development process. Little attention has been given to assessing object-oriented designs to determine their quality. There are metrics that can provide guidance for assessing the quality of a design. The objective of this research is to develop a system to evaluate object-oriented designs and to provide guidance for the restructuring of the design based on the results of the evaluation process. We identify a basic set of metrics that reflects the benefits of the object-oriented paradigm such as inheritance, encapsulation, and method interactions. Specifically, we include metrics that measure depth of inheritance, method usage, cardinality of subclasses, coupling, class responses, and cohesion. We define techniques to evaluate the metric values on existing object-oriented designs. We then define techniques to utilize the metric values to help restructure designs so that they conform to predetermined design criteria. These methods and techniques are implemented as a part of a Design Evaluation Assistant that automates much of the evaluation and restructuring process.
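
    The sketch below illustrates the general approach on a toy design model: compute two of the listed metrics, depth of inheritance (DIT) and cardinality of subclasses (NOC), and flag classes that violate predetermined design criteria. The model representation and the thresholds are assumptions made for illustration, not values from the dissertation.

```python
# Illustrative sketch (model shape and thresholds are assumptions): compute
# two of the listed metrics, depth of inheritance (DIT) and cardinality of
# subclasses (NOC), over a toy design and flag classes for restructuring.

# Design model: class name -> immediate parent (None for roots).
parents = {
    "Shape": None,
    "Polygon": "Shape",
    "Triangle": "Polygon",
    "Rectangle": "Polygon",
    "Square": "Rectangle",
    "Circle": "Shape",
}

def depth_of_inheritance(cls):
    depth = 0
    while parents[cls] is not None:
        cls = parents[cls]
        depth += 1
    return depth

def number_of_children(cls):
    return sum(1 for p in parents.values() if p == cls)

MAX_DIT, MAX_NOC = 2, 1  # illustrative design criteria, not published values

for cls in parents:
    dit, noc = depth_of_inheritance(cls), number_of_children(cls)
    flags = []
    if dit > MAX_DIT:
        flags.append("inheritance too deep")
    if noc > MAX_NOC:
        flags.append("too many subclasses")
    print(f"{cls:10s} DIT={dit} NOC={noc} {'; '.join(flags)}")
```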

    Reusable framework for web application development

    Web applications (WA) are among the mainstream enterprise-level software solutions. One of the reasons for this trend is the presence of Web application frameworks (WAF), which in many ways have helped web developers implement WAs as enterprise systems. However, developers report complexity issues when using existing WAFs. This study addresses this issue by investigating the generic issues that arise when developers utilize the Web as a platform to deliver enterprise-level applications. The investigation involves the identification of problems and challenges imposed by the architecture and technology of the Web itself, a study of software engineering (SE) knowledge adaptation for WA development, determination of factors that contribute to the complexity of WAF implementation, and a study of existing solutions for WA development proposed by previous works. To better understand the real issues faced by developers, hands-on experiments were conducted through development testing performed on selected WAFs. A new highly reusable WAF is proposed, derived from the experience of developing several WA case studies guided by the theoretical and technical knowledge previously established in the study. The proposed WAF was quantitatively and statistically evaluated in terms of its reusability and usability to gain insight into the complexity of the development approach proposed by the WAF. Reuse analysis results demonstrated that the proposed WAF exceeded the minimum target of 75% reuse at both the component and system levels, while the usability study results showed that almost all (15 out of 16) of the questionnaire items used to measure users’ attitudes towards the WAF were rated at least moderately by the respondents.
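
    A minimal sketch of the kind of reuse analysis mentioned above: the reuse percentage of each component (reused lines over total lines) and of the whole system, checked against the 75% target. The component names and figures are invented for illustration.

```python
# Illustrative amount-of-reuse calculation (component figures are made up):
# reuse percentage = reused LOC / total LOC, reported per component and for
# the whole system, then checked against the 75% target from the abstract.

components = {
    # name: (loc reused from the framework, loc newly written)
    "routing":     (1200, 150),
    "persistence": (2000, 400),
    "templating":  (900,  250),
    "auth":        (700,  300),
}

TARGET = 0.75

total_reused = total_new = 0
for name, (reused, new) in components.items():
    total_reused += reused
    total_new += new
    pct = reused / (reused + new)
    status = "ok" if pct >= TARGET else "below target"
    print(f"{name:12s} {pct:6.1%}  {status}")

system_pct = total_reused / (total_reused + total_new)
print(f"{'system':12s} {system_pct:6.1%}  {'ok' if system_pct >= TARGET else 'below target'}")
```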

    Software Engineering Laboratory Series: Collected Software Engineering Papers

    The Software Engineering Laboratory (SEL) is an organization sponsored by NASA/GSFC and created to investigate the effectiveness of software engineering technologies when applied to the development of application software. The activities, findings, and recommendations of the SEL are recorded in the Software Engineering Laboratory Series, a continuing series of reports that includes this document.

    Data cleaning techniques for software engineering data sets

    Data quality is an important issue which has been addressed and recognised in research communities such as data warehousing, data mining and information systems. It is widely agreed that poor data quality will impact the quality of results of analyses and will therefore impact decisions made on the basis of these results. Empirical software engineering has neglected the issue of data quality to some extent. This poses the question of how researchers in empirical software engineering can trust their results without addressing the quality of the analysed data. One widely accepted definition describes data quality as 'fitness for purpose', and the issue of poor data quality can be addressed either by introducing preventative measures or by applying means to cope with data quality issues. The research presented in this thesis addresses the latter, with a special focus on noise handling. Three noise handling techniques, which utilise decision trees, are proposed for application to software engineering data sets. Each technique represents a noise handling approach: robust filtering, where training and test sets are the same; predictive filtering, where training and test sets are different; and filtering and polish, where noisy instances are corrected. The techniques were first evaluated in two investigations by applying them to a large real-world software engineering data set. The first investigation tested the techniques' ability to improve predictive accuracy at differing noise levels. All three techniques improved predictive accuracy in comparison to the do-nothing approach, and the filtering and polish technique was the most successful in improving predictive accuracy. The second investigation, using the same large real-world data set, tested the techniques' ability to identify instances with implausible values. These instances were flagged for the purpose of evaluation before applying the three techniques. Robust filtering and predictive filtering decreased the number of instances with implausible values, but also substantially decreased the size of the data set. The filtering and polish technique actually increased the number of implausible values, but it did not reduce the size of the data set. Since the data set contained historical software project data, it was not possible to know the real extent of the noise detected. This led to the production of simulated software engineering data sets, which were modelled on the real data set used in the previous evaluations to ensure domain-specific characteristics. These simulated versions of the data set were then injected with noise, such that the real extent of the noise was known, and the three noise handling techniques were then applied to allow evaluation. This procedure of simulating software engineering data sets combined the domain-specific characteristics of real-world data with control over the simulated data, which is a particular strength of this evaluation approach. The results of the simulation evaluation showed that none of the techniques performed well. Robust filtering and filtering and polish performed very poorly and, based on the results of this evaluation, would not be recommended for the task of noise reduction. The predictive filtering technique was the best performing technique in this evaluation, but it did not perform particularly well either.
    An exhaustive systematic literature review was carried out to investigate to what extent the empirical software engineering community has considered data quality. The findings showed that the issue of data quality has been largely neglected by the empirical software engineering community. The work in this thesis highlights an important gap in empirical software engineering. It provides a clarification of, and distinction between, the terms noise and outliers: the two concepts overlap but are fundamentally different, and since they are often treated the same by noise handling techniques, a clarification of the two terms was necessary. To investigate the capabilities of noise handling techniques, a single investigation was deemed insufficient, because the distinction between noise and outliers is not trivial and the investigated noise cleaning techniques are derived from traditional noise handling techniques in which noise and outliers are combined. Therefore, three investigations were undertaken to assess the effectiveness of the three presented noise handling techniques, each of which should be seen as part of a multi-pronged approach. This thesis also highlights possible shortcomings of current automated noise handling techniques. The poor performance of the three techniques led to the conclusion that noise handling should be integrated into a data cleaning process in which the input of domain knowledge and the replicability of the data cleaning process are ensured.
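
    A minimal sketch of the filtering-and-polish idea described above, assuming scikit-learn and synthetic data: instances that a decision tree misclassifies under cross-validation are treated as noisy, and their class labels are replaced with the tree's predictions rather than being discarded. This is an illustration of the general approach, not the thesis's data set or exact procedure.

```python
# Minimal sketch of "filtering and polish": flag instances the decision trees
# misclassify out-of-fold, then correct (polish) their labels instead of
# dropping them. Synthetic data; names and parameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_informative=5, random_state=0)

# Inject label noise so there is something to clean.
noisy_idx = rng.choice(len(y), size=50, replace=False)
y_noisy = y.copy()
y_noisy[noisy_idx] = 1 - y_noisy[noisy_idx]

# Out-of-fold tree predictions flag suspect instances.
tree = DecisionTreeClassifier(max_depth=5, random_state=0)
predicted = cross_val_predict(tree, X, y_noisy, cv=10)
suspect = predicted != y_noisy

# Polish step: correct the suspect labels rather than dropping the rows.
y_polished = y_noisy.copy()
y_polished[suspect] = predicted[suspect]

print(f"flagged {suspect.sum()} suspect instances out of {len(y)}")
print(f"label agreement with ground truth before: {(y_noisy == y).mean():.1%}, "
      f"after polish: {(y_polished == y).mean():.1%}")
```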