
    The consistency of empirical comparisons of regression and analogy-based software project cost prediction

    OBJECTIVE – to determine the consistency within and between results in empirical studies of software engineering cost estimation. We focus on regression and analogy techniques, as these are commonly used. METHOD – we conducted an exhaustive search using predefined inclusion and exclusion criteria and identified 67 journal papers and 104 conference papers. From this sample we identified 11 journal papers and 9 conference papers that used both methods. RESULTS – our analysis found that about 25% of studies were internally inconclusive. We also found approximately equal evidence in favour of, and against, analogy-based methods. CONCLUSIONS – we confirm the lack of consistency in the findings and argue that this inconsistent pattern from 20 different studies comparing regression and analogy is somewhat disturbing. It suggests that we need to ask more detailed questions than just: “What is the best prediction system?”
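
    As context, the sketch below (Python, illustrative only and not taken from the paper) shows the two families of prediction system being compared: a least-squares regression model and an analogy-based, k-nearest-neighbour estimator, each evaluated with leave-one-out cross-validation on invented project data.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.model_selection import LeaveOneOut

    # Invented project data: one size driver (e.g. function points) and effort in person-months.
    X = np.array([[120], [250], [80], [400], [310], [150], [500], [90]])
    y = np.array([14.0, 30.0, 9.0, 55.0, 41.0, 18.0, 70.0, 11.0])

    for name, model in [("regression", LinearRegression()),
                        ("analogy, k=2", KNeighborsRegressor(n_neighbors=2))]:
        preds = np.empty_like(y)
        for train_idx, test_idx in LeaveOneOut().split(X):
            model.fit(X[train_idx], y[train_idx])
            preds[test_idx] = model.predict(X[test_idx])
        # Mean absolute error of the leave-one-out predictions for this technique.
        print(f"{name}: MAE = {np.mean(np.abs(y - preds)):.2f} person-months")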

    A comparative analysis of maintainability approaches for web applications

    Web applications incorporate important business assets and offer a convenient way for businesses to promote their services through the internet. Many of these web applications have evolved from simple HTML pages to complex applications that have a high maintenance cost. The high maintenance cost of web applications is due to the inherent characteristics of web applications, to the fast evolution of the internet, and to a pressing market that imposes short development cycles and frequent modifications. In order to control the maintenance cost, quantitative metrics and models for predicting web applications' maintainability must be used. Since web applications are different from traditional software systems, models and metrics for traditional systems cannot be applied to web applications: web applications have special features, such as hypertext structure, dynamic code generation and heterogeneity, that cannot be captured by traditional and object-oriented metrics. In this paper, we provide a comparative analysis of the different approaches for predicting web applications' maintainability.

    Design metrics for web application maintainability measurement

    Many web applications have evolved from simple HTML pages to complex applications that have a high maintenance cost. This high maintenance cost is due to the heterogeneity of web applications, to fast internet evolution, and to the fast-moving market that imposes short development cycles and frequent modifications. In order to control the maintenance cost, quantitative metrics for predicting web applications' maintainability must be used. This paper provides an exploratory study of new design metrics for measuring the maintainability of web applications from class diagrams. The metrics are based on the Web Application Extension (WAE) for UML and measure the following design attributes: size, complexity, coupling and reusability. In this study the metrics are applied to two web applications from the telecommunications domain.
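
    Purely as an illustration, the sketch below shows how simple size and coupling counts might be taken from a parsed class-diagram model; the Component structure and the counting rules are assumptions made for this example and do not reproduce the paper's WAE-based metric definitions.

    from dataclasses import dataclass, field

    @dataclass
    class Component:
        name: str
        stereotype: str              # e.g. "client page", "server page", "class"
        attributes: int = 0
        operations: int = 0
        associations: list = field(default_factory=list)  # names of associated components

    def size_metric(components):
        # Size taken here as total attributes + operations across the diagram.
        return sum(c.attributes + c.operations for c in components)

    def coupling_metric(components):
        # Coupling taken here as the total number of association ends in the diagram.
        return sum(len(c.associations) for c in components)

    diagram = [
        Component("Login", "client page", attributes=2, operations=1, associations=["Auth"]),
        Component("Auth", "server page", attributes=1, operations=3, associations=["UserDAO"]),
        Component("UserDAO", "class", attributes=4, operations=5),
    ]
    print(size_metric(diagram), coupling_metric(diagram))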

    Risk based analogy for e-business estimation


    Pragmatic cost estimation for web applications

    Cost estimation for web applications is an interesting and difficult challenge for researchers and industrial practitioners, and a particularly valuable area of ongoing commercial research. Attaining accurate cost estimates for web applications is essential for providing competitive bids and remaining successful in the market. The development of prediction techniques over the past thirty years has contributed several different strategies, yet there is no collective evidence giving substantial advice or guidance to industrial practitioners. To address this problem, this thesis investigates the characteristics of datasets by combining literature review and industrial survey findings. The results of the systematic literature review, the industrial survey and an initial investigation led to an understanding that dataset characteristics may influence cost estimation prediction techniques. From this, an investigation of dataset characteristics was carried out; however, it proved neither practical nor easy to derive a defined structure of dataset characteristics to use as a basis for prediction model selection. The thesis therefore develops a pragmatic cost estimation strategy based on collected advice and generally sound practice in cost estimation. The strategy is composed of the following five steps: test whether the predictions are better than the mean of the dataset; test the predictions using accuracy measures such as MMRE, Pred and MAE, knowing their strengths and weaknesses; investigate the prediction models formed to see whether they are sensible and reasonable; perform significance testing on the predictions; and compute the effect size to establish preference relations between prediction models. The results from this pragmatic cost estimation strategy not only give advice on several techniques to choose from, but also give reliable results, and practitioners can be more confident about the estimates produced by following it. It can be concluded that practitioners should focus on the best strategy to apply in cost estimation rather than on the best techniques. This pragmatic cost estimation strategy could therefore help researchers and practitioners to obtain reliable results, and its improvement and replication over time will produce much more useful and trusted results.
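
    For reference, here is a minimal sketch of the accuracy measures named in the second step of the strategy (MMRE, Pred and MAE); the 25% threshold used for Pred is a conventional choice rather than one prescribed by the thesis.

    import numpy as np

    def mmre(actual, predicted):
        # Mean Magnitude of Relative Error: mean of |actual - predicted| / actual.
        return np.mean(np.abs(actual - predicted) / actual)

    def pred(actual, predicted, level=0.25):
        # Pred(l): proportion of estimates whose relative error is within l of the actual value.
        mre = np.abs(actual - predicted) / actual
        return np.mean(mre <= level)

    def mae(actual, predicted):
        # Mean Absolute Error: unscaled, and less prone to bias than MMRE.
        return np.mean(np.abs(actual - predicted))

    actual = np.array([10.0, 25.0, 40.0, 12.0])
    predicted = np.array([12.0, 20.0, 45.0, 11.0])
    print(mmre(actual, predicted), pred(actual, predicted), mae(actual, predicted))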

    A novel model for improving the maintainability of web-based systems

    Web applications incorporate important business assets and offer a convenient way for businesses to promote their services through the internet. Many of these web applications have evolved from simple HTML pages to complex applications that have a high maintenance cost. This is due to the inherent characteristics of web applications, to the fast evolution of the internet and to a pressing market that imposes short development cycles and frequent modifications. In order to control the maintenance cost, quantitative metrics and models for predicting web applications' maintainability must be used. Maintainability metrics and models can be useful for predicting maintenance cost and risky components, and can help in assessing and choosing between different software artifacts. Since web applications are different from traditional software systems, models and metrics for traditional systems cannot be applied with confidence to web applications. Web applications have special features, such as hypertext structure, dynamic code generation and heterogeneity, that cannot be captured by traditional and object-oriented metrics. This research explores empirically the relationships between new UML design metrics based on Conallen's extension for web applications and maintainability. UML web design metrics are used to gauge whether the maintainability of a system can be improved, by comparing and correlating the results with different measures of maintainability. We studied the relationship between our UML metrics and the following maintainability measures: Understandability Time (the time spent on understanding the software artifact in order to complete the questionnaire), Modifiability Time (the time spent on identifying places for modification and making those modifications on the software artifact), LOC (the absolute net value of the total number of lines added and deleted for components in a class diagram), and nRev (the total number of revisions for components in a class diagram). Our results give an indication that a relationship may exist between our metrics and Modifiability Time; however, the results did not show a statistically significant effect of the metrics on Understandability Time. Our results showed that there is a relationship between our metrics and LOC. We found that the metrics NAssoc, NClientScriptsComp, NServerScriptsComp and CoupEntropy explained the effort measured by LOC, and that the NC and CoupEntropy metrics explained the effort measured by nRev. Our results give a first indication of the usefulness of the UML design metrics and show that there is a reasonable chance that useful prediction models can be built from early UML design metrics.
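
    The sketch below illustrates the kind of correlation analysis such a study might use to relate UML design metrics (e.g. NAssoc, CoupEntropy) to maintainability measures such as LOC and nRev; the figures and the choice of Spearman's rank correlation are assumptions for illustration, not the paper's data or method.

    import numpy as np
    from scipy.stats import spearmanr

    # Hypothetical per-component measurements.
    metrics = {
        "NAssoc":      np.array([3, 5, 2, 8, 6, 4]),
        "CoupEntropy": np.array([0.9, 1.4, 0.5, 2.1, 1.7, 1.1]),
    }
    loc_changed = np.array([40, 75, 20, 160, 110, 60])   # lines added + deleted
    n_revisions = np.array([2, 4, 1, 7, 5, 3])            # nRev

    for name, values in metrics.items():
        for label, outcome in (("LOC", loc_changed), ("nRev", n_revisions)):
            # Spearman's rank correlation between a design metric and a maintainability measure.
            rho, p = spearmanr(values, outcome)
            print(f"{name} vs {label}: rho={rho:.2f}, p={p:.3f}")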

    Evaluating readability as a factor in information security policies

    This thesis was previously held under moratorium from 26/11/19 to 26/11/21.

    Policies should be treated as rules or principles that individuals can readily comprehend and follow, as a prerequisite to any organisational requirement to obey and enact regulations. This dissertation attempts to highlight one of the important factors to consider before issuing any policy that staff members are required to follow. Presently, there is no ready mechanism for estimating the likely efficacy of such policies across an organisation. One factor that has a plausible impact upon the comprehensibility of policies is their readability. Researchers have designed a number of software readability metrics that evaluate how difficult a passage is to comprehend; yet little is known about the impact of readability on the interpretation of information security policies and whether analysis of readability may prove to be a useful insight. This thesis describes the first study to investigate the feasibility of applying readability metrics as an indicator of policy comprehensibility, through a mixed methods approach with the formulation and implementation of a seven-phase sequential exploratory fully mixed methods design, each phase established in light of the outcomes of the previous one. The methodological approach of this research study is one of the distinguishing characteristics reported in the thesis, and was as follows:
    * eight policies were selected (from a combination of academia and industry sector institutes);
    * specialists were asked for their insights on key policy elements;
    * focus group interviews were conducted;
    * comprehension tests (Cloze tests) were developed;
    * a pilot study of the comprehension tests was organised (preceded by a small-scale test);
    * a main study of the comprehension tests was performed with 600 participants, reduced to 396 after validation;
    * the comprehension results were compared against the readability metrics.
    The results reveal that the traditional readability metrics are ineffective in predicting human comprehension. Nevertheless, readability, as measured using a bespoke readability metric, may yield useful insight into the likely difficulty that end-users face in comprehending a written text. Our study therefore aims to provide an effective approach to enhancing the comprehensibility of information security policies and to afford a facility for future research in this area. The research contributes to our understanding of readability in general and offers an optimal technique for measuring readability in particular. We recommend immediate corrective actions to enhance the ease of comprehension of information security policies. In part, this may reduce instances where users avoid fully reading information security policies, and may also increase the likelihood of user compliance. We suggest that the application of appropriately selected readability assessments may assist policy makers in testing their draft policies for ease of comprehension before release. Indeed, there may be grounds for a readability compliance test that future information security policies must satisfy.
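
    For illustration, here is a minimal sketch of one traditional readability metric of the kind the thesis evaluates (Flesch Reading Ease); the syllable counter is a crude heuristic, and the thesis's bespoke metric is not reproduced here.

    import re

    def count_syllables(word):
        # Rough heuristic: count groups of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text):
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z]+", text)
        n = max(1, len(words))
        syllables = sum(count_syllables(w) for w in words)
        # Flesch Reading Ease: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word).
        # Higher scores indicate text that is easier to read.
        return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

    print(flesch_reading_ease("Users must not share passwords. Report incidents promptly."))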