3 research outputs found

    Design of Quality Model during Reengineering of Legacy System

    The purpose of this paper is to design a model that improves the quality of a system during the reengineering[1] of a legacy system; for this reason the model is known as the Quality Model. During reengineering of an object-oriented system[2,3], the methodology is designed to create a bridge between problem detection and problem correction in the legacy system, while simultaneously improving the object-oriented design so that it can be reused during later reengineering and has reduced complexity compared with the original object-oriented design[4,5,6]. The legacy system is further designed so as to specify how branches are selected, how behavior is preserved, and how code transformations are applied. The quality of the model depends on two factors, favor and disfavor, attached to each branch; software quality is directly proportional to maintenance cost. The Quality Model is used for two purposes: sketch and blueprint. A sketch is used as a thinking tool that helps developers communicate some aspects of a system and the alternatives for what is about to be done. A blueprint is intended to be comprehensive and definitive, and is used to guide the implementation. A scoring sketch for the favor/disfavor factors follows below.
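    The branch selection driven by favor and disfavor factors can be pictured with a small sketch. The code below is illustrative only: it assumes the favor and disfavor factors are numeric weights attached to each candidate reengineering branch and that a branch's net contribution is simply favor minus disfavor; the names and the aggregation rule are assumptions, not the paper's definitions.

```python
# Hypothetical sketch of favor/disfavor scoring for reengineering branches.
# The numeric weights and the "favor - disfavor" rule are assumptions.
from dataclasses import dataclass

@dataclass
class Branch:
    name: str
    favor: float     # assumed weight in favor of taking this branch
    disfavor: float  # assumed weight against taking this branch

def branch_score(branch: Branch) -> float:
    """Net quality contribution of one branch (assumed: favor minus disfavor)."""
    return branch.favor - branch.disfavor

def select_branches(branches: list[Branch]) -> list[Branch]:
    """Keep only branches whose favor outweighs their disfavor."""
    return [b for b in branches if branch_score(b) > 0]

if __name__ == "__main__":
    candidates = [
        Branch("extract-adapter-layer", favor=0.8, disfavor=0.3),
        Branch("rewrite-persistence", favor=0.4, disfavor=0.6),
    ]
    for b in select_branches(candidates):
        print(f"selected {b.name}: score={branch_score(b):.2f}")
```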

    IS Reviews 1995


    Predicting Software Size and Development Effort: Models Based on Stepwise Refinement

    This study designed a Software Size Model and an Effort Prediction Model, then performed an empirical analysis of these two models. Each model design began with identifying its objectives, which led to describing the concept to be measured and the meta-model. The numerical assignment rules were then developed, providing a basis for size measurement and effort prediction across software engineering projects. The Software Size Model was designed to test the hypothesis that a software size measure represents the amount of knowledge acquired and stored in software artifacts, and the amount of time it took to acquire and store this knowledge. The Effort Prediction Model is based on the estimation-by-analogy approach and was designed to test the hypothesis that this model will produce reasonably close predictions when it uses historical data that conforms to the Software Size Model. The empirical study implemented each model, collected and recorded software size data from software engineering project deliverables, simulated effort prediction using the jackknife approach, and computed the absolute relative error and magnitude of relative error (MRE) statistics. The study resulted in 35.3% of the predictions having an MRE value at or below 25%, which satisfies the criterion established for the study of having at least 31% of the predictions with an MRE of 25% or less. This study is significant for three reasons. First, no subjective factors were used to estimate effort; eliminating subjective factors removes a source of error in the predictions and makes the study easier to replicate. Second, both models were described using metrology and measurement theory principles, which allows others to consistently implement the models and to modify them while maintaining the integrity of the models' objectives. Third, the study's hypotheses were validated even though the software artifacts used to collect the software size data varied significantly in both content and quality. Recommendations for further study include applying the Software Size Model to other data-driven estimation models, collecting and using software size data from industry projects, looking at alternatives for how text-based software knowledge is identified and counted, and studying the impact of project cycles and project roles on predicting effort.
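    The evaluation described above (estimation by analogy, a jackknife simulation, and the MRE statistic with a 25% threshold) can be sketched as follows. This is a minimal illustration, not the study's implementation: the nearest-neighbour-by-size rule, the size-ratio adjustment, and the sample data are assumptions made for the example.

```python
# Illustrative sketch: leave-one-out (jackknife) estimation by analogy from a
# single size measure, scored with the magnitude of relative error (MRE).
def predict_by_analogy(size, history):
    """Predict effort from the historical project closest in size,
    scaled by the ratio of the sizes (assumed adjustment rule)."""
    nearest = min(history, key=lambda p: abs(p["size"] - size))
    return nearest["effort"] * (size / nearest["size"])

def mre(actual, predicted):
    """Magnitude of relative error: |actual - predicted| / actual."""
    return abs(actual - predicted) / actual

def jackknife_mre(projects):
    """For each project, predict its effort using all the other projects."""
    errors = []
    for i, p in enumerate(projects):
        history = projects[:i] + projects[i + 1:]
        errors.append(mre(p["effort"], predict_by_analogy(p["size"], history)))
    return errors

if __name__ == "__main__":
    projects = [  # hypothetical (size, effort) pairs
        {"size": 120, "effort": 30}, {"size": 200, "effort": 55},
        {"size": 90, "effort": 22}, {"size": 310, "effort": 80},
    ]
    errors = jackknife_mre(projects)
    pred_25 = sum(e <= 0.25 for e in errors) / len(errors)
    print(f"PRED(0.25) = {pred_25:.1%}")  # share of predictions with MRE <= 25%
```

    The PRED(0.25) value printed here corresponds to the quantity the abstract reports as 35.3%: the share of predictions with an MRE of 25% or less.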