5,438 research outputs found

    A research review of quality assessment for software

    Measures were recommended to assess the quality of software submitted to the AdaNet program. The quality factors that are important to software reuse are explored and methods of evaluating those factors are discussed. The quality factors important to software reuse are: correctness, reliability, verifiability, understandability, modifiability, and certifiability. Certifiability is included because documented properties of a software component, such as its efficiency, portability, and development history, constitute a class of factors that are important to some users, not important at all to others, and impossible for AdaNet to distinguish between a priori. The quality factors may be assessed in different ways. A few quantitative measures have been shown to indicate software quality. However, many factors believed to indicate quality have not been empirically validated because of their subjective nature. These subjective factors are characterized by the way in which they support the software engineering principles of abstraction, information hiding, modularity, localization, confirmability, uniformity, and completeness.
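    As a rough illustration only (not the AdaNet scheme), the reuse-quality factors named above could be recorded per component and screened with a simple checklist; the data layout, 1-5 rating scale, and threshold in the Python sketch below are assumptions.

# Hypothetical sketch: recording reuse-quality factors for a component as a
# simple checklist. The factor names come from the abstract above; the data
# layout and the 1-5 scoring are illustrative assumptions, not AdaNet's scheme.
REUSE_QUALITY_FACTORS = (
    "correctness", "reliability", "verifiability",
    "understandability", "modifiability", "certifiability",
)

def assess(component_ratings, threshold=3):
    """Flag factors that are unrated or rated below the threshold (1-5 scale assumed)."""
    unrated = [f for f in REUSE_QUALITY_FACTORS if f not in component_ratings]
    weak = [f for f, score in component_ratings.items() if score < threshold]
    return {"unrated": unrated, "needs_attention": weak}

ratings = {"correctness": 5, "reliability": 4, "understandability": 2}
print(assess(ratings))
# {'unrated': ['verifiability', 'modifiability', 'certifiability'],
#  'needs_attention': ['understandability']}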

    Evaluating defect prediction approaches: a benchmark and an extensive comparison

    Reliably predicting software defects is one of the holy grails of software engineering. Researchers have devised and implemented a plethora of defect/bug prediction approaches varying in terms of accuracy, complexity and the input data they require. However, the absence of an established benchmark makes it hard, if not impossible, to compare approaches. We present a benchmark for defect prediction, in the form of a publicly available dataset consisting of several software systems, and provide an extensive comparison of well-known bug prediction approaches, together with novel approaches we devised. We evaluate the performance of the approaches using different performance indicators: classification of entities as defect-prone or not, and ranking of the entities, with and without taking into account the effort to review an entity. We performed three sets of experiments aimed at (1) comparing the approaches across different systems, (2) testing whether the differences in performance are statistically significant, and (3) investigating the stability of approaches across different learners. Our results indicate that, while some approaches perform better than others in a statistically significant manner, external validity in defect prediction is still an open problem, as generalizing results to different contexts/learners proved to be a partially unsuccessful endeavor.
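    A minimal sketch of the two evaluation views described above, classification of entities as defect-prone and an effort-aware ranking; the function names, toy data, and the 20% effort budget in the Python sketch below are illustrative assumptions, not the benchmark's actual protocol.

# Hypothetical sketch: evaluating a defect predictor two ways, as described in
# the abstract above. All names, data, and the effort cutoff are assumptions.

def precision_recall(predicted_prone, actually_defective):
    """Classification view: entities flagged defect-prone vs. ground truth."""
    pairs = list(zip(predicted_prone, actually_defective))
    tp = sum(1 for p, a in pairs if p and a)
    fp = sum(1 for p, a in pairs if p and not a)
    fn = sum(1 for p, a in pairs if not p and a)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def defects_found_at_effort(risk_scores, defect_counts, loc, effort_fraction=0.2):
    """Effort-aware view: rank entities by predicted risk and report the share
    of defects contained in the entities inspectable within the effort budget
    (here, a fraction of the total lines of code)."""
    ranked = sorted(zip(risk_scores, defect_counts, loc), key=lambda t: -t[0])
    budget = effort_fraction * sum(loc)
    spent = found = 0
    for _, defects, size in ranked:
        if spent + size > budget:
            break
        spent += size
        found += defects
    total = sum(defect_counts)
    return found / total if total else 0.0

# Toy usage with made-up per-file data.
scores  = [0.9, 0.2, 0.7, 0.1]
defects = [3, 0, 1, 0]
loc     = [200, 500, 300, 100]
print(precision_recall([s > 0.5 for s in scores], [d > 0 for d in defects]))  # (1.0, 1.0)
print(defects_found_at_effort(scores, defects, loc))                          # 0.75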

    Analysis of Slice-Based Metrics for Aspect-Oriented Programs

    To improve separation of concerns in software design and implementation, the technique of Aspect-Oriented Programming (AOP) was introduced. However, AOP introduces features such as aspects, advice, point-cuts, and join-points that existing intermediate graph representations cannot capture. In our work we define a new intermediate graph representation for aspect-oriented programs: a system dependence graph (SDG) that incorporates aspect-oriented features and is constructed automatically by analysing the bytecode of the program. After constructing the SDG, we propose a slicing algorithm that uses this intermediate graph to compute slices for a given aspect-oriented program. Program slicing has numerous applications in software engineering activities such as debugging, testing, maintenance, and model checking. To implement the proposed slicing technique, we developed a prototype tool that takes an aspect-oriented program as input and computes its slices using the proposed algorithm. To evaluate the technique, we considered case studies drawn from open source projects. A comparative study of our slicing algorithm against existing algorithms shows that our approach is an efficient and scalable approach to slicing aspect-oriented programs across different applications. Software metrics are used to measure certain aspects of software. Using the slicing approach, we compute eight software metrics that quantitatively and qualitatively analyse the whole aspect project. We have compiled a metrics suite for AOP, and an automated prototype tool has been developed to support the software development life cycle (SDLC).
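    A loose sketch of the slicing step alone, computed as reverse reachability over a dependence graph; the graph, node names, and edge convention in the Python sketch below are hypothetical and do not reproduce the authors' SDG construction or their AOP-specific algorithm.

# Hypothetical sketch: backward slicing as reverse reachability over a
# dependence graph. Edges point from an influencing statement to the statement
# that depends on it; the example graph is made up, not the paper's SDG.
from collections import defaultdict, deque

def backward_slice(dependence_edges, criterion):
    """Return the criterion node plus every node it transitively depends on."""
    depends_on = defaultdict(set)
    for source, target in dependence_edges:   # source influences target
        depends_on[target].add(source)
    slice_nodes, worklist = {criterion}, deque([criterion])
    while worklist:
        node = worklist.popleft()
        for dep in depends_on[node]:
            if dep not in slice_nodes:
                slice_nodes.add(dep)
                worklist.append(dep)
    return slice_nodes

# Toy dependence graph; the advice node mimics an aspect woven at a join-point.
edges = [("s1", "s2"), ("s2", "s4"), ("advice_before_s3", "s3"), ("s3", "s4")]
print(backward_slice(edges, "s4"))
# {'s4', 's2', 's3', 's1', 'advice_before_s3'} (set order may vary)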

    Measuring Stability of Object-Oriented Software Packages


    Software Engineering Laboratory Series: Collected Software Engineering Papers

    The Software Engineering Laboratory (SEL) is an organization sponsored by NASA/GSFC and created to investigate the effectiveness of software engineering technologies when applied to the development of application software. The activities, findings, and recommendations of the SEL are recorded in the Software Engineering Laboratory Series, a continuing series of reports that includes this document.