197 research outputs found

    Weighted Class Complexity: A Measure of Complexity for Object Oriented System

    Software complexity metrics are used to predict critical information about the reliability and maintainability of software systems. Object-oriented software development requires a different approach to software complexity metrics. In this paper, we propose a metric that computes the structural and cognitive complexity of a class by associating a weight with it, called Weighted Class Complexity (WCC). In contrast to other metrics used for object-oriented systems, the proposed metric calculates the complexity of a class due to its methods and attributes in terms of cognitive weight. The proposed metric is demonstrated on object-oriented examples. Theoretical and practical evaluations based on information theory show that the proposed metric is on a ratio scale and satisfies most of the parameters required by measurement theory.
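
    The abstract does not spell out the WCC formula. The sketch below assumes, per the description, that a class's complexity is the number of its attributes plus the sum of its methods' cognitive weights; the weight table and the linear (additive) composition of weights are illustrative assumptions, not the paper's exact definitions.

```python
# Hypothetical sketch of Weighted Class Complexity (WCC).
# Assumption: WCC(class) = |attributes| + sum of the cognitive weights
# of its methods, with each method's weight derived from the basic
# control structures (BCS) it contains. Weight values follow the
# commonly cited table (sequence=1, branch=2, iteration=3, ...).

BCS_WEIGHTS = {
    "sequence": 1,
    "branch": 2,       # if-then-else
    "case": 3,         # switch/case
    "iteration": 3,    # for / while / repeat
    "call": 2,         # function call
    "recursion": 3,
}

def method_cognitive_weight(structures):
    """Sum the cognitive weights of the BCS found in one method."""
    return sum(BCS_WEIGHTS[s] for s in structures)

def weighted_class_complexity(num_attributes, methods):
    """methods: one list of BCS names per method of the class."""
    return num_attributes + sum(method_cognitive_weight(m) for m in methods)

# Example: a class with 3 attributes and two methods --
# one containing a loop and a branch, one purely sequential.
wcc = weighted_class_complexity(3, [["iteration", "branch"], ["sequence"]])
print(wcc)  # 3 + (3 + 2) + 1 = 9
```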

    Evaluation Criteria for Object-oriented Metrics

    In this paper, an evaluation model for object-oriented (OO) metrics is proposed. We evaluate the existing evaluation criteria for OO metrics and, based on the observations, propose a model that covers most of the features required for the evaluation of OO metrics. The model is validated by applying it to existing OO metrics. In contrast to other existing criteria, the proposed model is simple to implement and includes the practical and important aspects of evaluation; hence it is suitable for evaluating and validating any OO complexity metric.

    A Complexity Measure Based on Cognitive Weights

    Cognitive informatics plays an important role in understanding the fundamental characteristics of software. This paper proposes a measure of one such characteristic, complexity, in terms of the cognitive weights of basic control structures. Cognitive weights express the degree of difficulty, or the relative time and effort, required to comprehend a given piece of software, which satisfies the definition of complexity. An attempt is also made to establish the robustness of the proposed complexity measure by comparing it with other measures based on cognitive informatics.
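
    As a hedged illustration of how cognitive weights could compose into a single measure: Wang's model is commonly summarized as "weights of sequential structures add, weights of nested structures multiply"; the paper's exact formulation may differ from this sketch.

```python
# Illustrative sketch (not necessarily the paper's exact definition):
# total cognitive weight of a piece of software, where weights of
# sequentially composed BCS add and weights of nested BCS multiply.

from functools import reduce

def nested_weight(weights):
    """Weight of a chain of nested BCS: the product of their weights."""
    return reduce(lambda a, b: a * b, weights, 1)

def total_cognitive_weight(linear_blocks):
    """linear_blocks: one nesting chain (list of BCS weights) per
    sequential block. Blocks add; nesting within a block multiplies."""
    return sum(nested_weight(block) for block in linear_blocks)

# Example: an if (weight 2) containing a while loop (weight 3),
# followed by a plain sequence (weight 1):
print(total_cognitive_weight([[2, 3], [1]]))  # 2*3 + 1 = 7
```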

    Weak Measurement Theory and Modified Cognitive Complexity Measure

    Measurement remains a difficult problem in software engineering. Traditional measurement theory has a major problem in defining empirical observations on software entities in terms of their measured quantities; Morasca attempted to solve this by proposing weak measurement theory. In this paper, we evaluate the applicability of weak measurement theory by applying it to a newly proposed Modified Cognitive Complexity Measure (MCCM). We also investigate the applicability of the weak extensive structure for deciding the type of scale for MCCM. It is observed that MCCM is on a weak ratio scale.

    Cognitive Complexity Applied to Software Development: An Automated Procedure to Reduce the Comprehension Effort

    The cognitive complexity of a software application determines the amount of human effort required to comprehend its internal logic, which makes it a subjective measurement. Quantifying cognitive complexity as a metric is problematic, since the factors entering the computation do not capture exact human cognition; the determination of cognitive complexity therefore requires expansion beyond its quantification. The human comprehension effort associated with a software application arises in every phase of its development process. Correct requirements identification and accurate logical diagram generation prior to code implementation lead to proper logical identification of software applications. Moreover, human comprehension is essential for software maintenance: defect identification, defect correction, and the handling of code-quality issues cannot be carried out without good comprehension. Cognitive complexity can therefore be applied to characterize human understandability in the phases of requirements analysis, design, defect tracking, and code-quality optimization. This study automated the above-mentioned phases to reduce the manual cognitive load and thereby the cognitive complexity. Compared to manual procedures, the proposed system improved the average accuracy of requirements analysis and class diagram generation by 14.44%, and achieved a 9.89% average accuracy increase for defect tracking and code-quality issues.

    Document Type Definition (DTD) Metrics

    In this paper, we present two complexity metrics for assessing the quality of schemas written in the Document Type Definition (DTD) language. Both the "Entropy (E) metric: E(DTD)" and the "Distinct Structured Element Repetition Scale (DSERS) metric: DSERS(DTD)" are intended to measure the structural complexity of DTD schemas. These metrics exploit a directed-graph representation of the schema document and account for the complexity a schema accrues from similarly structured elements and the occurrences of those elements. Empirical and theoretical validations of these metrics demonstrate their robustness.
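
    The abstract does not give the exact definition of E(DTD). A minimal sketch follows, assuming the metric is the Shannon entropy of the distribution of similarly structured element classes; the "structure signature" representation is a placeholder assumption, not the paper's construction.

```python
# Hedged sketch of an entropy-style schema metric: group a DTD's
# elements into classes of similarly structured elements and compute
# the Shannon entropy of the class-size distribution.

import math
from collections import Counter

def dtd_entropy(element_structures):
    """element_structures: one hashable 'structure signature' per element."""
    counts = Counter(element_structures)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Example: five elements, three sharing one structure, two sharing another.
print(dtd_entropy(["a", "a", "a", "b", "b"]))  # ~0.971 bits
```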

    Coupling Complexity Metric: A Cognitive Approach


    An Approach for the Empirical Validation of Software Complexity Measures

    Software metrics are widely accepted tools for controlling and assuring software quality. A large number of software metrics covering a variety of content can be found in the literature; however, most of them are not adopted in industry because they are seen as irrelevant to needs and unsupported, and the major reason behind this is improper empirical validation. This paper identifies possible root causes of the improper empirical validation of software metrics, and proposes a practical model for empirical validation that addresses those root causes. The model is validated by applying it to recently proposed and well-known metrics.