
    Formal software measurements for object-oriented business models.

    This paper presents a set of metrics and pseudo-metrics for the measurement of conceptual distances in M.E.R.O.D.E. business models. The measures are developed and validated using measure and measurement theory. It is argued that this metrics set constitutes a strong formal basis for the further assessment and prediction of relevant internal and external attributes of object-oriented specifications.

    Keywords: object type, business model, conceptual distance, measure theory, measurement theory, metric, pseudo-metric, scale type, measure validation.
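    The distinction between metrics and pseudo-metrics drawn above can be made concrete with a small axiom checker. The sketch below is illustrative and not taken from the paper: it verifies the pseudo-metric axioms over a finite point set, then tests the identity of indiscernibles, which is the one axiom separating true metrics from pseudo-metrics.

```python
from itertools import product

def check_axioms(points, d):
    """Classify a distance function over a finite point set.

    A pseudo-metric satisfies non-negativity, d(x, x) = 0, symmetry
    and the triangle inequality; a metric additionally requires that
    d(x, y) = 0 implies x == y (identity of indiscernibles).
    """
    for x, y in product(points, repeat=2):
        if d(x, y) < 0 or d(x, y) != d(y, x) or d(x, x) != 0:
            return "not a pseudo-metric"
    for x, y, z in product(points, repeat=3):
        if d(x, z) > d(x, y) + d(y, z):
            return "not a pseudo-metric"
    if any(d(x, y) == 0 and x != y for x, y in product(points, repeat=2)):
        return "pseudo-metric"
    return "metric"

# A distance that looks at one attribute only: distinct points can be
# 0 apart, so it is a pseudo-metric but not a metric.
points = [(0, 0), (0, 1), (3, 1)]
d = lambda a, b: abs(a[0] - b[0])
print(check_axioms(points, d))  # pseudo-metric
```

    Conceptual-distance measures between object types are often pseudo-metrics precisely because two distinct model elements can be indistinguishable under the chosen abstraction.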

    Measurement of Cognitive Functional Sizes of Software.

    Measurement is one of the major issues in software engineering. Since traditional measurement theory has problems in defining empirical observations on software entities in terms of their measured quantities, Morasca proposed weak measurement theory to address this problem. Furthermore, in calculating the complexity of software, the emphasis is mostly placed on computational complexity, algorithmic complexity and functional complexity, which estimate time, effort, computability and efficiency. On the other hand, the understandability and comprehensibility of software, which involve human interaction, are neglected in existing complexity measures. Recently, cognitive complexity (CC) was proposed to fill this gap by calculating the architectural and operational complexity of software. In this paper, we evaluate CC against the principles of weak measurement theory. We find that the approach for measuring CC is more realistic and practical than existing approaches and satisfies most of the requirements of measurement theory.
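    For readers unfamiliar with cognitive measures, the core idea is to weight each basic control structure (BCS) by the cognitive effort it demands. The sketch below uses the cognitive weights commonly cited in Wang and Shao's work on cognitive functional size; treat the exact weight values and the simplified linear formula as illustrative assumptions, not the paper's definition.

```python
# Cognitive weights of basic control structures (BCS), as commonly
# cited in the cognitive functional size (CFS) literature; the exact
# values here are an assumption for illustration.
BCS_WEIGHTS = {
    "sequence": 1, "branch": 2, "case": 3,
    "iteration": 3, "call": 2, "recursion": 3,
}

def cognitive_functional_size(n_inputs, n_outputs, bcs_counts):
    """Simplified CFS = (Ni + No) * Wc, where Wc is the summed
    cognitive weight of the BCSs used in the component."""
    wc = sum(BCS_WEIGHTS[b] * n for b, n in bcs_counts.items())
    return (n_inputs + n_outputs) * wc

# A component with 2 inputs, 1 output, one branch and one loop:
print(cognitive_functional_size(2, 1, {"branch": 1, "iteration": 1}))  # 15
```

    The human-oriented weighting is what distinguishes this family of measures from purely counting-based complexity metrics.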

    Applying metrics to rule-based systems

    Since the introduction of software measurement theory in the early seventies, it has been accepted that in order to control software it must first be measured. Unambiguous and reproducible measurements are considered the most useful in controlling software productivity, costs and quality, and diverse sets of measurements are required to cover all aspects of software. This paper focuses on measures for rule-based language systems and also describes a process for developing measures for other non-standard 3GL development tools. Using KEL as an example, the method allows the re-use of existing measures and indicates if and where new measures are required. As software engineering continues to generate more diverse methods of system development, it is important to continually update methods of measurement and control.

    Interprofessional partnerships in chronic illness care: a conceptual model for measuring partnership effectiveness

    Introduction: Interprofessional health and social service partnerships (IHSSP) are internationally acknowledged as integral to comprehensive chronic illness care. However, the evidence base for partnership effectiveness is lacking. This paper aims to clarify partnership measurement issues, conceptualize IHSSP at the front-line staff level, and identify tools valid for group process measurement.

    Theory and methods: A systematic literature review utilizing three interrelated searches was conducted. Thematic analysis techniques were supported by NVivo 7 software. Complexity theory was used to guide the analysis, ground the new conceptualization and validate the selected measures. Other properties of the measures were critiqued using established criteria.

    Results: There is a need for a convergent view of what constitutes a partnership and its measurement. The salient attributes of IHSSP and their interorganizational context were described and grounded within complexity theory. Two measures were selected and validated for measurement of proximal group outcomes.

    Conclusion: This paper depicts a novel complexity theory-based conceptual model for IHSSP of front-line staff who provide chronic illness care. The conceptualization provides the underpinnings for a comprehensive evaluative framework for partnerships. Two partnership process measurement tools, the PSAT and the TCI, are valid for IHSSP process measurement with consideration of their strengths and limitations.

    Empirical Validation of the Usefulness of Information Theory-Based Software Metrics

    Software designs consist of software components and their relationships. Graphs are abstractions of software designs: graphs composed of nodes and hyperedges are attractive for depicting software designs, and measurement of these abstractions quantifies the relationships that exist among components. Most conventional metrics are based on counting. In contrast, this work adopts information theory, because design decisions are information. The goal of this research is to show that the information theory-based metrics proposed by Allen, namely size, complexity, coupling, and cohesion, can be useful in real-world software development projects compared to the counting-based metrics. The thesis includes three case studies using global variables as the abstraction. It is observed that one can use the counting metrics for the size and coupling measures and the information metrics for the complexity and cohesion measures.
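    The information-theoretic approach assigns each node an information content of -log2(p), where p is the proportion of nodes sharing its connection pattern: common patterns carry little information, unusual ones carry more. The sketch below is a loose illustration of that idea under a simplified graph representation of our own, not Allen's exact definitions.

```python
from math import log2
from collections import Counter

def info_size(graph):
    """Entropy-based 'size' of a design graph, in bits.

    `graph` maps each node to the set of nodes it connects to.
    Nodes that share a connection pattern are cheap; nodes with an
    unusual pattern contribute more information.
    """
    patterns = {n: frozenset(graph[n]) for n in graph}
    freq = Counter(patterns.values())
    n = len(graph)
    return sum(-log2(freq[patterns[node]] / n) for node in graph)

# Three modules all touching one global variable, one isolated module:
g = {"a": {"g1"}, "b": {"g1"}, "c": {"g1"}, "d": set()}
print(round(info_size(g), 3))  # 3.245
```

    A counting metric would report four modules regardless of structure; the information measure instead reflects how much of the design is pattern repetition.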

    Estimating Productivity of Software Development Using the Total Factor Productivity Approach

    The design, control and optimization of software engineering processes generally require the determination of performance measures such as efficiency or productivity. However, the definition and measurement of productivity is often inaccurate and differs from one method to another. Economic theory, on the other hand, offers a well-grounded tool for productivity measurement. In this article, we propose a model of process productivity measurement based on the total factor productivity (TFP) approach commonly used in economics. In the first part of the article, we define productivity and its measurement, and we discuss the major data issues which have to be taken into consideration. We then apply the TFP approach in the domain of software engineering and propose a TFP model of productivity assessment.
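    As a rough illustration of the TFP idea applied to software, productivity can be expressed as the residual A of a Cobb-Douglas production function Y = A * K^alpha * L^(1-alpha). The choice of inputs, the elasticity alpha and all numbers below are invented for illustration; in practice the elasticities are estimated from data.

```python
def total_factor_productivity(output, capital, labour, alpha=0.3):
    """TFP as the residual A of a Cobb-Douglas production function
    Y = A * K^alpha * L^(1 - alpha), solved for A.
    alpha (the output elasticity of capital) is an assumed value."""
    return output / (capital ** alpha * labour ** (1 - alpha))

# Two releases of the same product: function points delivered (output),
# tooling/infrastructure cost (capital) and person-months (labour).
tfp_v1 = total_factor_productivity(400, capital=50, labour=20)
tfp_v2 = total_factor_productivity(480, capital=50, labour=22)
print(tfp_v2 > tfp_v1)  # True: output grew faster than combined inputs
```

    Unlike single-factor ratios such as function points per person-month, TFP accounts for several inputs at once, which is the main attraction of the economic approach.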

    Applying metrics to rule-based systems

    Since the introduction of software measurement theory in the early seventies, it has been accepted that in order to control software it must first be measured. Unambiguous and reproducible measurements are considered the most useful in controlling software productivity, costs and quality, and diverse sets of measurements are required to cover all aspects of software. A set of measures for the rule-based language RULER is proposed using a process which helps identify components within software that are not currently measurable, and encourages the maximum re-use of existing software measures. The initial set of measures proposed is based on a set of basic primitive counts. These measures can then be performed with the aid of a specially built prototype static analyser, R-DAT. Analysis of the obtained results is performed to help provide tentative acceptable ranges for these measures. It is important to ensure that measurement is performed for all newly emerging development methods, both procedural and non-procedural. As software engineering continues to generate more diverse methods of system development, it is important to continually update our methods of measurement and control. This thesis demonstrates the practicality of defining and implementing new measures for rule-based systems.
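    A sketch of what such basic primitive counts might look like for a toy rule language is given below. The 'IF ... AND ... THEN ...' syntax and the chosen primitives are invented for illustration; the thesis defines its measures over the actual grammar of RULER, via the R-DAT analyser.

```python
def primitive_counts(rules):
    """Basic primitive counts for a toy 'IF c1 AND c2 THEN a' rule
    language: number of rules, condition terms and action terms.
    The syntax here is invented for illustration only."""
    counts = {"rules": len(rules), "conditions": 0, "actions": 0}
    for rule in rules:
        lhs, rhs = rule.split(" THEN ")
        counts["conditions"] += len(lhs.removeprefix("IF ").split(" AND "))
        counts["actions"] += len(rhs.split(" AND "))
    return counts

rules = [
    "IF temp > 90 AND fan = off THEN fan := on",
    "IF temp < 60 THEN heater := on AND fan := off",
]
print(primitive_counts(rules))  # {'rules': 2, 'conditions': 3, 'actions': 3}
```

    Derived measures (for example, average conditions per rule) would then be built on top of these primitives, with acceptable ranges calibrated against analysed systems.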

    Software design measures for distributed enterprise information systems

    Enterprise information systems are increasingly being developed as distributed information systems. Quality attributes of distributed information systems, as in the centralised case, should be evaluated as early and as accurately as possible in the software engineering process. In particular, software measures associated with quality attributes of such systems should consider the characteristics of modern distributed technologies. Early design decisions have a deep impact on the implementation of distributed enterprise information systems and thus on the ultimate quality of the software as an operational entity. Because the distributed-software engineering process affords software engineers a number of design alternatives, it is important to develop tools and guidelines that can be used to assess and compare design artefacts quantitatively. This dissertation contributes to the field of software engineering by proposing and evaluating software design measures for distributed enterprise information systems. In previous research, measures developed for distributed software have focused on code attributes and thus only provide feedback towards the end of the software engineering process. In contrast, this thesis proposes a number of specific design measures that provide quantitative information before implementation. These measures capture attributes of the structure and behaviour of distributed information systems that are deemed important for assessing their quality attributes, based on analysis of the problem domain. The measures were evaluated theoretically and empirically as part of a well-defined methodology. On the one hand, we followed a formal framework based on the theory of measurement in order to carry out the theoretical validation of the proposed measures. On the other hand, the suitability of the measures as indicators of quality attributes was evaluated empirically with a robust statistical technique for exploratory research. The data sets analysed were gathered after running several experiments and replications with a distributed enterprise information system. The results of the empirical evaluation show that most of the proposed measures are correlated with the quality attributes of interest, and that most of these measures may be used, individually or in combination, for the estimation of these quality attributes, namely efficiency, reliability and maintainability. The design of a distributed information system is modelled as a combination of its structure, which reflects static characteristics, and its behaviour, which captures complementary dynamic aspects. The behavioural measures showed slightly better individual and combined results than the structural measures in the experimentation. This was in line with our expectations, since the measures were evaluated as indicators of non-functional quality attributes of the operational system. On the other hand, the structural measures provide useful feedback that is available earlier in the software engineering process. Finally, we developed a prototype application to collect the proposed measures automatically and examined typical real-world scenarios where the measures may be used to make design decisions as part of the software engineering process.
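    The empirical validation step described above, correlating a design measure with an observed quality attribute, can be illustrated with a rank correlation, a common choice for the small exploratory samples typical of such studies. The measure values and response times below are hypothetical, and the statistical technique used in the dissertation is not specified here.

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation (assumes no tied values).

    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    where d_i is the difference between the ranks of x_i and y_i.
    """
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0] * len(vs)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical data: a structural coupling measure per design variant
# versus the observed mean response time of the running system.
coupling = [3, 7, 4, 9, 5, 8]
resp_ms  = [120, 210, 150, 260, 140, 230]
print(round(spearman_rho(coupling, resp_ms), 2))  # 0.94
```

    A strong positive rank correlation like this would support using the design-time measure as an early indicator of the run-time attribute.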

    Fitting genetic models using Markov Chain Monte Carlo algorithms with BUGS

    Maximum likelihood estimation techniques are widely used in twin and family studies, but soon reach computational boundaries when applied to highly complex models (e.g., models including gene-by-environment interaction and gene-environment correlation, item response theory measurement models, repeated measures, longitudinal structures, extended pedigrees). Markov Chain Monte Carlo (MCMC) algorithms are very well suited to fitting complex models with hierarchically structured data. This article introduces the key concepts of Bayesian inference and MCMC parameter estimation and provides a number of scripts describing relatively simple models to be estimated by the freely obtainable BUGS software. In addition, inference using BUGS is illustrated using a data set on follicle-stimulating hormone and luteinizing hormone levels with repeated measures. The examples provided can serve as stepping stones for more complicated models, tailored to the specific needs of the individual researcher.
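    BUGS derives its samplers automatically from a declarative model specification, but the underlying MCMC idea can be illustrated with a hand-written random-walk Metropolis sampler for the posterior of a single mean. This sketch is illustrative only (flat prior, known unit variance, invented data) and is not how BUGS is used in practice.

```python
import math
import random
import statistics

def metropolis(log_post, x0, steps=20000, scale=0.5, seed=1):
    """Random-walk Metropolis: propose a Gaussian step, accept with
    probability min(1, exp(new log-posterior - old log-posterior))."""
    random.seed(seed)
    x, lp, samples = x0, log_post(x0), []
    for _ in range(steps):
        prop = x + random.gauss(0, scale)
        lp_prop = log_post(prop)
        if random.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        samples.append(x)
    return samples[steps // 2:]  # discard the first half as burn-in

# Posterior of a mean mu with known sd = 1 and a flat prior:
data = [4.1, 3.8, 4.4, 4.0, 3.9]
log_post = lambda mu: -0.5 * sum((y - mu) ** 2 for y in data)
draws = metropolis(log_post, x0=0.0)
print(round(statistics.mean(draws), 2))  # should be close to 4.04
```

    Gibbs sampling, the default strategy in BUGS, replaces the generic proposal with draws from each parameter's full conditional distribution, which is what makes it efficient for hierarchical models.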