157,661 research outputs found
Towards Validating Risk Indicators Based on Measurement Theory (Extended version)
Due to the lack of quantitative information and for cost-efficiency, most risk assessment methods use partially ordered values (e.g., high, medium, low) as risk indicators. In practice it is common to validate risk indicators by asking stakeholders whether the indicators make sense. This way of validating is subjective and thus error-prone. If the metrics are wrong (not meaningful), they may lead system owners to distribute security investments inefficiently. For instance, in an extended enterprise this may mean over-investing in service level agreements or obtaining a contract that provides a lower security level than the system requires. Therefore, when validating risk assessment methods it is important to validate the meaningfulness of the risk indicators they use. In this paper we investigate how to validate the meaningfulness of risk indicators based on measurement theory. Furthermore, to analyze the applicability of measurement theory to risk indicators, we analyze the indicators used by a risk assessment method developed specifically for assessing confidentiality risks in networks of organizations.
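The abstract's core point can be illustrated concretely: on an ordinal scale, only statements that depend on order (e.g., the median) are meaningful, while the mean is not, because any order-preserving recoding of the labels is equally valid. A minimal sketch, assuming a hypothetical numeric coding of high/medium/low risk labels chosen purely for illustration:

```python
# Two equally valid ordinal codings of the same risk labels: any
# order-preserving (monotone) recoding is admissible on an ordinal scale.
CODING_A = {"low": 1, "medium": 2, "high": 3}
CODING_B = {"low": 1, "medium": 2, "high": 10}

risks = ["low", "low", "high"]  # hypothetical assessment results

codes_a = sorted(CODING_A[r] for r in risks)
codes_b = sorted(CODING_B[r] for r in risks)

# The median depends only on order, so it is invariant across codings.
median_a = codes_a[len(codes_a) // 2]
median_b = codes_b[len(codes_b) // 2]

# The mean depends on the arbitrary numeric distances between labels,
# so it shifts with the recoding: not a meaningful ordinal statistic.
mean_a = sum(codes_a) / len(codes_a)
mean_b = sum(codes_b) / len(codes_b)

print(median_a, median_b)  # same rank statistic under both codings
print(mean_a, mean_b)      # mean changes with the arbitrary coding
```

This is the sense in which "meaningfulness" is used in measurement theory: a conclusion is meaningful only if it is invariant under all admissible transformations of the scale.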
An Approach for the Empirical Validation of Software Complexity Measures
Software metrics are widely accepted tools to control and assure software quality. A large number of software metrics covering a variety of content can be found in the literature; however, most of them are not adopted in industry, as they are seen as irrelevant to practitioners' needs and lack support, and the major reason behind this is improper empirical validation. This paper tries to identify possible root causes of the improper empirical validation of software metrics. A practical model for the empirical validation of software metrics is proposed along with the root causes. The model is validated by applying it to recently proposed and well-known metrics.
Measuring Software Process: A Systematic Mapping Study
Context: Measurement is essential to reach predictable performance and high-capability processes. It provides support for better understanding, evaluation, management, and control of the development process and project, as well as the resulting product. It also enables organizations to improve and predict their processes' performance, which places them in a better position to make appropriate decisions. Objective: This study aims to understand the measurement of the software development process: to identify studies, create a classification scheme based on the identified studies, and then map those studies into the scheme to answer the research questions. Method: Systematic mapping is the research methodology selected for this study. Results: A total of 462 studies are included and classified into four topics with respect to their focus and into three groups based on publishing date. Five abstractions and 64 attributes were identified, and 25 methods/models and 17 contexts were distinguished. Conclusion: Capability and performance were the most measured process attributes, while effort and performance were the most measured project attributes. Goal Question Metric and Capability Maturity Model Integration were the main methods and models used in the studies, whereas agile/lean development and small/medium-size enterprises were the most frequently identified research contexts. Funding: Ministerio de Economía y Competitividad, grants TIN2013-46928-C3-3-R, TIN2016-76956-C3-2-R, and TIN2015-71938-RED.
Weak Measurement Theory and Modified Cognitive Complexity Measure
Measurement is one of the open problems in software engineering. Since traditional measurement theory has a major problem in defining empirical observations on software entities in terms of their measured quantities, Morasca proposed weak measurement theory to address it. In this paper, we evaluate the applicability of weak measurement theory by applying it to a newly proposed Modified Cognitive Complexity Measure (MCCM). We also investigate the applicability of the Weak Extensive Structure for deciding the type of scale for MCCM. It is observed that MCCM is on a weak ratio scale.
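The ratio-scale claim can be made concrete: a measure is on a ratio scale when its admissible transformations are multiplications by a positive constant (unit changes), so statements about ratios of measured values are invariant. A minimal sketch, using made-up complexity values purely for illustration:

```python
# Ratio-scale meaningfulness: "module A is k times as complex as B" is
# invariant under the admissible transformations f(x) = a*x with a > 0.
def ratio(x, y):
    return x / y

m_a, m_b = 12.0, 4.0          # hypothetical complexity values
base = ratio(m_a, m_b)        # "A is 3x as complex as B"

a = 7.5                       # an arbitrary positive rescaling (unit change)
scaled = ratio(a * m_a, a * m_b)

# A monotone but non-linear recoding (admissible only on an ordinal
# scale, e.g. squaring) does NOT preserve the ratio statement.
squashed = ratio(m_a ** 2, m_b ** 2)

print(base, scaled)   # equal: the ratio claim survives unit changes
print(squashed)       # different: ordinal recodings break it
```

This is why establishing the scale type matters: it fixes exactly which statements about MCCM values (ratios, differences, or only orderings) can be trusted.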
Numerical and experimental investigations of self-piercing riveting
Self-pierce riveting (SPR) is a new high-speed mechanical fastening technique suitable for point joining of dissimilar sheet materials, as well as coated and pre-painted sheet materials. With the increasing application of SPR in different industrial fields, there is a growing demand for a better understanding of the static and dynamic characteristics of SPR joints. In this paper, the SPR process has been numerically simulated using the commercial finite element (FE) software LS-DYNA. To validate the numerical simulation of the SPR process, experimental tests on specimens made of aluminium alloy were carried out. An online window monitoring technique was introduced in the tests for evaluating the quality of SPR joints. Good agreement between the simulations and the tests has been found, both in the force-travel (time) curves and in the deformed shape of the cross-section of the SPR joint. Monotonic tensile tests were carried out to measure the ultimate tensile strengths of SPR joints with different material combinations, and the deformation and failure of the SPR joints under monotonic tensile loading were studied. Normality hypothesis tests were performed to examine the validity of the test data. This work also aimed to evaluate experimentally and compare the strength and energy absorption of SPR joints and SPR-bonded hybrid joints.