Measurement of Cognitive Functional Sizes of Software.
Measurement is one of the major issues in software engineering. Since traditional measurement theory has problems in defining empirical observations on software entities in terms of their measured quantities, Morasca tried to solve this problem by proposing Weak Measurement Theory. Further, in calculating the complexity of software, the emphasis is mostly placed on computational complexity, algorithmic complexity, and functional complexity, which basically estimate time, effort, computability, and efficiency. On the other hand, the understandability and comprehensibility of software, which involve human interaction, are neglected in existing complexity measures. Recently, cognitive complexity (CC) was proposed to fill this gap by calculating the architectural and operational complexity of software. In this paper, we evaluate CC against the principles of weak measurement theory. We find that the approach for measuring CC is more realistic and practical than existing approaches and satisfies most of the parameters required by measurement theory.
© 2013, 22 pp.
A Complexity Measure Based on Cognitive Weights
Cognitive Informatics plays an important role in understanding the fundamental characteristics of software. This paper proposes a model of a fundamental characteristic of software, complexity, in terms of the cognitive weights of basic control structures. Cognitive weights represent the degree of difficulty, or the relative time and effort, required to comprehend a given piece of software, which satisfies the definition of complexity. An attempt has also been made to demonstrate the robustness of the proposed complexity measure by comparing it with other measures based on cognitive informatics.
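The abstract above can be illustrated with a minimal sketch. The weight table follows the values commonly cited for basic control structures (BCS) in the cognitive-informatics literature; the nesting rule used here (nested weights multiply, sibling weights add) is a simplified reading and not taken verbatim from the paper.

```python
# Illustrative sketch, not the authors' implementation: total cognitive
# weight of a program represented as a tree of basic control structures.
BCS_WEIGHTS = {
    "sequence": 1,
    "branch": 2,      # if-then-else
    "case": 3,
    "iteration": 3,   # for / while / repeat
    "call": 2,        # function call
    "recursion": 3,
}

def cognitive_weight(node):
    """node = (bcs_kind, [child nodes]); nested weights multiply,
    sibling weights add."""
    kind, children = node
    w = BCS_WEIGHTS[kind]
    if children:
        w *= sum(cognitive_weight(c) for c in children)
    return w

# A loop containing an if-then-else, followed by a plain sequence:
program = ("sequence", [
    ("iteration", [("branch", [])]),  # 3 * 2 = 6
    ("sequence", []),                 # 1
])
print(cognitive_weight(program))      # 1 * (6 + 1) = 7
```

The recursive formulation mirrors the intuition in the abstract: deeply nested control structures demand disproportionately more comprehension effort than the same structures in sequence.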
WEAK MEASUREMENT THEORY AND MODIFIED COGNITIVE COMPLEXITY MEASURE
Measurement is one of the open problems in the area of software engineering. Since traditional measurement theory has a major problem in defining empirical observations on software entities in terms of their measured quantities, Morasca has tried to solve this problem by proposing Weak Measurement Theory. In this paper, we evaluate the applicability of weak measurement theory by applying it to a newly proposed Modified Cognitive Complexity Measure (MCCM). We also investigate the applicability of the Weak Extensive Structure for deciding the type of scale for MCCM. It is observed that MCCM is on a weak ratio scale.
An Approach for the Empirical Validation of Software Complexity Measures
Software metrics are widely accepted tools for controlling and assuring software quality. A large number of software metrics with a variety of content can be found in the literature; however, most of them are not adopted in industry because they are seen as irrelevant to practical needs or are unsupported, and a major reason behind this is improper empirical validation. This paper tries to identify possible root causes for the improper empirical validation of software metrics. A practical model for the empirical validation of software metrics is proposed along with those root causes. The model is validated by applying it to recently proposed and well-known metrics.
Weighted Class Complexity: A Measure of Complexity for Object Oriented System
Software complexity metrics are used to predict critical information about the reliability and maintainability of software systems. Object-oriented software development requires a different approach to software complexity metrics. In this paper, we propose a metric to compute the structural and cognitive complexity of a class by associating a weight with the class, called Weighted Class Complexity (WCC). In contrast to other metrics used for object-oriented systems, the proposed metric calculates the complexity of a class due to its methods and attributes in terms of cognitive weights. The proposed metric has been demonstrated with object-oriented examples. Theoretical and practical evaluations based on information theory have shown that the proposed metric is on a ratio scale and satisfies most of the parameters required by measurement theory.
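A WCC-style computation, as described in the abstract, can be sketched as follows. The names and the exact aggregation rule are illustrative assumptions, not taken from the paper: class complexity is taken as the attribute count plus the summed cognitive weights of each method's control structures.

```python
# Hedged sketch of a Weighted Class Complexity (WCC) style measure.
# The weight table and the flat (nesting-ignored) aggregation are
# simplifying assumptions for illustration only.
BCS_WEIGHTS = {"sequence": 1, "branch": 2, "iteration": 3, "call": 2}

def method_weight(structures):
    """Total cognitive weight of one method, given a flat list of the
    basic control structures it contains."""
    return sum(BCS_WEIGHTS[s] for s in structures)

def weighted_class_complexity(num_attributes, methods):
    """methods: one list of control structures per method."""
    return num_attributes + sum(method_weight(m) for m in methods)

# A class with 2 attributes and two methods: one with a loop and a
# branch, and one that is a simple sequence.
wcc = weighted_class_complexity(2, [["iteration", "branch"], ["sequence"]])
print(wcc)  # 2 + (3 + 2) + 1 = 8
```

Folding both attributes and method control flow into one number is what distinguishes this family of metrics from purely structural counts such as lines of code or method totals.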
Use of functional near-infrared spectroscopy to evaluate cognitive change when using healthcare simulation tools
This is an accepted manuscript of an article published by BMJ on 01/11/2020, available online: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8936993/ The accepted version of the publication may differ from the final published version.
Background The use of brain imaging techniques in healthcare simulation is relatively rare. However, the use of a mobile, wireless technique such as functional near-infrared spectroscopy (fNIRS) is becoming a useful tool for assessing the unique demands of simulation learning. For this study, this imaging technique was used to evaluate cognitive load during simulation learning events.
Methods This study took place in relation to six simulation activities, paired for similarity, and evaluated
comparative cognitive change between the three task pairs. The three paired tasks were: receiving a (1) face-to-face and (2) video patient handover; observing a simulated scene in (1) two dimensions and (2) a 360° field of vision; and, on a simulated patient, (1) taking a pulse and (2) taking a pulse and respiratory rate simultaneously. The total number of participants was n=12.
Results In this study, fNIRS was sensitive to variations in task difficulty in common simulation tools and scenarios, showing an increase in oxygenated haemoglobin concentration and a decrease in deoxygenated haemoglobin concentration, as tasks increased in cognitive load.
Conclusion Overall, findings confirmed the usefulness of neurohaemoglobin concentration markers as an evaluation tool for cognitive change in healthcare simulation. Study findings suggested that cognitive load increases with more complex cognitive tasks in simulation learning events. Task performance that increased in complexity therefore affected cognitive markers, with an increase in the mental effort required.
Estimation of Defect proneness Using Design complexity Measurements in Object- Oriented Software
Software engineering continuously faces the challenges of the growing complexity of software packages and an increased volume of data on defects and drawbacks from the software production process. This makes a clarion call for inventions and methods that can enable more reusable, reliable, easily maintainable, and high-quality software systems with deeper control over the software generation process. Quality and productivity are indeed the two most important parameters for controlling any industrial process, and implementation of a successful control system requires some means of measurement. Software metrics play an important role in the management aspects of the software development process, such as better planning, assessment of improvements, resource allocation, and reduction of unpredictability. Processes involving early detection of potential problems, productivity evaluation, and the evaluation of external quality factors such as reusability, maintainability, defect proneness, and complexity are of utmost importance. Here we discuss the application of CK metrics and an estimation model to predict external quality parameters for optimizing the design and production processes for desired levels of quality. Estimation of defect proneness in object-oriented systems at the design level is developed using a novel methodology in which models of the relationship between CK metrics and a defect-proneness index are achieved. A multifunctional estimation approach captures the correlation between CK metrics and the defect-proneness level of software modules.
Comment: 5 pages, 1 figure
Tasks, cognitive agents, and KB-DSS in workflow and process management
The purpose of this paper is to propose a nonparametric interest rate term structure model and investigate its implications on term structure dynamics and prices of interest rate derivative securities. The nonparametric spot interest rate process is estimated from the observed short-term interest rates following a robust estimation procedure and the market price of interest rate risk is estimated as implied from the historical term structure data. That is, instead of imposing a priori restrictions on the model, data are allowed to speak for themselves, and at the same time the model retains a parsimonious structure and the computational tractability. The model is implemented using historical Canadian interest rate term structure data. The parametric models with closed form solutions for bond and bond option prices, namely the Vasicek (1977) and CIR (1985) models, are also estimated for comparison purpose. The empirical results not only provide strong evidence that the traditional spot interest rate models and market prices of interest rate risk are severely misspecified but also suggest that different model specifications have significant impact on term structure dynamics and prices of interest rate derivative securities.