Evaluation of software dependability
It has been said that the term 'software engineering' is an aspiration, not a description. We would like to be able to claim that we engineer software, in the same sense that we engineer an aero-engine, but most of us would agree that this is not currently an accurate description of our activities. My suspicion is that it never will be.
From the point of view of this essay – i.e. dependability evaluation – a major difference between software and other engineering artefacts is that the former is pure design. Its unreliability is always the result of design faults, which in turn arise from human intellectual failures. The unreliability of hardware systems, on the other hand, has until recently tended to be dominated by random physical failures of components – the consequences of the ‘perversity of nature’. Reliability theories developed over the years have successfully allowed systems to be built to high reliability requirements, and the final system reliability to be evaluated accurately. Even for pure hardware systems without software, however, the very success of these theories has more recently highlighted the importance of design faults in determining the overall reliability of the final product. Conventional hardware reliability theory does not address this problem at all.
In the case of software, there is no physical source of failures, and so none of the reliability theory developed for hardware is relevant. We need new theories that will allow us to achieve required dependability levels, and to evaluate the actual dependability that has been achieved, when the sources of the faults that ultimately result in failure are human intellectual failures.
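To make the contrast concrete, here is a minimal sketch of the two failure regimes in standard textbook notation; the formulas are illustrative conventions, not taken from the essay itself.

```latex
% Hardware: random physical failures with constant hazard rate \lambda
% give the classic exponential reliability function
\[ R(t) = e^{-\lambda t} \]

% Software (a Jelinski-Moranda-style growth model, shown only as an
% illustration): the hazard rate before the i-th failure is
% proportional to the number of residual design faults, where N is the
% initial fault count and \phi the per-fault failure contribution
\[ \lambda_i = \phi \, (N - i + 1) \]
```

The first model assumes failures keep arriving at the same rate however long the system runs; the second assumes the rate falls only as design faults are found and removed, which is why the two theories answer different questions.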
Model-driven performance evaluation for service engineering
Service engineering and service-oriented architecture, as an integration and platform technology, are a recent approach to software systems integration. Software quality aspects such as performance are of central importance for the integration of heterogeneous, distributed service-based systems. Empirical performance evaluation is the process of measuring and calculating performance metrics of the implemented software. We present an approach for the empirical, model-based performance evaluation of services and service compositions in the context of model-driven service engineering. Temporal database theory is utilised for the empirical performance evaluation of service systems developed in a model-driven fashion.
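To illustrate what the empirical measurement step can look like, here is a minimal sketch; the record layout and names are assumptions for illustration, not taken from the paper. Timestamped service invocations are kept as valid-time intervals, in the spirit of temporal databases, and simple response-time metrics are derived from them.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean, quantiles

@dataclass
class Invocation:
    """One timestamped service call, stored as a valid-time interval."""
    service: str
    start: datetime
    end: datetime

    def duration_ms(self) -> float:
        # Elapsed wall-clock time of the call in milliseconds.
        return (self.end - self.start).total_seconds() * 1000.0

def response_time_metrics(log: list[Invocation], service: str) -> dict:
    """Derive empirical response-time metrics for one service."""
    durations = [i.duration_ms() for i in log if i.service == service]
    return {
        "count": len(durations),
        "mean_ms": mean(durations),
        "p95_ms": quantiles(durations, n=20)[18],  # 95th percentile
    }
```

Keeping the full intervals rather than pre-aggregated numbers is the point of the temporal framing: the same log can be re-queried for any service or any time window after the fact.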
Designing software to maximize learning
This paper starts from the assumption that any evaluation of educational software should focus on whether or not, and the extent to which, it maximizes learning. It is particularly concerned with the impact of software on the quality of learning. The paper reviews key texts in the literature on learning, including some which relate directly to software development, and suggests ways in which a range of learning theories can inform the process of software design. The paper sets out to make a contribution to both the design and the evaluation of educational software.
A software technology evaluation program
A set of quantitative approaches is presented for evaluating software development methods and tools. The basic idea is to generate a set of goals which are refined into quantifiable questions which specify metrics to be collected on the software development and maintenance process and product. These metrics can be used to characterize, evaluate, predict, and motivate. They can be used in an active as well as passive way by learning from analyzing the data and improving the methods and tools based upon what is learned from that analysis. Several examples are given representing each of the different approaches to evaluation. The cost of the approaches varied inversely with the level of confidence in the interpretation of the results.
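The refinement of goals into quantifiable questions and then metrics matches the Goal/Question/Metric scheme this abstract describes. The following minimal sketch shows one way such a hierarchy might be represented; the class and field names, and the example goal, are assumptions for illustration, not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Metric:
    """A quantity collected on the development process or product."""
    name: str
    value: Optional[float] = None  # filled in once data is collected

@dataclass
class Question:
    """A quantifiable question that refines a goal."""
    text: str
    metrics: list[Metric] = field(default_factory=list)

@dataclass
class Goal:
    """A measurement goal, refined into questions and then metrics."""
    purpose: str
    questions: list[Question] = field(default_factory=list)

# Example refinement for evaluating a hypothetical inspection method.
goal = Goal(
    purpose="Characterize how well code inspection detects faults",
    questions=[
        Question(
            text="How many faults are found per KLOC inspected?",
            metrics=[Metric("faults_found"), Metric("kloc_inspected")],
        ),
    ],
)
```

Making the hierarchy explicit is what lets the same metrics serve both passively (characterizing the current process) and actively (feeding back into improved methods and tools).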
HALOE test and evaluation software
Computer programming, system development and analysis efforts during this contract were carried out in support of the Halogen Occultation Experiment (HALOE) at NASA/Langley. Support in the major areas of data acquisition and monitoring, data reduction, and system development is described, along with a brief explanation of the HALOE project. Documented listings of major software are located in the appendix.
Early evaluation of security functionality in software projects - some experience on using the common criteria in a quality management process
This paper documents the experiences of assurance evaluation during the early stage of a large software development project. This project researches, contracts and integrates privacy-respecting software into business environments. While assurance evaluation with the ISO 15408 Common Criteria (CC) within the certification schemes is done after a system has been completed, our approach executes evaluation during the early phases of the software life cycle. The promise is to increase quality and to reduce testing and fault-removal costs for later phases of the development process. First results from the still-ongoing project suggest that the Common Criteria can define a framework for assurance evaluation in ongoing development projects. This paper documents the attempt to use the Common Criteria under ISO 15408 to check a software system's security properties while the system is still being developed, in contrast to the usual post-development evaluation.
Systematic evaluation of software product line architectures
The architecture of a software product line is one of its most important artifacts, as it represents an abstraction of the products that can be generated. It is crucial to evaluate the quality attributes of a product line architecture in order to: increase the productivity of the product line process and the quality of the products; provide a means to understand the potential behavior of the products and, consequently, decrease their time to market; and improve the handling of the product line variability. The evaluation of a product line architecture can serve as a basis for analyzing the managerial and economic value of a product line for software managers and architects. Most current research on the evaluation of product line architectures does not take into account metrics directly obtained from UML models and their variabilities; the metrics used instead are difficult to apply in general and to use for quantitative analysis. This paper presents a Systematic Evaluation Method for UML-based Software Product Line Architecture, SystEM-PLA. SystEM-PLA differs from current research as it provides stakeholders with a means to: (i) estimate and analyze potential products; (ii) use predefined basic UML-based metrics to compose quality attribute metrics; (iii) perform feasibility and trade-off analysis of a product line architecture with respect to its quality attributes; and (iv) make the evaluation of a product line architecture more flexible. An example using the SEI’s Arcade Game Maker (AGM) product line is presented as a proof of concept, illustrating SystEM-PLA activities. Metrics for complexity and extensibility quality attributes are defined and used to perform a trade-off analysis.
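As a hedged illustration of points (ii) and (iii), the sketch below composes basic counts that could be read off UML models into complexity and extensibility scores and compares two candidate configurations. The base metrics, weights, and figures are invented for illustration and are not SystEM-PLA's actual definitions.

```python
from dataclasses import dataclass

@dataclass
class UmlModelMetrics:
    """Basic counts extracted from a product configuration's UML models."""
    classes: int
    interfaces: int
    variation_points: int
    dependencies: int

def complexity(m: UmlModelMetrics) -> float:
    """Composite complexity score (weights are illustrative only)."""
    return 0.5 * m.dependencies + 0.3 * m.classes + 0.2 * m.variation_points

def extensibility(m: UmlModelMetrics) -> float:
    """Composite extensibility score (weights are illustrative only)."""
    return 0.6 * m.interfaces + 0.4 * m.variation_points

def trade_off(a: UmlModelMetrics, b: UmlModelMetrics) -> str:
    """Naive trade-off: prefer lower complexity and higher extensibility."""
    score_a = extensibility(a) - complexity(a)
    score_b = extensibility(b) - complexity(b)
    return "A" if score_a >= score_b else "B"

# Two hypothetical AGM-like product configurations.
config_a = UmlModelMetrics(classes=40, interfaces=12,
                           variation_points=8, dependencies=55)
config_b = UmlModelMetrics(classes=55, interfaces=9,
                           variation_points=14, dependencies=70)
print(trade_off(config_a, config_b))  # which configuration wins the trade-off
```

Deriving the composite scores from counts that a UML tool can extract automatically is what makes this kind of trade-off analysis repeatable across the many products a line can generate.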