Recommended from our members
Evaluation of software dependability
It has been said that the term software engineering is an aspiration, not a description. We would like to be able to claim that we engineer software, in the same sense that we engineer an aero-engine, but most of us would agree that this is not currently an accurate description of our activities. My suspicion is that it never will be.
From the point of view of this essay (i.e. dependability evaluation), a major difference between software and other engineering artefacts is that the former is pure design. Its unreliability is always the result of design faults, which in turn arise as a result of human intellectual failures. The unreliability of hardware systems, on the other hand, has tended until recently to be dominated by random physical failures of components, the consequences of the "perversity of nature". Reliability theories have been developed over the years which have successfully allowed systems to be built to high reliability requirements, and the final system reliability to be evaluated accurately. Even for pure hardware systems without software, however, the very success of these theories has more recently highlighted the importance of design faults in determining the overall reliability of the final product. The conventional hardware reliability theory does not address this problem at all.
In the case of software, there is no physical source of failures, and so none of the reliability theory developed for hardware is relevant. We need new theories that will allow us to achieve required dependability levels, and to evaluate the actual dependability that has been achieved, when the sources of the faults that ultimately result in failure are human intellectual failures.
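The conventional hardware theory the essay refers to treats a system as a set of components with physically caused, statistically independent failures; for a series system with constant failure rates, system reliability is a simple product. A minimal sketch with made-up rates (not taken from the essay) makes clear why this machinery has no software analogue:

```python
import math

# Hypothetical component failure rates (failures per hour) for a series system.
# Conventional hardware theory: R_i(t) = exp(-lambda_i * t), and a series
# system survives only if every component survives, so R(t) = prod R_i(t).
lambdas = [1e-5, 2e-5, 5e-6]
t = 1000.0  # mission time in hours

R = math.exp(-sum(lambdas) * t)  # equals prod(exp(-l * t) for l in lambdas)
print(f"series-system reliability over {t:.0f} h: {R:.4f}")

# There is no analogous lambda for a design fault: the program either contains
# the fault or it does not, and any "failure rate" has to be defined over the
# distribution of inputs that happen to trigger it.
```

The arithmetic works only because physical failures of distinct components can plausibly be modelled as independent random events; design faults triggered by inputs break that assumption.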
Comments on 'Evolutionary neural network modelling for software cumulative failure time prediction' by Liang Tian and Afzel Noore [Reliability Engineering and System Safety 87 (2005) 45-51]
This paper [Tian L, Noore A. Evolutionary neural network modelling for software cumulative failure time prediction. Reliab Eng Syst Saf 2005;87:45-51] purports to present a useful means of predicting the cumulative failure time function for software reliability growth. In fact, the nature of the "prediction" is too simplistic to be of use. Furthermore, the authors' claims for the accuracy of the predictions appear to be without value.
The problems of assessing software reliability... when you really need to depend on it
This paper looks at the ways in which the reliability of software can be assessed and predicted. It shows that the levels of reliability that can be claimed with scientific justification are relatively modest.
Reliability and validity in comparative studies of software prediction models
Empirical studies on software prediction models do not converge with respect to the question "which prediction model is best?" The reason for this lack of convergence is poorly understood. In this simulation study, we examined a frequently used research procedure comprising three main ingredients: a single data sample, an accuracy indicator, and cross-validation. Typically, such empirical studies compare a machine learning model with a regression model, and our simulation does the same. The results suggest that it is the research procedure itself that is unreliable. This lack of reliability may strongly contribute to the lack of convergence. Our findings thus cast some doubt on the conclusions of any study of competing software prediction models that used this research procedure as a basis of model comparison. We therefore need to develop more reliable research procedures before we can have confidence in the conclusions of comparative studies of software prediction models.
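A toy version of the criticized procedure can make the point concrete. The sketch below is entirely hypothetical (invented data-generating process, a hand-rolled regression and a k-nearest-neighbour predictor, mean absolute error as the single accuracy indicator, leave-one-out cross-validation); it is not the paper's actual simulation. It repeatedly draws samples from one fixed population and records which model "wins" on each sample:

```python
import random
import statistics

random.seed(1)

def simulate_sample(n=30):
    """Draw (size, effort) pairs from one fixed, hypothetical population."""
    return [(s, 2.0 * s + random.gauss(0, 20))
            for s in (random.uniform(10, 100) for _ in range(n))]

def fit_regression(data):
    """Ordinary least-squares line y = a + b*x."""
    xs = [x for x, _ in data]
    mx = statistics.mean(xs)
    my = statistics.mean(y for _, y in data)
    b = sum((x - mx) * (y - my) for x, y in data) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

def fit_knn(data, k=3):
    """Predict the mean effort of the k nearest projects by size."""
    def predict(x):
        nearest = sorted(data, key=lambda p: abs(p[0] - x))[:k]
        return statistics.mean(y for _, y in nearest)
    return predict

def loocv_mae(data, fit):
    """Leave-one-out cross-validated mean absolute error."""
    errors = []
    for i in range(len(data)):
        model = fit(data[:i] + data[i + 1:])
        x, y = data[i]
        errors.append(abs(model(x) - y))
    return statistics.mean(errors)

# Re-run the whole research procedure on 20 fresh samples from the SAME
# population and record the "winner" each time.
wins = {"regression": 0, "knn": 0}
for _ in range(20):
    sample = simulate_sample()
    if loocv_mae(sample, fit_regression) < loocv_mae(sample, fit_knn):
        wins["regression"] += 1
    else:
        wins["knn"] += 1
print(wins)
```

If the procedure were reliable, the same winner would emerge from every sample; any variation in the tally across samples is variability introduced by the procedure itself, not by any real difference between populations.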
Assessing Asymmetric Fault-Tolerant Software
The most popular forms of fault tolerance against design faults use "asymmetric" architectures in which a "primary" part performs the computation and a "secondary" part is in charge of detecting errors and performing some kind of error processing and recovery. In contrast, the most studied forms of software fault tolerance are "symmetric" ones, e.g. N-version programming. The latter are often controversial, the former are not. We discuss how to assess the dependability gains achieved by these methods. Substantial difficulties have been shown to exist for symmetric schemes, but we show that the same difficulties affect asymmetric schemes. Indeed, the latter present somewhat subtler problems. In both cases, to predict the dependability of the fault-tolerant system it is not enough to know the dependability of the individual components. We extend to asymmetric architectures the style of probabilistic modeling that has been useful for describing the dependability of "symmetric" architectures, to highlight factors that complicate the assessment. In the light of these models, we finally discuss fault injection approaches to estimating coverage factors. We highlight the limits of what can be predicted and some useful research directions towards clarifying and extending the range of situations in which estimates of coverage of fault tolerance mechanisms can be trusted.
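The central difficulty, that component dependability figures do not compose into a system figure, can be shown with two lines of arithmetic. The numbers below are invented for illustration; the point is that a naive estimate multiplies the primary's failure probability by the checker's average non-coverage, whereas coverage conditional on the primary having failed is what actually matters, and those demands tend to be exactly the hard ones:

```python
# Hypothetical probabilities for an asymmetric (primary + checker) architecture.
p_primary = 1e-3      # P(primary delivers an erroneous result on a demand)
avg_coverage = 0.99   # checker's error-detection coverage averaged over ALL demands

# Naive estimate: assume the checker's coverage is independent of which
# demands make the primary fail.
p_naive = p_primary * (1 - avg_coverage)

# More realistic: demands that defeat the primary are systematically harder,
# so the checker's CONDITIONAL coverage there is lower (value assumed).
coverage_given_primary_failure = 0.90
p_real = p_primary * (1 - coverage_given_primary_failure)

print(p_naive, p_real)  # the estimates differ by an order of magnitude
```

The same conditional-probability trap is what makes naive fault-injection estimates of coverage factors optimistic: injected faults are rarely distributed like the faults that actually defeat the primary.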
A nonparametric software reliability growth model
Miller and Sofer have presented a nonparametric method for estimating the failure rate of a software program. The method is based on the complete monotonicity property of the failure rate function, and uses a regression approach to obtain estimates of the current software failure rate. This completely monotone software model is extended. It is shown how it can also provide long-range predictions of future reliability growth. Preliminary testing indicates that the method is competitive with parametric approaches, while being more robust.
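The monotonicity idea at the heart of the model can be illustrated with a much cruder tool: isotonic regression via the pool-adjacent-violators algorithm, which finds the nonincreasing sequence closest (in least squares) to the raw per-interval rate estimates. This is a sketch of the shape constraint only, on invented data; it is not the Miller-Sofer estimator or its completely monotone extension:

```python
def pav_nonincreasing(rates):
    """Pool-adjacent-violators: closest nonincreasing sequence in least squares."""
    blocks = [[r] for r in rates]
    i = 0
    while i < len(blocks) - 1:
        m1 = sum(blocks[i]) / len(blocks[i])
        m2 = sum(blocks[i + 1]) / len(blocks[i + 1])
        if m1 < m2:  # order violation: merge the two blocks and back up
            blocks[i:i + 2] = [blocks[i] + blocks[i + 1]]
            i = max(i - 1, 0)
        else:
            i += 1
    out = []
    for b in blocks:
        out.extend([sum(b) / len(b)] * len(b))
    return out

# Hypothetical interfailure times (growing as testing removes faults); the raw
# per-interval rate 1/t is noisy but trends downward.
times = [5, 8, 6, 12, 15, 11, 20, 30, 28, 45]
raw = [1.0 / t for t in times]
smooth = pav_nonincreasing(raw)
print(smooth[-1])  # current failure-rate estimate: the last (smallest) value
```

Complete monotonicity is a much stronger condition than "nonincreasing" (all derivatives alternate in sign), which is what lets the real method extrapolate long-range reliability growth rather than merely smooth the observed rates.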
A new procedure to analyze RNA non-branching structures
RNA structure prediction and structural motifs analysis are challenging tasks in the investigation of RNA function. We propose a novel procedure to detect structural motifs shared between two RNAs (a reference and a target). In particular, we developed two core modules: (i) nbRSSP_extractor, to assign a unique structure to the reference RNA encoded by a set of non-branching structures; (ii) SSD_finder, to detect structural motifs that the target RNA shares with the reference, by means of a new score function that rewards the relative distance of the target non-branching structures compared to the reference ones. We integrated these algorithms with already existing software to reach a coherent pipeline able to perform the following two main tasks: prediction of RNA structures (integration of RNALfold and nbRSSP_extractor) and search for chains of matches (integration of Structator and SSD_finder).
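The abstract does not give SSD_finder's actual score function, so the snippet below is only a generic illustration of the idea of rewarding agreement in relative distances between consecutive motif positions; the function name, the example coordinates, and the exponential weighting are all hypothetical choices, not the published method:

```python
import math

def relative_distance_score(ref_positions, tgt_positions, sigma=10.0):
    """Toy score: reward target motifs whose spacing mirrors the reference
    spacing. Illustrative only -- NOT the SSD_finder scoring function."""
    score = 0.0
    for (r1, r2), (t1, t2) in zip(zip(ref_positions, ref_positions[1:]),
                                  zip(tgt_positions, tgt_positions[1:])):
        delta = abs((r2 - r1) - (t2 - t1))  # mismatch in relative distance
        score += math.exp(-delta / sigma)   # closer spacing -> higher reward
    return score

# Hypothetical start positions of matched non-branching structures.
s = relative_distance_score([10, 40, 90], [12, 41, 95])
print(s)
```

A score built on relative distances, rather than absolute coordinates, tolerates insertions and deletions that shift the whole chain of motifs along the target sequence.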
Toward the standardization of venture capital investment evaluation : decision criteria for rating investee business plans
This study examined the criteria used by venture capitalists to evaluate business plans in order to make investment decisions. A literature survey revealed two competing theories: “espoused criteria” where evaluation decisions are based on what venture capitalists say are the decisive factors; versus the use of “known attributes” that successful ventures actually possess. Brunswik’s Lens Model from Social Judgment Theory guided an empirical investigation of several different evaluation methods based on information contained in 129 business plans submitted for venture capital over a 3 year period. Data evaluation culminated in the comparison of the percentage of correct decisions (“hit-rate”) for each method. We found that decisions based on the known attributes of successful ventures have significantly better hit-rates than decisions made using espoused criteria. Discussion centred on the goal of achieving consistency in the conduct of venture analysis. Process standardization can aid in the achievement of consistency. Future research will both deepen and broaden insights.
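The "hit-rate" comparison is simple to state: for each evaluation method, count the proportion of fund/decline decisions that match the venture's eventual outcome. A toy illustration with invented labels (not the study's data):

```python
def hit_rate(decisions, outcomes):
    """Fraction of investment decisions that match the actual outcome."""
    hits = sum(d == o for d, o in zip(decisions, outcomes))
    return hits / len(outcomes)

# Hypothetical labels: True = fund (decision) / succeeded (outcome).
outcomes          = [True, False, True, True, False, False, True, False]
espoused_choices  = [True, True,  False, True, True,  False, True, False]
attribute_choices = [True, False, True,  True, False, True,  True, False]

espoused_rate = hit_rate(espoused_choices, outcomes)    # 5 of 8 correct
attribute_rate = hit_rate(attribute_choices, outcomes)  # 7 of 8 correct
print(espoused_rate, attribute_rate)
```

In the study, the analogous comparison across 129 real business plans favoured the known-attributes methods; the example merely shows the mechanics of the indicator.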
Requirements for building information modeling based lean production management systems for construction
Smooth flow of production in construction is hampered by the disparity between individual trade teams' goals and the goal of stable production flow for the project as a whole. This is exacerbated by the difficulty of visualizing the flow of work in a construction project. While the Last Planner System addresses some of these issues, building information modeling provides a powerful platform for visualizing work flow in control systems that also enable pull flow and deeper collaboration between teams on and off site. The requirements for implementation of a BIM-enabled pull flow construction management software system based on the Last Planner System™, called "KanBIM", have been specified, and a set of functional mock-ups of the proposed system has been implemented and evaluated in a series of three focus group workshops. The requirements cover the areas of maintenance of work flow stability, enabling negotiation and commitment between teams, lean production planning with sophisticated pull flow control, and effective communication and visualization of flow. The evaluation results show that the system holds the potential to improve work flow and reduce waste by providing both process and product visualization at the work face.