Understanding software faults and their role in software reliability modeling
This study is a direct result of an ongoing project to model the reliability of a large real-time control avionics system. In previous modeling efforts with this system, hardware reliability models were applied to model its reliability behavior. In an attempt to enhance the performance of the adapted reliability models, certain software attributes were introduced into these models to control for differences between programs and between sequential executions of the same program. As the basic nature of the software attributes that affect software reliability becomes better understood through the modeling process, this information begins to have important implications for the software development process.

A significant problem arises when raw attribute measures are used in statistical models as predictors, for example, of measures of software quality, because many of the metrics are highly correlated. Consider two attributes: lines of code (LOC) and number of program statements (Stmts). It is quite obvious that a program with a high value of LOC will probably also have a relatively high value of Stmts. In low-level languages, such as assembly language, there might even be a one-to-one relationship between the statement count and the lines of code. When there is a complete absence of linear relationship among the metrics, they are said to be orthogonal, or uncorrelated. Usually the lack of orthogonality is not serious enough to affect a statistical analysis. However, for some statistical analyses, such as multiple regression, the software metrics are so strongly interrelated that the regression results may be ambiguous and possibly even misleading. Typically, it is difficult to estimate the unique effect of an individual software metric in the regression equation, and the estimated coefficients are very sensitive to slight changes in the data and to the addition or deletion of variables in the equation.

Since most of the existing metrics have common elements and are linear combinations of these common elements, it seems reasonable to investigate the structure of the underlying common factors, or components, that make up the raw metrics. The technique we have chosen to explore this structure is principal components analysis, a decomposition technique that may be used to detect and analyze collinearity in software metrics. When confronted with a large number of metrics measuring a single construct, it may be desirable to represent the set by some smaller number of variables that convey all, or most, of the information in the original set. Principal components are linear transformations of a set of random variables that summarize the information contained in the variables. The transformations are chosen so that the first component accounts for the maximal amount of variation in the measures of any possible linear transform; the second component accounts for the maximal amount of residual variation; and so on. The principal components are constructed so that they represent transformed scores on dimensions that are orthogonal. Through principal components analysis, a set of highly related software attributes can therefore be mapped into a small number of uncorrelated attribute domains, which eliminates the problem of multicollinearity in subsequent regression analysis.
There are many software metrics in the literature, but principal components analysis reveals that there are only a few distinct sources of variation, i.e. dimensions, in this set of metrics. It therefore appears reasonable to characterize the measurable attributes of a program with a simple function of a small number of orthogonal metrics, each of which represents a distinct software attribute domain.
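The sketch below illustrates the idea on synthetic data: three deliberately collinear raw metrics are mapped to orthogonal component scores. The metric names, the simulated values, and the use of NumPy are illustrative assumptions, not the study's actual data or tooling.

```python
import numpy as np

# Simulated raw metrics for 200 modules: LOC, statement count, branch count.
# Stmts and branches are generated as near-linear functions of LOC, so the
# three columns are highly correlated (collinear), as in the abstract.
rng = np.random.default_rng(0)
loc = rng.integers(50, 2000, size=200).astype(float)
stmts = 0.8 * loc + rng.normal(0, 20, size=200)
branches = 0.1 * loc + rng.normal(0, 5, size=200)
X = np.column_stack([loc, stmts, branches])

# The correlation matrix makes the collinearity visible (off-diagonals near 1).
print(np.corrcoef(X, rowvar=False).round(2))

# Principal components analysis: standardize the metrics, then take the
# eigenvectors of the correlation matrix. Components are orthogonal by construction.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]        # largest variance first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = Z @ eigvecs                     # uncorrelated component scores
print("variance explained:", (eigvals / eigvals.sum()).round(3))
```

With data like this, the first component typically explains nearly all of the variance, so a single orthogonal "size" dimension can stand in for the three correlated raw metrics in a subsequent regression.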
Real-time software metrics
This study describes the software metrics analysis of 10 releases of an embedded real-time telephone switching system developed by a German telecommunications firm. The microcontroller application was written in a C-like macro assembly language. We developed a metrics program that computes the standard complexity metrics plus a number of information flow metrics.
The releases of the real-time software satisfy published laws of software evolution, e.g. continuing change, increasing entropy, and non-uniform distribution of total change over the changed modules. The data also support Harrison and Cook's program maintenance decision model [7]. We propose the change standard deviation as a threshold for their model.
A multivariate analysis of the metrics computed with our metric analyzer program identified four underlying complexity domains: size, information flow into functions, information flow out of functions, and control flow. We also found that the information flow metrics characterize real-time complexity better than the standard software complexity metrics, e.g. Halstead's Software Science, LOC, and McCabe's Cyclomatic Complexity. Finally, we investigated the relations between programming hours for the various releases, the program changes, and the changes in metric values.
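As a concrete illustration of what an information flow metric measures, the sketch below computes fan-in, fan-out, and a Henry-Kafura-style complexity score from a toy flow graph. The graph, the function names, and the formula are assumptions for illustration; they are not the authors' analyzer or the exact metrics used in the study.

```python
# Henry-Kafura-style information flow complexity:
#   complexity(f) = length(f) * (fan_in(f) * fan_out(f)) ** 2
# fan_in  = number of flows into a function (callers / data read)
# fan_out = number of flows out of a function (callees / data written)

flows_in = {           # hypothetical flows into each function
    "isr_timer": ["hw_clock"],
    "dispatch": ["isr_timer", "queue"],
    "send_frame": ["dispatch", "config"],
}
flows_out = {          # hypothetical flows out of each function
    "isr_timer": ["dispatch"],
    "dispatch": ["send_frame", "log"],
    "send_frame": ["uart"],
}
lengths = {"isr_timer": 40, "dispatch": 120, "send_frame": 75}  # LOC per function

def henry_kafura(name: str) -> int:
    fan_in = len(flows_in.get(name, []))
    fan_out = len(flows_out.get(name, []))
    return lengths[name] * (fan_in * fan_out) ** 2

for fn in lengths:
    print(fn, henry_kafura(fn))
```

A function that is small by LOC can still score high here if many flows pass through it, which is one reason information flow metrics can separate the two "information flow" domains from the "size" domain in the multivariate analysis.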
An Analysis of the Effect of Environmental and Systems Complexity on Information Systems Failures
Companies have invested large amounts of money in information systems development. Unfortunately, not all information systems developments are successful: software project failure is frequent and lamentable, and surveys and statistical analyses underscore its severity and scope. Limited research relates software structure to information systems failures. Systematic study of failure provides insights into the causes of IS failure; more importantly, it contributes to better monitoring and control of projects and enhances the likelihood of the success of management information systems. The underlying theories and literature that contribute to the construction of the theoretical framework come from general systems theory, complexity theory, and failure studies. One hundred COBOL programs from a single company are used in the analysis. The program log clearly documents the date, time, and reasons for changes to the programs. In this study the relationships among business requirements change, software complexity, program size, and the error rate in each phase of the software development life cycle are tested, and interpretations of the hypothesis tests are provided. The data show that analysis errors and design errors occur more often than programming errors. Measurement criteria need to be developed at each stage of the software development cycle, especially in the early stages, so that the quality and reliability of software can be improved continuously. The findings from this study suggest that it is imperative to develop an adaptive system that can cope with changes to the business environment. Further, management needs to focus on processes that improve the quality of the system design stage.
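A minimal sketch of the kind of relationship test described above, assuming an ordinary least squares regression of error rate on requirements changes, complexity, and program size; the variable names, data, and model form are illustrative assumptions, not the study's actual dataset or method.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100                                    # one row per program (hypothetical)
req_changes = rng.poisson(3, n)            # business requirements changes
complexity = rng.normal(20, 5, n)          # e.g. a cyclomatic complexity score
size_kloc = rng.normal(2.0, 0.5, n)        # program size in KLOC
# Hypothetical error rate loosely driven by the three predictors plus noise.
error_rate = (0.5 * req_changes + 0.2 * complexity
              + 1.0 * size_kloc + rng.normal(0, 1, n))

# Design matrix with an intercept column; fit by least squares.
X = np.column_stack([np.ones(n), req_changes, complexity, size_kloc])
coef, residuals, rank, _ = np.linalg.lstsq(X, error_rate, rcond=None)
print("intercept, req_changes, complexity, size:", coef.round(3))
```

The sign and magnitude of each fitted coefficient indicate how strongly that factor is associated with the error rate, which is the general form the study's hypothesis tests take.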
Dynamical Systems
Complex systems are pervasive in many areas of science and are integrated into our daily lives. Examples include financial markets, highway transportation networks, telecommunication networks, world and country economies, social networks, immunological systems, living organisms, computational systems, and electrical and mechanical structures. Complex systems are often composed of a large number of interconnected and interacting entities and exhibit much richer global-scale dynamics than the properties and behavior of the individual entities would suggest. Complex systems are studied in many areas of the natural sciences, social sciences, engineering, and mathematical sciences. This special issue therefore intends to contribute towards the dissemination of the multifaceted concepts in accepted use by the scientific community. We hope readers enjoy this pertinent selection of papers, which represents relevant examples of the state of the art in present-day research. [...]