A Theory Of Small Program Complexity
Small programs are those which are written and understood by one person. Large software systems usually consist of many small programs. The complexity of a small program is a prediction of how difficult it would be for someone to understand the program. This complexity depends on three factors: (1) the size and interrelationships of the program itself; (2) the size and interrelationships of the internal model of the program's purpose held by the person trying to understand the program; and (3) the complexity of the mapping between the model and the program. A theory of small program complexity based on these three factors is presented. The theory leads to several testable predictions. Experiments are described which test these predictions and whose results could confirm or refute the theory. © 1982, ACM. All rights reserved.
An experiment in software reliability
The results of a software reliability experiment conducted in a controlled laboratory setting are reported. The experiment was undertaken to gather data on software failures and is one in a series of experiments being pursued by the Fault Tolerant Systems Branch of NASA Langley Research Center to find a means of credibly performing reliability evaluations of flight control software. The experiment tests a small sample of implementations of radar tracking software having ultra-reliability requirements and uses n-version programming for error detection, and repetitive run modeling for failure and fault rate estimation. The experiment results agree with those of Nagel and Skrivan in that the program error rates suggest an approximate log-linear pattern and the individual faults occurred with significantly different error rates. Additional analysis of the experimental data raises new questions concerning the phenomenon of interacting faults. This phenomenon may provide one explanation for software reliability decay.
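The abstract does not reproduce the experiment's radar-tracking programs, so the following is only a minimal sketch of the n-version idea it names: run several independently written implementations on the same input and flag a version as failed when it disagrees with the majority. The three "versions" here (one deliberately faulty) are hypothetical stand-ins.

```python
# Sketch of n-version error detection by majority voting.
# version_a/b/c are illustrative stand-ins, not the experiment's programs.
from collections import Counter

def version_a(x):
    return x * x

def version_b(x):
    return x ** 2

def version_c(x):          # deliberately faulty for x > 10
    return x * x + (1 if x > 10 else 0)

def n_version_vote(x, versions):
    """Run all versions on x; return (majority result, names of disagreeing versions)."""
    results = [v(x) for v in versions]
    majority, _ = Counter(results).most_common(1)[0]
    failed = [v.__name__ for v, r in zip(versions, results) if r != majority]
    return majority, failed

majority, failed = n_version_vote(12, [version_a, version_b, version_c])
# version_c disagrees with the majority for inputs above 10
```

In the experiment proper, such disagreements serve as the error-detection events whose rates feed the repetitive-run reliability models.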
Are deeply nested conditionals less readable?
There is some debate over the effect of using deeply nested control structures upon programmer comprehension. In order to test the effect of deeply nested IF-THEN-ELSE statements, we split 148 computer science students of varying backgrounds into two groups. One group received a listing of a program that made excessive use of deeply nested control structures. The other group received the listing of a functionally equivalent program that did not make use of deeply nested IF-THEN-ELSEs. Both groups answered the same list of questions about the program they were assigned. The results indicate no significant difference in the average performance on the questions between the two groups.

Keywords: Program Comprehension; Program Complexity; Control Flow Complexity; Experimental Computer Science
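The programs used in the experiment are not shown in the abstract; the two hypothetical fragments below merely illustrate the contrast being tested: a deeply nested IF-THEN-ELSE chain versus a functionally equivalent flat form.

```python
def classify_nested(score):
    # deeply nested IF-THEN-ELSE style
    if score >= 90:
        return "A"
    else:
        if score >= 80:
            return "B"
        else:
            if score >= 70:
                return "C"
            else:
                return "F"

def classify_flat(score):
    # functionally equivalent, without deep nesting
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    return "F"

# the two forms agree on every input, as in the experiment's paired listings
assert all(classify_nested(s) == classify_flat(s) for s in range(0, 101))
```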
Investigating software complexity: knot count thresholds
This study concentrates on threshold values for the two most popular control flow metrics: McCabe's cyclomatic complexity and the knot count. We describe the results of an experimental study to empirically determine a threshold value for knot count for student programmers. The experiment was designed to measure the interaction between difficulty, as measured by knot count, and comprehension quiz scores. This experiment had two goals:
1. Show that there are threshold values for the knot count metric.
2. Discover knot count threshold values for students in Pascal and C.
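For orientation, McCabe's cyclomatic complexity, the other metric the study names, is commonly approximated as "decision points + 1". The sketch below applies that approximation to Python functions via the standard `ast` module; it is an assumed illustration, not the study's instrumentation. Knot count, by contrast, counts crossings of control-flow jumps in the program listing and is layout-sensitive, so it is not computed here.

```python
# Approximate McCabe cyclomatic complexity: 1 + number of decision points.
# Counting each if, loop, and boolean operator node is a common simplification.
import ast

DECISIONS = (ast.If, ast.For, ast.While, ast.BoolOp)

def cyclomatic_complexity(source):
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISIONS) for node in ast.walk(tree))

src = """
def triage(x, y):
    if x > 0 and y > 0:
        return "both"
    for i in range(x):
        if i == y:
            return "hit"
    return "miss"
"""
# two ifs + one for + one boolean operator -> complexity 5
print(cyclomatic_complexity(src))
```

A threshold study like the one described would then ask at what metric value comprehension-quiz scores begin to drop.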
Predicting Student Performance In A Beginning Computer Science Class (Piaget, Personality, Cognitive Style)
Purpose of the Study. The purpose of this study was to determine factors which effectively predict success in a first course for computer science majors. A secondary goal was to provide a model of the successful computer science student in order to improve teaching and learning in the classroom.

Procedures. The sample consisted of 58 students enrolled in all three sections of Computer Science I during Spring semester, 1985. Student characteristics selected included age, sex, previous high school and college grades, number of high school and college mathematics classes, number of hours worked, and whether the job was computer-related or involved programming. A measure of Piagetian cognitive development developed by Kurtz, the Group Embedded Figures Test (GEFT), and the Myers-Briggs Type Indicator (MBTI) were administered early in the semester. These measures were correlated with the student's letter grade in the class using both Chi Square and Pearson's Product Moment Coefficient statistical tests.

Findings. Significant relationships were found between grade and the students' previous college grades and the number of high school mathematics classes (p < .05). The correlation between grade, and both number of hours worked and working as a programmer, approached significance (p < .10). Both the Group Embedded Figures Test (p < .01) and the measure of Piagetian Intellectual Development stages (p < .05) were also significantly correlated with grade in this rigorous Pascal programming class. While there was no relationship between personality type and grade, the Myers-Briggs results provided an interesting profile of the computer science major. On the Extroversion-Introversion, Sensing-Intuitive, and Thinking-Feeling indices, the students were considerably more introverted, intuitive and thinking than the population as a whole, though they were close to national norms on the Perception-Judging index. While computer science students were somewhat like engineering students, they more strongly resembled chess players, when these results were compared with other studies.
Essential competencies of exceptional professional software engineers
Department Head: Rodney R. Oldehoeft. 1991 Fall. Includes bibliographical references (pages 141-144).

This dissertation presents a differential study of exceptional and non-exceptional professional software engineers in the work environment. The first phase of the study reports an in-depth review of 20 engineers. The study reports biographical data, Myers-Briggs Type Indicator test results, and Critical Incident Interview data for 10 exceptional and 10 non-exceptional subjects. Phase 1 concludes with a description of 38 essential competencies of software engineers. Phase 2 of this study surveys 129 engineers. Phase 2 reports biographical data for the sample and concludes that the only simple demographic predictor of performance is years of experience in software. This variable is able to correctly classify 63% of the cases studied. Phase 2 also has the participants complete a Q-Sort of the 38 competencies identified in Phase 1. Nine of these competencies are differentially related to engineer performance. A 10-variable Canonical Discriminant Function is derived which is capable of correctly classifying 81% of the cases studied. This function consists of three biographical variables and seven competencies. The competencies related to Personal Attributes and Interpersonal Skills are identified as the most significant factors contributing to performance differences.
Quality of Design, Analysis and Reporting of Software Engineering Experiments: A Systematic Review
Background: Like any research discipline, software engineering research must be of a certain quality to be valuable. High quality research in software engineering ensures that knowledge is accumulated and helpful advice is given to the industry. One way of assessing research quality is to conduct systematic reviews of the published research literature.
Objective: The purpose of this work was to assess the quality of published experiments in software engineering with respect to the validity of inference and the quality of reporting. More specifically, the aim was to investigate the level of statistical power, the analysis of effect size, the handling of selection bias in quasi-experiments, and the completeness and consistency of the reporting of information regarding subjects, experimental settings, design, analysis, and validity. Furthermore, the work aimed at providing suggestions for improvements, using the potential deficiencies detected as a basis.

Method: The quality was assessed by conducting a systematic review of the 113 experiments published in nine major software engineering journals and three conference proceedings in the decade 1993-2002.
Results: The review revealed that software engineering experiments were generally designed with unacceptably low power and that inadequate attention was paid to issues of statistical power. Effect sizes were sparsely reported and not interpreted with respect to their practical importance for the particular context. There seemed to be little awareness of the importance of controlling for selection bias in quasi-experiments. Moreover, the review revealed a need for more complete and standardized reporting of information, which is crucial for understanding software engineering experiments and judging their results.
Implications: The consequence of low power is that the actual effects of software engineering technologies will not be detected to an acceptable extent. The lack of reporting of effect sizes and the improper interpretation of effect sizes result in ignorance of the practical importance, and thereby the relevance to industry, of experimental results. The lack of control for selection bias in quasi-experiments may make these experiments less credible than randomized experiments. This is an unsatisfactory situation, because quasi-experiments serve an important role in investigating cause-effect relationships in software engineering, for example, in industrial settings. Finally, the incomplete and unstandardized reporting makes it difficult for the reader to understand an experiment and judge its results.
Conclusions: Insufficient quality was revealed in the reviewed experiments. This has implications for inferences drawn from the experiments and might in turn lead to the accumulation of erroneous information and the offering of misleading advice to the industry. Ways to improve this situation are suggested.
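The review's low-power finding can be made concrete with a back-of-envelope calculation. The sketch below uses a normal approximation to a two-sided, two-sample comparison; the group sizes and the medium effect size (Cohen's d = 0.5) are assumed for illustration, not taken from the review.

```python
# Approximate statistical power of a two-sample, two-sided test at level alpha,
# using the normal approximation (adequate for a rough illustration).
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5   # noncentrality for equal group sizes
    return 1 - z.cdf(z_alpha - ncp) + z.cdf(-z_alpha - ncp)

print(round(approx_power(0.5, 15), 2))  # ~0.28: 15 subjects per group
print(round(approx_power(0.5, 64), 2))  # ~0.81: roughly 64 per group for 0.80
```

With small groups of the size common in software engineering experiments, even a medium effect is detected well under half the time, which is the review's core concern about unacceptably low power.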
Impact of Query Specification Mode and Problem Complexity on Query Specification Productivity of Novice Users of Database Systems
With the increased demand for the utilization of computerized information systems by business users, the need for investigating the impact of various user interfaces has been well recognized. It is usually assumed that providing the user with assistance in the usage of a system would significantly increase the user's productivity. There is, however, a dearth of systematic inquiry into this commonly held notion to verify its validity in a scientific fashion. The purpose of this study is to investigate the impact of system-provided user assistance and complexity level of the problem on novice users' productivity in specifying database queries. The study is theoretical in the sense that it presents an approach adopted from research in deductive database systems to attack problems concerning user interface design. It is empirical in that it conducts an experiment in a controlled laboratory setting to collect primary data for the testing of a series of hypotheses. The two independent variables are system-provided user assistance and problem complexity, while the dependent variable is the user's query specification productivity. Three measures are used as separate indicators of query specification productivity: number of syntactic errors, number of semantic errors, and time required for completing a query task. Due to the lack of a well-defined metric for user assistance, the study first presents a generic classification scheme for relational query specification. Based on this classification scheme, two quantitative metrics for measuring the amount of user assistance in terms of prompts and defaults were developed. The user assistance is operationally defined with these two metrics. Four findings emerge as significant results of the study. First, user assistance has a significant main effect on all of the three dependent measures at the 1 percent significance level.
Second, problem complexity also has a significant impact on the three productivity measures at the 1 percent significance level. Third, the interaction effect of user assistance and problem complexity on the number of semantic errors and the amount of time for completion is significant at the 1 percent level. Fourth, although this interaction effect on the number of syntactic errors is not significant at the 5 percent level, it is at the 10 percent level. More research is needed to permit a thorough understanding of the issue of user interface design. A list of topics is suggested for future research to confirm or to modify the findings of this study.