Knowledge restructuring and the development of expertise in computer programming
This thesis reports a number of empirical studies exploring the development of expertise in computer programming. Experiments 1 and 2 are concerned with the way in which the possession of design experience can influence the perception and use of cues to various program structures. Experiment 3 examines how violations of standard conventions for constructing programs can affect the comprehension of expert, intermediate and novice subjects. Experiment 4 looks at the differences in strategy that are exhibited by subjects of varying skill level when constructing programs in different languages. Experiment 5 takes these ideas further to examine the temporal distribution of different forms of strategy during a program generation task. Experiment 6 provides evidence for salient cognitive structures derived from reaction time and error data in the context of a recognition task. Experiments 7 and 8 are concerned with the role of working memory in program generation and suggest that one aspect of expertise in the programming domain involves the acquisition of strategies for utilising display-based information. The final chapter attempts to bring these experimental findings together in terms of a model of knowledge organisation that stresses the importance of knowledge restructuring processes in the development of expertise. This is contrasted with existing models, which have tended to place emphasis upon schema acquisition and generalisation as the fundamental modes of learning associated with skill development. The work reported here suggests that a fine-grained restructuring of individual schemata takes place during the later stages of skill development. It is argued that those mechanisms currently thought to be associated with the development of expertise may not fully account for the strategic changes and the types of error typically found in the transition between novice, intermediate and expert problem solvers.
This work has a number of implications for existing theories of skill acquisition. In particular, it questions the ability of such theories to account for subtle changes in the various manifestations of skilled performance that are associated with increasing expertise. Second, the work reported in this thesis attempts to show how specific forms of training might give rise to the knowledge restructuring process that is proposed. Finally, the thesis stresses the important role of display-based problem solving in complex tasks such as programming and highlights the role of programming language notation as a mediating factor in the development and acquisition of problem solving strategies.
System architecture metrics: an evaluation
The research described in this dissertation is a study of the application of measurement, or metrics, to software engineering. This is not in itself a new idea; the concept of measuring software was first mooted close on twenty years ago. However, examination of the considerable body of metrics work reveals that incorporating measurement into software engineering is rather less straightforward than one might presuppose and, despite the advancing years, there is still a lack of maturity.
The thesis commences with a dissection of three of the most popular metrics, namely Halstead's software science, McCabe's cyclomatic complexity and Henry and Kafura's information flow - all of which might be regarded as having achieved classic status. Despite their popularity, these metrics are all flawed in at least three respects. First and foremost, in each case it is unclear exactly what is being measured; instead there is a preponderance of such metaphysical terms as complexity and quality. Second, each metric is theoretically doubtful in that it exhibits anomalous behaviour. Third, much of the claimed empirical support for each metric is spurious, arising from poor experimental design and inappropriate statistical analysis. It is argued that these problems are not misfortune but the inevitable consequence of the ad hoc and unstructured approach of much metrics research: in particular, the scant regard paid to the role of underlying models.
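Of the three metrics dissected above, McCabe's cyclomatic complexity is the most concrete: in its simplest operational form it is one plus the number of decision points in a routine. The following is a minimal illustrative sketch in Python; the particular set of AST node types counted as decisions is an assumption of this sketch, not a faithful reimplementation of McCabe's original definition or of any published tool.

```python
import ast

# Node types treated as decision points in this sketch (an assumption;
# different tools count slightly different sets of constructs).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Return 1 + the number of decision points in the given source."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, BRANCH_NODES)
                    for node in ast.walk(tree))
    return decisions + 1

example = """
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "no small factor"
"""
print(cyclomatic_complexity(example))  # 1 + (if, for, if) = 4
```

The thesis's criticism applies even to this tidy definition: the number produced is well defined, but what property of the program it actually measures - "complexity" in what sense - is left unstated.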
This research seeks to address these problems by proposing a systematic method for the development and evaluation of software metrics. The method is a goal-directed combination of formal modelling techniques and empirical evaluation. The method is applied to the problem of developing metrics to evaluate software designs - from the perspective of a software engineer wishing to minimise implementation difficulties, faults and future maintenance problems. It highlights a number of weaknesses within the original model. These are tackled in a second, more sophisticated model which is multidimensional; that is, it combines, in this case, two metrics. Both the theoretical and empirical analysis show this model to have utility in its ability to identify hard-to-implement and unreliable aspects of software designs. It is concluded that this method goes some way towards introducing a little more rigour into the development, evaluation and evolution of metrics for the software engineer.
The exploitation of parallelism on shared memory multiprocessors
PhD Thesis

With the arrival of many general-purpose shared memory multiple processor (multiprocessor) computers in the commercial arena during the mid-1980s, a rift has opened between the raw processing power offered by the emerging hardware and the relative inability of its operating software to deliver this power effectively to potential users. This rift stems from the fact that, currently, no computational model with the capability to elegantly express parallel activity is mature enough to be universally accepted and used as the basis for programming languages to exploit the parallelism that multiprocessors offer. To add to this, there is a lack of software tools to assist programmers in the processes of designing and debugging parallel programs.

Although much research has been done in the field of programming languages, no undisputed candidate for the most appropriate language for programming shared memory multiprocessors has yet been found. This thesis examines why this state of affairs has arisen and proposes programming language constructs, together with a programming methodology and environment, to close the ever-widening hardware-to-software gap.

The novel programming constructs described in this thesis are intended for use in imperative languages even though they make use of the synchronisation inherent in the dataflow model by using the semantics of single assignment when operating on shared data, so giving rise to the term shared values. As there are several distinct parallel programming paradigms, matching flavours of shared value are developed to permit the concise expression of these paradigms.

The Science and Engineering Research Council
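The core idea behind shared values - single-assignment semantics providing dataflow-style synchronisation on shared data - can be sketched as follows. This is a hypothetical illustration in Python using threads, not the thesis's actual language constructs: a cell that may be written exactly once, with readers blocking until the write occurs.

```python
import threading

class SharedValue:
    """A write-once cell: readers block until the single write occurs."""

    def __init__(self):
        self._written = threading.Event()
        self._value = None

    def write(self, value):
        # Single-assignment semantics: a second write is an error.
        # (A production version would make this check-then-set atomic.)
        if self._written.is_set():
            raise RuntimeError("shared value already assigned")
        self._value = value
        self._written.set()

    def read(self):
        # Dataflow synchronisation: block until the value exists.
        self._written.wait()
        return self._value

result = SharedValue()
worker = threading.Thread(target=lambda: result.write(6 * 7))
worker.start()
print(result.read())  # the consumer blocks until the worker writes: 42
worker.join()
```

Because a shared value can never be observed in a partially updated state, the usual race conditions on shared data cannot arise; the cost is that each cell carries its value's history in its type of use (unwritten or written), which is what motivates the different "flavours" mentioned above.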
The connection machine
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1988. Bibliography: leaves 134-157. By William Daniel Hillis.
An empirical investigation of Halstead's software length formula
Call number: LD2668 .R4 CMSC 1988 H83. Master of Science. Computing and Information Science.
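The formula investigated in the thesis above is Halstead's length equation, which estimates a program's length N (total operator plus operand occurrences) from its vocabulary alone: N-hat = n1*log2(n1) + n2*log2(n2), where n1 and n2 are the counts of distinct operators and distinct operands. A minimal sketch, with the example counts invented for illustration:

```python
import math

def halstead_length(n1: int, n2: int) -> float:
    """Halstead's estimated length from distinct-operator and
    distinct-operand counts: n1*log2(n1) + n2*log2(n2)."""
    return n1 * math.log2(n1) + n2 * math.log2(n2)

# e.g. a routine with 10 distinct operators and 20 distinct operands
# (hypothetical figures):
print(round(halstead_length(10, 20), 1))  # ~119.7
```

Empirical investigations such as this one compare the estimate N-hat against the actually counted length N across a corpus of programs.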
A software classification scheme
Reusing code is one approach to software reusability. Code is the end product of the software lifecycle. It is delivered in a low-level representation that is difficult to reuse unless an almost perfect match exists between available features and required specifications. There is a need to organize large inventories of software such that reusable code is easy to locate and exchange. The relative success in the reuse of code fragments reported by some software factories is due in part to their capacity to encapsulate domain-specific functions and create specialized libraries of components classified by these locally standardized functions.

A general software classification scheme that organizes reusability-related attributes and common functions from different domains is proposed as a partial solution to the software reusability problem. For the problem of selecting from similar, potentially reusable components, a partial solution based on evaluation of common characteristics is also proposed. A library system is presented that integrates the proposed classification scheme with an evaluation mechanism based on inherent component attributes, programming language characteristics and reuser experience.

The fundamental contribution of this dissertation is a formal treatment of a faceted scheme for software classification, leading to a better understanding of reusability at the code level. This approach has been prototyped in a library system for the semi-automatic classification of software components. Analyses were performed to evaluate the classification scheme. The results show the potential of the scheme in organizing collections of code fragments, in improving retrieval, and in simplifying the classification process. Tests of the evaluation mechanism showed positive correlation with evaluations conducted by potential reusers.
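The essence of a faceted scheme is that each component is described by a value along each of several independent facets, and retrieval matches a partial query against those descriptions. The sketch below is purely illustrative: the facet names and component entries are invented for this example and are not the dissertation's actual scheme.

```python
# Hypothetical facets and library entries, for illustration only.
library = [
    ({"function": "sort",   "object": "array", "medium": "memory"}, "qsort.c"),
    ({"function": "sort",   "object": "file",  "medium": "disk"},   "extsort.c"),
    ({"function": "search", "object": "array", "medium": "memory"}, "bsearch.c"),
]

def retrieve(query):
    """Return names of components whose facet values match every
    facet/value pair given in the (possibly partial) query."""
    return [name for facets, name in library
            if all(facets.get(k) == v for k, v in query.items())]

print(retrieve({"function": "sort"}))                     # ['qsort.c', 'extsort.c']
print(retrieve({"object": "array", "medium": "memory"}))  # ['qsort.c', 'bsearch.c']
```

Because each facet is queried independently, a reuser can narrow a large inventory progressively - first by function, then by object, and so on - which is what makes faceted retrieval scale better than a single rigid hierarchy.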