A review and assessment of novice learning tools for problem solving and program development
There is a great demand for the development of novice learning tools to supplement classroom instruction in the areas of problem solving and program development. Research in the areas of pedagogy, the psychology of programming, human-computer interaction, and cognition has provided valuable input to the development of new methodologies, paradigms, programming languages, and novice learning tools to answer this demand.
Based on the cognitive needs of novices, it is possible to postulate a set of characteristics that should comprise the components of an effective novice learning tool. This thesis identifies these characteristics and provides recommendations for the development of new learning tools. This is accomplished through a review of the challenges that novices face, an in-depth discussion of modern learning tools and the challenges that they address, and the identification and discussion of the vital characteristics that constitute an effective learning tool, based on these tools and the author's own ideas.
Semantic Parsing of Java I/O in Novice Programs for an Online Grading System
Beginning programming students have access to sophisticated development tools that enable them to write syntactically correct code in a straightforward manner. However, code that compiles and runs can still execute poorly, or with unintended results. We present a tool, based on an open-source parser-generation product written in Java, that performs semantic analysis of novice Java code. Specifically, the present investigation concerns the semantics of Java output methods, particularly when they are enclosed within iterative structures in the language. The aim is to guard against the threats that such methods pose to system integrity and performance, intercepting them prior to runtime. The approach used here closely models the analysis a human reviewer would perform, given a printed copy of the code. The tool is an open-source product, like the parser generator, and is also written in Java. As such, it is written to be extensible. The tool will be integrated into a larger research project underway at Montclair State University which involves the development of an online grading system for students in beginning computer programming courses.
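The abstract does not give the tool's internals, which rest on a full parser generator. As a much-simplified illustration of the kind of check described, the sketch below scans Java source text and flags output calls that appear inside iterative structures, tracking only brace depth rather than building a real parse tree. All names here are hypothetical, and the scan assumes loop bodies are braced.

```python
import re

# Simplified illustration only: the thesis tool uses a parser generator,
# whereas this sketch merely tracks brace depth to spot print calls in loops.
LOOP_RE = re.compile(r'\b(for|while|do)\b')
PRINT_RE = re.compile(r'\bSystem\.out\.print(ln)?\s*\(')

def flag_prints_in_loops(source: str):
    """Return (line_number, line) pairs for output calls inside a loop.

    Assumes braced loop bodies; braceless single-statement loops are a
    known blind spot of this textual approximation.
    """
    loop_depths = []   # brace depths at which enclosing loops opened
    depth = 0
    flagged = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if LOOP_RE.search(line):
            loop_depths.append(depth)
        if PRINT_RE.search(line) and loop_depths:
            flagged.append((lineno, line.strip()))
        for ch in line:
            if ch == '{':
                depth += 1
            elif ch == '}':
                depth -= 1
                while loop_depths and depth <= loop_depths[-1]:
                    loop_depths.pop()
    return flagged
```

A real semantic analysis would instead walk the abstract syntax tree produced by the parser, which is what makes the actual tool robust to formatting and extensible to other method calls.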
Proceedings of the RESOLVE Workshop 2002
A model for systematically investigating relationships between variables that affect the performance of novice programmers
This research was motivated by an interest in novices learning to program and a desire to understand the factors that affect their learning. The traditional approach to performing such an investigation has been to select factors which may be important and then perform statistical tests on a few potential relationships. A new research model is proposed and tested to ensure that a thorough and systematic investigation of the data is performed. This thesis describes the data, defines the model, and explains its application and validation.

The research process is managed by a control algorithm that is the heart of the model. This algorithm is seeded by a hypothesis that connects two variables of interest and dictates the testing of a series of hypotheses; as it does so, it also delves deeper into the data to identify additional relationships.
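The abstract describes the control algorithm only at this level, so the skeleton below is an illustrative guess at its shape: a seed hypothesis is tested, and each significant result spawns deeper sub-hypotheses for further testing. The `test` and `refine` callbacks, the breadth-first strategy, and the 0.05 threshold are all assumptions, not details from the thesis.

```python
from collections import deque

# Illustrative skeleton only: the abstract does not specify the algorithm,
# so the callbacks and traversal order here are hypothetical.
def investigate(seed_hypothesis, test, refine, max_tests=100):
    """Systematic exploration seeded by one hypothesis.

    test(h)   -> p-value for hypothesis h        (hypothetical callback)
    refine(h) -> list of deeper sub-hypotheses   (hypothetical callback)
    """
    queue = deque([seed_hypothesis])
    significant = []
    tested = 0
    while queue and tested < max_tests:
        h = queue.popleft()
        tested += 1
        if test(h) < 0.05:           # conventional significance threshold
            significant.append(h)
            queue.extend(refine(h))  # delve deeper into the data
    return significant
```

The `max_tests` cap stands in for whatever stopping criterion the real model uses; without one, a refinement function that keeps generating sub-hypotheses would never terminate.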
In this research the model was applied to investigate the relationships between: learning style and achievement; programming behaviour and achievement; and learning style and programming behaviour. Learning style was assessed using Kolb's Learning Style Inventory, achievement was based on exam score, and programming behaviour was extracted from a log of student activities within a programming tool. The largest number of significant relationships was found between aspects of behaviour and achievement.

The model was validated by classifying the significant hypotheses according to three schemes: the research model's tree structure, the section of the programming tool in use, and the literature. These three classification schemes provided a structure for exploring their similarities and differences. The model was thus demonstrated to be robust and repeatable by comparing the results obtained through the programming tool with expert opinion.

This research has revealed several attributes of learning behaviour that affected the students' results within this group, including aspects of timeliness and overall volume of activity. These are suitable targets for future investigations.

The research model could be applied to other data sets where an in-depth investigation of pairwise data is required.
Novice Programmer Errors - Analysis and Diagnostics
All programmers make errors when writing program code, and for novices the difficulty of repairing errors can be frustrating and demoralising. It is widely recognised that compiler error diagnostics can be inaccurate, imprecise, or otherwise difficult for novices to comprehend, and many approaches to mitigating the difficulty of dealing with errors are centered around the production of diagnostic messages with improved accuracy and precision, and revised wording considered more suitable for novices. These efforts have shown limited success, partly because of uncertainty surrounding the types of error that students actually have the most difficulty with (most commonly assessed by categorising errors according to the diagnostic message already produced), and partly because of a traditional approach to the error diagnosis process that has known limitations.
In this thesis we detail a systematic and thorough approach both to analysing which errors are most problematic for students and to automated diagnosis of errors. We detail a methodology for developing a category schema for errors and for classifying individual errors in student programs according to such a schema. We show that this classification yields a different picture of the distribution of error types than a classification according to diagnostic messages. We formally define the severity of an error type as the product of its frequency and difficulty, and, using repair time as an indicator of difficulty, we show that error types rank differently by severity than they do by frequency alone.
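The severity definition above is a simple product, and the re-ranking effect it produces can be shown with a small worked example. The error names, frequencies, and repair times below are invented purely for illustration; only the formula severity = frequency × difficulty comes from the text.

```python
# Invented figures purely to illustrate the re-ranking; only the formula
# severity = frequency * difficulty (repair time as proxy) is from the thesis.
errors = {
    "missing semicolon":  {"frequency": 500, "repair_time": 10},
    "incompatible types": {"frequency": 120, "repair_time": 90},
    "unknown variable":   {"frequency": 300, "repair_time": 25},
}

def by_frequency(errs):
    """Rank error types by raw frequency, most frequent first."""
    return sorted(errs, key=lambda e: errs[e]["frequency"], reverse=True)

def by_severity(errs):
    """Rank error types by severity = frequency * mean repair time."""
    return sorted(errs,
                  key=lambda e: errs[e]["frequency"] * errs[e]["repair_time"],
                  reverse=True)
```

With these made-up numbers, "missing semicolon" tops the frequency ranking (500 occurrences) but falls to last by severity (500 × 10 = 5000), while "incompatible types" rises to first (120 × 90 = 10800): frequent-but-trivial errors are demoted in favour of rarer ones that cost students far more repair time.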
Having developed a ranking of errors according to severity, we then investigate the contextual information within source code that experienced programmers can use to classify errors more accurately and precisely than compiler tools typically do. We show that, for a number of the more severe errors, these techniques can be applied in an automated tool to provide better diagnostics than those produced by traditional compilers.