419 research outputs found

    Introductory programming: a systematic literature review

    As computing becomes a mainstream discipline embedded in the school curriculum and acts as an enabler for an increasing range of academic disciplines in higher education, the literature on introductory programming is growing. Although there have been several reviews that focus on specific aspects of introductory programming, there has been no broad overview of the literature exploring recent trends across the breadth of introductory programming. This paper is the report of an ITiCSE working group that conducted a systematic review in order to gain an overview of the introductory programming literature. Partitioning the literature into papers addressing the student, teaching, the curriculum, and assessment, we explore trends, highlight advances in knowledge over the past 15 years, and indicate possible directions for future research.

    On Novices' Interaction with Compiler Error Messages: A Human Factors Approach

    The difficulty in understanding compiler error messages can be a major impediment to novice student learning. To alleviate this issue, multiple researchers have run experiments enhancing compiler error messages in automated assessment tools for programming assignments. The conclusions reached by these published experiments appear to be conflicting. We examine these experiments and propose five potential reasons for the inconsistent conclusions concerning enhanced compiler error messages: (1) students do not read them, (2) researchers are measuring the wrong thing, (3) the effects are hard to measure, (4) the messages are not properly designed, (5) the messages are properly designed, but students do not understand them in context due to increased cognitive load. We constructed mixed-methods experiments designed to address reasons 1 and 5 with a specific automated assessment tool, Athene, that previously reported inconclusive results. Testing student comprehension of the enhanced compiler error messages outside the context of an automated assessment tool demonstrated their effectiveness over standard compiler error messages. Quantitative results from a 60-minute one-on-one think-aloud study with 31 students did not show a substantial increase in student learning outcomes over the control. However, qualitative results from the one-on-one think-aloud study indicated that most students are reading the enhanced compiler error messages and generally make effective changes after encountering them.
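    The abstract does not reproduce Athene's enhanced messages, and the tool's target language is not stated here, so the sketch below is only a minimal illustration of the enhancement idea being studied: a raw interpreter error is paired with a plainer, novice-oriented hint. The `ENHANCEMENTS` table and all wording are hypothetical, not Athene's.

```python
# Illustrative sketch only: the phrasing and mapping below are invented, not Athene's messages.
ENHANCEMENTS = {
    # fragment of a raw error message -> plainer, novice-oriented explanation
    "was never closed": "An opening bracket or quote has no matching closing one. "
                        "Check that every ( [ { and quote is closed.",
    "invalid syntax": "Python could not understand this line. A common cause is a missing "
                      "colon at the end of an if/for/while/def line, or a stray character.",
}

def enhanced_message(source: str) -> str:
    """Compile a submission and, on failure, return the raw error plus an enhanced hint."""
    try:
        compile(source, "<student submission>", "exec")
        return "No syntax errors found."
    except SyntaxError as err:
        raw = f"SyntaxError: {err.msg} (line {err.lineno})"
        for fragment, explanation in ENHANCEMENTS.items():
            if fragment in (err.msg or ""):
                return f"{raw}\n  Hint: {explanation}"
        return raw  # no enhancement available; fall back to the raw message

print(enhanced_message("print('hello'"))        # unclosed bracket
print(enhanced_message("if x > 3\n    pass"))   # missing colon
```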

    Beyond Automated Assessment: Building Metacognitive Awareness in Novice Programmers in CS1

    The primary task of learning to program in introductory computer science courses (CS1) cognitively overloads novices and must be better supported. Several recent studies have attempted to address this problem by understanding the role of metacognitive awareness in novices learning programming. These studies have focused on teaching metacognitive awareness to students by helping them understand the six stages of learning so students can know where they are in the problem-solving process, but these approaches are not scalable. One way to address scalability is to implement features in an automated assessment tool (AAT) that build metacognitive awareness in novice programmers. Currently, AATs that provide feedback messages to students can be said to implement the fifth and sixth learning stages integral to metacognitive awareness: implement solution (compilation) and evaluate implemented solution (test cases). The computer science education (CSed) community is actively engaged in research on the efficacy of compile error messages (CEMs) and how best to enhance them to maximize student learning, and it is currently heavily disputed whether enhanced compile error messages (ECEMs) in AATs actually improve student learning. The discussion on the effectiveness of ECEMs in AATs remains focused on only one learning stage critical to metacognitive awareness in novices: implement solution. This research carries out an ethnomethodologically-informed study of CS1 students via think-aloud studies and interviews in order to propose a framework for designing an AAT that builds metacognitive awareness by supporting novices through all six stages of learning. The results of this study provide two important contributions. The first is the confirmation that ECEMs designed from a human-factors approach are more helpful for students than standard compiler error messages. The second is that the observations and post-assessment interviews revealed the difficulties novice programmers often face in developing metacognitive awareness when using an AAT. Understanding these barriers revealed concrete ways to help novice programmers through all six stages of the problem-solving process. These are presented as a framework of features which, when implemented properly, provides a scalable way to implicitly build metacognitive awareness in novice programmers.

    Relating Spatial Skills and Expression Evaluation

    Work connecting spatial skills to computing has used course grades or marks, or general programming tests, as the measure of computing ability. In order to map the relationship between spatial skills and computing more precisely, this paper picks out a particular subset of possible programming concepts and skills: expression evaluation. The paper describes the development of an expression evaluation test, which aims to identify participants' ability to perform evaluations of expressions across a range of complexity. The results indicate that participants' expression evaluation ability was significantly correlated with a spatial skills test (r=0.48), even more so when only considering those with less prior programming experience (r=0.58). Thus, we have determined that spatial skills are of value in expression evaluation exercises, particularly for beginners.
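    The underlying scores and analysis code are not reproduced in this abstract, so the following is a minimal sketch of the kind of analysis reported: a Pearson correlation between spatial-skills and expression-evaluation test scores, repeated for the subgroup with less prior programming experience. All scores and the `novice` flags are hypothetical.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists of scores."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Hypothetical per-participant scores on the two tests.
spatial_scores    = [12, 18, 9, 22, 15, 20, 11, 17]
evaluation_scores = [14, 20, 10, 24, 13, 22, 12, 16]
# Hypothetical flag: True if the participant reported little prior programming experience.
novice = [True, False, True, True, False, True, True, False]

print("all participants:", round(pearson_r(spatial_scores, evaluation_scores), 2))

# Subgroup analysis mirroring the paper's split by prior experience.
nov_spatial = [s for s, is_novice in zip(spatial_scores, novice) if is_novice]
nov_eval    = [e for e, is_novice in zip(evaluation_scores, novice) if is_novice]
print("less experienced only:", round(pearson_r(nov_spatial, nov_eval), 2))
```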

    The Design and Evaluation of an Educational Software Development Process for First Year Computing Undergraduates

    First year, undergraduate computing students experience a series of well-known challenges when learning how to design and develop software solutions. These challenges, which include a failure to engage effectively with planning solutions prior to implementation, ultimately impact upon the students' competency and their retention beyond the first year of their studies. In the software industry, software development processes systematically guide the development of software solutions through iterations of analysis, design, implementation and testing. Industry-standard processes are, however, unsuitable for novice programmers as they require prior programming knowledge. This study investigates how a researcher-designed educational software development process could be created for novice undergraduate learners, and the impact of this process on their competence in learning how to develop software solutions. Based on an Action Research methodology that ran over three cycles, this research demonstrates how an educational software development methodology (termed FRESH) and its operationalised process (termed CADET, a concrete implementation of the FRESH methodology) were designed and implemented as an educational tool for enhancing student engagement and competency in software development. Through CADET, students were reframed as software developers who understand the value in planning and developing software solutions, rather than as programmers who prematurely try to implement solutions. While there remain opportunities to further enhance the technical sophistication of the process as it is implemented in practice, CADET enabled the software development steps of analysis and design to be explicit elements of developing software solutions, rather than their more typically implicit inclusion in introductory CS courses. The research contributes to the field of computing education by exploring the possibilities of, and by concretely generating, an appropriate scaffolded methodology and process; by illustrating the use of computational thinking and threshold concepts in software development; and by providing a novel evaluation framework (termed AKM-SOLO) to aid in the continuous improvement of educational processes and courses by measuring student learning experiences and competencies.

    Identifying cognitive abilities to improve CS1 outcome

    Introductory programming courses suffer from high student failure and dropout rates. In an effort to tackle this problem, we carried out a qualitative study aiming to shed some light on the programming phase that is most challenging for students, in order to elicit the specific difficulties they experience while learning to program. In doing so, distinctive cognitive abilities, differentiating subjects in terms of the way they handle programming tasks, were detected. Such aptitudes are represented in three groups of students: those who learn easily, those who never seem to fully grasp what programming requires despite genuine effort, and those who experience a sudden insight, making them leap from a point where they had difficulties to another where they overcome them. By interviewing teachers and students, abstraction and sequencing elaboration were found to be the two core skills for programming. These results impelled us to consider the mental models approach, concluding that there are very specific cognitive functions that are more favorable to learning programming and that are fostered by more adequate schemas for representing reality. Some conclusions involving Problem-based learning as a suitable teaching methodology to overcome students' difficulties are also presented.

    Fostering Program Comprehension in Novice Programmers - Learning Activities and Learning Trajectories

    This working group asserts that Program Comprehension (ProgComp) plays a critical part in the process of writing programs. For example, this paper is written from a basic draft that was edited and revised until it clearly presented our idea. Similarly, a program is written incrementally, with each step tested, debugged and extended until the program achieves its goal. Novice programmers should develop program comprehension skills as they learn to code so that they are able both to read and reason about code created by others, and to reflect on their code when writing, debugging or extending it. To foster such competencies our group identified two main goals: (g1) to collect and define learning activities that explicitly address key components of program comprehension, and (g2) to define tentative theoretical learning trajectories that will guide teachers as they select and sequence those learning activities in their CS0/CS1/CS2 or K-12 courses. The WG has completed the first goal and laid a strong foundation towards the second goal, as presented in this report. After a thorough literature review, a detailed description of the Block Model is provided, as this model has been used with a dual purpose: to classify and present an extensive list of ProgComp tasks, and to describe a possible learning trajectory for a complex task, covering different cells of the Block Model matrix. The latter is intended to help instructors decompose complex tasks and identify which aspects of ProgComp are being fostered.

    Debugging: The Key to Unlocking the Mind of a Novice Programmer?

    Novice programmers must master two skills to show lasting success: writing code and, when that fails, the ability to debug it. Instructors spend much time teaching the details of writing code, but debugging gets significantly less attention. But what if teaching debugging could implicitly teach other aspects of coding better than teaching the language alone? This paper explores a new theoretical framework, the Theory of Applied Mind for Programming (TAMP), which merges dual process theory with Jerome Bruner's theory of representations to model the mind of a programmer. TAMP looks to provide greater explanatory power for why novices struggle and to suggest pedagogy to bridge gaps in learning. This paper provides an example of this by reinterpreting the debugging literature using TAMP as a theoretical guide. Incorporating new theoretical viewpoints into old studies suggests that a “debugging-first” pedagogy can supplement existing methods of teaching programming and perhaps fill some of the mental gaps TAMP suggests hamper novice programmers.

    Code Complexity in Introductory Programming Courses

    Instructors of introductory programming courses would benefit from having a metric for evaluating the sophistication of student code. Since introductory programming courses pack a wide spectrum of topics into a short timeframe, student code changes quickly, raising questions of whether existing software complexity metrics effectively reflect student growth as reflected in their code. We investigate code produced by over 800 students in two different Python-based CS1 courses to determine if frequently used code quality and complexity metrics (e.g., cyclomatic and Halstead complexities) or metrics based on length and syntactic complexity are more effective as a heuristic for gauging students' progress through a course. We conclude that the traditional metrics do not correlate well with time passed in the course. In contrast, metrics based on syntactic complexity and solution size correlate strongly with time in the course, suggesting that they may be more appropriate for evaluating how student code evolves in a course context.
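    The paper's metric implementations are not given in this abstract, so the snippet below is only a rough sketch of the kinds of measures it contrasts, using Python's standard `ast` module: solution size (non-blank lines and AST size), a syntactic-complexity proxy (variety of constructs used), and a crude cyclomatic-style count of decision points. The function name and the example submission are hypothetical; full cyclomatic and Halstead implementations exist in third-party tools such as radon.

```python
import ast

# Node types counted as decision points for a crude cyclomatic-style measure (decisions + 1).
DECISION_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.With,
                  ast.BoolOp, ast.IfExp, ast.comprehension)

def student_code_metrics(source: str) -> dict:
    """Return simple size and complexity proxies for one student submission."""
    tree = ast.parse(source)
    nodes = list(ast.walk(tree))
    return {
        "loc": sum(1 for line in source.splitlines() if line.strip()),   # non-blank lines
        "ast_nodes": len(nodes),                                         # overall syntactic size
        "distinct_constructs": len({type(n).__name__ for n in nodes}),   # syntactic variety
        "cyclomatic_proxy": 1 + sum(isinstance(n, DECISION_NODES) for n in nodes),
    }

# Hypothetical CS1-style submission used only to exercise the metrics above.
example_submission = """
def count_passing(grades):
    passed = 0
    for grade in grades:
        if grade >= 40:
            passed += 1
    return passed
"""

print(student_code_metrics(example_submission))
```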