2,343 research outputs found

    Investigating novice programming mistakes: educator beliefs vs. student data

    Educators often form opinions on which programming mistakes novices make most often - for example, in Java: "they always confuse equality with assignment", or "they always call methods with the wrong types". These opinions are generally based solely on personal experience. We report a study to determine whether programming educators form a consensus about which Java programming mistakes are the most common. We used the Blackbox data set to check whether the educators' opinions matched data from over 100,000 students - and whether this agreement was mediated by educators' experience. We found that educators formed only a weak consensus about which mistakes are most frequent, that their rankings bore only a moderate correspondence to the student mistakes in the Blackbox data, and that educators' experience had no effect on this level of agreement. These results raise questions about claims educators make regarding which errors students are most likely to commit.
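
    The first mistake quoted above is easy to demonstrate. The snippet below is our own illustration (not material from the study) of why equality/assignment confusion is particularly insidious in Java: when the variable involved is a boolean, the faulty version still compiles and silently changes the program's behaviour.

```java
public class EqualityVsAssignment {
    public static void main(String[] args) {
        boolean done = false;

        // Novice mistake: assignment instead of comparison. Because an
        // assignment expression itself evaluates to a boolean, this still
        // compiles, sets done to true, and the branch always executes.
        if (done = true) {
            System.out.println("this always prints, whatever done was before");
        }

        // Intended comparison (better written as just `if (done)`):
        done = false;
        if (done == true) {
            System.out.println("this never prints");
        }
    }
}
```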

    Understanding and Addressing Misconceptions in Introductory Programming: A Data-Driven Approach

    With the expansion of computer science (CS) education, CS teachers in K-12 schools should be cognizant of student misconceptions and be prepared to help students establish an accurate understanding of computer science and programming. This exploratory design-based research (DBR) study implemented a data-driven approach to identify secondary school students' misconceptions, using both their compilation and test errors, and to provide targeted feedback that promotes conceptual change in introductory programming. Research subjects were two groups of high school students enrolled in two sections of a Java-based programming course in a 2017 summer residential program for gifted and talented students. The study consisted of two stages. In the first stage, students in group 1 took the introductory programming class and used an automated learning system, Mulberry, which collected data on their problem-solving attempts. Data analysis identified common programming errors in students' programs and the misconceptions underlying them. In the second stage, targeted feedback addressing these misconceptions was designed using principles from conceptual change and feedback theories and added to Mulberry. When students in group 2 took the same introductory programming class and solved programming problems in Mulberry, they received this targeted feedback. Data analysis then assessed how the feedback affected the evolution of students' (mis)conceptions.

    From students' erroneous solutions, 55 distinct compilation errors were identified, 15 of which were categorized as common; these 15 accounted for 92% of all compilation errors. Based on them, three underlying student misconceptions were identified: deficient knowledge of fundamental Java program structure, misunderstandings of Java expressions, and confusion about Java variables. In addition, 10 common test errors were identified from nine difficult problems. The results showed that 54% of all test errors were related to the difficult problems, and the 10 common test errors accounted for 39% of all test errors on those problems. Four common student misconceptions were identified from the 10 common test errors: misunderstandings of Java input, misunderstandings of Java output, confusion about Java operators, and forgetting to consider special cases.

    Both quantitative and qualitative analyses were conducted to see whether and how the targeted feedback affected students' solutions. Quantitative analysis indicated that targeted feedback messages increased students' rates of improving erroneous solutions: group 2 students showed significantly higher improvement rates than group 1 students for all erroneous solutions and for solutions with common errors, and within group 2, solutions that received targeted feedback messages had significantly higher improvement rates than solutions that did not. These results suggest that with targeted feedback messages students were more likely to correct errors in their code. Qualitative analysis of four selected cases showed that group 2 students, when improving their code, produced fewer intermediate incorrect solutions than group 1 students. The targeted feedback messages appear to have helped promote conceptual change.

    The results of this study suggest that a data-driven approach to understanding and addressing student misconceptions - that is, using student data from automated assessment systems - has the potential to improve students' learning of programming and may help teachers build a better understanding of their students' common misconceptions and develop their pedagogical content knowledge (PCK). The use of automated assessment systems with misconception-identification components may be helpful in pre-college introductory programming courses and is encouraged as K-12 CS education expands. Researchers and developers of automated assessment systems should develop components that support identifying common student misconceptions using both compilation and non-compilation errors. Future research should continue to investigate the use of targeted feedback in automated assessment systems to address students' misconceptions and promote conceptual change in computer science education.
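
    As a concrete illustration of the data-driven idea, targeted feedback can be modelled as a mapping from common compiler error patterns to messages that address the underlying misconception rather than restating the compiler output. The sketch below is ours, not Mulberry's actual implementation: the matched error strings are genuine javac messages, but the class, method, and feedback wording are hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical mapping from common javac error patterns to targeted
// feedback aimed at the misconception behind each error.
public class TargetedFeedback {
    private static final Map<String, String> FEEDBACK = new LinkedHashMap<>();
    static {
        // Misconception: deficient knowledge of Java program structure.
        FEEDBACK.put("class, interface, or enum expected",
            "Check that every { has a matching } and that all code lives inside a class.");
        // Misconception: confusion about Java variables.
        FEEDBACK.put("cannot find symbol",
            "Was this name declared, with a type, before this line? Check its spelling and scope.");
        // Misconception: misunderstanding of Java expressions.
        FEEDBACK.put("incompatible types",
            "The value on the right does not match the declared type on the left. What type does this expression produce?");
    }

    /** Returns targeted feedback for a raw javac message, or null if uncatalogued. */
    public static String lookup(String compilerMessage) {
        for (Map.Entry<String, String> e : FEEDBACK.entrySet()) {
            if (compilerMessage.contains(e.getKey())) {
                return e.getValue();
            }
        }
        return null;
    }
}
```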

    College Student Perceptions of MyProgrammingLab and BlueJ in an Introductory Computing Course

    Students in introductory computing courses face various challenges. Many learning systems are available to support teaching and learning in introductory computing courses, and empirical work examining their use is available but limited. In this research, we gathered student perceptions of two learning systems, MyProgrammingLab and BlueJ. Understanding student perceptions of learning systems and their impact on learning to program is valuable information for both instructors and students. We administered three surveys towards the end of a 15-week semester. Although students encountered problems in MyProgrammingLab and BlueJ, more than three quarters of them perceived the two systems to be useful in helping develop their programming skills, and many agreed that using MyProgrammingLab and BlueJ helped them better understand the course materials.

    Teaching Tip: An Example-Based Instructional Method to Develop Students’ Problem-Solving Efficacy in an Introductory Programming Course

    This paper introduces a teaching process to develop students' problem-solving and programming efficacy in an introductory computer programming course. The proposed teaching practice provides step-by-step guidelines on using worked-out examples of code to demonstrate the applications of programming concepts. These coding demonstrations explicitly teach the systematic approach and strategies required to develop a programming solution. Each code demonstration is followed by similar practice problems, assigned by the instructor, to build learners' awareness of the programming process and problem-solving techniques. Each successful attempt at a practice exercise demonstrates the student's efficacy in applying the programming process and developing solutions using the instructor's strategies. Finally, through regular and structured feedback, the instructor gives learners insight into their performance in completing the various steps of the programming process. This paper provides guidelines for creating and using code demonstrations, practice exercises, and rubrics for structured feedback in an introductory programming class. An end-of-course survey was employed to compare students' reported self-efficacy with their actual programming and problem-solving efficacy, as measured by their completion rates on the practice activities.
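
    The following is a hypothetical worked-out example in the style the tip describes (the problem and step labels are our illustration, not the paper's materials): the instructor solves a small problem while making each step of the programming process explicit in comments, so students can imitate the process rather than just the final code.

```java
public class AverageAboveThreshold {
    // Step 1: Restate the problem. Given an array of scores and a
    //         threshold, return the average of the scores above it.
    // Step 2: Identify inputs and outputs. Input: int[] scores and an
    //         int threshold. Output: a double average (0.0 if none qualify).
    // Step 3: Plan before coding. Accumulate a sum and a count in one
    //         pass, then guard against dividing by zero.
    public static double averageAbove(int[] scores, int threshold) {
        int sum = 0;
        int count = 0;
        for (int score : scores) {          // Step 4: translate the plan to code.
            if (score > threshold) {
                sum += score;
                count++;
            }
        }
        // Step 5: Handle the special case discovered while planning.
        return count == 0 ? 0.0 : (double) sum / count;
    }

    // Step 6: Test against a small case worked by hand:
    //         scores {70, 85, 90}, threshold 80 -> (85 + 90) / 2 = 87.5
    public static void main(String[] args) {
        System.out.println(averageAbove(new int[] {70, 85, 90}, 80)); // 87.5
    }
}
```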

    37 Million Compilations: Investigating Novice Programming Mistakes in Large-Scale Student Data

    Previous investigations of student errors have typically focused on samples of hundreds of students at individual institutions. This work uses a year's worth of compilation events from over 250,000 students all over the world, taken from the large Blackbox data set. We analyze the frequency, time-to-fix, and spread of errors among users, showing how these factors interrelate, in addition to their development over the course of the year. These results can inform the design of courses, textbooks, and tools that target the most frequent (or hardest to fix) errors.
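
    The paper's two core metrics can be sketched in a few lines of Java. The code below is our simplification, not the authors' analysis pipeline: it treats a "fix time" as the gap between a user's failing compilation and that same user's next compilation, and tallies error frequency along the way (the event data and error names are invented for the example).

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of frequency and time-to-fix metrics over a compilation
// event stream (requires Java 16+ for records).
public class ErrorMetrics {
    /** One compilation event: who compiled, when, and which error (null if none). */
    record Event(String userId, long timestampMillis, String errorType) {}

    public static void main(String[] args) {
        List<Event> events = List.of(
            new Event("u1", 0L,      "unbalanced brackets"),
            new Event("u1", 90_000L, null),                   // fixed after 90 s
            new Event("u2", 0L,      "cannot find symbol"),
            new Event("u2", 30_000L, null));                  // fixed after 30 s

        Map<String, Integer> frequency = new HashMap<>();
        Map<String, List<Long>> fixTimes = new HashMap<>();
        Map<String, Event> lastFailure = new HashMap<>();     // per user

        for (Event e : events) {
            if (e.errorType() != null) {
                frequency.merge(e.errorType(), 1, Integer::sum);
                lastFailure.put(e.userId(), e);
            } else if (lastFailure.containsKey(e.userId())) {
                // A successful compile closes out the user's last failure.
                Event failed = lastFailure.remove(e.userId());
                fixTimes.computeIfAbsent(failed.errorType(), k -> new ArrayList<>())
                        .add(e.timestampMillis() - failed.timestampMillis());
            }
        }
        System.out.println("frequency: " + frequency);
        System.out.println("time-to-fix (ms): " + fixTimes);
    }
}
```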

    Novice Java Programming Mistakes: Large-Scale Data vs. Educator Beliefs

    Teaching is the process of conveying knowledge and skills to learners. It involves preventing misunderstandings and correcting misconceptions that learners have acquired. Effective teaching thus relies on solid knowledge of the discipline, but also on a good grasp of where learners are likely to trip up or misunderstand. In programming, there is much opportunity for misunderstanding, and the penalties are harsh: failing to produce the correct syntax for a program, for example, can completely prevent any progress in learning how to program. Because programming is inherently computer-based, we have an opportunity to observe programming behaviour automatically - more closely even than an educator in the room at the time. By observing students' programming behaviour, and by surveying educators, we can ask: do educators have an accurate understanding of the mistakes that students are likely to make? In this study, we combined two years of the Blackbox dataset (with more than 900 thousand users and almost 100 million compilation events) with a survey of 76 educators to investigate which mistakes students make while learning to program in Java, and whether the educators could accurately estimate which mistakes were most common. We find that educators' estimates agree neither with one another nor with the student data, and we discuss the implications of these results.

    A Perspective Of Automated Programming Error Feedback Approaches In Problem Solving Exercises

    Programming tools are meant for students to practice programming, and automated programming error feedback lets students construct knowledge through their own experience. This paper clusters current approaches to providing automated programming error feedback to students during problem-solving exercises. These include additional syntax error messages, solution template mismatches, test data comparison, assisted agent reports, and collaborative comment feedback. The study is based on papers published over the last two decades, and the trends are analyzed to give an overview of the latest research contributions towards eliminating programming difficulties among students. The results show that future automated programming error feedback approaches may combine agent and collaborative feedback to become more interactive, dynamic, end-user oriented, and goal specific. This direction may help other researchers fill the gap in new ways of assisting learners to better understand the feedback messages provided by automated assessment tools.
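
    Of the clustered approaches, test data comparison is the most mechanical to illustrate. The sketch below is our own (class, method, and message wording are hypothetical): it runs a student's function against known inputs and reports the first mismatch against the expected output.

```java
import java.util.function.IntBinaryOperator;

// Hypothetical "test data comparison" feedback: execute the student's
// code on known inputs and explain the first observed mismatch.
public class TestDataFeedback {
    public static String check(IntBinaryOperator studentFn,
                               int[][] inputs, int[] expected) {
        for (int i = 0; i < inputs.length; i++) {
            int actual = studentFn.applyAsInt(inputs[i][0], inputs[i][1]);
            if (actual != expected[i]) {
                return String.format(
                    "For input (%d, %d) your code returned %d but %d was expected.",
                    inputs[i][0], inputs[i][1], actual, expected[i]);
            }
        }
        return "All tests passed.";
    }

    public static void main(String[] args) {
        // A buggy student submission: subtracts instead of adds.
        IntBinaryOperator buggySum = (a, b) -> a - b;
        System.out.println(check(buggySum,
            new int[][] {{1, 1}, {2, 3}}, new int[] {2, 5}));
        // Prints: For input (1, 1) your code returned 0 but 2 was expected.
    }
}
```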

    An Exploration Of The Effects Of Enhanced Compiler Error Messages For Computer Programming Novices

    Computer programming is an essential skill that all computing students must master, and it is increasingly important in many diverse disciplines. It is also difficult to learn. One of the many challenges novice programmers face from the start is the notoriously cryptic compiler error message. These messages report details of errors made by students and are the primary source of information used to rectify those errors. However, these difficult-to-understand messages are often a barrier to progress and a source of discouragement. A high number of student errors, and in particular a high frequency of repeated errors – when a student makes the same error consecutively – have been shown to be indicators of students who are struggling with learning to program. This instrumental case study investigates the student experience with, and the effects of, software written specifically to help students overcome their challenges with compiler error messages. The software provides help by enhancing error messages, presenting them in a straightforward, informative manner. Two cohorts of first year computing students at an Irish higher education institution participated over two academic years: a control group in 2014-15 that did not experience enhanced error messages, and an intervention group in 2013-14 that did. This thesis lays out a comprehensive view of the student experience, starting with a quantitative analysis of the student errors themselves. It then views the students as groups, revealing interesting differences in error profiles. Following this, some individual student profiles and behaviours are investigated. Finally, the student experience is presented in the students' own words, by means of a survey that incorporated closed and open-ended questions. In addition to reductions in errors overall, in errors per student, and in the key metric of repeated error frequency, the intervention group is shown to behave more cohesively, with fewer indications of struggling students. A positive learning experience using the software is reported by the students and the lecturer. These results are of interest to educators who have witnessed students struggle with learning to program and who are looking to remove the barrier presented by compiler error messages. This work is important for two reasons. First, the effects of error message enhancement have been debated in the literature; this work provides evidence that there can be positive effects. Second, these results should be generalisable, at least in part, to other languages, students, and institutions.
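
    The enhancement idea the thesis evaluates can be sketched as a pass that intercepts the raw compiler message and attaches a plain-language hint. The mapping and wording below are our illustration, not the study's actual tool; the matched error strings are genuine javac messages.

```java
import java.util.Map;

// Hypothetical error-message enhancer: pass raw compiler output
// through, appending a plain-language hint for known messages.
public class ErrorEnhancer {
    private static final Map<String, String> ENHANCEMENTS = Map.of(
        "';' expected",
            "Java statements end with a semicolon. Look at the end of the reported line.",
        "cannot find symbol",
            "The compiler does not recognise this name. Check spelling, capitalisation, and that it was declared first.",
        "reached end of file while parsing",
            "A closing brace } is probably missing. Check that every { has a partner.");

    public static String enhance(String rawMessage) {
        for (Map.Entry<String, String> e : ENHANCEMENTS.entrySet()) {
            if (rawMessage.contains(e.getKey())) {
                return rawMessage + "\n  Hint: " + e.getValue();
            }
        }
        return rawMessage; // unknown errors pass through unchanged
    }

    public static void main(String[] args) {
        System.out.println(enhance("Main.java:7: error: ';' expected"));
    }
}
```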

    Automated Program Analysis for Novice Programmers

    This paper describes how to adapt a static code analyzer to provide feedback to novice programmers and their teachers. Current analyzers have been built to give feedback to experienced programmers who work on software projects or systems. The type of feedback and the type of analysis these tools provide focus on mistakes that are relevant within that context and that help with debugging software systems. When teaching novice programmers, this type of advice is often not particularly useful. It would instead be more useful to apply these techniques to identify problems in students' understanding of important programming concepts. This paper first explores in what respects static analyzers support the learning and teaching of programming, and what can be implemented based on existing static analysis technology. It presents an extension of the static analyzer PMD that creates feedback more valuable to novice programmers. To answer the question of whether these techniques can find the conceptual mistakes that are characteristic of novice programmers, we ran the extension over a number of student projects and compared the results with publicly available mature software projects.

    Blok, T.; Fehnker, A. (2017). Automated Program Analysis for Novice Programmers. In Proceedings of the 3rd International Conference on Higher Education Advances, 1138-1146. Editorial Universitat Politècnica de València. https://doi.org/10.4995/HEAD17.2017.5533
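
    For flavour, a custom PMD rule for novices might look like the minimal sketch below. This is our own example, not a rule from the paper, and it assumes PMD 6's Java rule API (class and method names may differ in other PMD versions). It flags comparisons against boolean literals such as `flag == true`: a conceptual slip typical of novices rather than the software-engineering defects PMD usually targets.

```java
import net.sourceforge.pmd.lang.java.ast.ASTBooleanLiteral;
import net.sourceforge.pmd.lang.java.ast.ASTEqualityExpression;
import net.sourceforge.pmd.lang.java.rule.AbstractJavaRule;

// Hypothetical novice-oriented rule against the PMD 6 API.
public class BooleanLiteralComparisonRule extends AbstractJavaRule {
    @Override
    public Object visit(ASTEqualityExpression node, Object data) {
        // An equality expression with a boolean literal operand
        // (e.g. `x == true` or `done != false`) is redundant and
        // often signals a shaky grasp of boolean expressions.
        if (node.hasDescendantOfType(ASTBooleanLiteral.class)) {
            addViolationWithMessage(data, node,
                "Compare booleans directly: write `if (x)` or `if (!x)` "
                + "instead of comparing against true/false.");
        }
        return super.visit(node, data);
    }
}
```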