86 research outputs found

    ClockIt: Monitoring and Visualizing Student Software Development Profiles

    Monitoring software development practices can result in improved estimation abilities and increased software quality. A common drawback of many monitoring schemes is the manual overhead needed to make the monitoring effective. This overhead leads users to abandon the monitoring scheme shortly after it is adopted, or results in poor-quality data. Alternatives have been introduced that automate part or all of the monitoring. ClockIt is a fully automated extension for the pedagogical integrated development environment (IDE) BlueJ that focuses on the development practices of introductory-level students. By automatically monitoring introductory students' development behavior, instructors and students gain insight into development practices. In addition to the ClockIt extension, visualization tools are provided to assist students and instructors in exploring the data. Data collected via ClockIt over four semesters confirm previous independent findings, and new insights have been discovered about how compilation error frequency changes in introductory students and about the relationships between pairs of compilations.
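
    The abstract does not specify ClockIt's logging schema; as a minimal illustrative sketch (the record fields and helper below are assumptions, not ClockIt's actual format), an automatically logged compile event and the pairing of consecutive compilations per student might look like this:

```python
from dataclasses import dataclass
from itertools import pairwise  # Python 3.10+

# Hypothetical event record; ClockIt's real schema is not given in the abstract.
@dataclass
class CompileEvent:
    student_id: str
    timestamp: float        # seconds since epoch
    success: bool           # did the compile succeed?
    error_type: str | None  # e.g. "missing semicolon", None on success

def consecutive_pairs(events: list[CompileEvent]) -> list[tuple[CompileEvent, CompileEvent]]:
    """Pair each compilation with the next one by the same student, in time order."""
    events = sorted(events, key=lambda e: (e.student_id, e.timestamp))
    return [(a, b) for a, b in pairwise(events) if a.student_id == b.student_id]
```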

    Investigating the Behavior of Novice Programmers in a Large Dataset

    As the technology sector grows, the need for computer programmers is increasing. This has led to efforts to train and hire more teachers, and to make teachers more effective. Previous studies have shown that the strategies a novice programmer employs to solve a homework assignment early in the term can indicate how well they will do on future coursework, and these approaches hold the promise of identifying struggling students automatically, early in the term. The majority of these studies have focused their analysis on relatively small groups of students. Blackbox, a database of program-editing traces collected from the novice-oriented IDE BlueJ, provides a dataset of solutions from almost two million novice programmers to investigate. This research identified and collected information on a group of novice programmers working on the same problem, and proposes a new Programming State Model to guide future analysis.
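
    The abstract does not describe the states of the proposed Programming State Model; purely as an illustration of how a state model over editing traces could be encoded (the states below are assumptions, not the paper's model), one might map each event to a coarse state and count transitions:

```python
from collections import Counter
from enum import Enum, auto

# Illustrative states only; the paper's actual Programming State Model is not
# described in the abstract.
class State(Enum):
    EDITING = auto()
    COMPILE_ERROR = auto()
    COMPILE_OK = auto()
    RUNNING = auto()

def transition_counts(states: list[State]) -> Counter:
    """Count how often each (from_state, to_state) transition occurs in a trace."""
    return Counter(zip(states, states[1:]))

# Example trace for one hypothetical student session:
trace = [State.EDITING, State.COMPILE_ERROR, State.EDITING, State.COMPILE_OK, State.RUNNING]
print(transition_counts(trace))
```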

    A Novel Study of the Relation Between Students Navigational Behavior on Blackboard and their Learning Performance in an Undergraduate Networking Course

    This paper provides an overview of an analysis of student behavior on a learning management system (LMS), Blackboard (Bb) Learn, for a core data communications course in the undergraduate IT program of the Information Sciences and Technology (IST) Department at George Mason University (GMU). The study is an attempt to understand students' navigational behavior on Blackboard Learn, which can be related to their overall performance. In total, 160 undergraduate students participated in the study. A large amount of student activity data was collected across all four sections of the course; all sections have similar content, assessment design, and instruction methods. A correlation analysis between the different assessment methods and key variables such as total student time, total number of logins, and various other factors was performed to evaluate student engagement on Blackboard Learn. Our findings can help instructors efficiently identify students' strengths and weaknesses and fine-tune their courses for better student engagement and performance.
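
    As a sketch of the kind of correlation analysis described (the column names and values below are assumptions for illustration; the study's actual variables and data are not reproduced here), Pearson correlations between LMS activity measures and a grade can be computed with pandas:

```python
import pandas as pd

# Hypothetical per-student table; the real study's variables and values differ.
df = pd.DataFrame({
    "total_time_hours": [12.5, 8.0, 20.1, 5.3],
    "num_logins":       [40,   22,  61,   15],
    "final_grade":      [88,   74,  93,   65],
})

# Pearson correlation of each activity measure with the final grade.
correlations = df.corr(method="pearson")["final_grade"].drop("final_grade")
print(correlations)
```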

    An exploration of novice compilation behaviour in BlueJ

    Our research explores the process by which beginning programmers go about writing programs. We have focused our explorations on what we call compilation behaviour: the programming behaviour a student engages in while repeatedly editing and compiling their programs in an attempt to make them syntactically, if not semantically, correct. The students whose behaviour we have observed were engaged in learning to program in an objects-first style using BlueJ, an environment designed to support novice programmers just starting out with the Java programming language. The significant results of our work are two-fold. First, we have developed tools for visualising the process by which students write their programs. Using these tools, we can quickly obtain valuable information about their process and use that information to inform further research regarding their behaviour, or apply it immediately in a classroom context to better support the struggling learner. Second, we have proposed a quantification of novice compilation behaviour which we call the error quotient. Using this metric, we can determine how well (or poorly) a student fares with syntax errors while learning to program. This quantity, like our tools for visualisation, provides a powerful indicator of how much or how little a student is struggling with the language while programming, and correlates significantly with traditional indicators of academic progress.
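
    The precise scoring scheme of the error quotient is defined in the thesis rather than in this abstract; the sketch below uses assumed weights (2 points for a pair of consecutive failed compilations, a further 3 if the error type repeats, normalised by an assumed per-pair maximum) purely to illustrate how such a quotient over a student's compilation sequence could be computed:

```python
def error_quotient(events, pair_max=5):
    """Illustrative error-quotient-style score in [0, 1] for one student's
    time-ordered compilation events. Each event is (success: bool, error_type: str | None).
    The weights here are assumptions for illustration, not the thesis's definition."""
    pairs = list(zip(events, events[1:]))
    if not pairs:
        return 0.0
    total = 0
    for (ok1, err1), (ok2, err2) in pairs:
        score = 0
        if not ok1 and not ok2:        # two consecutive failed compilations
            score += 2
            if err1 == err2:           # same error type repeated
                score += 3
        total += score
    return total / (len(pairs) * pair_max)   # normalise to [0, 1]

# Example: the same error in two consecutive compilations, then a successful one.
print(error_quotient([(False, "semicolon"), (False, "semicolon"), (True, None)]))  # -> 0.5
```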

    A Data-Driven Approach to Compare the Syntactic Difficulty of Programming Languages

    Educators who teach programming subjects often wonder, “Which programming language should I teach first?”. The debate behind this question has a long history, and coming up with a definitive answer would be far-fetched. Nonetheless, several efforts can be identified in the literature wherein the pros and cons of mainstream programming languages are examined, analysed, and discussed in view of their potential to facilitate the didactics of programming concepts, especially to novice programmers. In line with these efforts, we explore this question by comparing the syntactic difficulty of two modern, but fundamentally different, programming languages: Java and Python. To achieve this objective, we introduce a standalone, purely data-driven method which stores the code submissions and clusters the errors that occur with the aid of a custom transition probability matrix. For the evaluation of this model, a total of 219,454 submissions, made by 715 first-year undergraduate students on 259 unique programming exercises, were gathered and analysed. The results indicate that Python is an easier-to-grasp programming language and is therefore highly recommended as the stepping stone in introductory courses. Furthermore, the adoption of the described method enables educators not only to identify the students who struggle with coding (syntax-wise) but also paves the way for the adoption of personalised and adaptive learning practices.
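
    The paper's clustering method is only summarised above; as a hedged sketch of the underlying idea (a row-normalised transition probability matrix over error categories seen in consecutive submissions), with all category labels assumed for illustration:

```python
import numpy as np

def transition_probability_matrix(error_sequence, categories):
    """Row-normalised matrix P where P[i, j] estimates the probability that an
    error of category i is followed by one of category j in consecutive submissions."""
    index = {c: k for k, c in enumerate(categories)}
    counts = np.zeros((len(categories), len(categories)))
    for a, b in zip(error_sequence, error_sequence[1:]):
        counts[index[a], index[b]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Hypothetical error categories and one student's error history across submissions.
cats = ["syntax", "name", "type"]
print(transition_probability_matrix(["syntax", "syntax", "name", "type"], cats))
```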

    37 Million Compilations: Investigating Novice Programming Mistakes in Large-Scale Student Data

    Previous investigations of student errors have typically focused on samples of hundreds of students at individual institutions. This work uses a year's worth of compilation events from over 250,000 students all over the world, taken from the large Blackbox data set. We analyze the frequency, time-to-fix, and spread of errors among users, showing how these factors inter-relate, in addition to their development over the course of the year. These results can inform the design of courses, textbooks, and tools to target the most frequent (or hardest-to-fix) errors.
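
    As a rough sketch of the kind of aggregation involved (the Blackbox schema and error taxonomy are not reproduced here; the field names and values below are assumptions), error frequency and a simple time-to-fix measure could be derived from per-user event streams like this:

```python
import pandas as pd

# Hypothetical compile-event log; real Blackbox data uses its own schema.
events = pd.DataFrame({
    "user":       ["u1", "u1", "u1", "u2", "u2"],
    "timestamp":  [0.0, 30.0, 55.0, 10.0, 80.0],
    "error_type": ["semicolon", None, None, "bracket", None],  # None = successful compile
})

# Frequency of each error type across all users.
print(events["error_type"].value_counts())

# Time-to-fix: for each error, seconds until that user's next successful compile.
events = events.sort_values(["user", "timestamp"])

def time_to_fix(group):
    rows = []
    for _, row in group.iterrows():
        if row["error_type"] is not None:
            later_ok = group[(group["timestamp"] > row["timestamp"]) & group["error_type"].isna()]
            if not later_ok.empty:
                rows.append({"error_type": row["error_type"],
                             "seconds": later_ok["timestamp"].iloc[0] - row["timestamp"]})
    return pd.DataFrame(rows)

fixes = pd.concat([time_to_fix(g) for _, g in events.groupby("user")])
print(fixes.groupby("error_type")["seconds"].median())
```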

    An exploration of novice compilation behaviour in BlueJ

    EThOS - Electronic Theses Online Service, United Kingdom

    Do Enhanced Compiler Error Messages Help Students? Results Inconclusive.

    One common frustration students face when first learning to program in a compiled language is the difficulty of interpreting the compiler error messages they receive. Attempts to improve error messages have produced differing results. Two recently published papers showed conflicting results, with one showing a measurable change in student behavior and the other showing no measurable change. We conducted an experiment comparable to these two over the course of several semesters in a CS1 course. This paper presents our results in the context of previous work in this area. We improved the clarity of the compiler error messages the students receive, so that they may more readily understand their mistakes and be able to make effective corrections. Our goal was to help students better understand their syntax mistakes and, as a reasonable measure of our success, we expected to document a decrease in the number of times students made consecutive submissions with the same compilation error. By doing this, we could demonstrate that the enhancement is effective. After collecting and thoroughly analyzing our own experimental data, we found that, despite anecdotal stories, student survey responses, and instructor opinions testifying to the tool's helpfulness, enhancing compiler error messages shows no measurable benefit to students. Our results validate one of the existing studies and contradict another. We discuss some of the reasons for these results and conclude with projections for future research.
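
    The measure described above (the number of times a student makes consecutive submissions with the same compilation error) can be sketched as follows; the event representation is assumed for illustration and is not the paper's instrumentation:

```python
def repeated_error_pairs(submissions):
    """Count consecutive submission pairs by one student that fail with the same
    compiler error. Each submission is (compiled_ok: bool, error_type: str | None)."""
    count = 0
    for (ok1, err1), (ok2, err2) in zip(submissions, submissions[1:]):
        if not ok1 and not ok2 and err1 == err2:
            count += 1
    return count

# Example: the same error repeated once before being fixed.
print(repeated_error_pairs([(False, "cannot find symbol"),
                            (False, "cannot find symbol"),
                            (True, None)]))   # -> 1
```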
    • …