5 research outputs found

    On Novices' Interaction with Compiler Error Messages: A Human Factors Approach

    The difficulty in understanding compiler error messages can be a major impediment to novice student learning. To alleviate this issue, multiple researchers have run experiments enhancing compiler error messages in automated assessment tools for programming assignments. The conclusions reached by these published experiments appear to be conflicting. We examine these experiments and propose five potential reasons for the inconsistent conclusions concerning enhanced compiler error messages: (1) students do not read them, (2) researchers are measuring the wrong thing, (3) the effects are hard to measure, (4) the messages are not properly designed, and (5) the messages are properly designed, but students do not understand them in context due to increased cognitive load. We constructed mixed-methods experiments designed to address reasons 1 and 5 with a specific automated assessment tool, Athene, that had previously reported inconclusive results. Testing student comprehension of the enhanced compiler error messages outside the context of an automated assessment tool demonstrated their effectiveness over standard compiler error messages. Quantitative results from a 60-minute one-on-one think-aloud study with 31 students did not show a substantial increase in student learning outcomes over the control. However, qualitative results from the same think-aloud study indicated that most students read the enhanced compiler error messages and generally make effective changes after encountering them.

    An Exploration Of The Effects Of Enhanced Compiler Error Messages For Computer Programming Novices

    Computer programming is an essential skill that all computing students must master and is increasingly important in many diverse disciplines. It is also difficult to learn. One of the many challenges novice programmers face from the start is notoriously cryptic compiler error messages. These report details of errors made by students and are essential as the primary source of information used to rectify those errors. However, these difficult-to-understand messages are often a barrier to progress and a source of discouragement. A high number of student errors, and in particular a high frequency of repeated errors – when a student makes the same error consecutively – have been shown to be indicators of students who are struggling with learning to program. This instrumental case study research investigates the student experience with, and the effects of, software that was specifically written to help students overcome their challenges with compiler error messages. This software provides help by enhancing error messages, presenting them in a straightforward, informative manner. Two cohorts of first-year computing students at an Irish higher education institution participated over two academic years: a control group in 2014-15 that did not experience enhanced error messages, and an intervention group in 2013-14 that did. This thesis lays out a comprehensive view of the student experience, starting with a quantitative analysis of the student errors themselves. It then views the students as groups, revealing interesting differences in error profiles. Following this, some individual student profiles and behaviours are investigated. Finally, the student experience is discovered through their own words and opinions by means of a survey that incorporated closed and open-ended questions. In addition to reductions in errors overall, errors per student, and the key metric of repeated error frequency, the intervention group is shown to behave more cohesively, with fewer indications of struggling students. A positive learning experience using the software is reported by the students and the lecturer. These results are of interest to educators who have witnessed students struggle with learning to program and who are looking to help remove the barrier presented by compiler error messages. This work is important for two reasons. First, the effects of error message enhancement have been debated in the literature – this work provides evidence that there can be positive effects. Second, these results should be generalisable, at least in part, to other languages, students and institutions.
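    Neither abstract specifies the exact rewriting rules used by Athene or by the tool in the Irish study. As an illustration only, the Python sketch below shows the general shape of such an enhancement layer: pattern-match a raw compiler diagnostic and append a plain-language hint. The patterns and wording are hypothetical, not taken from either study.

```python
import re

# Hypothetical enhancement rules: each maps a pattern found in a raw compiler
# diagnostic to a plain-language hint. Not the rules used by either study's tool.
ENHANCEMENTS = [
    (re.compile(r"';' expected"),
     "A statement may be missing its terminating semicolon (;). "
     "Check the end of the line reported above."),
    (re.compile(r"cannot find symbol"),
     "The compiler does not recognise a name you used. Check the spelling "
     "and that the variable or method is declared before this point."),
    (re.compile(r"class, interface, or enum expected"),
     "There may be an extra or missing curly brace. Check that every "
     "opening brace has a matching closing brace."),
]

def enhance(raw_message: str) -> str:
    """Return the raw compiler message with a plain-language hint, if one is known."""
    for pattern, hint in ENHANCEMENTS:
        if pattern.search(raw_message):
            return f"{raw_message}\n  Hint: {hint}"
    return raw_message  # fall back to the original message unchanged

if __name__ == "__main__":
    print(enhance("Main.java:7: error: ';' expected"))
```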

    Toward Productivity Improvements in Programming Languages Through Behavioral Analytics

    Computer science knowledge and skills have become foundational for success in virtually every professional field. As such, productivity in programming and computer science education is of paramount economic and strategic importance for innovation, employment and economic growth. Much of the research around productivity and computer science education has centered around improving notoriously difficult compiler error messages, with a noted surge in new studies in the last decade. In developing an original research plan for this area, this dissertation begins with an examination of the Case for New Instrumentation, drawing inspiration from automated data mining innovations and corporate marketing techniques in behavioral analytics as a model for understanding and prediction of human behavior. The dissertation then develops and explores techniques for automated measurement of programmer behavior based on token-level lexical analysis of computer code. The techniques are applied in two empirical studies on parallel programming tasks with 88 and 91 student participants from the University of Nevada, Las Vegas, as well as 108,110 programs from a database code repository. In the first study, through a re-analysis of previously captured data, the token accuracy mapping technique provided direct insight into the root cause of observed performance differences comparing thread-based vs. process-oriented parallel programming paradigms. In the second study, comparing two approaches to GPU programming at different levels of abstraction, we found that students who completed programming tasks in the CUDA paradigm (considered a lower-level abstraction) performed at least equal to or better than students using the Thrust library (a higher level of abstraction) across four different abstraction tests. The code repository of programs with compiler errors was gathered from an online programming interface on curriculum pages available in the Quorum language (quorumlanguage.com) for Code.org's Hour of Code, Quorum's Common Core-mapped curriculum, activities from Girls Who Code, and curriculum for Skynet Junior Scholars for a National Science Foundation funded grant entitled Innovators Developing Accessible Tools for Astronomy (IDATA). A key contribution of this research project is the development of a novel approach to compiler error categorization and hint generation based on token patterns, called the Token Signature Technique. Token Signature analysis occurs as a post-processing step after a compilation pass with an ANTLR LL(*) parser triggers and categorizes an error. In this project, we use this technique to (i) further categorize and measure the root causes of the most common compiler errors in the Quorum database and then (ii) serve as an analysis tool for the development of a rules engine for enhancing compiler errors and providing live hint suggestions to programmers. The observed error patterns, both in the overall error code categories in the Quorum database and in the specific token signatures within each error code category, show error concentration patterns similar to other compiler error studies of the Java and Python programming languages, suggesting a potentially high impact of automated error messages and hints based on this technique. The automated nature of token signature analysis also lends itself to future development with sophisticated data mining technologies in the areas of machine learning, search, artificial intelligence, databases and statistics.
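    The abstract does not give the concrete token signatures or rules used in the Quorum rules engine. The sketch below, in Python with made-up token names, signatures, and hints, only illustrates the general idea of the post-parse step: join the token types around a reported error into a signature and look it up to categorize the error and choose a hint.

```python
from collections import Counter

# Hypothetical signature-to-hint table. The token names and wording are
# illustrative stand-ins, not the Quorum grammar or the dissertation's rules.
SIGNATURE_HINTS = {
    "ID ID": "Two names in a row -- a keyword or operator may be missing between them.",
    "ID EOF": "The file ends right after this name -- a closing keyword may be missing.",
    "NUMBER ID": "A number is followed directly by a name -- check for a missing operator.",
}

def token_signature(token_types, error_index, window=2):
    """Build a signature from the token types starting at the error position."""
    return " ".join(token_types[error_index:error_index + window])

def categorize(token_types, error_index):
    """Map an error location to a (signature, hint) pair, with a generic fallback."""
    sig = token_signature(token_types, error_index)
    return sig, SIGNATURE_HINTS.get(sig, "Unrecognised pattern: check the reported line.")

if __name__ == "__main__":
    # Token types for a line where the parser reported an error at index 1.
    tokens = ["OUTPUT", "ID", "ID", "EOF"]
    print(categorize(tokens, 1))

    # Counting signatures across a corpus of failed compilations would expose
    # concentration patterns of the kind the dissertation reports.
    corpus_signatures = ["ID ID", "ID ID", "NUMBER ID", "ID EOF"]
    print(Counter(corpus_signatures).most_common())
```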

    Understanding people through the aggregation of their digital footprints

    Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 160-172). Every day, millions of people encounter strangers online. We read their medical advice, buy their products, and ask them out on dates. Yet our views of them are very limited; we see individual communication acts rather than the person(s) as a whole. This thesis contends that socially-focused machine learning and visualization of archived digital footprints can improve the capacity of social media to help form impressions of online strangers. Four original designs are presented that each examine the social fabric of a different existing online world. The designs address unique perspectives on the problem of, and opportunities offered by, online impression formation. The first work, Is Britney Spears Spam?, examines a way of prototyping strangers on first contact by modeling their past behaviors across a social network. Landscape of Words identifies cultural and topical trends in large online publics. Personas is a data portrait that characterizes individuals by collating heterogeneous textual artifacts. The final design, Defuse, navigates and visualizes virtual crowds using metrics grounded in sociology. A reflection on these experimental endeavors is also presented, including a formalization of the problem and considerations for future research. A meta-critique by a panel of domain experts completes the discussion. by Aaron Robert Zinman. Ph.D.