A Systematic Mapping Study of Code Quality in Education -- with Complete Bibliography
While functionality and correctness of code have traditionally been the main focus of computing educators, quality aspects of code are receiving increasing attention. High-quality code contributes to the maintainability of software systems and should therefore be a central aspect of computing education. We have conducted a systematic mapping study to give a broad overview of the research conducted in the field of code quality in an educational context. The study investigates paper characteristics, topics, research methods, and the targeted programming languages. We found 195 publications (1976-2022) on the topic in multiple databases, which we systematically coded to answer the research questions. This paper reports on the results and identifies developments, trends, and new opportunities for research in the field of code quality in computing education.
Critiquing Antipatterns In Novice Code
Students in introductory computer science courses are learning to program. Indeed, most students perceive that learning to code is the central topic explored in the courses. Students spend an enormous amount of time struggling to learn the syntax and understand the semantics of a particular language. Instructors spend a similar amount of time reading student code and explaining the meaning of the cryptic error messages displayed by compilers. Messages provided by compilers are intended to give feedback on the adherence of one's code to the language specification and conventions. Unfortunately, these messages are geared towards experts who have a clear understanding of the language syntax and semantics and a deep model of what comprises a program and how a program is developed. These students are novices who lack a fundamental understanding of the structure of a program and have no basic mental model of how a program works. Novices make different kinds of mistakes than experts. Instructors need to spend a lot of time simply assisting novices in using compilers and understanding their output. In addition to mastering the syntax and semantics of their first programming language, novices are exposed to the question of what constitutes good design. Instructors can identify virtuous design choices and articulate areas of improvement. But contact time with students is limited, and waiting for in-person feedback or replies to personal messages can be a critical delay. Novices, still struggling to use the compiler, have not yet developed the sophisticated analytical processes employed by experts, and this is reflected in their design choices and the kinds of mistakes they make. When a novice approaches an instructor with a question, the instructor must often provide a balanced critique that assists the student with understanding both the structure and the design aspects of their own code.
My research has focused on whether we can identify examples of early programming antipatterns that have arisen from our teaching experience, and on describing different ways of detecting them automatically. Novice students may produce code that is close to a correct solution but contains syntactic errors; code critiquers attempt to salvage the promising portions of the student's submission and suggest repairs in ways more meaningful than typical compiler error messages. Alternatively, a student misunderstanding may result in well-formed code that passes unit tests yet contains clear design flaws; through additional analysis, code critiquers can detect and flag these flaws. Finally, certain types of antipatterns can be anticipated and flagged by the instructor, based on the context of the course and the programming activity; code critiquers allow for customizable critique triggers and messages.
This dissertation presents several key contributions to our understanding of novice misconceptions and their representation, diagnosis, and repair using antipatterns. My research focuses on identifying antipatterns and detecting them in novice code, then using this information to provide the student with a meaningful critique of their work. I have developed WebTA, a tool to critique student programs in introductory computer science courses. WebTA is used to teach students test-driven agile development methods through small cycles of teaching, coding integrated with testing, and immediate feedback. Through the use of WebTA in introductory computer science courses since 2014, I have amassed a significant corpus of novice programmer submission data. Lastly, I have compiled a library of antipatterns found in novice code.
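As a rough illustration of the kind of critique rule such a tool might apply, the Java sketch below flags one classic novice antipattern, comparing a boolean expression to a literal, and pairs each hit with an explanatory message rather than a compiler-style diagnostic. It is a minimal sketch with assumed class names and message wording, not WebTA's actual implementation.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.regex.Pattern;

    // Hypothetical, minimal critique rule; not WebTA's actual code.
    public class BooleanLiteralComparisonCheck {

        // Matches "== true", "!= false", and similar literal comparisons.
        private static final Pattern BOOLEAN_LITERAL_COMPARISON =
                Pattern.compile("[!=]=\\s*(true|false)\\b");

        /** Returns one explanatory critique per offending source line. */
        public static List<String> critique(List<String> sourceLines) {
            List<String> critiques = new ArrayList<>();
            for (int i = 0; i < sourceLines.size(); i++) {
                if (BOOLEAN_LITERAL_COMPARISON.matcher(sourceLines.get(i)).find()) {
                    critiques.add("Line " + (i + 1)
                            + ": comparing a boolean to a literal is redundant; "
                            + "use the expression itself (or negate it with '!').");
                }
            }
            return critiques;
        }

        public static void main(String[] args) {
            List<String> submission = List.of(
                    "boolean done = isFinished();",
                    "if (done == true) {",
                    "    System.out.println(\"finished\");",
                    "}");
            critique(submission).forEach(System.out::println);
        }
    }

A real critiquer would work on a parsed representation of the program rather than raw source lines, and would combine many such rules with the course-specific triggers and messages described above.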
The Example Guru: Suggesting Examples to Novice Programmers in an Artifact-Based Context
Programmers in artifact-based contexts could likely benefit from skills that they do not realize exist. We define artifact-based contexts as contexts where programmers have a goal project, like an application or game, which they must figure out how to accomplish and can change along the way. Artifact-based contexts do not have quantifiable goal states, like the solution to a puzzle or the resolution of a bug in task-based contexts. Currently, programmers in artifact-based contexts have to seek out information, but may be unaware of useful information or choose not to seek out new skills. This is especially problematic for young novice programmers in blocks programming environments. Blocks programming environments often lack even minimal in-context support, such as auto-complete or in-context documentation. Novices programming independently in these blocks-based programming environments often plateau in the programming skills and API methods they use. This work aims to encourage novices in artifact-based programming contexts to explore new API methods and skills. One way to support novices may be with examples, as examples are effective for learning and highly available. In order to better understand how to use examples for supporting novice programmers, I first ran two studies exploring novices' use and focus on example code. I used those results to design a system called the Example Guru. The Example Guru suggests example snippets to novice programmers that contain previously unused API methods or code concepts. Finally, I present an approach for semi-automatically generating content for this type of suggestion system. This approach reduces the amount of expert effort required to create suggestions. This work contains three contributions: 1) a better understanding of difficulties novices have using example code, 2) a system that encourages exploration and use of new programming skills, and 3) an approach for generating content for a suggestion system with less expert effort.
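As a loose sketch of the suggestion mechanism the abstract describes, the snippet below recommends examples only for API methods the novice has not yet used. The class name, API method names, and example texts are hypothetical placeholders, not the Example Guru's actual catalogue or code, which targets a blocks-based environment rather than Java.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    // Hypothetical sketch: suggest examples for API methods not yet used.
    public class ExampleSuggester {

        // Maps an API method name to a short illustrative snippet.
        private final Map<String, String> examplesByApiMethod;

        public ExampleSuggester(Map<String, String> examplesByApiMethod) {
            this.examplesByApiMethod = examplesByApiMethod;
        }

        /** Suggest examples only for API methods absent from the novice's project. */
        public List<String> suggest(Set<String> apiMethodsUsedByNovice) {
            List<String> suggestions = new ArrayList<>();
            for (Map.Entry<String, String> entry : examplesByApiMethod.entrySet()) {
                if (!apiMethodsUsedByNovice.contains(entry.getKey())) {
                    suggestions.add("Try '" + entry.getKey() + "': " + entry.getValue());
                }
            }
            return suggestions;
        }

        public static void main(String[] args) {
            ExampleSuggester guru = new ExampleSuggester(Map.of(
                    "playSound", "sprite.playSound(\"pop\") adds audio feedback",
                    "glideTo", "sprite.glideTo(x, y, seconds) moves a sprite smoothly"));
            // The novice's project only uses glideTo, so playSound is suggested.
            guru.suggest(Set.of("glideTo")).forEach(System.out::println);
        }
    }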
Automated Style Feedback for Advanced Beginner Java Programmers
FrenchPress is an Eclipse plug-in that partially automates the task of giving students feedback on their Java programs. It is designed not for novices but for students taking their second or third Java course: students who know enough Java to write a working program but lack the judgment to recognize bad code when they see it. FrenchPress does not diagnose compile-time or run-time errors, or logical errors that produce incorrect output. It targets silent flaws, flaws the student is unable to identify for himself because nothing in the programming environment alerts him.
FrenchPress diagnoses flaws characteristic of programmers who have not yet assimilated the object-oriented idiom. Such shortcomings include misuse of the public modifier, fields that should have been local variables, and instance variables that should have been class constants. Other rules address the all-too-common misunderstanding of the boolean data type. FrenchPress delivers explanatory messages in a vocabulary appropriate for advanced beginners. FrenchPress does not fix the problems it detects; the student must decide whether to change the program.
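To make these flaw categories concrete, here is a hypothetical fragment of advanced-beginner Java showing the shapes of the problems described above; it is illustrative only and does not reproduce FrenchPress's rules or messages.

    // Hypothetical examples of the flaw categories above; not FrenchPress output.
    public class SilentFlawExamples {

        // Flaw: instance variable that should be a class constant.
        // Better: private static final int MAX_SIZE = 100;
        private int maxSize = 100;

        // Flaw: misuse of the public modifier; this exposes an implementation
        // detail that callers have no need to touch directly.
        public int[] data = new int[maxSize];

        // Flaw: field that should have been a local variable; it is only used
        // inside countEvens and carries no meaningful state between calls.
        private int count;

        public int countEvens(int[] values) {
            count = 0;                      // better: int count = 0; (a local)
            for (int v : values) {
                if (v % 2 == 0) {
                    count++;
                }
            }
            return count;
        }

        // Flaw: misunderstanding of the boolean data type; the if/else is
        // redundant because the condition is already a boolean value.
        public boolean hasData() {
            if (data.length > 0) {          // better: return data.length > 0;
                return true;
            } else {
                return false;
            }
        }
    }

Every one of these compiles and behaves correctly, which is precisely why nothing in the programming environment alerts the student to them.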
The plug-in has been tested by undergraduates in the UMass data structures and algorithms course, the target audience for FrenchPress diagnostics. A pilot study took place during winter break 2013-2014 and a preliminary classroom trial in Spring 2014. This dissertation reports results from the final classroom trial covering four programming assignments in Fall 2014. Among students whose code triggered one or more of the diagnostic rules, the percentage who modified their program in response to FrenchPress feedback varied from a high of 59% on the first project to a low of 23% on the second and fourth projects. User satisfaction surveys indicate that among students who said FrenchPress gave them suggestions for improvement, the percentage who found the feedback helpful fluctuated from around 55% on the first and third assignments to 32-40% on the second and fourth assignments. The lower acceptance on the second and fourth projects corresponds to a higher incidence of false positives and other confusing feedback messages. Nevertheless, the percentage of survey respondents who said they were satisfied with FrenchPress's performance ranged from 56% to 66% on all four assignments.
Exploring Automated Code Evaluation Systems and Resources for Code Analysis: A Comprehensive Survey
The automated code evaluation system (AES) is mainly designed to reliably assess user-submitted code. Due to their extensive range of applications and the accumulation of valuable resources, AESs are becoming increasingly popular. Research on the application of AESs and the exploration of their real-world resources for diverse coding tasks is still lacking. In this study, we conducted a comprehensive survey on AESs and their resources. This survey explores the application areas of AESs, the available resources, and resource utilization for coding tasks. AESs are categorized into programming contests, programming learning and education, recruitment, online compilers, and additional modules, depending on their application. We explore the available datasets and other resources of these systems for research, analysis, and coding tasks. Moreover, we provide an overview of machine learning-driven coding tasks, such as bug detection, code review, comprehension, refactoring, search, representation, and repair, which are performed using real-life datasets. In addition, we briefly discuss the Aizu Online Judge (AOJ) platform as a real example of an AES from the perspectives of system design (hardware and software), operation (competition and education), and research. We chose AOJ because of its scalability (programming education, competitions, and practice), open internal features (hardware and software), attention from the research community, open source data (e.g., solution codes and submission documents), and transparency. We also analyze the overall performance of this system and the perceived challenges over the years.
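Since the survey centers on systems that assess user-submitted code, a stripped-down sketch of that core judging step may help fix the idea. Real systems such as AOJ compile and sandbox complete programs and track verdicts, resource limits, and submission metadata; in this sketch the submission is modelled as a plain function over its input so the example stays self-contained, and all names are assumptions rather than any real AES API.

    import java.util.List;
    import java.util.function.Function;

    // Hypothetical, minimal judging loop; real AESs compile and sandbox programs.
    public class MiniJudge {

        record TestCase(int input, int expected) {}

        /** Returns "Accepted" only if every hidden test case matches. */
        static String judge(Function<Integer, Integer> submission, List<TestCase> tests) {
            for (TestCase t : tests) {
                int actual = submission.apply(t.input());
                if (actual != t.expected()) {
                    return "Wrong Answer on input " + t.input()
                            + " (expected " + t.expected() + ", got " + actual + ")";
                }
            }
            return "Accepted";
        }

        public static void main(String[] args) {
            List<TestCase> tests = List.of(new TestCase(2, 4), new TestCase(5, 25));
            System.out.println(judge(n -> n * n, tests)); // a correct submission
            System.out.println(judge(n -> n * 2, tests)); // a buggy submission
        }
    }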