
    Customizable and scalable automated assessment of C/C++ programming assignments

    Correcting exercises in programming courses is a laborious task that has traditionally been performed manually. This, in turn, delays students' access to feedback that could contribute significantly to their training as future professionals. Over the years, several approaches have been proposed to automate the assessment of students' programs. Static analysis is a well-known technique that can partially simulate the manual code review performed by lecturers, which makes it a plausible option for assessing whether students' solutions meet the requirements imposed on an assignment. However, implementing a personalized analysis beyond the rules included in existing tools can be complex for the lecturer without a mechanism that guides the work. In this paper, we present a method for providing automated, specific feedback that immediately informs students about their mistakes in programming courses. To that end, we developed the CAC++ library, which enables the construction of tailored static analysis programs for C/C++ practical assignments. The library allows great flexibility and personalization of the verifications, so they can be adjusted to each particular task, overcoming the limitations of most existing assessment tools. Our approach to providing specific feedback was evaluated over three academic years in a course on object-oriented programming. The library allowed lecturers to reduce the size of the static analysis programs developed for the course. During this period, academic results improved and undergraduates positively valued the aid offered while implementing their assignments. Funding: Universidad de Cádiz, Grant/Award Numbers: sol-201500054192-tra, sol-201600064680-tra; Ministerio de Ciencia, Innovación y Universidades, Grant/Award Number: RTI2018-093608-B-C33; European Regional Development Fund.
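
    The abstract does not reproduce the CAC++ API, so the sketch below is only a hedged illustration of the kind of tailored structural check the paper describes, written in Python against libclang's bindings (clang.cindex) rather than the library itself. The required class and method names and the ban on goto are hypothetical assignment rules invented for the example.

        # Sketch of a tailored static check for one C++ assignment, built on
        # libclang's Python bindings rather than on the CAC++ library itself.
        # REQUIRED_CLASS, REQUIRED_METHOD and the goto ban are hypothetical rules.
        import sys
        from clang import cindex

        REQUIRED_CLASS = "Matrix"        # hypothetical assignment requirement
        REQUIRED_METHOD = "transpose"    # hypothetical assignment requirement

        def check_submission(path: str) -> list[str]:
            """Return feedback messages for one student source file."""
            index = cindex.Index.create()
            tu = index.parse(path, args=["-std=c++17"])
            feedback = []
            found_class = found_method = False
            for node in tu.cursor.walk_preorder():
                if node.kind == cindex.CursorKind.CLASS_DECL and node.spelling == REQUIRED_CLASS:
                    found_class = True
                elif node.kind == cindex.CursorKind.CXX_METHOD and node.spelling == REQUIRED_METHOD:
                    found_method = True
                elif node.kind == cindex.CursorKind.GOTO_STMT:
                    feedback.append(f"{path}:{node.location.line}: 'goto' is not allowed in this assignment")
            if not found_class:
                feedback.append(f"missing required class '{REQUIRED_CLASS}'")
            if not found_method:
                feedback.append(f"missing required method '{REQUIRED_METHOD}'")
            return feedback

        if __name__ == "__main__":
            for message in check_submission(sys.argv[1]) or ["all structural checks passed"]:
                print(message)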

    Introductory programming: a systematic literature review

    As computing becomes a mainstream discipline embedded in the school curriculum and acts as an enabler for an increasing range of academic disciplines in higher education, the literature on introductory programming is growing. Although there have been several reviews focusing on specific aspects of introductory programming, there has been no broad overview of recent trends across the breadth of the field. This paper reports on an ITiCSE working group that conducted a systematic review to gain such an overview of the introductory programming literature. Partitioning the literature into papers addressing the student, teaching, the curriculum, and assessment, we explore trends, highlight advances in knowledge over the past 15 years, and indicate possible directions for future research.

    Teaching Reproducibility to First Year College Students: Reflections From an Introductory Data Science Course

    Modern technology threatens traditional modes of classroom assessment by providing students with automated ways to write essays and take exams. At the same time, it continues to expand the accessibility of computational tools that promise to increase the potential scope and quality of class projects. This paper presents a case study in which students are asked to complete a “reproducible” final project in an introductory data science course using the R programming language. A reproducible project is one whose results and conclusions an instructor can easily regenerate from the submitted materials. Experiences in two small sections of this introductory class suggest that reproducible projects are feasible to implement with only a small increase in assessment difficulty. The sample assignment presented in this paper, along with some proposed adaptations for non-data-science classes, provides a pattern for directly assessing a student’s analysis rather than just the final results.
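
    As a loose sketch of the "regenerate the results from the submitted materials" idea, and not the course's actual workflow, the snippet below re-runs a submitted analysis script and checks that it rewrites the same results file the student handed in. The file names, the Rscript entry point, and the assumption of a deterministic analysis are all illustrative.

        # Sketch of a reproducibility check: re-run the submitted analysis and
        # verify that it regenerates the results file included in the submission.
        # File names and the Rscript entry point are assumptions, not course rules.
        import hashlib
        import subprocess
        from pathlib import Path

        def digest(path: Path) -> str:
            return hashlib.sha256(path.read_bytes()).hexdigest()

        def is_reproducible(project_dir: Path) -> bool:
            results = project_dir / "results.csv"   # results file shipped with the submission
            submitted_digest = digest(results)
            # Re-run the student's analysis; it is expected to rewrite results.csv.
            subprocess.run(["Rscript", "analysis.R"], cwd=project_dir, check=True)
            return digest(results) == submitted_digest

        if __name__ == "__main__":
            verdict = is_reproducible(Path("submission"))
            print("reproducible" if verdict else "results could not be regenerated")

    A byte-for-byte comparison only works when the analysis is deterministic; in practice an instructor might instead compare the substantive numbers or figures.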

    Through a Glass Darkly: Van Orden, McCreary and the Dangers of Transparency in Establishment Clause Jurisprudence

    This thesis primarily reports on an action research project conducted on a course in theoretical computer science (TCS). The course, Algorithms, Data Structures, and Complexity (ADC), is given at KTH Royal Institute of Technology in Stockholm, Sweden. The ADC course is an introduction to TCS, but it resembles and follows courses introducing programming, system development best practices, problem solving, proving, and logic. Because it requires the completion of four programming projects, the course can easily be perceived by students as a programming course. Most previous research in computer science education has concerned programming and introductory courses. The focus of the thesis work has been to understand what subject matter is particularly difficult for students. In three action research cycles, the course has been studied and improved to alleviate the discovered difficulties. We also discuss how the course design may color students’ perceptions of what TCS is. Most of the results are descriptive. Additionally, automated assessment has been introduced in the ADC course as well as in introductory courses for non-CS majors. Automated assessment is appreciated by the students and directs their attention to the importance of program correctness; a drawback is that the exercises in their current form are unlikely to encourage students to take responsibility for correctness themselves. The most difficult tasks of the course are related to proving correctness, solving complex dynamic programming problems, and reductions. A certain confusion regarding the epistemology, tools, and discourse of the ADC course, and of TCS in general, can be glimpsed in the way difficulties manifest themselves. Possible consequences of viewing the highly mathematical problems and tools of ADC from a more practical, programming-oriented perspective are discussed. It is likely that teachers could explicitly address more of the nature and discourse of TCS in order to reduce confusion among students, for instance regarding the use of words and constructs such as “problem”, “verify a solution”, and “proof sketch”. One of the tools used to study difficulties was a self-efficacy survey. No correlation was found between self-efficacy beliefs and graded performance on the course; further investigation of this is beyond the scope of the thesis, but could be done with tasks corresponding more closely and exclusively to each self-efficacy item. Didactics is an additional way for professionals to understand their subject: it is concerned with the teaching and learning of something, and hence sheds light on that “something” from an angle that its practitioners do not always reflect on. Reflecting on didactical aspects of TCS can enrich the understanding of the subject itself, which is one goal of this work.

    Intelligent and adaptive tutoring for active learning and training environments

    Active learning facilitated through interactive and adaptive learning environments differs substantially from traditional instructor-oriented, classroom-based teaching. We present a Web-based e-learning environment that integrates knowledge learning and skills training; how such tools are used most effectively is still an open question. We propose knowledge-level interaction and adaptive feedback and guidance as central features. We discuss these features and evaluate the effectiveness of the Web-based environment, focusing on different aspects of learning behaviour and tool usage. Motivation, acceptance of the approach, learning organisation, and actual tool usage are aspects of behaviour that require different evaluation techniques.

    Software Verification and Graph Similarity for Automated Evaluation of Students' Assignments

    In this paper we promote the introduction of software verification and control flow graph similarity measurement into the automated evaluation of students' programs. We present a new grading framework that merges the results obtained by combining these two approaches with results obtained by automated testing, leading to improved quality and precision of automated grading. The two approaches are also useful for providing comprehensible feedback that can help students improve the quality of their programs. We also present our corresponding tools, which are publicly available and open source. The tools are based on the LLVM low-level intermediate code representation, so they can be applied to a number of programming languages. An experimental evaluation of the proposed grading framework was performed on a corpus of university students' programs written in the programming language C. The results show that automatically generated grades are highly correlated with manually determined grades, suggesting that the presented tools can find real-world application in studying and grading.
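
    The authors' LLVM-based tools are not shown in the abstract; purely to illustrate the graph-similarity ingredient, the toy sketch below compares two control flow graphs by the Jaccard overlap of their labelled edges and blends the score with test results. The graph encoding, the similarity measure, and the weights are assumptions, not the paper's algorithm.

        # Toy illustration of grading by CFG similarity combined with test results.
        # The graph encoding, similarity measure, and weights are assumptions; the
        # paper's actual tools work on LLVM intermediate code.

        # A CFG is encoded as {block_label: [successor_labels]}.
        StudentCFG = dict[str, list[str]]

        def cfg_edges(cfg: StudentCFG) -> set[tuple[str, str]]:
            return {(src, dst) for src, succs in cfg.items() for dst in succs}

        def cfg_similarity(a: StudentCFG, b: StudentCFG) -> float:
            """Jaccard overlap of labelled edges: 1.0 for identical graphs, 0.0 for disjoint ones."""
            ea, eb = cfg_edges(a), cfg_edges(b)
            if not ea and not eb:
                return 1.0
            return len(ea & eb) / len(ea | eb)

        def combined_grade(similarity: float, tests_passed: int, tests_total: int,
                           weight_similarity: float = 0.3) -> float:
            """Blend structural similarity with dynamic test results (weights are illustrative)."""
            test_score = tests_passed / tests_total
            return weight_similarity * similarity + (1 - weight_similarity) * test_score

        if __name__ == "__main__":
            reference = {"entry": ["loop"], "loop": ["loop", "exit"], "exit": []}
            student = {"entry": ["loop"], "loop": ["exit"], "exit": []}
            s = cfg_similarity(reference, student)
            print(f"similarity={s:.2f}, grade={combined_grade(s, 8, 10):.2f}")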

    The BOSS system for on-line submission and assessment of computing assignments

    Practical computing courses that involve significant amounts of programming continue to suffer from increasing student numbers, which makes their delivery and management more difficult to achieve effectively with the available resources. One solution to this problem is to develop methods for automating the submission and testing of student programs, both to support the marking effort and to enable marking tasks to be divided among several individuals while ensuring consistency and rigour throughout. We have developed such methods in our system, called BOSS, and have successfully deployed different versions of it on several courses over a number of years. Here, we describe the original system and its recent enhancements, and discuss the benefits it has provided, both in administration and in improving the learning process.
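
    BOSS itself is not described in implementation detail here; the following is a hedged sketch of the marking-support step the abstract describes, namely running a submitted program against a bank of test cases so that every marker sees the same outcome. The directory layout and executable name are assumptions.

        # Hedged sketch of automated testing support for marking: run a compiled
        # submission on each stored input and compare against the expected output.
        # The directory layout and executable name are assumptions, not BOSS specifics.
        import subprocess
        from pathlib import Path

        def run_tests(executable: str, tests_dir: Path, timeout_s: int = 5) -> list[tuple[str, bool]]:
            results = []
            for case in sorted(tests_dir.glob("*.in")):
                expected = case.with_suffix(".out").read_text()
                try:
                    proc = subprocess.run(
                        [executable],
                        input=case.read_text(),
                        capture_output=True,
                        text=True,
                        timeout=timeout_s,   # guard against non-terminating submissions
                    )
                    # Tolerate trailing whitespace differences in the output.
                    passed = proc.returncode == 0 and proc.stdout.strip() == expected.strip()
                except subprocess.TimeoutExpired:
                    passed = False
                results.append((case.stem, passed))
            return results

        if __name__ == "__main__":
            outcomes = run_tests("./student_solution", Path("tests"))
            for name, ok in outcomes:
                print(f"{name}: {'pass' if ok else 'fail'}")
            print(f"{sum(ok for _, ok in outcomes)}/{len(outcomes)} tests passed")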

    Evaluation of a tool for Java structural specification checking

    Although a number of tools for evaluating Java code functionality and style exist, little work has been done in a distance learning context on automated marking of Java programs with respect to structural specifications. Such automated checks support human markers in assessing students' work and evaluating their own marking; online automated marking; students checking code before submitting it for marking; and question setters evaluating the completeness of the questions they set. This project developed and evaluated a prototype tool that performs an automated check of a Java program's correctness with respect to a structural specification. Questionnaires and interviews were used to gather feedback on the usefulness of the tool as a marking aid for humans and on its potential usefulness to students for self-assessment while working on their assignments. Markers were asked to compare the usefulness of structural specification testing with other kinds of support, including syntax error assistance, style checking and functionality testing. Initial results suggest that most markers using the structural specification checking tool found it useful, and some reported that it increased their accuracy in marking. Reasons for not using the tool included lack of time and the simplicity of the assignment it was trialled on. Some reservations were expressed about reliance on tools for assessment, both for markers and for students. A need for advice on incorporating such tools into the marking workflow is also identified.
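
    The prototype tool itself is not specified in the abstract; as a hedged illustration of checking a Java program against a structural specification, the sketch below compiles a submission and uses the JDK's javap disassembler to confirm that each required member declaration is present. The class name and the required signatures are hypothetical.

        # Hedged sketch of a structural specification check for a Java submission:
        # compile it, then use the JDK's javap tool to list declared members and
        # verify that each required signature appears. The class name and the
        # required signatures below are hypothetical, not from the evaluated tool.
        import subprocess

        CLASS_NAME = "BankAccount"                 # hypothetical assignment class
        REQUIRED_SIGNATURES = [                    # hypothetical structural specification
            "public void deposit(double)",
            "public double getBalance()",
        ]

        def declared_members(java_file: str, class_name: str) -> str:
            subprocess.run(["javac", java_file], check=True)
            listing = subprocess.run(
                ["javap", "-p", class_name],       # -p lists private members as well
                capture_output=True, text=True, check=True,
            )
            return listing.stdout

        def check_structure(java_file: str) -> list[str]:
            members = declared_members(java_file, CLASS_NAME)
            return [sig for sig in REQUIRED_SIGNATURES if sig not in members]

        if __name__ == "__main__":
            missing = check_structure(f"{CLASS_NAME}.java")
            if missing:
                for sig in missing:
                    print(f"missing required member: {sig}")
            else:
                print("structural specification satisfied")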