1,101 research outputs found

    Introductory programming: a systematic literature review

    As computing becomes a mainstream discipline embedded in the school curriculum and acts as an enabler for an increasing range of academic disciplines in higher education, the literature on introductory programming is growing. Although there have been several reviews that focus on specific aspects of introductory programming, there has been no broad overview of the literature exploring recent trends across the breadth of introductory programming. This paper is the report of an ITiCSE working group that conducted a systematic review in order to gain an overview of the introductory programming literature. Partitioning the literature into papers addressing the student, teaching, the curriculum, and assessment, we explore trends, highlight advances in knowledge over the past 15 years, and indicate possible directions for future research.

    Understanding the Effects of Using Parsons Problems to Scaffold Code Writing for Students with Varying CS Self-Efficacy Levels

    Introductory programming courses aim to teach students to write code independently. However, transitioning from studying worked examples to generating their own code is often difficult and frustrating for students, especially those with lower CS self-efficacy in general. Therefore, we investigated the impact of using Parsons problems as a code-writing scaffold for students with varying levels of CS self-efficacy. Parsons problems are programming tasks where students arrange mixed-up code blocks in the correct order. We conducted a between-subjects study with undergraduate students (N=89) on a topic where students have limited code-writing expertise. Students were randomly assigned to one of two conditions. Students in one condition practiced writing code without any scaffolding, while students in the other condition were provided with scaffolding in the form of an equivalent Parsons problem. We found that, for students with low CS self-efficacy levels, those who received scaffolding achieved significantly higher practice performance and in-practice problem-solving efficiency compared to those without any scaffolding. Furthermore, when given Parsons problems as scaffolding during practice, students with lower CS self-efficacy were more likely to solve them. In addition, students with higher pre-practice knowledge of the topic were more likely to use the Parsons scaffolding effectively. This study provides evidence for the benefits of using Parsons problems to scaffold students' code-writing activities. It also has implications for optimizing the Parsons scaffolding experience, including providing personalized and adaptive Parsons problems based on the student's current problem-solving status.

    Comment: Peer-reviewed; accepted for publication in the proceedings of the 2023 ACM Koli Calling International Conference on Computing Education Research.
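    To make the task format concrete, here is a minimal sketch in Python (ours, not taken from the study; the example task and function names are assumptions) of a Parsons problem: the scaffold shows the solution's lines in shuffled order, and a simple checker accepts the student's arrangement only when it reproduces the reference solution.

    import random

    # Hypothetical example task, not from the study: sum the even numbers in a list.
    # The reference solution is stored line by line; each line becomes one block.
    SOLUTION = [
        "def sum_evens(numbers):",
        "    total = 0",
        "    for n in numbers:",
        "        if n % 2 == 0:",
        "            total += n",
        "    return total",
    ]

    def make_parsons_problem(solution_lines):
        """Return the mixed-up code blocks that are shown to the student."""
        blocks = list(solution_lines)
        random.shuffle(blocks)
        return blocks

    def check_answer(student_order, solution_lines=SOLUTION):
        """The answer is correct when the blocks are back in their original order."""
        return list(student_order) == list(solution_lines)

    scrambled = make_parsons_problem(SOLUTION)
    for block in scrambled:
        print(repr(block))
    print(check_answer(SOLUTION))  # a perfectly reordered answer passes: True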

    Emergence of computing education as a research discipline

    This thesis investigates the changing nature and status of computing education research (CER) over a number of years, specifically addressing the question of whether computing education can legitimately be considered a research discipline. The principal approach to addressing this question is an examination of the published literature in computing education conferences and journals. A classification system was devised for this literature, one goal of the system being to clearly identify some publications as research, once a suitable definition of research was established. When the system is applied to a corpus of publications, it becomes possible to determine the proportion of those publications that are classified as research, and thence to detect trends over time and similarities and differences between publication venues. The classification system has been applied to all of the papers over several years in a number of major computing education conferences and journals. Much of the classification was done by the author alone, and the remainder by a team that he formed in order to assess the inter-rater reliability of the classification system. This classification work gave rise to two subsequent projects, led by Associate Professor Judy Sheard and Professor Lauri Malmi, that devised and applied further classification systems to examine the research approaches and methods used in the work reported in computing education publications. Classification of nearly 2000 publications over ranges of 3-10 years uncovers both strong similarities and distinct differences between publication venues. It also establishes clear evidence of a substantial growth in the proportion of research papers over the years in question. These findings are considered in the light of published perspectives on what constitutes a discipline of research, and lead to a confident assertion that computing education can now rightly be considered a discipline of research.

    Second level computer science: The Irish K-12 journey begins

    This paper initially describes the introduction of a new computer science subject for the Irish Leaving Certificate course, comparable to US high-school exit exams (AP Computer Science Principles) or UK A-level computer science. In doing so, the authors wish to raise international awareness of the new subject’s structure and content. Second, this paper presents the current work of the authors, consisting of early initiatives to give the new subject the highest chance of success. The initiatives have two facets. The first is the delivery of two-hour computing camps at second-level schools (to address stereotypes and provide insight into what computer science really is), reaching 2,943 students in 95 schools between September 2017 and June 2018. The second is a series of 22 teacher continuing professional development (CPD) sessions delivered to just over 500 teachers. Early findings are presented, showing potentially concerning trends for gender diversity and CPD development. A call is then raised to the international computer science education community for wisdom and suggestions drawn from prior experience, in order to obtain feedback and recommendations for the new subject and the authors’ current initiatives, to address early concerns and help develop the initiatives further.

    Semi-automatic assessment of unrestrained Java code: a Library, a DSL, and a workbench to assess exams and exercises

    Automated marking of multiple-choice exams is of great interest in university courses with large numbers of students, and for this reason it has been adopted systematically in almost all universities. Automatic assessment of source code is, however, far less widespread. There are several reasons for this. One is that almost all existing systems are based on output comparison against a gold standard: if the output is as expected, the code is correct; otherwise, it is reported as wrong, even if there is only a single typo in the code, and why it is wrong remains a mystery. In general, assessment tools treat the code as a black box, and they only assess its externally observable behavior. In this work we introduce a new code assessment method that also verifies properties of the code, allowing code to be marked even when it is only partially correct. We also report on the use of this system in a real university context, showing that the system automatically assesses around 50% of the work.

    This work has been partially supported by the EU (FEDER) and the Spanish Ministerio de Economía y Competitividad (Secretaría de Estado de Investigación, Desarrollo e Innovación) under grant TIN2013-44742-C4-1-R and by the Generalitat Valenciana under grant PROMETEOII2015/013. David Insa was partially supported by the Spanish Ministerio de Educación under FPU grant AP2010-4415.

    Insa Cabrera, D.; Silva, J. (2015). Semi-automatic assessment of unrestrained Java code: a Library, a DSL, and a workbench to assess exams and exercises. ACM. https://doi.org/10.1145/2729094.2742615
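    To make the contrast between plain output comparison and property-based marking concrete, here is a minimal sketch in Python (ours, not the authors' Java library or DSL; the property names and grading scheme are assumptions) in which each property of a submission is checked separately, so partially correct code still earns partial marks.

    # Hypothetical sketch of property-based marking; the authors' actual tool
    # targets Java and uses a DSL, so this only illustrates the general idea.
    def grade_submission(student_fn, reference_fn, test_inputs):
        """Award one mark per property that holds, instead of all-or-nothing."""
        properties = {
            "returns an int": lambda x: isinstance(student_fn(x), int),
            "result is never negative": lambda x: student_fn(x) >= 0,
            "matches the reference output": lambda x: student_fn(x) == reference_fn(x),
        }
        marks = {}
        for name, check in properties.items():
            try:
                marks[name] = all(check(x) for x in test_inputs)
            except Exception:
                marks[name] = False  # a crash fails that property, not the whole exam
        return marks

    # Example: a buggy submission for "absolute value" that forgets to negate.
    reference = abs
    def buggy(x):
        return x  # wrong for negative inputs, correct otherwise

    print(grade_submission(buggy, reference, [-3, 0, 4]))
    # {'returns an int': True, 'result is never negative': False,
    #  'matches the reference output': False}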

    Using theory to inform capacity-building: Bootstrapping communities of practice in computer science education research

    In this paper, we describe our efforts in the deliberate creation of a community of practice of researchers in computer science education (CSEd). We understand community of practice in the sense in which Wenger describes it, whereby the community is characterized by mutual engagement in a joint enterprise that gives rise to a shared repertoire of knowledge, artefacts, and practices. We first identify CSEd as a research field in which no shared paradigm exists, and then we describe the Bootstrapping project, its metaphor, structure, rationale, and delivery, as designed to create a community of practice of CSEd researchers. Features of other projects are also outlined that have similar aims of capacity building in disciplinary-specific pedagogic enquiry. A theoretically derived framework for evaluating the success of endeavours of this type is then presented, and we report the results from an empirical study. We conclude with four open questions for our project and others like it: Where is the locus of a community of practice? Who are the core members? Do capacity-building models transfer to other disciplines? Can our theoretically motivated measures of success apply to other projects of the same nature?

    Valuing computer science education research?

    This paper critically enquires into the value systems which rule the activities of teaching and research. This critique is intended to demonstrate the application of critical enquiry in Computer Science Education Research and therefore uses critical theory as a method of analysis. A framework of Research as a Discourse is applied to explore how the notions of research as opposed to teaching are presented, and how discipline and research communities are sustained. The concept of a discourse, based upon the work of Foucault, enables critical insight into the processes which regulate forms of thought. This paper positions the field of Computer Science Education Research, as an illustrative case, within the broader discourse of Research, and argues that Computer Science Education Researchers and educators need to understand and engage in this discourse and shape it to their own ends.

    Analysis of Students' Peer Reviews to Crowdsourced Programming Assignments

    We have used a tool called CrowdSorcerer that allows students to create programming assignments. The students are given a topic by a teacher, after which they design a programming assignment: the assignment description, the code template, a model solution, and a set of input-output tests. The created assignments are then peer reviewed by other students on the course. We study students' peer reviews of these student-generated assignments, focusing on the differences between novice and experienced programmers. We then analyze whether the exercises created by experienced programmers are rated higher in quality than those created by novices. Additionally, we investigate the differences between novices and experienced programmers as peer reviewers: can novices review assignments as well as experienced programmers?
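    As an illustration of the assignment components listed above (description, code template, model solution, input-output tests), here is a minimal sketch in Python (ours, not CrowdSorcerer's actual data model or grader; the field names and test runner are assumptions) showing how such a student-generated exercise could be represented and its input-output tests evaluated.

    import contextlib
    import io
    from dataclasses import dataclass

    # Hypothetical data model; the field names mirror the components named in the
    # abstract, not CrowdSorcerer's internal format.
    @dataclass
    class Assignment:
        description: str
        code_template: str
        model_solution: str
        io_tests: list  # list of (stdin, expected stdout) pairs

    def run_io_tests(solution_src, io_tests):
        """Run the solution once per test case and compare its printed output."""
        passed = 0
        for given_input, expected in io_tests:
            lines = iter(given_input.splitlines())
            scope = {"input": lambda _=None: next(lines)}  # feed stdin line by line
            buffer = io.StringIO()
            with contextlib.redirect_stdout(buffer):
                exec(solution_src, scope)
            if buffer.getvalue().strip() == expected.strip():
                passed += 1
        return passed, len(io_tests)

    assignment = Assignment(
        description="Read an integer and print its square.",
        code_template="n = int(input())\n# your code here\n",
        model_solution="n = int(input())\nprint(n * n)\n",
        io_tests=[("3\n", "9"), ("10\n", "100")],
    )
    print(run_io_tests(assignment.model_solution, assignment.io_tests))  # (2, 2)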