
    Analysis of Students' Peer Reviews to Crowdsourced Programming Assignments

    We have used a tool called CrowdSorcerer that allows students to create programming assignments. The students are given a topic by a teacher, after which they design a programming assignment: the assignment description, the code template, a model solution, and a set of input-output tests. The created assignments are then peer reviewed by other students on the course. We study students' peer reviews of these student-generated assignments, focusing on the differences between novice and experienced programmers. We then analyze whether the exercises created by experienced programmers are rated higher in quality than those created by novices. Additionally, we investigate the differences between novices and experienced programmers as peer reviewers: can novices review assignments as well as experienced programmers?
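
    The abstract does not detail CrowdSorcerer's internal format; purely as an illustration, the four assignment parts it lists could be modeled as a record like the following (all names here are hypothetical, not taken from the tool):

```python
# Hypothetical sketch of the four parts of a CrowdSorcerer-style assignment;
# the class and field names are illustrative, not the tool's actual schema.
from dataclasses import dataclass, field

@dataclass
class CrowdsourcedAssignment:
    description: str     # the task statement written by the student
    code_template: str   # starter code handed to the solver
    model_solution: str  # the author's reference solution
    io_tests: list[tuple[str, str]] = field(default_factory=list)  # (input, expected output)

assignment = CrowdsourcedAssignment(
    description="Print the square of the given integer.",
    code_template="n = int(input())\n# your code here\n",
    model_solution="n = int(input())\nprint(n * n)\n",
    io_tests=[("3", "9"), ("5", "25")],
)
```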

    Crowdsourcing Content Creation for SQL Practice

    Crowdsourcing refers to the act of using the crowd to create content or to collect feedback on particular tasks or ideas. Within computer science education, crowdsourcing has been used, for example, to create rehearsal questions and programming assignments. As a part of their computer science education, students often learn about relational databases and how to work with them using SQL statements. In this article, we describe a system for practicing SQL statements. The system uses teacher-provided topics and assignments, augmented with crowdsourced assignments and reviews. We study how students use the system, what sort of feedback students provide on the teacher-generated and crowdsourced assignments, and how practice affects the feedback. Our results suggest that students rate assignments highly, and there are only minor differences between assignments generated by students and assignments generated by the instructor.
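
    The article above does not describe how the system checks statements; one plausible sketch, using an in-memory SQLite database and a hypothetical helper function, is to run the model statement and a student's statement against the same seeded data and compare the result sets:

```python
# Illustrative only: not the described system's implementation. A crowdsourced
# SQL assignment could be auto-checked by comparing result sets against a
# model statement on a shared seeded database.
import sqlite3

def check_sql(schema: str, seed: str, model_sql: str, student_sql: str) -> bool:
    conn = sqlite3.connect(":memory:")
    conn.executescript(schema)  # create tables
    conn.executescript(seed)    # insert sample rows
    expected = conn.execute(model_sql).fetchall()
    actual = conn.execute(student_sql).fetchall()
    conn.close()
    # order-insensitive comparison of the two result sets
    return sorted(map(repr, expected)) == sorted(map(repr, actual))

schema = "CREATE TABLE course (id INTEGER PRIMARY KEY, name TEXT);"
seed = "INSERT INTO course (name) VALUES ('Databases'), ('Algorithms');"
print(check_sql(schema, seed,
                "SELECT name FROM course ORDER BY name;",
                "SELECT name FROM course ORDER BY name ASC;"))  # True
```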

    On the Quality of Crowdsourced Programming Assignments

    Crowdsourcing has been used in computer science education to alleviate teachers' workload in creating course content, and as a learning and revision method for students through its use in educational systems. Tools that utilize crowdsourcing can be a great way for students to further familiarize themselves with course concepts, all while creating new content for their peers and for future course iterations. In this study, student-created programming assignments from the second week of an introductory Java programming course are examined alongside the peer reviews these assignments received. The quality of the assignments and the peer reviews is inspected, for example, by comparing the peer reviews with expert reviews using inter-rater reliability. The purpose of this study is to examine what kinds of programming assignments novice students create, and whether the same novice students can act as reliable reviewers. While it is not possible to draw definitive conclusions from the results of this study due to limitations concerning the usability of the tool, the results indicate that novice students are able to recognise differences in programming assignment quality, especially with sufficient guidance and well-thought-out instructions.
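
    As a worked illustration of the inter-rater reliability comparison mentioned above, the following minimal sketch computes Cohen's kappa between matched peer and expert ratings; the rating values are invented for the example:

```python
# Minimal sketch: Cohen's kappa between two raters over the same items.
# Assumes each list holds one categorical/ordinal rating per item and that
# the raters do not agree by chance on every item (expected < 1).
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

peer   = [3, 4, 2, 5, 4, 3, 1, 4]  # invented peer ratings
expert = [3, 4, 3, 5, 4, 2, 1, 4]  # invented expert ratings
print(f"kappa = {cohens_kappa(peer, expert):.2f}")  # ~0.67
```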

    Lessons Learned From Four Computing Education Crowdsourcing Systems

    Crowdsourcing is a general term that describes the practice of many individuals working collectively to achieve a common goal or complete a task, often involving the generation of content. In an educational context, crowdsourcing of learning materials, where students create resources that can be used by other learners, offers several benefits. Students benefit from the act of producing resources as well as from using them. Despite these benefits, instructors may be hesitant to adopt crowdsourcing for several reasons, such as concerns about the quality of content produced by students and the perceptions students may have of creating resources for their peers. While prior work has explored crowdsourcing concerns within the context of individual tools, lessons that generalise across multiple platforms and are derived from practical use provide considerably more robust insights. In this perspective article, we present four crowdsourcing tools that we have developed and used in computing classrooms. From our previous studies and experience, we derive lessons which shed new light on some of the concerns that are typical for instructors looking to adopt such tools. We find that, across multiple contexts, students are capable of generating high-quality learning content that provides good coverage of key concepts. Although students do appear hesitant to engage with new kinds of activities, various types of incentives have proven effective. Finally, although studies on learning effects have shown mixed results, no negative outcomes have been observed. In light of these lessons, we hope to see greater uptake of crowdsourcing in computing education.

    Experiences from Teaching Automated Testing with CrowdSorcerer

    Software testing is an important process for ensuring a program's quality. However, testing has not traditionally been a very substantial part of computer science education. Some attempts to integrate it into the curriculum have been made, but best practices remain an open question. This thesis discusses multiple attempts at teaching software testing over the years. It also introduces CrowdSorcerer, a system for gathering programming assignments with tests from students, which has been used in introductory programming courses at the University of Helsinki. To study whether students benefit from creating assignments with CrowdSorcerer, we analysed the number of assignments and tests they created and whether these counts correlate with their performance on a testing-related question in the course exam. We also gathered feedback from the students on their experiences using CrowdSorcerer. Looking at the results, it seems that more research on how to teach testing would be beneficial. Improving CrowdSorcerer would also be worthwhile.
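
    The correlation analysis mentioned above could, for instance, be run as a simple Pearson correlation; the sketch below uses invented numbers and Python's standard statistics module (3.10+), not the thesis's actual analysis code:

```python
# Sketch of a Pearson correlation between the number of tests a student
# wrote and their score on a testing-related exam question. All data here
# is invented for illustration.
from statistics import correlation  # Pearson's r, Python 3.10+

tests_written = [0, 2, 5, 1, 4, 3, 6, 2]  # tests created per student
exam_score    = [1, 2, 4, 1, 3, 3, 5, 3]  # exam question score per student
print(f"Pearson r = {correlation(tests_written, exam_score):.2f}")
```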

    Spam Elimination and Bias Correction: Ensuring Label Quality in Crowdsourced Tasks

    Crowdsourcing is proposed as a powerful mechanism for accomplishing large-scale tasks via anonymous workers online. It has been demonstrated as an effective and important approach for collecting labeled data in application domains which require human intelligence, such as image labeling, video annotation, and natural language processing. Despite its promise, one big challenge remains in crowdsourcing systems: the difficulty of controlling the quality of the crowd. Workers usually have diverse education levels, personal preferences, and motivations, leading to unknown work performance while completing a crowdsourced task. Some are reliable, and some might provide noisy feedback. It is therefore natural to apply worker filtering approaches, which recognize and tackle noisy workers, to crowdsourcing applications in order to obtain high-quality labels. The work presented in this dissertation discusses this area of research and proposes efficient probabilistic worker filtering models to distinguish various types of poor-quality workers. Most of the existing literature on worker filtering either concentrates only on binary labeling tasks, or fails to separate the low-quality workers whose label errors can be corrected from the other spam workers (whose label errors cannot be corrected). As such, we first propose a Spam Removing and De-biasing Framework (SRDF) to deal with worker filtering in labeling tasks with numerical label scales. The developed framework can detect spam workers and biased workers separately. Biased workers are defined as those who show tendencies of providing higher (or lower) labels than the truths; their errors can be corrected. To tackle the biasing problem, an iterative bias detection approach is introduced to recognize the biased workers. The spam filtering algorithm eliminates three types of spam workers: random spammers who provide random labels, uniform spammers who give the same label for most items, and sloppy workers who offer low-accuracy labels. Integrating the spam filtering and bias detection approaches into aggregating algorithms, which infer truths from labels obtained from crowds, leads to high-quality consensus results.
    The common characteristic of random spammers and uniform spammers is that they provide useless feedback without making an effort on the labeling task, so it is not necessary to distinguish between them. In addition, the removal of sloppy workers has a great impact on the detection of biased workers within the SRDF framework. To combat these problems, a different worker classification is presented in this dissertation, in which biased workers are treated as a subcategory of sloppy workers. Finally, an ITerative Self Correcting - Truth Discovery (ITSC-TD) framework is proposed, which can reliably recognize biased workers in ordinal labeling tasks based on a probabilistic bias detection model. ITSC-TD estimates true labels by applying an optimization-based truth discovery method, which minimizes overall label errors by assigning different weights to workers. The typical tasks posted on popular crowdsourcing platforms, such as MTurk, are simple tasks: low in complexity, independent, and quick to complete. Complex tasks, however, often require the crowd workers to possess specialized skills in the task domain, so this type of task is more prone to poor-quality feedback from crowds than simple tasks are. We therefore propose a multiple views approach for obtaining high-quality consensus labels in complex labeling tasks. In this approach, each view is defined as a labeling critique or rubric, which aims to make workers aware of the desirable work characteristics or goals. Combining the view labels yields the overall estimated label for each item. The multiple views approach is developed under the hypothesis that workers' performance may differ from one view to another, so varied weights are assigned to different views for each worker. Additionally, the ITSC-TD framework is integrated into the multiple views model to achieve high-quality estimated truths for each view. Next, we propose a Semi-supervised Worker Filtering (SWF) model to eliminate spam workers who assign random labels to each item. The SWF approach conducts worker filtering with a limited set of gold truths available a priori. Each worker is associated with a spammer score, estimated via the developed semi-supervised model, and low-quality workers are efficiently detected by comparing the spammer score with a predefined threshold value. The efficiency of all the developed frameworks and models is demonstrated on simulated and real-world data sets. By comparing the proposed frameworks to a set of state-of-the-art crowdsourcing methodologies, such as the expectation-maximization-based aggregating algorithm, GLAD, and the optimization-based truth discovery approach, up to 28.0% improvement can be obtained in the accuracy of true label estimation.
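
    As a rough sketch of the optimization-based truth discovery idea the abstract describes (in the style of CRH-like methods, not the dissertation's actual ITSC-TD implementation), the following alternates between estimating truths as weighted averages and down-weighting workers whose labels deviate from those estimates:

```python
# Sketch of iterative weighted truth discovery for numeric labels.
# Workers whose labels sit far from the current truth estimates lose
# influence, so a random spammer's labels are gradually discounted.
import math

def truth_discovery(labels, n_iter=20):
    """labels[w][i] = numeric label that worker w gave to item i."""
    n_workers, n_items = len(labels), len(labels[0])
    weights = [1.0] * n_workers
    truths = [0.0] * n_items
    for _ in range(n_iter):
        # truth step: weighted average of labels per item
        total = sum(weights)
        for i in range(n_items):
            truths[i] = sum(weights[w] * labels[w][i] for w in range(n_workers)) / total
        # weight step: workers with a larger share of the total error
        # receive smaller weights (w = -log(error_w / sum of errors))
        errors = [sum((labels[w][i] - truths[i]) ** 2 for i in range(n_items))
                  for w in range(n_workers)]
        total_err = sum(errors) or 1e-12
        weights = [-math.log((e + 1e-12) / total_err) for e in errors]
    return truths, weights

labels = [
    [4, 3, 5, 2],   # careful worker
    [4, 3, 4, 2],   # careful worker
    [1, 5, 1, 5],   # random spammer
]
truths, weights = truth_discovery(labels)
print([round(t, 2) for t in truths], [round(w, 2) for w in weights])
```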

    Social code review tool for programming education

    Thesis (M. Eng.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. By Mason Tang. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 63-65).
    Caesar is a distributed, social code review tool designed for the specific constraints and goals of a programming course. Caesar is capable of scaling to a large and diverse reviewer population, provides automated tools for increasing reviewer efficiency, and implements a social web interface for reviewing that encourages discussion and participation. Our system is implemented in three loosely coupled components: a language-specific code preprocessor that partitions code into small pieces, filters out uninteresting ones, runs static analysis, and detects clusters of similar code; an incremental task router that dynamically assigns reviewers to tasks; and a language-agnostic web interface for reviewing code. Our evaluation using actual student code and a user study indicate that Caesar provides a significant improvement over existing code review workflows and interfaces. We also believe that this work contributes a modular framework for code reviewing systems that can be easily extended and improved.
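
    The thesis defines how Caesar's task router actually works; the following is only a hypothetical sketch of the general incremental-routing idea, assigning each incoming code chunk to the currently least-loaded reviewers:

```python
# Hypothetical sketch (not Caesar's implementation) of incremental task
# routing: each arriving code chunk goes to the least-loaded reviewers,
# keeping review workload balanced as chunks stream in.
import heapq

def route_chunks(chunks, reviewers, per_chunk=2):
    """Assign each chunk to `per_chunk` least-loaded reviewers."""
    load = [(0, r) for r in reviewers]   # (tasks assigned so far, reviewer)
    heapq.heapify(load)
    assignments = {}
    for chunk in chunks:
        picked = [heapq.heappop(load) for _ in range(per_chunk)]
        assignments[chunk] = [r for _, r in picked]
        for n, r in picked:
            heapq.heappush(load, (n + 1, r))  # record the extra task
    return assignments

print(route_chunks(["chunk1", "chunk2", "chunk3"], ["alice", "bob", "carol"]))
```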