
    Crowdsourcing Content Creation for SQL Practice

    Crowdsourcing refers to the act of using a crowd to create content or to collect feedback on particular tasks or ideas. Within computer science education, crowdsourcing has been used, for example, to create rehearsal questions and programming assignments. As a part of their computer science education, students often learn about relational databases and how to work with them using SQL statements. In this article, we describe a system for practicing SQL statements. The system uses teacher-provided topics and assignments, augmented with crowdsourced assignments and reviews. We study how students use the system, what sort of feedback students provide on the teacher-generated and crowdsourced assignments, and how practice affects that feedback. Our results suggest that students rate assignments highly, and that there are only minor differences between assignments generated by students and assignments generated by the instructor. Peer reviewed.
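    A system for practicing SQL statements typically needs to judge student submissions automatically. As a hypothetical sketch (not the paper's implementation), one common approach is to run the student's query and a reference solution against the same sample database and compare the result sets; all table and column names below are illustrative.

```python
import sqlite3

# Hypothetical sketch: check a student's SQL answer against a reference
# solution by comparing result sets on a small in-memory sample database.
# The schema and data are invented for illustration.

def check_submission(student_sql: str, reference_sql: str) -> bool:
    """Return True if both queries produce the same rows on the sample data."""
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE course (id INTEGER PRIMARY KEY, name TEXT, credits INTEGER);
        INSERT INTO course VALUES (1, 'Databases', 5), (2, 'Algorithms', 5),
                                  (3, 'Statistics', 3);
    """)
    # Sort rows so answers that differ only in row order still count as equal.
    student = sorted(conn.execute(student_sql).fetchall())
    reference = sorted(conn.execute(reference_sql).fetchall())
    conn.close()
    return student == reference
```

    Comparing sorted result sets is a deliberately forgiving check: it accepts any query that returns the right rows, regardless of how the student wrote it.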

    Lessons Learned From Four Computing Education Crowdsourcing Systems

    Crowdsourcing is a general term that describes the practice of many individuals working collectively to achieve a common goal or complete a task, often involving the generation of content. In an educational context, crowdsourcing of learning materials, where students create resources that can be used by other learners, offers several benefits. Students benefit from the act of producing resources as well as from using them. Despite these benefits, instructors may be hesitant to adopt crowdsourcing for several reasons, such as concerns about the quality of content produced by students and the perceptions students may have of creating resources for their peers. While prior work has explored crowdsourcing concerns within the context of individual tools, lessons that are generalisable across multiple platforms and derived from practical use can provide considerably more robust insights. In this perspective article, we present four crowdsourcing tools that we have developed and used in computing classrooms. From our previous studies and experience, we derive lessons that shed new light on some of the concerns typical of instructors looking to adopt such tools. We find that across multiple contexts, students are capable of generating high-quality learning content that provides good coverage of key concepts. Although students do appear hesitant to engage with new kinds of activities, various types of incentives have proven effective. Finally, although studies on learning effects have shown mixed results, no negative outcomes have been observed. In light of these lessons, we hope to see a greater uptake of crowdsourcing in computing education. Peer reviewed.

    Experiences from Learnersourcing SQL Exercises: Do They Cover Course Topics and Do Students Use Them?

    Publisher Copyright: © 2023 Copyright held by the owner/author(s). Learnersourcing is an emerging phenomenon in computing education research and practice. In learnersourcing, a crowd of students participates in the creation of course resources such as exercises, written materials, and educational videos. In computing education research, learnersourcing has been studied especially for the creation of multiple-choice questions and programming exercises, where prior work has suggested that learnersourcing can have multiple benefits for teachers and students alike. One result in prior studies is that when students create learnersourced content, the content covers much of the learning objectives of the course. The present work expands on this stream of work by studying the use of a learnersourcing system in the context of teaching SQL. We study to what extent learnersourced SQL exercises cover course topics, and to what extent students complete learnersourced exercises. Our results align with previous learnersourcing studies, empirically demonstrating that learnersourced content covers instructor-specified course topics and that students actively work on the learnersourced exercises. We discuss the impact of these results on teaching with learnersourcing, highlight possible explanations for our observations, and outline directions for future research on learnersourcing. Peer reviewed.
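    Measuring whether learnersourced exercises cover course topics requires mapping each exercise onto instructor-specified topics. As a purely illustrative sketch (not the paper's method), a simple baseline is keyword matching against the SQL in each exercise's solution; the topic list and keywords below are assumptions.

```python
# Illustrative sketch: estimate topic coverage of a set of learnersourced SQL
# exercises by keyword matching on their reference solutions. The topic names
# and keyword lists are invented, not taken from the study.

COURSE_TOPICS = {
    "filtering": ["where"],
    "joins": ["join"],
    "aggregation": ["group by", "count", "sum", "avg"],
    "ordering": ["order by"],
}

def topic_coverage(solutions):
    """Map each topic to the fraction of exercises whose solution touches it."""
    lowered = [s.lower() for s in solutions]
    return {
        topic: sum(any(kw in s for kw in kws) for s in lowered) / len(lowered)
        for topic, kws in COURSE_TOPICS.items()
    }
```

    Keyword matching over-counts (a keyword can appear in a string literal) and under-counts (a topic can be exercised without its keyword), so it is only a first approximation of the coverage analyses such studies perform.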

    FEMwiki: crowdsourcing semantic taxonomy and wiki input to domain experts while keeping editorial control: Mission Possible!

    Highly specialized professional communities of practice (CoP) inevitably need to operate across geographically dispersed areas: members frequently need to interact and share professional content. Crowdsourcing using wiki platforms provides a novel way for a professional community to share ideas and collaborate on content creation, curation, maintenance and sharing. This is the aim of the Field Epidemiological Manual wiki (FEMwiki) project, which enables online collaborative content sharing and interaction for field epidemiologists around a growing training wiki resource. However, while user contributions are the driving force for content creation, any medical information resource needs to maintain editorial control and quality assurance. This requirement is typically in conflict with community-driven Web 2.0 content creation. To maximize the opportunities for the network of epidemiologists actively editing the wiki content while keeping quality and editorial control, a novel structure was developed to encourage crowdsourcing: dual versioning for each wiki page, enabling maintenance of expert-reviewed pages in parallel with user-updated versions, with clear navigation between the related versions. Secondly, the training wiki content needs to be organized in a semantically enhanced taxonomical navigation structure enabling domain experts to find information on a growing site easily. This also provides an ideal opportunity for crowdsourcing. We developed a user-editable collaborative interface that crowdsources live maintenance of the taxonomy to the community of field epidemiologists by embedding the taxonomy in the training wiki platform and generating the semantic navigation hierarchy on the fly. Launched in 2010, FEMwiki is a real-world service supporting field epidemiologists in Europe and worldwide.
    The crowdsourcing success was evaluated by assessing the number and type of changes made by the professional network of epidemiologists over several months. The evaluation demonstrated that crowdsourcing encourages users to edit existing content and create new content, and also leads to expansion of the domain taxonomy.
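    The dual-versioning idea described above can be sketched as a small data structure: each page carries an expert-reviewed version alongside a community-updated draft, and only an editor's approval promotes the draft. This is a hedged illustration of the concept; the field names and workflow are assumptions, not FEMwiki's actual implementation.

```python
from dataclasses import dataclass, field

# Sketch of dual versioning: crowd edits accumulate in a draft while the
# expert-reviewed version stays stable until an editor approves the draft.
# Names and workflow are illustrative, not taken from FEMwiki.

@dataclass
class WikiPage:
    title: str
    expert_version: str            # last version approved by editors
    draft_version: str = ""        # running community edits
    history: list = field(default_factory=list)

    def community_edit(self, text: str) -> None:
        """Crowd edits only touch the draft; the reviewed version is untouched."""
        self.draft_version = text

    def approve_draft(self) -> None:
        """An editor promotes the draft to the expert-reviewed version."""
        self.history.append(self.expert_version)
        self.expert_version = self.draft_version
```

    Keeping both versions addressable is what allows the "clear navigation between the related versions" the project describes: readers can always reach the reviewed page even while the draft diverges.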

    Mind the Gap: From Desktop to App

    In this article we present a new mobile game, edugames4all MicrobeQuest!, that covers core learning objectives from the European curriculum on microbe transmission, food and hand hygiene, and responsible antibiotic use. The game is aimed at 9 to 12 year olds and is based on the desktop version of the edugames4all platform games. We discuss the challenges and lessons learned in transitioning from a desktop-based game to a mobile app. We also present the seamless evaluation obtained by integrating the assessment of the educational impact of the game into the game mechanics.

    Salience and Market-aware Skill Extraction for Job Targeting

    At LinkedIn, we want to create economic opportunity for everyone in the global workforce. To make this happen, LinkedIn offers a reactive Job Search system and a proactive Jobs You May Be Interested In (JYMBII) system to match the best candidates with their dream jobs. One of the most challenging tasks in developing these systems is to properly extract important skill entities from job postings and then target members with matching attributes. In this work, we show that the commonly used text-based, salience- and market-agnostic skill extraction approach is sub-optimal because it only considers skill mentions and ignores the salience level of a skill and its market dynamics, i.e., the influence of market supply and demand on the importance of skills. To address these drawbacks, we present our deployed salience- and market-aware skill extraction system. The proposed system shows promising results in improving the online performance of job recommendation (JYMBII) (+1.92% job applies) and skill suggestions for job posters (-37% suggestion rejection rate). Lastly, we present case studies showing interesting insights that contrast the traditional skill recognition method and the proposed system at the occupation, industry, country, and individual skill levels. Based on the above promising results, we deployed the system online to extract job targeting skills for all 20M job postings served at LinkedIn. Comment: 9 pages, to appear in KDD 2020.
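    The contrast the abstract draws, mention-only extraction versus salience- and market-aware scoring, can be illustrated with a toy ranking. This is not LinkedIn's system; the scoring formula and all numbers below are invented purely to show why market dynamics change a skill's importance.

```python
# Toy illustration (not the deployed system): two skills mentioned equally
# prominently in a posting rank very differently once a market
# demand/supply signal is factored in. All values are made up.

def market_aware_score(salience: float, demand: int, supply: int) -> float:
    """Combine in-posting salience with a market demand/supply ratio."""
    return salience * (demand / max(supply, 1))

skills = {
    # skill: (salience in the posting, market demand, market supply)
    "sql": (0.9, 800, 1000),
    "fax": (0.9, 10, 1000),   # equally salient mention, but little demand
}

ranked = sorted(skills, key=lambda s: market_aware_score(*skills[s]),
                reverse=True)
```

    A mention-only extractor would treat both skills identically here; the market-aware score demotes the low-demand skill, which is the intuition behind the reported gains.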

    Canary: Extracting Requirements-Related Information from Online Discussions

    Online discussions about software applications generate a large amount of requirements-related information. This information could potentially be applied usefully in requirements engineering; however, there are currently few systematic approaches for extracting it. To address this gap, we propose Canary, an approach for extracting and querying requirements-related information in online discussions. The highlight of our approach is a high-level query language that combines aspects of both requirements and discussions in online forums. We give the semantics of the query language in terms of relational databases and SQL. We demonstrate the usefulness of the language using examples on real data extracted from online discussions. Our approach relies on human annotations of online discussions. We highlight the subtleties involved in interpreting the content of online discussions and the assumptions and choices we made to address them effectively. We demonstrate the feasibility of generating high-quality annotations by obtaining them from lay Amazon Mechanical Turk users.

    Employing Crowdsourcing for Enriching a Music Knowledge Base in Higher Education

    This paper describes the methodology followed and the lessons learned from employing crowdsourcing techniques as part of a homework assignment involving higher education students of computer science. Making use of a platform that supports crowdsourcing in the cultural heritage domain, students were solicited to enrich the metadata associated with a selection of music tracks. The results of the campaign were further analyzed and exploited by students through the use of semantic web technologies. In total, 98 students participated in the campaign, contributing more than 6,400 annotations concerning 854 tracks. The process also led to the creation of an openly available annotated dataset, which can be useful for machine learning models for music tagging. The campaign's results and the comments gathered through an online survey enable us to draw useful insights about the benefits and challenges of integrating crowdsourcing into computer science curricula and how this can enhance students' engagement in the learning process. Comment: To be published in The 4th International Conference on Artificial Intelligence in Education Technology (AIET 2023), Berlin, Germany, 31 June-2 July 2023. For the GitHub code for the created music dataset, see https://github.com/vaslyb/MusicCro
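    Turning thousands of crowd annotations into a consolidated dataset requires some aggregation step. As an illustrative sketch (not necessarily the paper's pipeline), a common baseline is to keep only tags that reach a minimum number of independent votes per track.

```python
from collections import Counter

# Illustrative sketch: aggregate (track, tag) annotations from many students
# into consolidated metadata by simple vote counting. The threshold and data
# shapes are assumptions, not from the paper.

def majority_tags(annotations, min_votes=2):
    """annotations: iterable of (track_id, tag) pairs from many contributors.
    Returns {track_id: [tags receiving at least min_votes]}."""
    counts = Counter(annotations)
    result = {}
    for (track, tag), n in counts.items():
        if n >= min_votes:
            result.setdefault(track, []).append(tag)
    return result
```

    Vote thresholds trade precision for coverage: raising `min_votes` filters out idiosyncratic tags at the cost of discarding rare but correct ones.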

    LSAT practicum: an application of human based computation

    Human-based computation can be applied to solve problems too hard for a single computer. Crowdsourcing can be applied to ethical modeling by splitting ethical situations among humans. In this senior research project, the crowdsourcing method is applied to produce an ethical model of what web crawlers are allowed to do on websites. By evaluating questions about the terms of use on a website, users provide context for the robots. An obstacle in this project was getting the right crowd to participate. A crowd of prospective law students was selected, because such students typically answer practice questions to study for a major law school entrance test. This tool allows these students to practice legal analysis while contributing to a body of ethical web knowledge, which is in turn generated into robot-readable code in the form of the Robot Exclusion Protocol. The results were limited by the size of the crowd in this project.
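    The final step described above, generating robot-readable code from crowd judgments, can be sketched as a small translation from aggregated answers to Robot Exclusion Protocol (robots.txt) directives. The question-to-path mapping here is hypothetical, not the project's actual scheme; only the `User-agent`/`Allow`/`Disallow` directive format is standard.

```python
# Hedged sketch: turn aggregated crowd judgments about a site's terms of use
# into robots.txt directives. The input shape is an assumption; the output
# follows the standard Robot Exclusion Protocol directive syntax.

def to_robots_txt(answers):
    """answers: {path: allowed?} judgments aggregated from the crowd."""
    lines = ["User-agent: *"]
    for path, allowed in sorted(answers.items()):
        lines.append(f"{'Allow' if allowed else 'Disallow'}: {path}")
    return "\n".join(lines)
```

    Because robots.txt is advisory, the generated file encodes the crowd's ethical model as guidance that well-behaved crawlers can follow automatically.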