
    Questions of quality in repositories of open educational resources: a literature review

    Open educational resources (OER) are teaching and learning materials which are freely available and openly licensed. Repositories of OER (ROER) are platforms that host and facilitate access to these resources. ROER should not just be designed to store this content: in keeping with the aims of the OER movement, they should support educators in embracing open educational practices (OEP) such as searching for and retrieving content that they will reuse, adapt or modify as needed, without economic barriers or copyright restrictions. This paper reviews key literature on OER and ROER in order to understand the roles ROER are expected to fulfil in furthering the aims of the OER movement. Four themes that should shape repository design are identified, and the following ten quality indicators (QI) of ROER effectiveness are discussed: featured resources; user evaluation tools; peer review; authorship of the resources; keywords of the resources; use of standardised metadata; multilingualism of the repositories; inclusion of social media tools; specification of the Creative Commons license; and availability of the source code or original files. These QI form the basis of a method for evaluating ROER initiatives which, together with considerations of achievability and long-term sustainability, should assist in their enhancement and development.
    Keywords: open educational resources; open access; open educational practice; repositories; quality assurance
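
    The ten indicators lend themselves to a simple checklist-style assessment. The sketch below is purely illustrative: the indicator names follow the list above, but the one-point-per-indicator scoring and the evaluate_roer helper are assumptions made here for illustration, not the evaluation method proposed in the paper.

        # Illustrative checklist of the ten quality indicators (QI) named in the abstract.
        # The one-point-per-indicator scoring is an assumption, not the paper's method.
        QUALITY_INDICATORS = [
            "featured resources",
            "user evaluation tools",
            "peer review",
            "authorship of the resources",
            "keywords of the resources",
            "use of standardised metadata",
            "multilingualism of the repository",
            "inclusion of social media tools",
            "specification of the Creative Commons license",
            "availability of the source code or original files",
        ]

        def evaluate_roer(observed: dict[str, bool]) -> tuple[int, list[str]]:
            """Return (score, missing indicators) for one repository."""
            missing = [qi for qi in QUALITY_INDICATORS if not observed.get(qi, False)]
            return len(QUALITY_INDICATORS) - len(missing), missing

        # Example: a hypothetical repository that meets the first seven indicators.
        score, missing = evaluate_roer({qi: True for qi in QUALITY_INDICATORS[:7]})
        print(f"{score}/10 indicators met; still missing: {missing}")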

    Environmental Education Jeopardy

    This game, based on 'Jeopardy', uses questions and answers about air quality and pollution; water conservation, quality, and testing; and waste and compost issues. Students divide into teams and pick questions from a set of cards which are color-coded by category and labeled with point values. A set of questions is provided. Educational levels: Primary elementary, Intermediate elementary, Middle school, High school

    Good, bad, or biased? Using best practices to improve the quality of your survey questions

    Surveys can be an effective tool for gathering information from library users and assessing library services, yet the quality of the survey questions can make the difference between a survey that is completed and one that is abandoned in indifference or frustration. The increased emphasis on user-informed library assessment and the availability of free online survey tools combine to make surveys very popular in libraries, but inexperienced survey writers are typically unaware of social-science best practices for the format and syntax of survey questions and response options. These widely used best practices are meant to ensure that survey questions are clear and understandable, produce unbiased responses in appropriate formats, and are ethical with respect to the user. Flawed survey questions may confuse and frustrate users, resulting in survey fatigue and low response rates, inaccurate or difficult-to-interpret results, and wasted time and effort for both the surveyor and the surveyed. Learning best practices for writing effective survey questions will help librarians improve their survey outcomes while maintaining the goodwill of the users who provide the needed data. Survey planning and pretesting are addressed as critical components of survey development, and examples of good and bad questions give attendees the opportunity to apply the concepts discussed immediately. This poster was presented at the Association of College and Research Libraries (ACRL) 2013 National Conference, Indianapolis, Indiana.

    Right to Hearing in License Renewal Proceeding When Allegation is the Subject of Concurrent Rule-Making Proceeding

    An overview of central model quality results is given. The focus is on the variance of estimated transfer functions. We look in particular at two questions: (1) Can the variance be smaller than that obtained by direct prediction error/output error methods? and (2) Can closed-loop experiments give estimates with lower variance than open-loop ones? The answer to both questions is yes.
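
    For background, the standard asymptotic result from the prediction-error framework (stated here as context and not necessarily the sharper bounds the paper derives) approximates the variance of an estimated transfer function, for large model order n and data length N, by

        \operatorname{Cov}\,\hat{G}_N(e^{i\omega}) \;\approx\; \frac{n}{N}\,\frac{\Phi_v(\omega)}{\Phi_u(\omega)},

    where \Phi_v(\omega) is the noise spectrum and \Phi_u(\omega) the input spectrum. In closed loop, only the reference-driven part of the input spectrum, \Phi_u^r(\omega), enters the denominator, which is what makes question (2) above non-trivial.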

    Crowdsourcing Multiple Choice Science Questions

    We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data alongside existing questions, we observe accuracy improvements on real science exams.
    Comment: accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017
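
    As a concrete illustration of the "model suggestions for answer distractor choice" idea, the sketch below ranks candidate distractors from a domain vocabulary by surface similarity to the correct answer. The lexical similarity measure, the suggest_distractors helper and the toy vocabulary are stand-ins for illustration only, not the models used in the paper.

        # Toy stand-in for distractor suggestion: rank domain-vocabulary terms by
        # string similarity to the correct answer (the paper uses learned models).
        from difflib import SequenceMatcher

        def suggest_distractors(correct_answer: str, domain_vocab: list[str], k: int = 3) -> list[str]:
            """Return the k vocabulary terms most similar to the answer, excluding the answer itself."""
            candidates = [w for w in domain_vocab if w.lower() != correct_answer.lower()]
            candidates.sort(key=lambda w: SequenceMatcher(None, w, correct_answer).ratio(), reverse=True)
            return candidates[:k]

        # Example: propose distractors for a question whose correct answer is "mitochondrion".
        vocab = ["chloroplast", "ribosome", "mitosis", "cell membrane", "nucleus", "mitochondrion"]
        print(suggest_distractors("mitochondrion", vocab))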

    Crowdsourcing Universal Part-Of-Speech Tags for Code-Switching

    Code-switching is the phenomenon by which bilingual speakers switch between multiple languages during communication. Given the large populations that routinely code-switch, developing language technologies for code-switched data is of immense importance. High-quality linguistic annotations are extremely valuable for any NLP task, and performance is often limited by the amount of high-quality labeled data; however, little such data exists for code-switching. In this paper, we describe the crowdsourcing of universal part-of-speech tags for the Miami Bangor Corpus of Spanish-English code-switched speech. We split the annotation task into three subtasks: one in which a subset of tokens is labeled automatically, one in which questions are specifically designed to disambiguate a subset of high-frequency words, and a more general cascaded approach for the remaining data, in which questions are displayed to the worker following a decision tree structure. Each subtask is extended and adapted for a multilingual setting and the universal tagset. The quality of the annotation process is measured using hidden check questions annotated with gold labels. The overall agreement between gold standard labels and the majority vote is between 0.95 and 0.96 for just three labels, and the average recall across part-of-speech tags is between 0.87 and 0.99, depending on the task.
    Comment: Submitted to Interspeech 2017
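
    The quality check described above (majority vote over worker labels, compared against hidden gold-labelled check questions) can be sketched as follows. The data layout, the majority_vote and gold_agreement helpers, and the toy example are illustrative assumptions, not the authors' code.

        # Aggregate crowd labels by majority vote and measure agreement on hidden
        # check tokens that carry gold labels (illustrative sketch only).
        from collections import Counter

        def majority_vote(labels: list[str]) -> str:
            """Most frequent label among the workers' answers for one token."""
            return Counter(labels).most_common(1)[0][0]

        def gold_agreement(crowd: dict[str, list[str]], gold: dict[str, str]) -> float:
            """Fraction of gold-labelled check tokens whose majority vote matches the gold tag."""
            checked = [tok for tok in gold if tok in crowd]
            hits = sum(majority_vote(crowd[tok]) == gold[tok] for tok in checked)
            return hits / len(checked) if checked else 0.0

        # Example: two hidden check tokens, each tagged by three workers.
        crowd = {"tok1": ["NOUN", "NOUN", "VERB"], "tok2": ["ADP", "ADP", "ADP"]}
        gold = {"tok1": "NOUN", "tok2": "ADP"}
        print(gold_agreement(crowd, gold))  # 1.0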