
    Open-Ended Questions (Version 2.0)

    Two aspects of open-ended survey questions are addressed in this contribution. The first aspect is the fielding of such questions: When, and for what purpose, are they useful? Who answers such questions in the first place? And what should be taken into account when developing and designing open-ended questions? The second part of the article shows possible ways of evaluating open-ended questions. These include content analysis, which has a long tradition in the evaluation of open-ended questions. In addition, computer-supported, dictionary-based content analysis plays a major role, as it is especially suitable for the analysis of responses to open questions because they are, as a rule, short and limited by the context of the question. Co-occurrence analysis, which can yield an overall picture of the responses, is a relatively new way of evaluating open-ended questions.

    Evaluating Security and Usability of Profile Based Challenge Questions Authentication in Online Examinations

    © 2014 Ullah et al.; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. Student authentication in online learning environments is an increasingly challenging issue due to the inherent absence of physical interaction with online users and potential security threats to online examinations. This study is part of ongoing research on student authentication in online examinations evaluating the potential benefits of using challenge questions. The authors developed a Profile Based Authentication Framework (PBAF), which utilises challenge questions for students’ authentication in online examinations. This paper examines the findings of an empirical study in which 23 participants used the PBAF, including an abuse-case security analysis of the PBAF approach. The overall usability analysis suggests that the PBAF is efficient, effective and usable. However, specific questions need replacement with suitable alternatives due to usability challenges. The results of the current research study suggest that memorability, clarity of questions, syntactic variation and question relevance can cause usability issues leading to authentication failure. A configurable traffic light system was designed and implemented to improve the usability of challenge questions. The security analysis indicates that the PBAF is resistant to informed guessing in general; however, specific questions were identified with security issues, carrying potential risks of informed guessing by friends and colleagues. The study was performed with a small number of participants in a simulated online course, and the results need to be verified in a real educational context on a larger sample size.

    Evaluation of a tool for Java structural specification checking

    Although a number of tools for evaluating Java code functionality and style exist, little work has been done in a distance-learning context on automated marking of Java programs with respect to structural specifications. Such automated checks support human markers in assessing students’ work and evaluating their own marking; online automated marking; students checking code before submitting it for marking; and question setters evaluating the completeness of questions set. This project developed and evaluated a prototype tool that performs an automated check of a Java program’s correctness with respect to a structural specification. Questionnaires and interviews were used to gather feedback on the usefulness of the tool as a marking aid to humans, and on its potential usefulness to students for self-assessment when working on their assignments. Markers were asked to compare the usefulness of structural specification testing with other kinds of support, including syntax error assistance, style checking and functionality testing. Initial results suggest that most markers using the structural specification checking tool found it to be useful, and some reported that it increased their accuracy in marking. Reasons for not using the tool included lack of time and the simplicity of the assignment it was trialled on. Some reservations were expressed about reliance on tools for assessment, both for markers and for students. The need for advice on incorporating tools into the marking workflow is suggested.

    Three Essays on the Economics of Evaluating Social Programs: Dissertation Summary

    This dissertation consists of three essays on the evaluation of social programs. All three essays consider general evaluation questions in the specific context of evaluating the impact of government job training programs on the earnings of those who participate in them.

    From local to global: extrapolating experiments

    The use of randomised control trials (RCTs) in evaluating the design and efficacy of policies has exploded in the last decade. New papers appear every week. But while RCTs are quickly becoming the gold standard for impact evaluations in international development and aid interventions, questions persist about what the results of an RCT in one context can tell us about the probable results of a similar programme implemented in another context. Indeed, such questions are not unique to RCTs but apply to the full set of empirical tools that economists use in estimating policy impacts and outcomes.

    Evaluating Empirical Research Methods: Using Empirical Research in Law and Policy

    I. Introduction . . . . . 777
    II. Social Science Methods and Legal Questions . . . . . 779
    A. Archival Research – The Nebraska Death Penalty Study . . . . . 780
    B. Experimental Research – Death Qualification and Penalty Phase Instructions . . . . . 785
    III. Obstacles to Evaluating Empirical Methodology . . . . . 789
    A. Biased Assimilation . . . . . 790
    B. Methodological Background . . . . . 795
    IV. Strategies for Evaluating Empirical Research . . . . . 797
    A. Explore the Tensions . . . . . 798
    B. Research in Context . . . . . 799
    C. Methodological Training . . . . . 801
    V. Conclusion . . . . . 80