
    Using PeerWise to support the transition to higher education


    How to Ask for Technical Help? Evidence-based Guidelines for Writing Questions on Stack Overflow

    Context: The success of Stack Overflow and other community-based question-and-answer (Q&A) sites depends mainly on the willingness of their members to answer others' questions. In fact, when formulating requests on Q&A sites, we are not simply seeking information; we are also asking for other people's help and feedback. Understanding the dynamics of participation in Q&A communities is essential to improve the value of crowdsourced knowledge. Objective: In this paper, we investigate how information seekers can increase the chance of eliciting a successful answer to their questions on Stack Overflow by focusing on the following actionable factors: affect, presentation quality, and time. Method: We develop a conceptual framework of factors potentially influencing the success of questions on Stack Overflow. We quantitatively analyze a set of over 87K questions from the official Stack Overflow dump to assess the impact of actionable factors on the success of technical requests. The information seeker's reputation is included as a control factor. Furthermore, to understand the role played by affective states in the success of questions, we qualitatively analyze questions containing positive and negative emotions. Finally, a survey is conducted to understand how Stack Overflow users perceive the suggested guidelines for writing questions. Results: We found that, regardless of user reputation, successful questions are short, contain code snippets, and do not overuse uppercase characters. As regards affect, successful questions adopt a neutral emotional style. Conclusion: We provide evidence-based guidelines for writing effective questions on Stack Overflow that software engineers can follow to increase the chance of getting technical help. As for the role of affect, we empirically confirmed community guidelines that suggest avoiding rudeness when writing questions. Comment: Preprint, to appear in Information and Software Technology
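
    A minimal illustrative sketch of the kind of presentation-quality signals the abstract mentions (question length, presence of code snippets, overuse of uppercase). The helper question_features and its heuristics are assumptions for illustration only, not the paper's actual feature set or analysis pipeline.

        # Illustrative sketch, not the paper's implementation: rough
        # presentation-quality indicators for a question body.
        import re

        def question_features(body: str) -> dict:
            """Return rough presentation-quality indicators for a question body."""
            words = body.split()
            letters = [c for c in body if c.isalpha()]
            uppercase_ratio = (
                sum(c.isupper() for c in letters) / len(letters) if letters else 0.0
            )
            return {
                "word_count": len(words),  # shorter questions tend to be more successful
                "has_code_snippet": "<code>" in body
                or bool(re.search(r"(?m)^ {4}\S", body)),  # Markdown-style indented code
                "uppercase_ratio": round(uppercase_ratio, 2),  # flags overuse of caps
            }

        print(question_features(
            "How do I parse JSON in Python?\n\n    import json\n    json.loads(s)"
        ))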

    Defining and measuring training activity


    Standardised library instruction assessment: an institution-specific approach

    Introduction: We explore the use of a psychometric model for locally relevant information literacy assessment, using an online tool for standardised assessment of student learning during discipline-based library instruction sessions. Method: A quantitative approach to data collection and analysis was used, employing standardised multiple-choice survey questions followed by individual cognitive interviews with undergraduate students. The assessment tool was administered to five general education psychology classes during library instruction sessions. Analysis: Descriptive statistics were generated by the assessment tool. Results: The assessment tool proved a feasible means of measuring student learning. While student scores improved on every survey question, improvement from pre-test to post-test was uneven across questions. Conclusion: Student scores showed more improvement for some learning outcomes than others; spending time on fewer concepts during instruction sessions would therefore enable more reliable evaluation of student learning. We recommend using digital learning objects that address basic research skills to enhance library instruction programmes. Future studies will explore different applications of the assessment tool, provide more detailed statistical analysis of the data and shed additional light on the significance of overall scores
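
    A small sketch of how per-question pre-test/post-test improvement could be compared, which is the kind of uneven improvement the results describe. The data values and the normalised-gain calculation are hypothetical illustrations, not the study's actual figures or statistics.

        # Hypothetical sketch: comparing pre- and post-test proportions of
        # correct answers per survey question (values are made up).
        pre  = {"q1": 0.55, "q2": 0.40, "q3": 0.70}   # proportion correct, pre-test
        post = {"q1": 0.80, "q2": 0.48, "q3": 0.78}   # proportion correct, post-test

        for q in pre:
            gain = post[q] - pre[q]
            # normalised gain: improvement relative to the room left for improvement
            norm_gain = gain / (1 - pre[q]) if pre[q] < 1 else 0.0
            print(f"{q}: gain={gain:+.2f}, normalised gain={norm_gain:.2f}")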

    Putting Pedagogy in the driving seat with Open Comment: an open source formative assessment feedback and guidance tool for History Students

    One of the more challenging aspects of the current e-assessment milieu is providing a set of electronic interactive tasks that allow students freer text entry and give them immediate feedback. The specific objective of the project was to construct some simple tools, in the form of Moodle extensions, that allow a Moodle author to ask free-text response questions that provide a degree of interactive formative feedback to students. In parallel, the aim was to begin developing a methodology for constructing such questions and their feedback effectively, together with techniques for constructing the decision rules used to give feedback. Open Comment is a formative feedback technology designed to be integrated into the Moodle virtual learning environment. Put simply, it allows questions to be written in Moodle, and students' free-text responses to these questions to be analysed and used to provide individually customised formative feedback
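
    A minimal sketch of what a decision rule for formative feedback on free-text answers might look like. Open Comment's actual analysis is not described here in detail; the RULES list, the feedback function and the example patterns are hypothetical, showing only the general idea of pairing a predicate over the student's answer with a feedback message.

        # Hypothetical sketch of keyword-based decision rules for formative feedback.
        import re

        # Each rule: (pattern expected in the answer, feedback shown when it is missing).
        RULES = [
            (r"\bprimary source\b", "Think about which primary sources support your claim."),
            (r"\b(because|since|therefore)\b", "Try to give an explicit reason for your argument."),
        ]

        def feedback(answer: str) -> list[str]:
            """Return formative feedback messages triggered by the decision rules."""
            return [msg for pattern, msg in RULES
                    if not re.search(pattern, answer, flags=re.IGNORECASE)]

        print(feedback("The revolution happened in 1789."))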

    An automatically built named entity lexicon for Arabic

    We have successfully adapted and extended the automatic Multilingual, Interoperable Named Entity Lexicon approach to Arabic, using Arabic WordNet (AWN) and Arabic Wikipedia (AWK). First, we extract AWN’s instantiable nouns and identify the corresponding categories and hyponym subcategories in AWK. Then, we exploit Wikipedia inter-lingual links to locate correspondences between articles in ten different languages in order to identify Named Entities (NEs). We apply keyword search on AWK abstracts to cover Arabic articles that have no correspondence in any of the other languages. In addition, we perform a post-processing step to fetch further NEs from AWK that are not reachable through AWN. Finally, we investigate diacritization, using matching with geonames databases, the MADA-TOKAN tools and different heuristics for restoring the vowel marks of Arabic NEs. Using this methodology, we have extracted approximately 45,000 Arabic NEs and built, to the best of our knowledge, the largest, most mature and well-structured Arabic NE lexical resource to date. We have stored and organised this lexicon following the Lexical Markup Framework (LMF) ISO standard. We conduct a quantitative and qualitative evaluation of the lexicon against a manually annotated gold standard and achieve precision scores ranging from 95.83% (with 66.13% recall) to 99.31% (with 61.45% recall), depending on the threshold value
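
    A brief sketch of how precision and recall trade off against each other as a confidence threshold varies, which is the mechanism behind the range of scores reported above. The candidate entities, their scores and the gold standard here are hypothetical; the authors' actual evaluation data and thresholding are not reproduced.

        # Illustrative sketch with made-up data: precision/recall of extracted NEs
        # against a gold standard at different confidence thresholds.
        def precision_recall(candidates, gold, threshold):
            """candidates: dict of NE -> confidence score; gold: set of correct NEs."""
            accepted = {ne for ne, score in candidates.items() if score >= threshold}
            tp = len(accepted & gold)
            precision = tp / len(accepted) if accepted else 0.0
            recall = tp / len(gold) if gold else 0.0
            return precision, recall

        candidates = {"دمشق": 0.95, "القاهرة": 0.90, "كتاب": 0.60}  # hypothetical scores
        gold = {"دمشق", "القاهرة"}                                   # hypothetical gold standard
        for t in (0.5, 0.92):
            p, r = precision_recall(candidates, gold, t)
            print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")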