
    Introduction to British DiGRA issue

    This special issue of ToDiGRA collects some of the best articles presented at the British DiGRA conference, which took place at the University of Salford at MediaCityUK in May 2017. For this issue we invited the authors of the full papers presented at the conference to submit revised manuscripts. Each submission was then sent to three selected peer reviewers, experts in the field of study of the assigned paper, who provided their feedback and evaluation. The five accepted papers were returned to their authors, who implemented the required changes and resubmitted their final work. Each article in this issue is therefore the result of a process of research and revision spanning almost a year from its original presentation at the 2017 British DiGRA conference.

    Peer Review system: A Golden standard for publications process

    The peer review process helps evaluate and validate the research that is published in journals. The U.S. Office of Research Integrity reported that data fraud was involved in 94% of the misconduct cases from 228 articles identified between 1994 and 2012. If fraud in published articles is as prevalent as reported, the question arises: were these articles peer reviewed? Another report noted that reviewers failed to detect 16 fabricated articles by Jan Hendrik Schön. A superficial peer review process does not raise suspicion of misconduct. A lack of knowledge of the systematic review process not only undermines academic integrity in publication but also erodes the trust of the people of the institution, the nation, and the world. The aim of this review article is to make stakeholders, especially novice reviewers, aware of the peer review system. Beginners will understand how to review an article and will be able to justify better choices when reviewing one.

    Technology Assisted Reviews: Finding the Last Few Relevant Documents by Asking Yes/No Questions to Reviewers

    The goal of a technology-assisted review is to achieve high recall with low human effort. Continuous active learning algorithms have demonstrated good performance in locating the majority of relevant documents in a collection; however, their performance reaches a plateau once 80%-90% of them have been found. Finding the last few relevant documents typically requires exhaustively reviewing the collection. In this paper, we propose a novel method to identify these last few, but significant, documents efficiently. Our method rests on the hypothesis that entities carry vital information in documents, and that reviewers can answer questions about the presence or absence of an entity in the missing relevant documents. Based on this, we devise a sequential Bayesian search method that selects the optimal sequence of questions to ask. The experimental results show that our proposed method can greatly improve performance while requiring less reviewing effort. Comment: This paper is accepted by SIGIR 201
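
    As a rough illustration of the kind of approach described above (a simplified sketch, not the authors' actual model), the Python code below shows a greedy question-selection loop: each candidate entity question is scored by the expected reduction in uncertainty over document relevance, and a yes/no answer from the reviewer updates each document's relevance probability with Bayes' rule. The document/entity data, the uniform priors, and the likelihood constants are hypothetical placeholders.

    import math

    # Assumed likelihoods for how a reviewer's "yes" relates to relevance;
    # these constants are illustrative, not taken from the paper.
    P_YES_IF_RELEVANT = 0.8    # P(answer = yes | doc is relevant and contains the entity)
    P_YES_IF_IRRELEVANT = 0.2  # P(answer = yes | doc is irrelevant but contains the entity)

    def entropy(p):
        """Binary entropy of a single document's relevance probability."""
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    def bayes_update(entity, docs, probs, answer_yes):
        """Posterior relevance probabilities after a yes/no answer about `entity`."""
        posterior = {}
        for d, p in probs.items():
            if entity not in docs[d]:
                posterior[d] = p  # question does not affect documents without the entity
                continue
            like_rel = P_YES_IF_RELEVANT if answer_yes else 1 - P_YES_IF_RELEVANT
            like_irr = P_YES_IF_IRRELEVANT if answer_yes else 1 - P_YES_IF_IRRELEVANT
            z = p * like_rel + (1 - p) * like_irr
            posterior[d] = p * like_rel / z if z > 0 else p
        return posterior

    def expected_entropy_after(entity, docs, probs):
        """Expected total entropy over documents after asking about `entity`."""
        affected = [d for d in docs if entity in docs[d]]
        if not affected:
            return sum(entropy(p) for p in probs.values())
        # Crude marginal probability of a "yes" answer, averaged over affected docs.
        p_yes = sum(probs[d] * P_YES_IF_RELEVANT + (1 - probs[d]) * P_YES_IF_IRRELEVANT
                    for d in affected) / len(affected)
        total = 0.0
        for answer_prob, answer_yes in ((p_yes, True), (1 - p_yes, False)):
            posterior = bayes_update(entity, docs, probs, answer_yes)
            total += answer_prob * sum(entropy(p) for p in posterior.values())
        return total

    def next_question(docs, probs):
        """Greedily pick the entity whose answer most reduces expected uncertainty."""
        entities = {e for ents in docs.values() for e in ents}
        return min(entities, key=lambda e: expected_entropy_after(e, docs, probs))

    # Toy usage: three unjudged documents with uniform 0.5 relevance priors.
    docs = {"d1": {"aspirin", "placebo"}, "d2": {"aspirin"}, "d3": {"statin"}}
    probs = {d: 0.5 for d in docs}
    question = next_question(docs, probs)
    probs = bayes_update(question, docs, probs, answer_yes=True)  # reviewer answered "yes"
    print(question, probs)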

    NIPS - Not Even Wrong? A Systematic Review of Empirically Complete Demonstrations of Algorithmic Effectiveness in the Machine Learning and Artificial Intelligence Literature

    Objective: To determine the completeness of the argumentative steps necessary to conclude effectiveness of an algorithm in a sample of current ML/AI supervised learning literature. Data Sources: Papers published in the Neural Information Processing Systems (NeurIPS, née NIPS) journal where the official record showed a 2017 year of publication. Eligibility Criteria: Studies reporting a (semi-)supervised model, or pre-processing fused with (semi-)supervised models, for tabular data. Study Appraisal: Three reviewers applied the assessment criteria to determine argumentative completeness. The criteria were split into three groups: experiments (e.g. real and/or synthetic data), baselines (e.g. uninformed and/or state-of-the-art), and quantitative comparison (e.g. performance quantifiers with confidence intervals and formal comparison of the algorithm against baselines). Results: Of the 121 eligible manuscripts (from the sample of 679 abstracts), 99% used real-world data and 29% used synthetic data. 91% of manuscripts did not report an uninformed baseline and 55% reported a state-of-the-art baseline. 32% reported confidence intervals for performance, but none provided references or exposition for how these were calculated. 3% reported formal comparisons. Limitations: The use of one journal as the primary information source may not be representative of all ML/AI literature. However, the NeurIPS conference is recognised to be amongst the top tier of ML/AI venues, so it is reasonable to consider its corpus representative of high-quality research. Conclusion: Using the 2017 sample of the NeurIPS supervised learning corpus as an indicator of the quality and trustworthiness of current ML/AI research, it appears that complete argumentative chains in demonstrations of algorithmic effectiveness are rare.
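
    For readers unfamiliar with what the quantitative-comparison criterion asks for, the short Python sketch below shows one common recipe (not the reviewers' prescribed procedure): a bootstrap confidence interval for a model's mean accuracy and a paired t-test against a baseline on the same folds. The per-fold accuracy values are hypothetical.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical per-fold accuracies for a proposed model and a baseline.
    model_acc = np.array([0.83, 0.85, 0.84, 0.86, 0.82])
    baseline_acc = np.array([0.80, 0.82, 0.81, 0.83, 0.79])

    # 95% bootstrap confidence interval for the model's mean accuracy.
    boot_means = [rng.choice(model_acc, size=model_acc.size, replace=True).mean()
                  for _ in range(10_000)]
    ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
    print(f"model accuracy: {model_acc.mean():.3f} (95% CI {ci_low:.3f}-{ci_high:.3f})")

    # Formal paired comparison of the model against the baseline on the same folds.
    t_stat, p_value = stats.ttest_rel(model_acc, baseline_acc)
    print(f"paired t-test vs baseline: t={t_stat:.2f}, p={p_value:.4f}")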

    Enforcing public data archiving policies in academic publishing: A study of ecology journals

    To improve the quality and efficiency of research, groups within the scientific community seek to exploit the value of data sharing. Funders, institutions, and specialist organizations are developing and implementing strategies to encourage or mandate data sharing within and across disciplines, with varying degrees of success. Academic journals in ecology and evolution have adopted several types of public data archiving policies requiring authors to make the data underlying scholarly manuscripts freely available. Yet anecdotes from the community and studies evaluating data availability suggest that these policies have not had the desired effects, in terms of both the quantity and the quality of available datasets. We conducted a qualitative, interview-based study with journal editorial staff and other stakeholders in the academic publishing process to examine how journals enforce data archiving policies. We specifically sought to establish whom editors and other stakeholders perceive as responsible for ensuring data completeness and quality in the peer review process. Our analysis revealed little consensus with regard to how data archiving policies should be enforced and who should hold authors accountable for dataset submissions. Themes in interviewee responses included hopefulness that reviewers would take the initiative to review datasets and trust in authors to ensure the completeness and quality of their datasets. We highlight problematic aspects of these thematic responses and offer potential starting points for improving the public data archiving process. Comment: 35 pages, 1 figure, 1 table