    Enforcing public data archiving policies in academic publishing: A study of ecology journals

    To improve the quality and efficiency of research, groups within the scientific community seek to exploit the value of data sharing. Funders, institutions, and specialist organizations are developing and implementing strategies to encourage or mandate data sharing within and across disciplines, with varying degrees of success. Academic journals in ecology and evolution have adopted several types of public data archiving policies requiring authors to make the data underlying scholarly manuscripts freely available. Yet anecdotes from the community and studies evaluating data availability suggest that these policies have not had the desired effect, in terms of both the quantity and quality of available datasets. We conducted a qualitative, interview-based study with journal editorial staff and other stakeholders in the academic publishing process to examine how journals enforce data archiving policies. We specifically sought to establish whom editors and other stakeholders perceive as responsible for ensuring data completeness and quality in the peer review process. Our analysis revealed little consensus on how data archiving policies should be enforced and who should hold authors accountable for dataset submissions. Themes in interviewee responses included hopefulness that reviewers would take the initiative to review datasets and trust in authors to ensure the completeness and quality of their datasets. We highlight problematic aspects of these thematic responses and offer potential starting points for improving the public data archiving process. Comment: 35 pages, 1 figure, 1 table
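
    The enforcement gap described above invites at least simple automation. As a purely illustrative sketch (not part of the study), the check below verifies that a manuscript's archived dataset is at least resolvable, assuming datasets are cited by DOI; the helper name and the example DOI are hypothetical.

```python
import requests

def dataset_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Hypothetical first-pass archiving check: does the dataset DOI resolve?

    Resolvability says nothing about the completeness or quality concerns
    raised in the study; it only catches datasets that were never deposited.
    """
    try:
        # doi.org redirects to the hosting archive (e.g., Dryad, Zenodo).
        resp = requests.head(f"https://doi.org/{doi}",
                             allow_redirects=True, timeout=timeout)
        return resp.status_code < 400
    except requests.RequestException:
        return False

# Example with a made-up DOI an editor might spot-check.
print(dataset_resolves("10.5061/dryad.example"))
```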

    “Excellence R Us”: university research and the fetishisation of excellence

    The rhetoric of “excellence” is pervasive across the academy. It is used to refer to research outputs as well as researchers, to theory and education, to individuals and organisations, from art history to zoology. But does “excellence” actually mean anything? Does this pervasive narrative of “excellence” do any good? Drawing on a range of sources, we interrogate “excellence” as a concept and find that it has no intrinsic meaning in academia. Rather, it functions as a linguistic interchange mechanism. To investigate whether this linguistic function is useful, we examine how the rhetoric of excellence combines with narratives of scarcity and competition, and show that the hypercompetition arising from the performance of “excellence” is completely at odds with the qualities of good research. We trace the roots of issues in reproducibility, fraud, and homophily to this rhetoric. But we also show that this rhetoric is an internal, and not primarily an external, imposition. We conclude by proposing an alternative rhetoric based on soundness and capacity-building. In the final analysis, it turns out that “excellence” is not excellent. Used in its current unqualified form, it is a pernicious and dangerous rhetoric that undermines the very foundations of good research and scholarship.

    Examining Intersubjectivity in Social Knowledge Artifacts

    Report on the Second Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE2)

    This technical report records and discusses the Second Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE2). The report includes a description of the alternative, experimental submission and review process, two workshop keynote presentations, a series of lightning talks, a discussion on sustainability, and five discussions from the topic areas of exploring sustainability; software development experiences; credit & incentives; reproducibility & reuse & sharing; and code testing & code review. For each topic, the report includes a list of tangible actions that were proposed and that would lead to potential change. The workshop recognized that reliance on scientific software is pervasive in all areas of world-leading research today. The workshop participants then proceeded to explore different perspectives on the concept of sustainability. Key enablers of and barriers to sustainable scientific software were identified from their experiences. In addition, recommendations involving new requirements, such as software credit files and software prize frameworks, were outlined for improving practices in sustainable software engineering. There was also broad consensus that formal training in software development or engineering was rare among practitioners. Significant strides need to be made in building a sense of community via training in software and technical practices, in increasing the size and scope of such training, and in integrating it more directly into graduate education programs. Finally, journals can define and publish policies to improve reproducibility, and reviewers can insist that authors provide sufficient information and access to data and software to allow them to reproduce the results in the paper. Hence a list of criteria is compiled for journals to provide to reviewers so as to make it easier to review software submitted for publication as a “Software Paper”.
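
    The “software credit files” proposed in the report are not specified in this abstract; one plausible shape, offered purely as an illustration, is a small machine-readable manifest that indexing tools and journals could parse. The field names below are assumptions, not a published standard.

```python
import json

# Hypothetical software credit file; the schema is illustrative only,
# not a format defined by the WSSSPE2 report.
credit = {
    "name": "ExampleSolver",              # made-up project name
    "version": "1.4.2",
    "authors": [
        {"name": "A. Researcher", "role": "maintainer"},
        {"name": "B. Contributor", "role": "algorithm design"},
    ],
    "archive_doi": "10.5281/zenodo.0000000",   # placeholder DOI
    "cite_as": "Researcher & Contributor (2014). ExampleSolver v1.4.2.",
}

# A JSON file in the repository root keeps the credit trivially parseable.
with open("CREDIT.json", "w") as fh:
    json.dump(credit, fh, indent=2)
```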

    Publishing Primary Data on the World Wide Web: Opencontext.org and an Open Future for the Past

    More scholars are exploring forms of digital dissemination, including open access (OA) systems where content is made available free of charge. These include peer-reviewed e-journals as well as traditional journals that have an online presence. Besides SHA's Technical Briefs in Historical Archaeology, the American Journal of Archaeology now offers open access to downloadable articles from their printed issues. Similarly, Evolutionary Anthropology offers many full-text articles free for download. More archaeologists are also taking advantage of easy Web publication to post copies of their publications on personal websites. Roughly 15% of all scholars participate in such "self-archiving." To encourage this practice, Science Commons (2006) and the Scholarly Publishing and Academic Resources Coalition (SPARC) recently launched the Scholar Copyright Project, an initiative that will develop standard "Author Addenda" -- a suite of short amendments to attach to copyright agreements from publishers (http://sciencecommons.org/projects/publishing/index.html). These addenda make it easier for paper authors to retain and clarify their rights to self-archive their papers electronically. Several studies now clearly document that self-archiving and OA publication enhance uptake and citation rates (Hajjem et al. 2005). Researchers enhance their reputations and stature by opening up their scholarship.

    Mounting pressure for greater public access also comes from many research stakeholders. Granting foundations interested in maximizing the return on their investment in basic research often encourage, and sometimes even require, some form of OA electronic dissemination. Interest in maximizing public access to publicly financed research is catching on in Congress. A new bipartisan bill, the Federal Research Public Access Act, would require OA for drafts of papers that pass peer review and result from federally funded research (U.S. Congress 2006). The bill would create government-funded digital repositories that would host and maintain these draft papers. University libraries are some of the most vocal advocates for OA research. Current publishing frameworks have seen dramatically escalating costs, sometimes four times higher than the general rate of inflation (Create Change 2003). Increasing costs have forced many libraries to cancel subscriptions, hurting access and scholarship (Association for College and Research Libraries 2003; Suber 2004).

    This article was originally published in Technical Briefs in Historical Archaeology, 2007, 2: -11.
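
    As a concrete aside on the repository infrastructure discussed above: self-archived papers are typically exposed through the standard OAI-PMH protocol, which any harvester can page through. The sketch below assumes a placeholder endpoint URL (no repository from the article is implied) and prints Dublin Core titles.

```python
import xml.etree.ElementTree as ET
import requests

# Placeholder endpoint: substitute a real repository's OAI-PMH base URL.
BASE = "https://repository.example.edu/oai"
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

resp = requests.get(BASE,
                    params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
                    timeout=30)
root = ET.fromstring(resp.content)

# Each <record> carries Dublin Core metadata (title, creator, rights, ...).
for record in root.iter(OAI + "record"):
    title = record.find(".//" + DC + "title")
    if title is not None:
        print(title.text)
```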

    Negative Results in Computer Vision: A Perspective

    A negative result occurs when the outcome of an experiment or a model is not what was expected, or when a hypothesis does not hold. Despite often being overlooked by the scientific community, negative results are results, and they carry value. While this topic has been extensively discussed in other fields such as the social sciences and biosciences, less attention has been paid to it in the computer vision community. The unique characteristics of computer vision, particularly its experimental aspect, call for a special treatment of this matter. In this paper, I will address what makes negative results important, how they should be disseminated and incentivized, and what lessons can be learned from cognitive vision research in this regard. Further, I will discuss issues such as computer vision and human vision interaction, experimental design and statistical hypothesis testing, explanatory versus predictive modeling, performance evaluation, model comparison, and computer vision research culture.
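
    The statistical hypothesis testing and model comparison the author calls for can be made concrete even in a minimal case. As an illustrative sketch (not an example from the paper), McNemar's exact test compares two classifiers evaluated on the same test set using only the items on which they disagree; the counts below are invented.

```python
from scipy.stats import binomtest

# Invented counts from a shared test set:
# b = items model A classified correctly and model B missed; c = the reverse.
b, c = 37, 21

# McNemar's exact test: under the null hypothesis of equal error rates,
# each discordant item is a fair coin flip between the two models.
result = binomtest(b, n=b + c, p=0.5)
print(f"discordant pairs: {b + c}, p-value: {result.pvalue:.4f}")
```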