Recognition and reward system for peer-reviewers
Peer reviewing plays an important role in the academic publishing process, scrutinizing scientific work and providing feedback on it prior to publication. Peer reviewers put effort into reviewing others' research voluntarily, without any expectation of incentives or rewards. The peer-review process has nevertheless been criticized for defects such as slowness, bias, and abuse of the process. In this paper, we present a model that addresses these issues by recording peer-review data on the blockchain. By using semantic web and linked data technologies, the system can expose its data and interact with other systems. The system will be used to quantify, recognize, and incentivize researchers' peer-reviewing efforts.
Peer Review system: A Golden standard for publications process
The peer-review process helps in evaluating and validating the research that is published in journals. The U.S. Office of Research Integrity reported that data fraud was involved in 94% of the misconduct cases among 228 articles identified between 1994 and 2012. If fraud in published articles is as widespread as reported, the question arises: were these articles peer reviewed? Another report noted that reviewers failed to detect 16 cases of fabricated articles by Jan Hendrik Schön. A superficial peer-review process does not raise suspicion of misconduct. Lack of knowledge of a systematic review process not only damages academic integrity in publishing but also loses the trust of the people of the institution, the nation, and the world. The aim of this review article is to make stakeholders, especially novice reviewers, aware of the peer-review system. Beginners will understand how to review an article and will be able to justify better action choices when reviewing one.
New frontiers of peer review
This news article introduces a new COST Action entitled PEERE (TD1306), which stands for New Frontiers of Peer Review. PEERE is a trans-domain proposal that brings together researchers from various disciplines and science stakeholders for the purpose of reviewing the process of peer review. PEERE officially began in May 2014 and will end in May 2018. Thirty-one countries, including Malta, are currently participating in the Action. To set the context in which this COST Action was initiated, we first look very briefly at the history of the peer-review process and the various models of peer review currently in use. We then share what this COST Action hopes to achieve.
Anomalies in the peer-review system: A case study of the journal of High Energy Physics
The peer-review system has long been relied upon to bring quality research to the notice of the scientific community and to prevent flawed research from entering the literature. The need for the peer-review system has often been debated, as in numerous cases it has failed in its task, and in most of these cases the editors and reviewers were thought to be responsible for not being able to correctly judge the quality of the work. This raises a question: "Can the peer-review system be improved?" Since editors and reviewers are the most important pillars of a reviewing system, in this work we attempt to address a related question: given the editing/reviewing history of the editors or reviewers, "can we identify the under-performing ones?", with citations received by the edited/reviewed papers used as a proxy for quantifying performance. We term such reviewers and editors anomalous, and we believe that identifying and removing them shall improve the performance of the peer-review system. Using a massive dataset from the Journal of High Energy Physics (JHEP), consisting of 29k papers submitted between 1997 and 2015 along with the review history of 95 editors and 4035 reviewers, we identify several factors that point to anomalous behavior of referees and editors. In fact, the anomalous editors and reviewers account for 26.8% and 14.5% of the total editors and reviewers respectively, and for most of these anomalous reviewers the performance degrades alarmingly over time.

Comment: 25th ACM International Conference on Information and Knowledge Management (CIKM 2016)
Peer review: beyond the call of duty?
The number of manuscripts submitted to most scholarly journals has increased tremendously over the last few decades, and shows no sign of leveling off. Increasingly, a key challenge faced by editors of scientific journals like the International Journal of Nursing Studies (IJNS) is to secure peer reviews in a timely fashion for the manuscripts they handle. We hear from editors of some journals that it is not uncommon to have to issue 10 to 15 invitations before one can secure the peer reviews needed to assess a given manuscript, and although the IJNS generally fares better than this, it is certainly true that a high proportion, probably a majority, of review invitations are declined.

Most often, researchers declining invitations to review invoke the fact that they are too busy to add yet another item to their already overcommitted schedule. Some reviewers respond that administrators at their university or research center are actively discouraging them from engaging in an activity that seems to bear no tangible benefits. Yet, however one looks at it, peer reviewing is a crucial component of the publishing process. Nobody has yet come up with a viable alternative. Therefore, we need to find a way to convince our colleagues to peer review manuscripts more often. This can be done with a stick or with various types of carrots.

One "stick", occasionally envisaged by editors (e.g., Anon., 2009), is straightforward, at least to explain. For the peer-reviewing enterprise to function well, each researcher should review each year as many manuscripts as the number of reviews he or she receives for his or her own papers. So, someone submitting 10 manuscripts in a given year should be willing to review 20 or 30 manuscripts during the same timeframe (assuming that each manuscript is reviewed by 2 or 3 individuals, as is commonly the case). If this person does not meet the required quota of reviews, there would be some restrictions imposed on the submission of any new manuscripts for publication. Boehlert et al. (2009) have advocated such a "stick" in the case of the submission of grant proposals.

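The quota arithmetic described above can be sketched in a few lines of code. This is only an illustration of the editorial's reasoning; the function name and parameters are hypothetical, not part of any proposed system.

```python
# Hypothetical sketch of the reviewing "quota" arithmetic: if each manuscript
# needs `reviewers_per_ms` reviews, an author submitting `submissions`
# manuscripts per year consumes that many reviews from the community and,
# under the scheme described, should supply the same number in return.

def review_quota(submissions: int, reviewers_per_ms: int = 3) -> int:
    """Number of reviews a researcher 'owes' the system per year."""
    return submissions * reviewers_per_ms

# The editorial's example: 10 submissions, at 2 or 3 reviewers per manuscript.
print(review_quota(10, 2))  # -> 20
print(review_quota(10, 3))  # -> 30
```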
However, the implementation of such an automatic accounting of reviewing activities is fraught with difficulties. For one thing, it would not prevent reviewers from defeating the system by writing short, useless reviews just to make up the numbers. To eliminate that loophole, someone would have to assess whether reviews meet minimal standards of quality before they could be counted in the annual or running total. There would also need to be allowances, for example to let young researchers get established in their careers. This raises the prospect of a complex and potentially expensive system somewhat akin to carbon trading, where credits for reviewing are granted and then traded, with a verification system to ensure that no one cheats.

An alternative approach, instead of sanctioning bad reviewing practices, would be to reward good ones. Currently the IJNS publishes the names of all its reviewers annually. Other journals go a step further, for example by giving awards to outstanding reviewers (Baveye et al., 2009). The lucky few who are singled out by such awards see their reviewing efforts validated. But fundamentally, these awards do not change the unsupportive atmosphere in which researchers review manuscripts. The problem has to be attacked at its root, in the current culture of universities and research centers, where administrators tend to equate research productivity with the number of articles published and the amount of extramural funding brought in. Annual activity reports occasionally require individuals to mention the number of manuscripts or grant proposals reviewed, but these data are currently unverifiable, and therefore are generally assumed not to matter at all for promotions or salary adjustments.

There may be ways out of this difficulty. All the major publishers have information on who reviews what, how long reviewers take to respond to invitations, and how long it takes them to send in their reviews. All it would take, in addition, would be for the editors or associate editors who receive reviews to assess and record their usefulness, and one would have a very rich data set which, if made available to universities and research centers in a way that preserves the anonymity of the peer-review process, could be used fruitfully to evaluate individuals' reviewing performance and impact. Of course, one would have to agree on what constitutes a "useful" review. Pointing out typos and syntax errors in a manuscript is useful, but not hugely so. Identifying problems and offering ways to overcome them, proposing advice on how to analyze data better, or editing the text to increase its readability are all ways to make more substantial contributions. Generally, one might consider that there is a gradation in usefulness from reviews focused on finding flaws in a manuscript to those focused on helping authors improve their text. Debate among scientists could result in a reliable set of guidelines on how to evaluate peer reviews.

Beyond making statistics available to decision makers, other options are also available to raise the level of visibility and recognition of peer reviews (Baveye, 2010). Rightly or wrongly, universities and research centers worldwide now rely more and more on some type of scientometric index, like the h-index (Hirsch, 2005), to evaluate the "impact" of their researchers. In other cases, such as the UK, the basis on which institutions are funded is linked to schemes which have measures such as the impact factor at their core (Nolan et al., 2008). While many researchers see bibliometric analysis as a legitimate tool to explore a discipline's activities and knowledge sources (see for example Beckstead and Beckstead, 2006; Oermann et al., 2008; Urquhart, 2006), previous editorials in the IJNS have noted this trend and expressed disquiet at the distorting effect it could have on academic practice when used to pass judgments on quality (Ketefian and Freda, 2009; Nolan et al., 2008).

Many of these indices implicitly encourage researchers to publish more articles, which in turn may deter researchers from engaging in peer reviewing. Certainly, none of the current indices encompass in any way the significant impact individuals can have on a discipline via their peer reviewing. But one could conceive of scientometric indices that would include some measure of peer-reviewing impact, calculated on the basis of some of the data mentioned earlier. Clearly, such developments will not happen overnight. Before any of them can materialize, a necessary first step is for researchers to discuss with their campus administration, or the managers of their research institution, the crucial importance of peer reviewing and the need to have this activity valued in the same way that research, teaching, and outreach are. A debate along these lines is long overdue.

Academic peer review is a necessary part of the publication process, but while publication is recognised and valued, peer review is not. Even without the pressures of reward based on publication-based measures, there is a potential for less civic-minded authors to benefit from, but not contribute to, the peer-review system. Current scientometrics actively encourage and reward such behavior in a way that is, ultimately, not sustainable. Once administrators perceive that there is a need in this respect, are convinced that it will not cost a fortune to give peer reviewing more attention, and formulate a clear demand to librarians and publishers to help move things forward, there is hope that this perverse incentive in the current system can be removed. Otherwise the future of the current model of peer review looks bleak, and we may indeed have to look forward to a complex bureaucratic system in which review credits are traded.

For now, although the IJNS can count itself lucky because the problem affects this journal less than many others, in common with other journals we must thank our peer reviewers, who are acting above and beyond the call of duty as it is perceived by many institutions. Without their efforts, journals like this one cannot maintain their high standards. It is time for us to lend our weight to calls for a wide-ranging debate, in order to ensure that these efforts are properly acknowledged and rewarded when judging the extent and quality of an academic's scientific contribution.
Cooperation between Referees and Authors Increases Peer Review Accuracy
Peer review is fundamentally a cooperative process between scientists in a community who agree to review each other's work in an unbiased fashion. Peer review is the foundation for decisions concerning publication in journals, awarding of grants, and academic promotion. Here we perform a laboratory study of open and closed peer review based on an online game. We show that when reviewer behavior was made public under open review, reviewers were rewarded for refereeing and formed significantly more cooperative interactions (13% increase in cooperation, P = 0.018). We also show that referees and authors who participated in cooperative interactions had an 11% higher reviewing accuracy rate (P = 0.016). Our results suggest that increasing cooperation in the peer review process can lead to a decreased risk of reviewing errors.
Enhancing Federal Agency Peer Review
Peer review, the process of reviewing a piece of scientific research for accuracy and reliability, is widely used in government. It serves an important function, allowing government agencies and other groups to assess the validity of research and identify any possible shortcomings. Unfortunately, however, it is often not used effectively. This White Paper examines the use of peer review by federal agencies, with a particular focus on the U.S. Environmental Protection Agency. It discusses the existing federal regulatory framework for peer review and recommends various changes designed to improve the effectiveness of peer review.
The White Paper was written by J.D. student Ian Petersen, with support from the University of Texas Regulatory Oversight Group (UTROG). UTROG is an unofficial organization at the University of Texas. It is comprised of law students who work with faculty to identify opportunities to enhance public participation in important federal and state regulatory programs. Its positions do not necessarily reflect the views of the administration of the KBH Center, the Law School, or the University of Texas.

The Kay Bailey Hutchison Center for Energy, Law, and Business
Social Simulation That 'Peers into Peer Review'
This article suggests viewing peer review as a social interaction problem and gives reasons for social simulators to investigate it. Although essential for science, peer review is largely understudied, and current attempts to reform it are not supported by scientific evidence. We suggest that there is room for social simulation to fill this gap by putting the social mechanisms behind peer review under the microscope and understanding their implications for the science system. In particular, social simulation could help to understand why voluntary peer review works at all, explore the relevance of social sanctions and reputational motives in increasing the commitment of the agents involved, cast light on the economic cost of this institution for the science system, and understand the influence of signals and social networks in determining biases in the reviewing process. Finally, social simulation could help to test policy scenarios to maximise the efficacy and efficiency of various peer review schemes under specific circumstances and for everyone involved.

Keywords: Peer Review, Social Simulation, Social Norms, Selection Biases, Science Policy