1,450,066 research outputs found
Peer Review system: A Golden standard for publications process
The peer review process helps to evaluate and validate research published in journals. The U.S. Office of Research Integrity reported that data fraud was involved in 94% of misconduct cases among 228 articles identified between 1994 and 2012. If fraud in published articles is really as high as reported, an obvious question arises: were these articles peer reviewed? Another report noted that reviewers failed to detect fabrication in 16 of Jan Hendrik Schön's articles. A superficial peer review process does not raise suspicion of misconduct. Lack of knowledge of a systematic review process not only undermines academic integrity in publication but also erodes the trust of people in the institution, the nation, and the world. The aim of this review article is to make stakeholders, especially novice reviewers, aware of the peer review system. Beginners will understand how to review an article and will be able to justify better choices of action when reviewing one.
Summary of Workshop to Review an OMB Report on Regulatory Risk Assessment and Management
Summary of the results of an invitational workshop conducted to peer review the 1990 OMB report, Current Regulatory Issues in Risk Assessment and Risk Management, in the Regulatory Program of the United States Government, April 1, 1990 - March 31, 1991.
Using peer review to enhance the quality of engineering laboratory reports
Peer review of third-year bioprocess engineering laboratory reports was introduced in an attempt to improve the standard of report writing in the BSc in Biotechnology degree programme at DCU. Preliminary results suggest that the review process leads to improved report writing skills. The student response to the initiative was very positive, but it was strongly felt that the process should be anonymous. On average, marks awarded by students were higher than those awarded by the lecturer, but there was a slight tendency to award more extreme marks.
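As a rough illustration of the comparison this abstract describes (student marks higher on average, but more spread out than the lecturer's), here is a minimal Python sketch. The marks are invented for illustration; the abstract does not publish its data.

```python
import statistics

# Hypothetical marks out of 100 for the same eight reports; these numbers
# are invented for illustration only.
student_marks  = [72, 85, 68, 91, 55, 88, 79, 62]
lecturer_marks = [70, 78, 69, 82, 61, 80, 75, 66]

# "Marks awarded by students were higher on average": compare means.
print("student mean:", statistics.mean(student_marks))    # 75.0
print("lecturer mean:", statistics.mean(lecturer_marks))  # 72.625

# "A slight tendency to award more extreme marks": compare spread
# (sample standard deviation) of the two sets of marks.
print("student stdev:", round(statistics.stdev(student_marks), 2))
print("lecturer stdev:", round(statistics.stdev(lecturer_marks), 2))
```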
Joint Report of the Peer Review Panel for Numeric Nutrient Criteria for the Great Bay Estuary, New Hampshire Department of Environmental Services, June 2009
This peer review was authorized through a collaborative agreement sponsored by the New Hampshire Department of Environmental Services (DES) and the Cities of Dover, Rochester, and Portsmouth, New Hampshire. The purpose was to conduct an independent scientific peer review of the document entitled "Numeric Nutrient Criteria for the Great Bay Estuary," dated June 2009 (the DES 2009 Report).
Systematic analysis of agreement between metrics and peer review in the UK REF
When performing a national research assessment, some countries rely on citation metrics whereas others, such as the UK, primarily use peer review. In the influential Metric Tide report, a low agreement between metrics and peer review in the UK Research Excellence Framework (REF) was found. However, earlier studies observed much higher agreement between metrics and peer review in the REF and argued in favour of using metrics. This shows that there is considerable ambiguity in the discussion on agreement between metrics and peer review. We provide clarity in this discussion by considering four important points: (1) the level of aggregation of the analysis; (2) the use of either a size-dependent or a size-independent perspective; (3) the suitability of different measures of agreement; and (4) the uncertainty in peer review. In the context of the REF, we argue that agreement between metrics and peer review should be assessed at the institutional level rather than at the publication level. Both a size-dependent and a size-independent perspective are relevant in the REF. The interpretation of correlations may be problematic, and as an alternative we therefore use measures of agreement that are based on the absolute or relative differences between metrics and peer review. To get an idea of the uncertainty in peer review, we rely on a model to bootstrap peer review outcomes. We conclude that, particularly in Physics, Clinical Medicine, and Public Health, metrics agree quite well with peer review and may offer an alternative to peer review.
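The paper's actual model is more involved, but the two ingredients the abstract names, an agreement measure based on absolute differences rather than correlation, and a bootstrap of peer review outcomes to capture their uncertainty, can be sketched in a few lines of Python. The institution names, scores, and the Gaussian noise model below are all assumptions made for illustration, not the paper's data or method.

```python
import random

# Hypothetical institution-level scores on a common 0-1 scale; the paper's
# actual data come from REF submissions and are not reproduced here.
metric_scores = {"inst_A": 0.62, "inst_B": 0.71, "inst_C": 0.55, "inst_D": 0.80}
review_scores = {"inst_A": 0.60, "inst_B": 0.75, "inst_C": 0.50, "inst_D": 0.78}

def mean_absolute_difference(metrics, reviews):
    """Agreement measure based on absolute differences between metrics and
    peer review, used here instead of a correlation coefficient."""
    diffs = [abs(metrics[i] - reviews[i]) for i in metrics]
    return sum(diffs) / len(diffs)

def bootstrap_review(reviews, n_samples=1000, noise=0.05, seed=42):
    """Crude stand-in for a bootstrap of peer review outcomes: perturb each
    institution's review score and recompute the agreement measure, giving a
    distribution that reflects uncertainty in peer review."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_samples):
        perturbed = {i: s + rng.gauss(0, noise) for i, s in reviews.items()}
        stats.append(mean_absolute_difference(metric_scores, perturbed))
    return stats

print("observed disagreement:", mean_absolute_difference(metric_scores, review_scores))
samples = bootstrap_review(review_scores)
print("mean disagreement under bootstrap:", sum(samples) / len(samples))
```

If the observed disagreement falls well inside the bootstrap distribution, metrics disagree with peer review no more than peer review "disagrees with itself", which is the spirit of the paper's conclusion for fields such as Physics.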
Quantifying the quality of peer reviewers through Zipf's law
This paper introduces a statistical (and other) analysis of peer reviewers in order to approach their "quality" through some quantification measure, thereby leading to quality metrics. Peer reviewer reports for the Journal of the Serbian Chemical Society are examined. The text of each report first has to be adapted to word-counting software in order to avoid jargon-induced confusion when searching for word frequencies: e.g. "C" must be distinguished depending on whether it means Carbon or Celsius. Thus, every report has to be carefully "rewritten". Thereafter, the quantity, variety, and distribution of words are examined in each report and compared to the whole set. Two separate months, according to when the reports came in, are distinguished in order to observe any possible hidden spurious effects; coherence is found. An empirical distribution is sought through a Zipf-Pareto rank-size law. It is observed that peer review reports are very far from usual texts in this respect. Deviations from the usual (first) Zipf's law are discussed. A theoretical suggestion for the "best (or worst) report", and by extension the "good (or bad) reviewer", within this context is provided from an entropy argument, through the concept of "distance to average" behaviour. Another entropy-based measure also makes it possible to assess a journal's reviews (and hence its reviewers) for further comparison with other journals through their own reviewer reports.
Comment: 28 pages; 8 tables; 9 figures; 39 references; prepared for and to be published in Scientometrics.
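The two computations the abstract leans on, a rank-size fit of a Zipf-Pareto law to word frequencies and an entropy of the word distribution, are standard and easy to sketch. The snippet below uses a toy review sentence and a plain least-squares fit in log-log coordinates; the study's actual referee reports and fitting procedure are not reproduced here.

```python
import math
from collections import Counter

# Toy review text; the study used real referee reports for the Journal of
# the Serbian Chemical Society, which are not reproduced here.
report = ("the manuscript is clear but the methods section of the manuscript "
          "should describe the synthesis and the characterisation in detail").split()

# Rank-size distribution: frequency f(r) of the r-th most frequent word.
freqs = sorted(Counter(report).values(), reverse=True)

# Fit the exponent of a Zipf-Pareto law f(r) ~ C / r^alpha by least squares
# on log(rank) vs log(frequency).
xs = [math.log(r) for r in range(1, len(freqs) + 1)]
ys = [math.log(f) for f in freqs]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
alpha = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print("fitted Zipf exponent:", round(alpha, 3))

# Shannon entropy of the word distribution: one way to phrase the paper's
# "distance to average" idea is to compare a report's entropy with the
# mean entropy over all reports in the set.
total = sum(freqs)
entropy = -sum(f / total * math.log(f / total) for f in freqs)
print("word entropy:", round(entropy, 3))
```

For the classical (first) Zipf's law the fitted exponent is close to 1; the abstract's observation that review reports are "very far from usual texts" amounts to reports deviating markedly from that value.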
Responding to, and learning from, peer review feedback.
What is peer review in research? Peer review is the process of assessing the scientific quality of a research proposal, research report, and/or paper by an independent expert, usually an academic or clinical expert.
