
    Do editors and referees look for signs of scientific misconduct when reviewing manuscripts? A quantitative content analysis of studies that examined review criteria and reasons for accepting and rejecting manuscripts for publication

    The case of Dr. Hwang Woo Suk, the South Korean stem-cell researcher, is arguably the highest-profile case in the history of research misconduct. The discovery of Dr. Hwang's fraud led to fierce criticism of the peer review process (at Science). To find answers to the question of why the journal peer review system did not detect scientific misconduct (falsification or fabrication of data) not only in the Hwang case but also in many other cases, an overview is needed of the criteria that editors and referees normally consider when reviewing a manuscript. Do they look at all for signs of scientific misconduct when reviewing a manuscript? We conducted a quantitative content analysis of 46 research studies that examined editors' and referees' criteria for the assessment of manuscripts and their grounds for accepting or rejecting manuscripts. The total of 572 criteria and reasons from the 46 studies could be assigned to nine main areas: (1) ‘relevance of contribution,' (2) ‘writing / presentation,' (3) ‘design / conception,' (4) ‘method / statistics,' (5) ‘discussion of results,' (6) ‘reference to the literature and documentation,' (7) ‘theory,' (8) ‘author's reputation / institutional affiliation,' and (9) ‘ethics.' None of the criteria or reasons that were assigned to the nine main areas refers to or is related to possible falsification or fabrication of data. In a second step, the study examined which main areas take on high and low significance for editors and referees in manuscript assessment. The main areas that are clearly related to the quality of the research underlying a manuscript frequently emerged in the analysis as important: ‘theory,' ‘design / conception,' and ‘discussion of results.'

    Predatory Publishing and the Psychology Behind it

    This editorial discusses the publishing strategies of some journals, authors' reactions to them, and the quality of the resulting publications.

    Measuring the Institution's Footprint in the Web

    Purpose: Our purpose is to provide an alternative, though complementary, system for the evaluation of the scholarly activities of academic organizations, scholars and researchers, based on web indicators, in order to speed up the change of paradigm in scholarly communication towards a new fully electronic 21st century model. Design/methodology/approach: In order to achieve these goals, a new set of web indicators has been introduced, obtained mainly from data gathered from search engines, the new mediators of scholarly communication. We found that three large groups of indicators are feasible to obtain and relevant for evaluation purposes: activity (web publication); impact (visibility); and usage (visits and visitors). Findings: As a proof of concept, a Ranking Web of Universities has been built with Webometrics data. There are two relevant findings: ranking results are similar to those obtained by other bibliometric-based rankings; and there is a concerning digital divide between North American and European universities, with European universities appearing in lower positions than their US and Canadian counterparts. Research limitations / implications: Cybermetrics is still an emerging discipline, so new developments should be expected as more empirical data become available. Practical implications: The proposed approach suggests the publication of truly electronic journals, rather than digital versions of printed articles. Additional materials such as raw data and multimedia files should be included along with other relevant information arising from more informal activities. These repositories should be Open Access, available as part of the public Web, indexed by the main commercial search engines. 
We anticipate that these actions could generate larger Web-based audiences, reduce the costs of publication and access, and allow third parties to take advantage of the knowledge generated, without sacrificing peer review, which should be extended (pre- & post-) and expanded (closed & open). Originality / value: A full taxonomy of web indicators is introduced for describing and evaluating research activities, academic organizations, and individual scholars and scientists. Previous attempts to build such a classification were less complete and did not take feasibility and efficiency into account.
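The three indicator groups named above (activity, impact, usage) can be sketched as a composite score. This is only an illustration with assumed indicator names, weights, and synthetic counts; it is not the actual Webometrics ranking methodology.

```python
import math
from dataclasses import dataclass

@dataclass
class WebIndicators:
    web_pages: int  # activity: web publication volume
    inlinks: int    # impact: visibility (links pointing at the institution)
    visits: int     # usage: visits and visitors

def composite_score(w: WebIndicators, weights=(0.3, 0.5, 0.2)) -> float:
    # Log scaling is a common choice for heavy-tailed web counts;
    # the weights here are arbitrary assumptions for the sketch.
    values = (w.web_pages, w.inlinks, w.visits)
    return sum(wt * math.log10(1 + v) for wt, v in zip(weights, values))

u1 = WebIndicators(web_pages=50_000, inlinks=200_000, visits=1_000_000)
u2 = WebIndicators(web_pages=5_000, inlinks=10_000, visits=80_000)
print(composite_score(u1) > composite_score(u2))  # True: u1 ranks higher
```

Any real ranking would also need normalization across institutions of different sizes, which this sketch omits.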

    Considerations on the Impact Factor as a Tool in Scientific Assessment

    This exploratory study aims to widen the debate over the impact factor and its application as a tool in the assessment of science. The analysis of scientific activity has been the subject of much debate, especially in the last decade. The impact factor and citation frequency have been regularly used in the individual analysis of researchers as well as to assess the performance of journals, investigators, research centers, graduate programs, institutions, and countries. The impact factor of a journal can be heavily influenced by a few of its articles that are cited very frequently. Some researchers are more concerned about publishing in high-impact journals than about doing science itself. How scientific research output is assessed by funding agencies, academic institutions, and others has been the subject of much debate, and the central question is whether the impact factor (IF) is the best indicator of scientific quality or an already obsolete measure. There is a movement of researchers against using the impact factor as a qualitative measure of research articles, or even as an indicator of a scientist's contribution in hiring, promotion, or grant decisions. The academic community therefore needs to reflect deeply on this important issue.
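The point that a few very frequently cited articles can dominate a journal's impact factor follows from the IF being an average. A small illustration with synthetic (not real) per-article citation counts:

```python
# Synthetic per-article citation counts for a hypothetical journal:
# one outlier article dominates the mean that an impact factor reflects.
citations = [0, 1, 1, 2, 2, 3, 120]

mean_citations = sum(citations) / len(citations)          # what an IF tracks
median_citations = sorted(citations)[len(citations) // 2] # the typical article

print(f"mean {mean_citations:.1f} vs median {median_citations}")
```

The mean here is roughly an order of magnitude above the median, so the journal-level figure says little about a typical article in it.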

    Ethical Aspects of Scientific Publications

    This work discusses ethical issues in scientific and academic production, analyzes situations in which conflict arises among actors within scientific communities, and examines how it echoes through the way research is conducted today. Examples from the literature are cited across the different aspects considered misconduct: plagiarism, fabrication or falsification of data, non-publication of data, authorship problems, issues related to copyright, and misconduct committed by journal referees and editors. Finally, considerations are made on the subject, among them that the topic needs to be discussed more openly among students and teachers, in schools, and in society in general; if it were, many cases could certainly be avoided.

    Do Procedure Models Actually Guide Maturity Model Design? A Citation Analysis

    More than a decade ago, guidelines for the development of maturity models were proposed in the form of procedure models. In theory, such procedure models provide scholars with guidance, but does the scientific community actually use them for their intended purpose? This paper conducts a citation analysis and identifies an impressive number of citations. However, the publications are mainly cited for other reasons, such as the components or general purposes of maturity models. The analysis also indicates that many maturity models are developed without using a procedure model. Although methodological rigor is considered a crucial criterion for publishing articles, maturity model designers may have concerns about using domain-specific procedure models. Future studies should address the reasons for this reluctance.

    On Hochberg et al.'s "The tragedy of the reviewer commons"

    We discuss each of the recommendations made by Hochberg et al. (2009) to prevent the “tragedy of the reviewer commons”. Having scientific journals share a common database of reviewers would be to recreate a bureaucratic organization, where extra-scientific considerations prevailed. Pre-reviewing of papers by colleagues is a widespread practice but raises problems of coordination. Revising manuscripts in line with all reviewers’ recommendations presupposes that recommendations converge, which is rarely the case. Signing an undertaking that authors have taken into account all reviewers’ comments is both authoritarian and sterilizing. Sending previous comments with subsequent submissions to other journals amounts to creating a cartel and a single all-encompassing journal, which again is sterilizing. Using young scientists as reviewers is highly risky: they might prove very severe; and if they have not yet published themselves, the recommendation violates the principle of peer review. Asking reviewers to be more severe would only create a crisis in the publishing houses and actually increase reviewers’ workloads. The criticisms of the behavior of authors looking to publish in the best journals are unfair: it is natural for scholars to try to publish in the best journals and not to resign themselves to being second rate. Punishing lazy reviewers would only lower the quality of reports: instead, we favor the idea of paying reviewers “in kind” with, say, complimentary books or papers.
    Keywords: Reviewer; Referee; Editor; Publisher; Publishing; Tragedy of the Commons; Hochberg

    Impact Factor: outdated artefact or stepping-stone to journal certification?

    A review of Garfield's journal impact factor and its specific implementation as the Thomson Reuters Impact Factor reveals several weaknesses in this commonly used indicator of journal standing. Key limitations include the mismatch between citing and cited documents, the deceptive display of three decimals that belies the real precision, and the absence of confidence intervals. These are minor issues that are easily amended and should be corrected, but more substantive improvements are needed. There are indications that the scientific community seeks and needs better certification of journal procedures to improve the quality of published science. Comprehensive certification of editorial and review procedures could help ensure adequate procedures to detect duplicate and fraudulent submissions.
    Comment: 25 pages, 12 figures, 6 tables
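The three-decimals criticism can be made concrete with the standard two-year ratio. The counts below are hypothetical, not drawn from any real journal; the sketch only shows how the customary display overstates the precision of a simple ratio of counts.

```python
# Two-year impact factor for 2023, with assumed (hypothetical) counts:
# citations received in 2023 to items published in the two preceding years,
# divided by the citable items published in those years.
citations_2023 = {2021: 310, 2022: 412}  # hypothetical citation counts
citable_items = {2021: 150, 2022: 160}   # hypothetical citable-item counts

impact_factor = sum(citations_2023.values()) / sum(citable_items.values())
print(f"{impact_factor:.3f}")  # 2.329 -- the customary three-decimal display
print(f"{impact_factor:.1f}")  # 2.3   -- closer to the ratio's real precision
```

A confidence interval for such a ratio would require modeling citation counts (e.g. as a Poisson process), which is exactly the kind of amendment the abstract calls for.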