How to Write a Good Paper in Computer Science and How Will It Be Measured by ISI Web of Knowledge
The academic world has come to place enormous weight on bibliometric measures to assess the value of scientific publications. Our paper has two major goals. First, we discuss the limits of numerical assessment tools as applied to computer science publications. Second, we give guidelines on how to write a good paper, where to submit the manuscript, and how to deal with the reviewing process. We report our experience as editors of the International Journal of Computers Communications & Control (IJCCC). We analyze two important aspects of publishing: plagiarism and peer reviewing. As an example, we discuss the promotion assessment criteria used in the Romanian academic system. We openly express our concerns about how our work is evaluated, especially by existing bibliometric products. Our conclusion is that bibliometric measures should be combined with human interpretation. Keywords: scientific publication, publication assessment, plagiarism, reviewing, bibliometric indices
Open science and modified funding lotteries can impede the natural selection of bad science.
Assessing scientists using exploitable metrics can lead to the degradation of research methods even without any strategic behaviour on the part of individuals, via 'the natural selection of bad science.' Institutional incentives to maximize metrics such as publication quantity and impact drive this dynamic. Removing these incentives is necessary, but institutional change is slow. However, recent developments suggest possible solutions with more rapid onsets. These include what we call open science improvements, which can reduce publication bias and improve the efficacy of peer review. In addition, there have been increasing calls for funders to move away from prestige- or innovation-based approaches in favour of lotteries. Using computational modelling, we investigated whether such changes are likely to improve the reproducibility of science even in the presence of persistent incentives for publication quantity. We found that modified lotteries, which allocate funding randomly among proposals that pass a threshold for methodological rigour, effectively reduce the rate of false discoveries, particularly when paired with open science improvements that increase the publication of negative results and improve the quality of peer review. In the absence of funding that targets rigour, open science improvements can still reduce false discoveries in the published literature but are less likely to improve the overall culture of research practices that underlie those publications.
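To make the funding mechanism concrete, the following is a minimal Python sketch of a modified lottery as described above: proposals below a methodological-rigour threshold are screened out, and awards are drawn at random from the remaining pool. The proposal fields, threshold value, and award count are illustrative assumptions, not parameters from the authors' model.

```python
import random

# Minimal sketch of a "modified lottery": fund proposals drawn at random from
# the pool that clears a methodological-rigour threshold. The proposal fields
# and the threshold value are illustrative assumptions, not the authors' model.

def modified_lottery(proposals, rigour_threshold=0.7, n_awards=10, seed=None):
    """Return a random subset of proposals whose rigour score passes the threshold."""
    rng = random.Random(seed)
    eligible = [p for p in proposals if p["rigour"] >= rigour_threshold]
    return rng.sample(eligible, min(n_awards, len(eligible)))

# Example: 100 hypothetical proposals with random rigour scores in [0, 1].
proposals = [{"id": i, "rigour": random.random()} for i in range(100)]
funded = modified_lottery(proposals, rigour_threshold=0.7, n_awards=10, seed=42)
print([p["id"] for p in funded])
```

The key design point the abstract emphasises is that funding no longer rewards anything beyond passing the rigour screen, so there is no selective advantage for labs that maximise publication quantity at the expense of methods.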
Six Inversion Strategies for Avoiding Rejection in Academic Publishing: Lessons from the IS Discipline
The publication process in many academic disciplines, including Information Systems (IS), can seem arduous and unpredictable, particularly for early career researchers. While the literature offers plentiful guidance on how to pursue a paper acceptance, this paper offers a crisp summary of common mistakes that lead to rejection and how to avoid them. We provide six actionable inversion strategies for avoiding common mistakes that often lead to paper rejection. Namely, when preparing a paper, we recommend you (1) abstain from methodological promiscuity and (2) never overclaim (but try not to underclaim either); when submitting a paper, (3) steer clear of bootlicking and (4) avoid sloppiness; and, after receiving the reviews, (5) forego belligerence and (6) stop flogging a dead horse. These inversion strategies can help early career researchers better navigate the review process, increasing the chances of their papers maturing and helping them avoid mistakes that lower the chance of publishing in high-quality IS journals.
Decentralizing Science: Towards an Interoperable Open Peer Review Ecosystem using Blockchain
Science publication and its peer review system strongly rely on a few major industry players that control most journals (e.g. Elsevier), databases (e.g. Scopus) and metrics (e.g. JCR Impact Factor), while keeping most articles behind paywalls. Criticisms of this system include concerns about fairness, quality, performance, cost, unpaid labor, transparency, and the accuracy of the evaluation process. The Open Access movement has tried to provide free access to published research articles, but most of the aforementioned issues remain. In this context, decentralized technologies such as blockchain offer an opportunity to experiment with new models for science production and dissemination that rely on a decentralized infrastructure and aim to tackle several of the current system's shortcomings. This paper proposes an interoperable decentralized system for an open peer review ecosystem, relying on emerging distributed technologies such as blockchain and IPFS. This system, named "Decentralized Science" (DecSci), aims to enable a decentralized reviewer reputation system, which relies on an Open Access by-design infrastructure together with transparent governance processes. Two prototypes have been implemented: a proof-of-concept prototype to validate DecSci's technological feasibility, and a Minimum Viable Product (MVP) prototype co-designed with journal editors. In addition, three evaluations have been carried out: an exploratory survey to assess interest in the issues tackled, a set of interviews to confirm the main problems faced by editors, and another set of interviews to validate the MVP prototype. The paper also discusses the multiple interoperability challenges such a proposal faces, including an architecture to tackle them. This work finishes with a review of some of the open challenges that this ambitious proposal may face.
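The abstract does not detail DecSci's implementation, but the core idea of an open, tamper-evident review trail can be sketched with standard-library stand-ins: a SHA-256 digest playing the role of an IPFS content identifier and a hash-linked list playing the role of a blockchain. Every name and record field below is a hypothetical illustration, not the DecSci prototypes' actual API.

```python
import hashlib, json, time

# Minimal sketch of a tamper-evident open review log. SHA-256 hashes stand in
# for IPFS content identifiers and the hash-linked list stands in for a
# blockchain; none of this is the DecSci prototypes' actual API.

def content_id(review: dict) -> str:
    """Content-address a review record, as IPFS does with a CID."""
    return hashlib.sha256(json.dumps(review, sort_keys=True).encode()).hexdigest()

class ReviewLog:
    """Append-only, hash-linked log of review content identifiers."""
    def __init__(self):
        self.blocks = []

    def append(self, review: dict) -> dict:
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {"cid": content_id(review), "prev": prev, "time": time.time()}
        block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        self.blocks.append(block)
        return block

log = ReviewLog()
log.append({"paper": "example-0001", "reviewer": "anon-42", "verdict": "minor revisions"})
print(log.blocks[-1])
```

Because each block commits to the previous block's hash, altering or removing an earlier review breaks the chain, which is the property a decentralized reviewer reputation system would build on.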
Crafting a Paper for Publication
The relationship between doing good research and getting the research published is not a causal one. At best, there is a correlation between the quality of a research paper and its being accepted for publication. A research paper's becoming accepted for publication is ultimately a social process, which exists in addition to and is no less important than the content of the paper itself. In this article, I examine how the social process can influence the crafting of a paper for submission to a journal, and re-crafting it in the event that the journal's editor asks for a revision.