14 research outputs found

    HOW TO GET PAPERS PUBLISHED IN LEADING IS JOURNALS?

    Journals are the most important vehicles for sharing research results. Some countries (such as Brazil, Chile, and Portugal) are underrepresented among the originators of papers published in top Information Systems journals. This theoretical paper aims to provide a roadmap signposting the key elements a paper must exhibit to meet the criteria for publication in the top Information Systems journals. Ten dimensions for critically reviewing Information Systems papers were identified in the literature. Given the importance of publishing in a top journal, both for the author and for the institution with which they are affiliated, this paper may be used by researchers wishing to submit papers to top journals, as well as by editors and reviewers who might benefit from reflecting on the standards adopted in peer review systems.

    Invited Paper: Editing Special Issues of JISE: Practical Guidance and Recommendations

    The Journal of Information Systems Education (JISE) periodically publishes special issues on selected topics that are stimulating and highly relevant to its community of readers. This invited piece, written by three authors who collectively have substantial experience editing special issues, provides practical advice and guidance for colleagues within the field, be they seasoned academics or up-and-coming junior faculty, who may be interested in taking on the role of lead guest editor for future special issues of JISE.

    A Reliability-Generalization Study of Journal Peer Reviews: A Multilevel Meta-Analysis of Inter-Rater Reliability and Its Determinants

    Background: This paper presents the first meta-analysis of the inter-rater reliability (IRR) of journal peer reviews. IRR is defined as the extent to which two or more independent reviews of the same scientific document agree. Methodology/Principal Findings: Altogether, 70 reliability coefficients (Cohen's Kappa, intra-class correlation [ICC], and Pearson product-moment correlation [r]) from 48 studies were taken into account in the meta-analysis. The studies were based on a total of 19,443 manuscripts; on average, each study had a sample size of 311 manuscripts (minimum: 28, maximum: 1,983). The results of the meta-analysis confirmed the findings of the narrative literature reviews published to date: the level of IRR (mean ICC/r² = .34, mean Cohen's Kappa = .17) was low. To explain the study-to-study variation of the IRR coefficients, meta-regression analyses were calculated using seven covariates. Two covariates emerged as statistically significant in the meta-regression analyses conducted to achieve approximate homogeneity of the intra-class correlations: firstly, the more manuscripts a study is based on, the smaller the reported IRR coefficients; secondly, studies that reported information about the rating system given to reviewers were associated with smaller IRR coefficients than studies that did not. Conclusions/Significance: Studies that report a high level of IRR are to be considered less credible than those with a low level of IRR.
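    The chance-corrected agreement statistic at the centre of this meta-analysis, Cohen's Kappa, can be illustrated with a short sketch; the two reviewers and their ratings below are invented for illustration, not taken from the studies analysed:

    ```python
    from collections import Counter

    def cohens_kappa(ratings_a, ratings_b):
        """Cohen's Kappa: agreement between two raters, corrected for chance."""
        assert len(ratings_a) == len(ratings_b)
        n = len(ratings_a)
        # Observed agreement: fraction of items where both raters agree.
        p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
        # Expected chance agreement, from each rater's marginal frequencies.
        freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
        p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
        return (p_o - p_e) / (1 - p_e)

    # Two hypothetical reviewers rating ten manuscripts.
    a = ["accept", "reject", "revise", "accept", "reject",
         "accept", "revise", "reject", "accept", "revise"]
    b = ["accept", "revise", "revise", "accept", "reject",
         "reject", "revise", "reject", "accept", "accept"]
    print(round(cohens_kappa(a, b), 3))  # prints 0.545
    ```

    A Kappa of 0 means agreement no better than chance and 1 means perfect agreement, which puts the meta-analysis's mean of .17 in perspective.
    
    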

    Journals, repositories, peer review, non-peer review, and the future of scholarly communication

    Peer reviewed journals are a key part of the system by which academic knowledge is developed and communicated. Problems have often been noted, and alternatives proposed, but the journal system still survives. In this article I focus on problems relating to reliance on subject-specific journals and peer review. Contrary to what is often assumed, there are alternatives to the current system, some of which have only become viable since the rise of the world wide web. The market for academic ideas should be opened up by separating the publication service from the review service: the former would ideally be served by an open access, web-based repository system encompassing all disciplines, whereas the latter should be opened up to encourage non-peer reviews from different perspectives, user reviews, statistics reviews, reviews from the perspective of different disciplines, and so on. The possibility of multiple reviews of the same artefact should encourage competition between reviewing organizations and should make the system more responsive to the requirements of the differing audience groups. These possibilities offer the potential to make the academic system far more productive. Keywords: Academic journals, Open access, Peer review, Scholarly communication, Science communication.

    Is Law a Discipline? Forays into Academic Culture

    This Article explores academic culture. It addresses the reluctance in academic circles to accord law the full stature of a discipline. It organizes doubts that have been raised into a series of four criticisms. Each criticism attacks an academic feature of law, inviting the question: is law different from the rest of the university in a way that damages its stature as an academic discipline? The Article concludes that, upon careful examination, none of the criticisms establishes a difference between law and other disciplines capable of damaging law's stature.


    Towards a model for IS research methodology selection : the effect of epistemology choice on a consolidated research evaluation tool

    Word-processed copy; includes bibliographical references (leaves 106-112). Information Systems research is, for want of a better word, inadequate. Whilst there is nothing wrong with the quantity of the output or the abilities of the researchers themselves, the irrelevance (to practitioners) of much of the research has rendered it largely incapable of serving and supporting the Information Systems industry, a task that should be considered its primary objective. This dissertation aims to partially address this issue by analysing the role that methodology and epistemology have to play in the production and publishing of Information Systems research. It does so by analysing the different epistemologies (positivism, interpretivism, and critical research) and then estimating the effect their respective selection will have on Information Systems research by measuring their impact on a consolidated measure created in this research.

    Ethics in the University: Reflections on Responsible Scholarship

    Contributions by Richard De George, Ann E. Cudd, Tanya Hartman, Michael Murray, James F. Daugherty, Charles Marsh, and William I. Woods. Supported in part by a grant from the Council of Graduate Schools.

    Three empirical studies on the agreement of reviewers about the quality of software engineering experiments

    Context: During systematic literature reviews it is necessary to assess the quality of empirical papers. Current guidelines suggest that two researchers should independently apply a quality checklist and that any disagreements must be resolved. However, there is little empirical evidence concerning the effectiveness of these guidelines. Aims: This paper investigates three techniques that can be used to improve the reliability (i.e. the consensus among reviewers) of quality assessments: the number of reviewers, the use of a set of evaluation criteria, and consultation among reviewers. We undertook a series of studies to investigate these factors. Method: Two studies involved four research papers and eight reviewers using a quality checklist with nine questions. The first study was based on individual assessments; the second on joint assessments with a period of inter-rater discussion. A third, more formal randomised block experiment involved 48 reviewers assessing two of the papers used previously, in teams of one, two, and three persons, to assess the impact of discussion among teams of different size, using the evaluations of the "teams" of one person as a control. Results: For the first two studies, inter-rater reliability was poor for individual assessments but better for joint evaluations. However, the results of the third study contradicted those of Study 2: inter-rater reliability was poor for all groups, but worse for teams of two or three than for individuals. Conclusions: When performing quality assessments for systematic literature reviews, we recommend using three independent reviewers and adopting the median assessment. A quality checklist seems useful, but it is difficult to ensure that the checklist is both appropriate and understood by reviewers. Furthermore, future experiments should ensure participants are given more time to understand the quality checklist and to evaluate the research papers.
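    The paper's recommendation, three independent reviewers with the median taken as the consolidated assessment, can be sketched as follows; the paper names and checklist scores are hypothetical, and the median is used because it damps a single outlying review:

    ```python
    from statistics import median

    # Hypothetical quality-checklist scores (0-9) from three independent
    # reviewers for four papers in a systematic literature review.
    scores = {
        "paper_A": [7, 8, 4],
        "paper_B": [3, 5, 5],
        "paper_C": [9, 6, 8],
        "paper_D": [2, 2, 6],
    }

    # The per-paper median is the consolidated quality assessment:
    # one divergent reviewer cannot drag the result on their own.
    consolidated = {paper: median(s) for paper, s in scores.items()}
    print(consolidated)
    ```
    
    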