
    Data sharing in sociology journals

    Purpose - Data sharing is key for replication and re-use in empirical research. Scientific journals can play a central role by establishing data policies and providing technologies. This paper analyzes the factors which influence data sharing by investigating journal data policies and the behavior of authors in sociology. Design/methodology/approach - The websites of 140 sociology journals were consulted to check their data policies. The results are compared with similar studies from political science and economics. A broad selection of articles published in five selected journals over a period of two years is examined to determine whether authors actually cite and share their data, and which factors are related to this. Findings - Although only a few sociology journals have explicit data policies, most journals refer to a common policy supplied by their association of publishers. Among the journals selected, relatively few articles provide data citations and even fewer make data available; this is true for journals both with and without a data policy. However, authors writing for journals with higher impact factors and with data policies are more likely to cite data and to make it genuinely accessible. Originality/value - No study of journal data policies has been undertaken to date for the domain of sociology. A comparison of authors' behavior regarding data availability, data citation, and data accessibility for journals with or without a data policy provides useful information about the factors which improve data sharing.

    Statistical modelling of citation exchange among statistics journals

    Scholarly journal rankings based on citation data are often met with skepticism by the scientific community. Part of the skepticism is due to the discrepancy between the common perception of journals' prestige and their ranking based on citation counts. A more serious concern is the inappropriate use of journal rankings to evaluate the scientific influence of authors. This paper focuses on the analysis of the table of cross-citations among a selection of Statistics journals. Data are collected from the Web of Science database published by Thomson Reuters. Our results suggest that modelling the exchange of citations between journals is useful for highlighting the most prestigious journals, but also that journal citation data are characterized by considerable heterogeneity, which needs to be properly summarized. Inferential conclusions require care in order to avoid potential over-interpretation of insignificant differences between journal ratings.
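    As an illustration of the kind of modelling the abstract refers to, the sketch below fits a Bradley-Terry-type paired-comparison model to a toy cross-citation table: every citation between two journals is treated as a contest won by the cited journal, and ability scores are estimated by a standard minorization-maximization iteration. This is a minimal sketch of one common approach, not the authors' exact specification; the three-journal table and its counts are hypothetical.

        import numpy as np

        def bradley_terry(C, iters=500, tol=1e-10):
            """C[i, j] = citations from journal j to journal i ('wins' of i over j).
            Returns ability scores that sum to one."""
            C = C.astype(float)
            np.fill_diagonal(C, 0.0)            # ignore self-citations
            wins = C.sum(axis=1)                # citations received from other journals
            pairs = C + C.T                     # citations exchanged within each pair
            p = np.ones(C.shape[0]) / C.shape[0]
            for _ in range(iters):
                denom = (pairs / (p[:, None] + p[None, :])).sum(axis=1)
                new_p = wins / denom
                new_p /= new_p.sum()
                if np.max(np.abs(new_p - p)) < tol:
                    break
                p = new_p
            return p

        # Toy cross-citation table among three hypothetical journals.
        C = np.array([[0, 30, 10],
                      [20, 0, 5],
                      [15, 25, 0]])
        print(bradley_terry(C))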

    A review of the literature on citation impact indicators

    Citation impact indicators nowadays play an important role in research evaluation, and consequently these indicators have received a lot of attention in the bibliometric and scientometric literature. This paper provides an in-depth review of the literature on citation impact indicators. First, an overview is given of the literature on bibliographic databases that can be used to calculate citation impact indicators (Web of Science, Scopus, and Google Scholar). Next, selected topics in the literature on citation impact indicators are reviewed in detail. The first topic is the selection of publications and citations to be included in the calculation of citation impact indicators. The second topic is the normalization of citation impact indicators, in particular normalization for field differences. Counting methods for dealing with co-authored publications are the third topic, and citation impact indicators for journals are the last topic. The paper concludes by offering some recommendations for future research.
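    The field-normalization idea reviewed above can be illustrated with a short sketch: each paper's citation count is divided by the average citation count of papers from the same field and publication year, and the ratios are averaged per evaluated unit (an MNCS-style score). This is only one common variant; the record layout, field labels, and numbers below are hypothetical.

        from collections import defaultdict

        def field_normalized_scores(papers):
            """papers: dicts with keys 'unit', 'field', 'year', 'citations'."""
            # Baseline: average citations per (field, year) combination.
            totals, counts = defaultdict(float), defaultdict(int)
            for p in papers:
                key = (p["field"], p["year"])
                totals[key] += p["citations"]
                counts[key] += 1
            expected = {k: totals[k] / counts[k] for k in totals}

            # Average of citations/baseline per unit (author, group, journal, ...).
            ratios = defaultdict(list)
            for p in papers:
                baseline = expected[(p["field"], p["year"])]
                if baseline > 0:
                    ratios[p["unit"]].append(p["citations"] / baseline)
            return {unit: sum(r) / len(r) for unit, r in ratios.items()}

        print(field_normalized_scores([
            {"unit": "A", "field": "soc",  "year": 2019, "citations": 10},
            {"unit": "A", "field": "soc",  "year": 2019, "citations": 2},
            {"unit": "B", "field": "soc",  "year": 2019, "citations": 6},
            {"unit": "B", "field": "math", "year": 2019, "citations": 4},
            {"unit": "C", "field": "math", "year": 2019, "citations": 1},
            {"unit": "C", "field": "math", "year": 2019, "citations": 1},
        ]))  # {'A': 1.0, 'B': 1.5, 'C': 0.5}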

    Understanding the relevance of national culture in international business research: a quantitative analysis

    This review is a comprehensive quantitative analysis of the International Business literature whose focus is on national culture. The analysis relies on a broad range of bibliometric techniques, such as productivity rankings, citation analysis (individual and cumulative), the study of collaborative research patterns, and analysis of the knowledge base. It provides insights on (i) faculty and institutional research productivity and performance; (ii) the influence of articles, institutions, and scholars on the contents of the field and its research agenda; and (iii) national and international collaborative research trends. The study also explores the body of literature that has exerted the greatest impact on the researched set of selected articles.

    A review of the characteristics of 108 author-level bibliometric indicators

    An increasing demand for bibliometric assessment of individuals has led to a growth of new bibliometric indicators as well as new variants or combinations of established ones. The aim of this review is to contribute objective facts about the usefulness of bibliometric indicators of the effects of publication activity at the individual level. This paper reviews 108 indicators that can potentially be used to measure performance at the individual author level, and examines the complexity of their calculations in relation to what they are supposed to reflect and their ease of end-user application.

    Impact Factor: outdated artefact or stepping-stone to journal certification?

    A review of Garfield's journal impact factor and its specific implementation as the Thomson Reuters Impact Factor reveals several weaknesses in this commonly used indicator of journal standing. Key limitations include the mismatch between citing and cited documents, the deceptive display of three decimals that belies the real precision, and the absence of confidence intervals. These are minor issues that are easily amended and should be corrected, but more substantive improvements are needed. There are indications that the scientific community seeks and needs better certification of journal procedures to improve the quality of published science. Comprehensive certification of editorial and review procedures could help ensure adequate procedures to detect duplicate and fraudulent submissions.
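    For reference, the generic two-year impact factor formula that this critique targets can be sketched as follows: citations received in a given year to items published in the two preceding years, divided by the number of citable items published in those years. The function and figures are illustrative only, not the Thomson Reuters implementation; note how reporting three decimals suggests more precision than a few hundred counted citations can support.

        def two_year_impact_factor(cites_to_year, citable_items, year):
            """cites_to_year[y]: citations received in `year` to items published in y;
            citable_items[y]: number of citable items published in y."""
            cites = cites_to_year[year - 1] + cites_to_year[year - 2]
            items = citable_items[year - 1] + citable_items[year - 2]
            return cites / items

        # Hypothetical journal: 400 citations to 230 citable items.
        cites = {2021: 210, 2022: 190}
        items = {2021: 120, 2022: 110}
        print(round(two_year_impact_factor(cites, items, 2023), 3))  # 1.739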

    The relevance of the ‘h’ and ‘g’ index to economics in the context of a nation-wide research evaluation scheme: The New Zealand case

    The purpose of this paper is to explore the relevance of the citation-based ‘h’ and ‘g’ indexes as a means of measuring research output in economics. This study is unique in that it is the first to utilize the ‘h’ and ‘g’ indexes in the context of a time-limited evaluation period and to provide comprehensive coverage of all academic economists in all university-based economics departments within a nation state. For illustration purposes we have selected New Zealand’s Performance Based Research Fund (PBRF) as our evaluation scheme. In order to provide a frame of reference for ‘h’ and ‘g’ index output measures, we have also estimated research output using a number of journal-based weighting schemes. In general, our findings suggest that ‘h’ and ‘g’ index scores are strongly associated with low-powered journal ranking schemes and weakly associated with high-powered journal weighting schemes. More specifically, we found the ‘h’ and ‘g’ indexes to suffer from a lack of differentiation: for example, 52 percent of all participants received a score of zero under both measures, and 92 and 89 percent received scores of two or less under ‘h’ and ‘g’, respectively. Overall, our findings suggest that the ‘h’ and ‘g’ indexes should not be incorporated into a PBRF-like framework.
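    For readers unfamiliar with the two indexes, the sketch below computes them from a list of per-paper citation counts: the ‘h’ index is the largest h such that h papers each have at least h citations, and the ‘g’ index is the largest g such that the g most-cited papers have at least g-squared citations in total. The citation counts are made up for illustration.

        def h_index(citations):
            counts = sorted(citations, reverse=True)
            # Count ranks whose paper has at least `rank` citations.
            return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

        def g_index(citations):
            counts = sorted(citations, reverse=True)
            total, g = 0, 0
            for rank, c in enumerate(counts, start=1):
                total += c
                if total >= rank * rank:
                    g = rank
            return g

        papers = [25, 8, 5, 3, 3, 1, 0, 0]
        print(h_index(papers), g_index(papers))  # 3 6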

    The assessment of science: the relative merits of post-publication review, the impact factor, and the number of citations

    The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published. We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores and between assessor score and the number of citations is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. We conclude that the three measures of scientific merit considered here are poor; in particular, subjective assessments are an error-prone, biased, and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative.

    Citation Statistics

    This is a report about the use and misuse of citation data in the assessment of scientific research. The idea that research assessment must be done using "simple and objective" methods is increasingly prevalent today. These "simple and objective" methods are broadly interpreted as bibliometrics, that is, citation data and the statistics derived from them. There is a belief that citation statistics are inherently more accurate because they substitute simple numbers for complex judgments, and hence overcome the possible subjectivity of peer review. But this belief is unfounded. Commented in arXiv:0910.3532, arXiv:0910.3537, arXiv:0910.3543, and arXiv:0910.3546, with a rejoinder in arXiv:0910.3548. Published in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/09-STS285.