4,643 research outputs found

    Citation Analysis: A Comparison of Google Scholar, Scopus, and Web of Science

    Get PDF
    When faculty members are evaluated, they are judged in part by the impact and quality of their scholarly publications. While all academic institutions look to publication counts and venues as well as the subjective opinions of peers, many hiring, tenure, and promotion committees also rely on citation analysis to obtain a more objective assessment of an author’s work. Consequently, faculty members try to identify as many citations to their published works as possible to provide a comprehensive assessment of their publication impact on the scholarly and professional communities. The Institute for Scientific Information’s (ISI) citation databases, which are widely used as a starting point, if not the only source, for locating citations, have several limitations that may leave gaps in the coverage of citations to an author’s work. This paper presents a case study comparing citations found in Scopus and Google Scholar with those found in Web of Science (the portal used to search the three ISI citation databases) for items published by two full-time Library and Information Science faculty members. In addition, the paper presents a brief overview of a prototype system called CiteSearch, which analyzes combined data from multiple citation databases to produce citation-based quality evaluation measures.
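    At its core, combining citation data from several databases means deduplicating the citing documents each database reports before computing any metric. The sketch below illustrates that merging step and an h-index computation over the merged counts; it is a minimal illustration under assumed inputs, not the actual CiteSearch implementation, and all identifiers (database names, paper IDs) are hypothetical.

```python
# Illustrative sketch: union citing-document IDs per publication across
# databases so a citation seen by two sources is counted only once.
def merge_citations(*sources: dict[str, set[str]]) -> dict[str, set[str]]:
    merged: dict[str, set[str]] = {}
    for source in sources:
        for pub_id, citing_ids in source.items():
            merged.setdefault(pub_id, set()).update(citing_ids)
    return merged

def h_index(citation_counts: list[int]) -> int:
    """Largest h such that h publications have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

# Hypothetical citing-document sets as seen by three databases.
wos     = {"paper-1": {"c1", "c2"}, "paper-2": {"c3"}}
scopus  = {"paper-1": {"c2", "c4"}, "paper-2": {"c3", "c5"}}
scholar = {"paper-1": {"c4", "c6"}}

merged = merge_citations(wos, scopus, scholar)
print(h_index([len(v) for v in merged.values()]))  # prints 2
```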

    Science Quality and the Value of Inventions

    Get PDF
    Despite decades of research, the relationship between the quality of science and the value of inventions has remained unclear. We present the results of a large-scale matching exercise between 4.8 million patent families and 43 million publication records. We find a strong positive relationship between the quality of the scientific contributions referenced in patents and the value of the respective inventions. We rank patents by the quality of the science they are linked to. Strikingly, high-rank patents are twice as valuable as low-rank patents, which in turn are about as valuable as patents without a direct science link. We show this core result for various science quality and patent value measures. The effect of science quality on patent value remains relevant even when science is linked indirectly through other patents. Our findings imply that what is considered "excellent" within the science sector also leads to outstanding outcomes in the technological or commercial realm.
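    To make the ranking exercise concrete, here is a toy sketch of the core comparison: patents are ranked by the quality of the publications they reference, split into high and low ranks, and compared with patents that have no direct science link. All values, quality scores, and the choice of the maximum score as the ranking criterion are invented for illustration; this is not the authors' matching pipeline.

```python
# Toy sketch: compare patent value across science-quality ranks.
from statistics import mean

patents = [  # (patent_id, value, quality scores of referenced publications)
    ("p1", 10.0, [0.9, 0.8]),
    ("p2",  4.0, [0.2]),
    ("p3",  4.5, None),        # no direct science link
    ("p4", 12.0, [0.95]),
    ("p5",  3.8, [0.1, 0.3]),
]

# Rank science-linked patents by the best quality score they reference.
linked = sorted(((pid, val, max(q)) for pid, val, q in patents if q),
                key=lambda t: t[2], reverse=True)
half = len(linked) // 2
high, low = linked[:half], linked[half:]
unlinked = [val for _, val, q in patents if not q]

print(mean(v for _, v, _ in high))   # high-rank patents...
print(mean(v for _, v, _ in low))    # ...versus low-rank patents...
print(mean(unlinked))                # ...versus patents without a science link
```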

    Altmetrics for Digital Libraries: Concepts, Applications, Evaluation, and Recommendations

    Get PDF
    The volume of scientific literature is rapidly increasing, leaving researchers overloaded by the number of articles available to read and struggling to estimate their quality and relevance (e.g., with respect to their research interests). In these circumstances, library portals are becoming increasingly relevant by offering quality indicators that can help researchers during the research discovery process. Several evaluation methods (e.g., citations, the Journal Impact Factor, and peer review) have been used and suggested by library portals to help researchers filter out relevant articles (e.g., articles that have received many citations) for their needs. However, these methods have been criticized, and a number of weaknesses have been identified and discussed: citations, for example, usually take a long time to appear, and some important articles can remain uncited. With the growing presence of social media, new alternative indicators known as “altmetrics” have been proposed as complements to traditional measures (i.e., bibliometrics). They can capture the online attention received by articles, which might act as a further indicator for research assessment; one often mentioned advantage of these alternative indicators is that they appear much faster than citations. A large number of studies have explored altmetrics for different disciplines, but few have reported on altmetrics in the fields of Economics and Business Studies, and no studies so far have analyzed altmetrics within these disciplines with respect to libraries and information overload. This thesis therefore explores opportunities for introducing altmetrics as a new method for filtering relevant articles (in library portals) within the Economics and Business Studies literature. To achieve this objective, we have worked on four main aspects of altmetrics and altmetrics data, whose results can be used to fill the gap in this field of research.
    (1) We first highlight to what extent altmetric information from the two altmetric providers Mendeley and Altmetric.com is present within the journals of Economics and Business Studies. Based on this coverage, we demonstrate that altmetrics data are sparse in these disciplines, and that when altmetrics data are considered for real-world applications (e.g., in libraries), higher aggregation levels, such as the journal level, can overcome this sparsity well.
    (2) We perform and discuss correlations between citations and different types and sources of altmetrics at the article and journal levels. We show that Mendeley counts are positively and strongly correlated with citation counts at both the article and journal levels, whereas other indicators such as Twitter counts and the Altmetric Attention Score are significantly correlated only at the journal level (a short sketch of this correlation analysis follows the abstract). On this basis, we can suggest Mendeley counts as an alternative indicator to citations for Economics and Business Studies journals and articles.
    (3) In conjunction with the findings related to altmetrics in Economics and Business Studies journals, we discuss three altmetrics use cases derived from three ZBW personas, investigating the use of altmetrics data for potential users with interests in new trends, social media platforms, and journal rankings.
    (4) We investigated the behavior of economics researchers through a survey exploring the usefulness of different journal-level altmetrics when deciding which article to read. The user evaluation results demonstrate that altmetrics are not yet well known or understood by the economics community. However, this does not mean that these indicators are of no help to economists; rather, it raises the question of how to introduce altmetrics to the economics community in the right way and with which characteristics (e.g., as visible numbers attached to library records or behind the library’s relevance ranking system). Considering the aforementioned findings of this thesis, we suggest several forms of presenting altmetric information in library portals, using EconBiz as the proof of concept, with the intention of assisting both researchers and libraries in identifying relevant journals or articles (e.g., highly mentioned online and recently published) for their needs and in coping with information overload.
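    The following sketch illustrates the two aggregation levels used in point (2): a Spearman correlation between altmetric counts and citations computed once across articles, and once after summing counts per journal. The journal names, reader counts, and citation counts are invented purely for illustration; scipy's spearmanr is used for the correlation.

```python
# Sketch: altmetrics-citations correlation at article vs. journal level.
from collections import defaultdict
from scipy.stats import spearmanr

articles = [  # (journal, mendeley_readers, citations) -- invented data
    ("J1", 12, 10), ("J1", 30, 25), ("J2", 5, 2),
    ("J2", 8, 6), ("J3", 50, 40), ("J3", 45, 38),
]

# Article level: correlate the raw per-article counts.
rho_art, p_art = spearmanr([m for _, m, _ in articles],
                           [c for _, _, c in articles])

# Journal level: aggregate per journal first, then correlate.
agg = defaultdict(lambda: [0, 0])
for journal, mendeley, citations in articles:
    agg[journal][0] += mendeley
    agg[journal][1] += citations
rho_jnl, p_jnl = spearmanr([v[0] for v in agg.values()],
                           [v[1] for v in agg.values()])

print(f"article level: rho={rho_art:.2f}, journal level: rho={rho_jnl:.2f}")
```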

    Evaluating the online impact of reporting guidelines for randomised trial reports and protocols: a cross-sectional web-based data analysis of CONSORT and SPIRIT initiatives

    Get PDF
    Reporting guidelines are tools to help improve the transparency, completeness, and clarity of published articles in health research. Specifically, the CONSORT (Consolidated Standards of Reporting Trials) and SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) statements provide evidence-based guidance on what to include in randomised trial articles and protocols so that the efficacy of interventions can be properly assessed. These guidelines are subsequently described and discussed in journal articles and used to produce checklists. Determining the online impact (i.e., the number and type of links received) of these articles can provide insights into the dissemination of reporting guidelines in broader environments (the web-at-large) than simply that of the scientific publications that cite them. To address the technical limitations of link analysis, the Debug-Validate-Access-Find (DVAF) method is designed and implemented here to measure different facets of the guidelines' online impact. A total of 65 articles related to 38 reporting guidelines are taken as a baseline, providing 240,128 URL citations, which are then refined, analysed, and categorised using the DVAF method. A total of 15,582 links to journal articles related to the CONSORT and SPIRIT initiatives were identified. CONSORT 2010 and SPIRIT 2013 were the reporting guidelines that received the most links (URL citations) from other online objects (5328 and 2190, respectively). Overall, the online impact obtained is scattered (URL citations are received by different article URL IDs, mainly from link-based DOIs), narrow (a limited number of linking domain names; half of the articles are linked from fewer than 29 domain names), concentrated (links come from just a few academic publishers, around 60% from publishers), of low repute (84% of links come from dubious websites and fake domain names), and highly decayed (89% of linking domain names were not accessible at the time of the analysis). In light of these results, it is concluded that the online impact of these guidelines could be improved, and a set of recommendations is proposed to this end. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
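    As an illustration of the kind of link checking the Access step of a DVAF-style pipeline involves, the sketch below reduces raw URL citations to their linking domain names and tests whether each still responds. It is a simplification under assumed inputs, not the authors' implementation; the URLs and function names are hypothetical.

```python
# Sketch: extract linking domains from URL citations and test accessibility.
from urllib.parse import urlparse
import requests

def linking_domains(url_citations: list[str]) -> set[str]:
    """Reduce raw URL citations to their linking domain names."""
    return {urlparse(u).netloc.lower()
            for u in url_citations if urlparse(u).netloc}

def is_accessible(domain: str, timeout: float = 5.0) -> bool:
    """HEAD-request the domain; any HTTP response counts as 'alive'."""
    try:
        requests.head(f"http://{domain}", timeout=timeout,
                      allow_redirects=True)
        return True
    except requests.RequestException:
        return False

urls = ["https://example.org/article/1", "http://dead-domain.example/x"]
domains = linking_domains(urls)
decayed = [d for d in domains if not is_accessible(d)]
print(f"{len(decayed)}/{len(domains)} linking domains not accessible")
```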

    Citations to arXiv Preprints by Indexed Journals and their Impact on Research Evaluation

    Full text link
    This article presents an approach to the study of two fundamental aspects of the prepublication of scientific manuscripts in specialized repositories (arXiv). The first is the extent of the interaction between "standard papers" in journals indexed in the Web of Science (WoS), now owned by Clarivate Analytics, and "non-standard papers" (manuscripts appearing in arXiv); specifically, we analyze the citations found in the WoS to articles in arXiv. The second is how publication in arXiv affects the citation count of authors. The question is whether prepublishing in arXiv benefits authors by increasing their citations, or rather produces a dispersion of citations that would diminish the relevance of their publications in evaluation processes. Data have been collected from arXiv, the websites of the journals, Google Scholar, and WoS following a specific ad hoc procedure. The number of citations in journal articles published in WoS to preprints in arXiv is not large. We show that citation counts for regular papers and preprints obtained from different sources (arXiv, the journal's website, WoS) give completely different results. This suggests a rather scattered picture of citations that could distort the citation count of a given article against the author's interest. However, the number of WoS references to arXiv preprints is small, minimizing this potential negative effect.
    The work of the first, second, and third authors was supported by the Ministerio de Economía, Industria y Competitividad, Spain, under Research Grants CSO2015-65594-C2-1R and 2R (MINECO/FEDER, UE). The work of the fourth author was supported by the Ministerio de Economía, Industria y Competitividad, Spain, and FEDER, under Research Grant MTM2016-77054-C2-1-P. The authors would also like to thank the referees for their useful comments and references, which helped them to improve the work, especially in Section 5.
    Ferrer-Sapena, A.; Aleixandre-Benavent, R.; Peset Mancebo, M. F.; Sánchez Pérez, E. A. (2018). Citations to arXiv Preprints by Indexed Journals and their Impact on Research Evaluation. Journal of Information Science Theory and Practice, 6(4), 14-24. https://doi.org/10.1633/JISTaP.2018.6.4.2
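    The discrepancy described here can be pictured as a per-source tally for one and the same article. The following minimal sketch, with an invented arXiv identifier and invented figures, computes the spread between the highest and lowest count any source reports, which is the scatter the authors warn could distort evaluation.

```python
# Sketch: the same article's citation count as reported by different sources.
counts = {
    "arXiv:0000.00000": {"arXiv": 18, "journal website": 25, "WoS": 9},
}

for paper, by_source in counts.items():
    # Spread = gap between the most and least generous source.
    spread = max(by_source.values()) - min(by_source.values())
    print(paper, by_source, f"spread={spread}")
```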

    Data citation and the citation graph

    Get PDF
    The citation graph is a computational artifact that is widely used to represent the domain of published literature. It represents connections between published works, such as citations and authorship. Among other things, the graph supports the computation of bibliometric measures such as h-indexes and impact factors. There is now an increasing demand that we should treat the publication of data in the same way that we treat conventional publications. In particular, we should cite data for the same reasons that we cite other publications. In this paper we discuss what is needed for the citation graph to represent data citation. We identify two challenges: to model the evolution of credit appropriately (through references) over time, and to model data citation not only to a data set treated as a single object but also to parts of it. We describe an extension of the current citation graph model that addresses these challenges. It is built on two central concepts: citable units and reference subsumption. We discuss how this extension would enable data citation to be represented within the citation graph and how it allows for improvements in current practices for bibliometric computations, both for scientific publications and for data.
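    A toy sketch of these two concepts follows, under one simple reading of them: each node is a citable unit, parts point to the whole they belong to, and recording a reference to a part also propagates credit upward. The class and field names are illustrative assumptions, not the paper's formal model.

```python
# Sketch: citable units with subsumption-based credit propagation.
from dataclasses import dataclass, field

@dataclass
class CitableUnit:
    unit_id: str
    parent: "CitableUnit | None" = None      # subsumption: part -> whole
    references: set[str] = field(default_factory=set)  # citing publications

    def cite(self, citing_pub: str) -> None:
        """Record a reference; credit propagates to all containing units."""
        unit = self
        while unit is not None:
            unit.references.add(citing_pub)
            unit = unit.parent

dataset = CitableUnit("dataset-v1")
subset = CitableUnit("dataset-v1/rows-1-100", parent=dataset)

subset.cite("paper-A")     # cites only a part of the data
dataset.cite("paper-B")    # cites the whole dataset

print(len(subset.references), len(dataset.references))  # prints: 1 2
```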