81 research outputs found

    Does Microsoft Academic find early citations?

    This is an accepted manuscript of an article published by Springer in Scientometrics on 27/10/2017, available online at https://doi.org/10.1007/s11192-017-2558-9. The accepted version of the publication may differ from the final published version.

    This article investigates whether Microsoft Academic can use its web search component to identify early citations to recently published articles, helping to solve the problem of delays in research evaluations caused by the need to wait for citation counts to accrue. The results for 44,398 articles in Nature, Science and seven library and information science journals (1996-2017) show that Microsoft Academic and Scopus citation counts are similar for all years, with no early citation advantage for either. In contrast, Mendeley reader counts are substantially higher for more recent articles. Thus, Microsoft Academic appears to be broadly similar to Scopus for citation count data and is apparently not better able to take advantage of online preprints to find early citations.

    Google Scholar Metrics evolution: an analysis according to languages

    The final publication is available at Springer via http://dx.doi.org/10.1007/s11192-013-1164-8. In November 2012 the Google Scholar Metrics (GSM) journal rankings were updated, making it possible to compare the bibliometric indicators in the ten languages indexed, and their stability, with the April 2012 version. The h-index and h5-median of 1,000 journals were analysed, comparing their averages, maximum and minimum values and the correlation coefficient within rankings. The bibliometric figures grew significantly: in just seven and a half months the h-index of the journals increased by 15% and the median h-index by 17%. This growth was observed for all the bibliometric indicators analysed and for practically every journal. However, we found significant differences in growth rates depending on the language in which the journal is published. Moreover, the journal rankings seem to be stable between April and November, reinforcing the credibility of the data held by Google Scholar and the reliability of the GSM journal rankings, despite the uncontrolled growth of Google Scholar. Based on the findings of this study we suggest, firstly, that Google should update its rankings at least semi-annually and, secondly, that the results should be displayed in each ranking proportionally to the number of journals indexed by language.

    Orduña-Malea, E.; Delgado López-Cózar, E. (2014). Google Scholar Metrics evolution: an analysis according to languages. Scientometrics, 98(3), 2353-2367. doi:10.1007/s11192-013-1164-8
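    The h-index growth reported above is easy to reason about with a small sketch (a minimal illustration, not the authors' code): a journal's h-index is the largest h such that h of its publications have at least h citations each.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:   # paper at this rank still clears the threshold
            h = rank
        else:
            break
    return h

# Six papers with these citation counts give an h-index of 4:
# four papers have at least 4 citations each.
print(h_index([10, 8, 5, 4, 3, 2]))  # 4
```

    A single extra citation to the fifth paper (3 → 5) would raise the h-index to 5, which is why the indicator tends to creep upward between snapshots such as the April and November 2012 rankings compared here.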

    An enhanced model for digital reference services

    Digital Reference Service (DRS) plays a vital role in Digital Library (DL) research and is a very valuable service provided by DLs. Unfortunately, the reference service's move towards a digital environment began late, and the shift was not model based; a journey towards a digital environment without following a proper model raises several issues. A few researchers presented a general process model (GPM) in the late 1990s, but this process model could not overcome the problems of DRS. This paper proposes an enhanced model for DRS that uses a storage and re-use mechanism together with other vital components, such as a DRS search engine and a ready-reference facility, to solve these issues. First, the storage and re-use mechanism is designed; then, a DRS search engine is designed to search for appropriate answers in the knowledge base. We improved the GPM by incorporating these new components. The simulation results clearly show that the proposed model increased service efficiency, reducing the response time from days to seconds for repeated questions and decreasing the workload of the librarian.
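    The storage and re-use mechanism described above can be sketched as a knowledge-base lookup in front of the librarian: repeated questions are answered from storage in seconds rather than re-routed to a human. This is a hypothetical sketch of the idea, with invented names (`DRSKnowledgeBase`, `answer_query`), not the paper's actual implementation.

```python
class DRSKnowledgeBase:
    """Hypothetical store of previously answered reference questions."""

    def __init__(self):
        self._answers = {}  # normalized question -> stored answer

    @staticmethod
    def _normalize(question):
        # Collapse case and whitespace so trivially rephrased
        # repeats of a question still hit the stored answer.
        return " ".join(question.lower().split())

    def store(self, question, answer):
        self._answers[self._normalize(question)] = answer

    def lookup(self, question):
        return self._answers.get(self._normalize(question))


def answer_query(kb, question, ask_librarian):
    """Answer from the knowledge base when possible; otherwise
    consult the librarian and store the result for re-use."""
    cached = kb.lookup(question)
    if cached is not None:
        return cached, "knowledge base"
    answer = ask_librarian(question)  # the slow, days-long path
    kb.store(question, answer)
    return answer, "librarian"
```

    Only the first occurrence of a question reaches the librarian; every repeat is served from storage, which is the source of the response-time and workload reductions the abstract reports.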

    Developing search strategies for clinical practice guidelines in SUMSearch and Google Scholar and assessing their retrieval performance

    Background: Information overload, increasing time constraints, and inappropriate search strategies complicate the detection of clinical practice guidelines (CPGs). The aim of this study was to provide clinicians with recommendations for search strategies to efficiently identify relevant CPGs in SUMSearch and Google Scholar.

    Methods: We compared the retrieval efficiency (retrieval performance) of search strategies to identify CPGs in SUMSearch and Google Scholar. For this purpose, a two-term GLAD (GuideLine And Disease) strategy was developed, combining a defined CPG term with a specific disease term (MeSH term). We used three different CPG terms and nine MeSH terms for nine selected diseases to identify the most efficient GLAD strategy for each search engine, pooling the retrievals for the nine diseases. To compare GLAD strategies, we used a manual review of all retrievals as a reference standard. The CPGs detected had to fulfil predefined criteria, e.g., the inclusion of therapeutic recommendations. Retrieval performance was evaluated by calculating diagnostic parameters (sensitivity, specificity, and "Number Needed to Read" [NNR]) for the search strategies.

    Results: The search yielded a total of 2830 retrievals: 987 (34.9%) in Google Scholar and 1843 (65.1%) in SUMSearch. Altogether, we found 119 unique and relevant guidelines for the nine diseases (reference standard). Overall, the GLAD strategies showed better retrieval performance in SUMSearch than in Google Scholar. The performance pattern was similar between search engines: search strategies including the term "guideline" yielded the highest sensitivity (SUMSearch: 81.5%; Google Scholar: 31.9%), while search strategies including the term "practice guideline" yielded the highest specificity (SUMSearch: 89.5%; Google Scholar: 95.7%) and the lowest NNR (SUMSearch: 7.0; Google Scholar: 9.3).

    Conclusion: SUMSearch is a useful tool to swiftly gain an overview of available CPGs. Its retrieval performance is superior to that of Google Scholar, where a search is more time consuming, as substantially more retrievals have to be reviewed to detect one relevant CPG. In both search engines, the CPG term "guideline" should be used to obtain a comprehensive overview of CPGs, and the term "practice guideline" should be used if a less time-consuming approach to the detection of CPGs is desired.
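    The diagnostic parameters used in this study follow directly from four counts per strategy: relevant items retrieved, irrelevant items retrieved, relevant items missed, and irrelevant items correctly excluded. A minimal sketch with invented example sets (not the study's actual data):

```python
def retrieval_metrics(retrieved, relevant, total_documents):
    """Sensitivity, specificity and Number Needed to Read (NNR)
    for a search strategy, computed from simple set counts."""
    tp = len(retrieved & relevant)        # relevant items retrieved
    fp = len(retrieved - relevant)        # irrelevant items retrieved
    fn = len(relevant - retrieved)        # relevant items missed
    tn = total_documents - tp - fp - fn   # irrelevant items not retrieved
    sensitivity = tp / (tp + fn)          # share of relevant CPGs found
    specificity = tn / (tn + fp)          # share of irrelevant items excluded
    nnr = (tp + fp) / tp                  # retrievals to review per relevant hit
    return sensitivity, specificity, nnr
```

    A high-sensitivity strategy (like "guideline" here) maximizes the first ratio at the cost of a larger NNR, while a high-specificity strategy (like "practice guideline") trades coverage for fewer retrievals to review, which is exactly the pattern reported above.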