548 research outputs found

    Differences between journals and years in the proportions of students, researchers and faculty registering Mendeley articles

    Get PDF
    This article contains two investigations into Mendeley reader counts with the same dataset. Mendeley reader counts provide evidence of early scholarly impact for journal articles, but reflect the reading of a relatively young subset of all researchers. To investigate whether this age bias is constant or varies by narrow field and publication year, this article compares the proportions of student, researcher and faculty readers for articles published 1996-2016 in 36 large monodisciplinary journals. In these journals, undergraduates recorded the newest research and faculty the oldest, with large differences between journals. The existence of substantial differences in the composition of readers between related fields points to the need for caution when using Mendeley readers as substitutes for citations for broad fields. The second investigation shows, with the same data, that there are substantial differences between narrow fields in the time taken for Scopus citations to become as numerous as Mendeley readers. Thus, even narrow field differences can affect the relative value of Mendeley reader counts compared to citation counts.

    Is Medical Research Informing Professional Practice More Highly Cited? Evidence from AHFS DI Essentials in Drugs.com

    Get PDF
    This is an accepted manuscript of an article published by Springer in Scientometrics on 21/02/2017, available online: https://doi.org/10.1007/s11192-017-2292-3. The accepted version of the publication may differ from the final published version. Citation-based indicators are often used to help evaluate the impact of published medical studies, even though the research has the ultimate goal of improving human wellbeing. One direct way of influencing health outcomes is by guiding physicians and other medical professionals about which drugs to prescribe. A high-profile source of this guidance is the AHFS DI Essentials product of the American Society of Health-System Pharmacists, which gives systematic information for drug prescribers. AHFS DI Essentials documents, which are also indexed by Drugs.com, include references to academic studies, and the referenced work is therefore helping patients by guiding drug prescribing. This article extracts AHFS DI Essentials documents from Drugs.com and assesses whether articles referenced in these information sheets have their value recognised by higher Scopus citation counts. A comparison of mean log-transformed citation counts between articles that are and are not referenced in AHFS DI Essentials shows that AHFS DI Essentials references are more highly cited than average for the publishing journal. This suggests that medical research influencing drug prescribing is more cited than average.
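    The comparison described in this abstract — mean log-transformed citation counts for articles that are and are not referenced — can be sketched as follows. The log(1 + c) transform is a standard way to tame the skew of citation data; the citation counts and names below are invented for illustration, not taken from the paper:

    ```python
    import math

    def mean_log_citations(citation_counts):
        """Mean of ln(1 + c), a standard transform for skewed citation data."""
        return sum(math.log(1 + c) for c in citation_counts) / len(citation_counts)

    # Hypothetical citation counts for articles from the same journal:
    referenced = [12, 30, 8, 45, 22]      # referenced in the drug information sheets
    not_referenced = [3, 10, 1, 7, 15]    # not referenced

    diff = mean_log_citations(referenced) - mean_log_citations(not_referenced)
    print(f"difference in mean log citations: {diff:.3f}")
    ```

    A positive difference on the log scale corresponds to a higher geometric-mean-style citation average for the referenced set, which is the direction of the paper's finding.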

    Are Mendeley Reader Counts Useful Impact Indicators in all Fields?

    Get PDF
    Reader counts from the social reference sharing site Mendeley are known to be valuable for early research evaluation. They have strong correlations with citation counts for journal articles but appear about a year before them. There are disciplinary differences in the value of Mendeley reader counts, but systematic evidence at the level of narrow fields is needed to reveal their extent. In response, this article compares Mendeley reader counts with Scopus citation counts for journal articles from 2012 in 325 narrow Scopus fields. Despite strong positive correlations in most fields, averaging 0.671, the correlations in some fields are as weak as 0.255. Technical reasons explain most weaker correlations, suggesting that the underlying relationship is almost always strong. The exceptions are caused by unusually high educational or professional use, or by topics of interest within countries that avoid Mendeley. The findings suggest that, if care is taken, Mendeley reader counts can be used for early citation impact evidence in almost all fields, and for related impact in some of the remainder. As an additional application of the results, cross-checking with Mendeley data can be used to identify indexing anomalies in citation databases.
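    Correlations of this kind are typically computed as Spearman rank correlations, since reader and citation counts are highly skewed. A minimal standard-library sketch follows; the average of 0.671 and the low of 0.255 above come from the abstract, while the per-article counts below are made up for illustration:

    ```python
    def rank(values):
        """1-based ranks; tied values share the average of their ranks."""
        order = sorted(range(len(values)), key=lambda i: values[i])
        ranks = [0.0] * len(values)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1          # average 1-based rank of the tie group
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        return ranks

    def spearman(x, y):
        """Spearman's rho: Pearson correlation of the rank vectors."""
        rx, ry = rank(x), rank(y)
        mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
        cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
        sx = sum((a - mx) ** 2 for a in rx) ** 0.5
        sy = sum((b - my) ** 2 for b in ry) ** 0.5
        return cov / (sx * sy)

    # Hypothetical per-article counts for one narrow field:
    mendeley_readers = [5, 0, 12, 33, 7, 2, 19, 41]
    scopus_citations = [3, 1, 10, 25, 4, 0, 15, 30]
    rho = spearman(mendeley_readers, scopus_citations)
    print(f"rho = {rho:.3f}")
    ```

    Rank-based correlation is preferred here because a few extremely highly cited articles would otherwise dominate a Pearson correlation.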

    COVID-19 publications: Database coverage, citations, readers, tweets, news, Facebook walls, Reddit posts

    Get PDF
    © 2020 The Authors. Published by MIT Press. This is an open access article available under a Creative Commons licence. The published version can be accessed at the following link on the publisher’s website: https://doi.org/10.1162/qss_a_00066. The COVID-19 pandemic requires a fast response from researchers to help address biological, medical and public health issues to minimize its impact. In this rapidly evolving context, scholars, professionals and the public may need to quickly identify important new studies. In response, this paper assesses the coverage of scholarly databases and impact indicators during 21 March to 18 April 2020. The rapidly increasing volume of research is particularly accessible through Dimensions, and less so through Scopus, the Web of Science, and PubMed. Google Scholar’s results included many false matches. A few COVID-19 papers from the 21,395 in Dimensions were already highly cited, with substantial news and social media attention. For this topic, in contrast to previous studies, there seems to be a high degree of convergence between articles shared in the social web and citation counts, at least in the short term. In particular, articles that are extensively tweeted on the day they are first indexed are likely to be highly read and relatively highly cited three weeks later. Researchers needing wide-scope literature searches (rather than health-focused PubMed or medRxiv searches) should start with Dimensions (or Google Scholar) and can use tweet and Mendeley reader counts as indicators of likely importance.

    Early Mendeley readers correlate with later citation counts

    Get PDF
    This is an accepted manuscript of an article published by Springer in Scientometrics on 26/03/2018, available online: https://doi.org/10.1007/s11192-018-2715-9. The accepted version of the publication may differ from the final published version. Counts of the number of readers registered in the social reference manager Mendeley have been proposed as an early impact indicator for journal articles. Although previous research has shown that Mendeley reader counts for articles tend to have a strong positive correlation with synchronous citation counts after a few years, no previous studies have compared early Mendeley reader counts with later citation counts. In response, this first diachronic analysis compares reader counts within a month of publication with citation counts after 20 months for ten fields. There were moderate or strong correlations in eight out of ten fields, with the two exceptions being the smallest categories (n=18, 36) with wide confidence intervals. The correlations are higher than the correlations between later citations and early citations, showing that Mendeley reader counts are more useful early impact indicators than citation counts.

    Does Microsoft Academic find early citations?

    Get PDF
    This is an accepted manuscript of an article published by Springer in Scientometrics on 27/10/2017, available online: https://doi.org/10.1007/s11192-017-2558-9. The accepted version of the publication may differ from the final published version. This article investigates whether Microsoft Academic can use its web search component to identify early citations to recently published articles, to help solve the problem of delays in research evaluations caused by the need to wait for citation counts to accrue. The results for 44,398 articles in Nature, Science and seven library and information science journals 1996-2017 show that Microsoft Academic and Scopus citation counts are similar for all years, with no early citation advantage for either. In contrast, Mendeley reader counts are substantially higher for more recent articles. Thus, Microsoft Academic appears to be broadly like Scopus for citation count data, and is apparently not more able to take advantage of online preprints to find early citations.

    U.S. academic libraries: understanding their web presence and their relationship with economic indicators

    Full text link
    The final publication is available at Springer via http://dx.doi.org/10.1007/s11192-013-1001-0. The main goal of this research is to analyze the web structure and performance of units and services belonging to U.S. academic libraries in order to check their suitability for webometric studies. Our objectives include studying their possible correlation with economic data and assessing their use for complementary evaluation purposes. We conducted a survey of library homepages, institutional repositories, digital collections, and online catalogs (a total of 374 URLs) belonging to the 100 U.S. universities with the highest total expenditures in academic libraries according to data provided by the National Center for Education Statistics. Several data points were taken and analyzed, including web variables (page count, external links, and visits) and economic variables (total expenditures, expenditures on printed and electronic books, and physical visits). The results indicate that the variety of URL syntaxes is wide, diverse and complex, which produces a misrepresentation of academic libraries’ web resources and reduces the accuracy of web analysis. On the other hand, institutional and web data indicators are not highly correlated. The best results are obtained by correlating total library expenditures with URL mentions measured by Google (r = 0.546) and visits measured by Compete (r = 0.573), respectively. Because the correlation values obtained are not highly significant, we estimate that such correlations would increase if users could avoid linkage problems (due to the complexity of URLs) and gain direct access to log files (for more accurate data about visits). Orduña-Malea, E., & Regazzi, J. J. (2014). U.S. academic libraries: understanding their web presence and their relationship with economic indicators. Scientometrics, 98(1), 315–336. doi:10.1007/s11192-013-1001-0

    Emotional persistence in online chatting communities

    Get PDF
    How do users behave in online chatrooms, where they instantaneously read and write posts? We analyzed about 2.5 million posts covering various topics in Internet Relay Chat channels, and found that user activity patterns follow known power-law and stretched exponential distributions, indicating that online chat activity is not different from other forms of communication. Analysing the emotional expressions (positive, negative, neutral) of users, we revealed a remarkable persistence both for individual users and for channels. That is, despite their anonymity, users tend to follow social norms in repeated interactions in online chats, which results in a specific emotional "tone" of the channels. We provide an agent-based model of emotional interaction, which recovers qualitatively both the activity patterns in chatrooms and the emotional persistence of users and channels. While our assumptions about agents' emotional expressions are rooted in psychology, the model allows us to test different hypotheses regarding their emotional impact in online communication. Comment: 34 pages, 4 main and 12 supplementary figures.
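    The authors' agent-based model is not reproduced here, but the general idea — individual emotional valence that decays toward neutral while being pulled toward the channel's collective "tone" — can be illustrated with a deliberately simplified toy model. All parameters, function names, and dynamics below are assumptions for illustration, not the paper's specification:

    ```python
    import random

    def simulate(n_agents=20, steps=200, decay=0.9, coupling=0.3, seed=42):
        """Toy dynamics: each agent's valence decays toward neutral but is
        pulled toward the channel's mean expressed emotion (the 'field'),
        so an early collective tone tends to persist over many steps."""
        rng = random.Random(seed)
        valence = [rng.uniform(-1, 1) for _ in range(n_agents)]
        field = 0.0                                # channel-level mean emotion
        tones = []
        for _ in range(steps):
            for i in range(n_agents):
                v = decay * valence[i] + coupling * field + rng.gauss(0, 0.05)
                valence[i] = max(-1.0, min(1.0, v))    # clip to [-1, 1]
            field = sum(valence) / n_agents
            tones.append(field)
        return tones

    tones = simulate()
    print(f"final channel tone: {tones[-1]:+.3f}")
    ```

    With positive coupling, random fluctuations in the field are reinforced rather than washed out, which is one simple mechanism by which a channel could acquire a persistent emotional tone.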

    A Path Toward the Use of Trail Users’ Tweets to Assess Effectiveness of the Environmental Stewardship Scheme: An Exploratory Analysis of the Pennine Way National Trail

    Get PDF
    Large and unofficial data sets, for instance those gathered from social media, are increasingly being used in geographical research and explored as decision support tools for policy development. Social media data have the potential to provide new insight into phenomena about which there is little information from conventional sources. Within this context, this paper explores the potential of social media data to evaluate the aesthetic management of landscape. Specifically, this project utilises the perceptions of visitors to the Pennine Way National Trail, which passes through land managed under the Environmental Stewardship Scheme (ESS). The method analyses sentiment in trail users’ public Twitter messages (tweets) with the aim of assessing the extent to which the ESS maintains landscape character within the trail corridor. The method demonstrates the importance of filtering social media data to convert it into useful information. After filtering, the results are based on 161 messages directly related to the trail. Although small, this sample illustrates the potential for social media to be used as a cheap and increasingly abundant source of information. We suggest that social media data in this context should be seen as a resource that can complement, rather than replace, conventional data sources such as questionnaires and interviews. Furthermore, we provide guidance on how social media could be effectively used by conservation bodies, such as Natural England, which are charged with the management of areas of environmental value worldwide.
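    The filter-then-score pipeline this abstract describes can be sketched with a minimal keyword filter and lexicon-based sentiment scorer. The keyword list, sentiment lexicon, and example tweets below are invented for illustration; the paper's actual filtering and sentiment method may differ:

    ```python
    POSITIVE = {"stunning", "beautiful", "lovely", "peaceful", "great"}
    NEGATIVE = {"eroded", "muddy", "litter", "ugly", "ruined"}
    TRAIL_KEYWORDS = ("pennine way", "pennineway")

    def is_trail_related(tweet):
        """Filtering step: keep only tweets that mention the trail."""
        return any(k in tweet.lower() for k in TRAIL_KEYWORDS)

    def sentiment(tweet):
        """Crude lexicon score: positive minus negative word hits."""
        words = tweet.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    tweets = [
        "Stunning views on the Pennine Way today",
        "Train delayed again, typical Monday",
        "Path badly eroded near the Pennine Way summit",
    ]
    relevant = [t for t in tweets if is_trail_related(t)]
    for t in relevant:
        print(sentiment(t), "-", t)
    ```

    The filtering step matters for exactly the reason the abstract stresses: without it, off-topic messages (like the second tweet above) would contaminate any landscape-related sentiment signal.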

    Devising a Resilience Rating System For Charities & The Non-Profit Sector

    Get PDF
    One of the sectoral issues that COVID has shone a light on is that, whilst social investors, grant funders and sector support organisations acquire detailed data about the activities they have commissioned individually, they do not have access to a similar level of data about the wider sectors in which they are operating. This report was written in the first four weeks of lockdown in 2020 and develops a framework for assessing the financial resilience of Third Sector Organisations.