
    Distribution and seasonality of the marine macrophytes from Antikyra Gulf (Viotia, Greece)

    The Gulf of Antikyra (Viotia), which lies on a bauxitic substrate, has been degraded by wastes discharged from an aluminium factory where the Parnassos bauxite is treated. Seasonal collections of macrophytes were carried out at stations selected inside the Antikyra Gulf. In total, 85 species of macroalgae were collected, of which 16 belonged to Chlorophyceae, 12 to Phaeophyceae and 57 to Rhodophyceae. There was no obvious difference in the qualitative composition of the macroalgae with increasing distance from the area where the wastes are discharged. Moreover, the stations at greater depths exhibited a different qualitative composition than those at smaller depths. Phanerogams were also present, notably the Lessepsian immigrant Halophila stipulacea, encountered for the first time in the Korinthiakos Gulf. The biomass of the three phanerogams decreased in the order Posidonia oceanica > Cymodocea nodosa > Halophila stipulacea. The biomass of C. nodosa and P. oceanica was higher in July, while that of H. stipulacea was lower in July and higher in March and September.

    Google Scholar Metrics evolution: an analysis according to languages

    The final publication is available at Springer via http://dx.doi.org/10.1007/s11192-013-1164-8. In November 2012 the Google Scholar Metrics (GSM) journal rankings were updated, making it possible to compare the bibliometric indicators in the ten languages indexed, and their stability, against the April 2012 version. The h-index and h5-median of 1,000 journals were analysed, comparing their averages, maximum and minimum values, and the correlation coefficients within the rankings. The bibliometric figures grew significantly: in just seven and a half months the h-index of the journals increased by 15% and the median h-index by 17%. This growth was observed for all the bibliometric indicators analysed and for practically every journal. However, we found significant differences in growth rates depending on the language in which the journal is published. Moreover, the journal rankings seem to be stable between April and November, reinforcing the credibility of the data held by Google Scholar and the reliability of the GSM journal rankings, despite the uncontrolled growth of Google Scholar. Based on the findings of this study we suggest, firstly, that Google should update its rankings at least semi-annually and, secondly, that the results should be displayed in each ranking proportionally to the number of journals indexed per language. Orduña-Malea, E., & Delgado López-Cózar, E. (2014). Google Scholar Metrics evolution: an analysis according to languages. Scientometrics, 98(3), 2353-2367. doi:10.1007/s11192-013-1164-8
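The two indicators the study tracks are simple functions of a journal's per-article citation counts. A minimal sketch of how they are computed (the sample counts are invented for illustration):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def h5_median(citations):
    """Median citation count of the papers in the h-core
    (for GSM, citations are restricted to a five-year window)."""
    cites = sorted(citations, reverse=True)
    h = h_index(cites)
    if h == 0:
        return 0
    core = cites[:h]
    mid = h // 2
    return core[mid] if h % 2 else (core[mid - 1] + core[mid]) / 2

print(h_index([10, 8, 5, 4, 3]))   # 4: four papers have >= 4 citations
print(h5_median([10, 8, 5, 4, 3]))  # 6.5: median of the h-core [10, 8, 5, 4]
```

The reported 15% growth in average h-index over seven and a half months follows directly from citation counts accruing monotonically: both indicators can only rise as new citations are indexed.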

    Coverage of highly-cited documents in Google Scholar, Web of Science, and Scopus: a multidisciplinary comparison

    This study explores the extent to which bibliometric indicators based on counts of highly-cited documents could be affected by the choice of data source. The initial hypothesis is that databases that rely on journal selection criteria for their document coverage may not necessarily provide an accurate representation of highly-cited documents across all subject areas, while inclusive databases, which give each document the chance to stand on its own merits, might be better suited to identifying highly-cited documents. To test this hypothesis, an analysis of 2,515 highly-cited documents published in 2006 that Google Scholar displays in its Classic Papers product is carried out at the level of broad subject categories, checking whether these documents are also covered in Web of Science and Scopus, and whether the citation counts offered by the different sources are similar. The results show that a large fraction of highly-cited documents in the Social Sciences and Humanities (8.6%-28.2%) are invisible to Web of Science and Scopus. In the Natural, Life, and Health Sciences the proportion of missing highly-cited documents in Web of Science and Scopus is much lower. Furthermore, in all areas, Spearman correlation coefficients between Google Scholar citation counts and those of Web of Science and Scopus are remarkably strong (.83-.99). The main conclusion is that the data about highly-cited documents available in the inclusive database Google Scholar does indeed reveal significant coverage deficiencies in Web of Science and Scopus in several areas of research. Therefore, using these selective databases to compute bibliometric indicators based on counts of highly-cited documents might produce biased assessments in poorly covered areas. Alberto Martín-Martín is supported by a four-year doctoral fellowship (FPU2013/05863) granted by the Ministerio de Educación, Cultura y Deportes (Spain).
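The Spearman coefficients quoted above are Pearson correlations computed on ranks rather than raw citation counts, which makes them robust to the large skew of citation distributions. A minimal pure-Python sketch (with the usual average-rank handling of ties; the inputs are assumed non-constant and of equal length):

```python
def ranks(values):
    """1-based ranks; tied values share the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Any monotone relationship between two sources' counts yields rho = 1 regardless of scale, which is why the .83-.99 values can coexist with Google Scholar's systematically higher absolute counts.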

    COVID-19 publications: Database coverage, citations, readers, tweets, news, Facebook walls, Reddit posts

    © 2020 The Authors. Published by MIT Press. This is an open access article available under a Creative Commons licence. The published version can be accessed on the publisher's website at https://doi.org/10.1162/qss_a_00066. The COVID-19 pandemic requires a fast response from researchers to help address biological, medical and public health issues and minimize its impact. In this rapidly evolving context, scholars, professionals and the public may need to quickly identify important new studies. In response, this paper assesses the coverage of scholarly databases and impact indicators during 21 March to 18 April 2020. The rapidly increasing volume of research is most accessible through Dimensions, and less so through Scopus, the Web of Science, and PubMed. Google Scholar's results included many false matches. A few of the 21,395 COVID-19 papers in Dimensions were already highly cited, with substantial news and social media attention. For this topic, in contrast to previous studies, there seems to be a high degree of convergence between articles shared on the social web and citation counts, at least in the short term. In particular, articles that are extensively tweeted on the day they are first indexed are likely to be highly read and relatively highly cited three weeks later. Researchers needing wide-scope literature searches (rather than health-focused PubMed or medRxiv searches) should start with Dimensions (or Google Scholar) and can use tweet and Mendeley reader counts as indicators of likely importance.
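Database-coverage comparisons of this kind reduce to set intersection over document identifiers (typically DOIs). A minimal sketch with made-up identifier sets; the DOIs and the resulting percentages are illustrative, not the paper's data:

```python
# Hypothetical sets of DOIs indexed by each database (illustrative only)
dimensions = {"10.1/a", "10.1/b", "10.1/c", "10.1/d"}
scopus = {"10.1/a", "10.1/b"}
pubmed = {"10.1/a", "10.1/c"}

def coverage(reference, other):
    """Fraction of the reference set also indexed by `other`."""
    return len(reference & other) / len(reference)

# Fraction of the (hypothetical) Dimensions corpus each source covers
print(f"Scopus: {coverage(dimensions, scopus):.0%}")   # 50%
print(f"PubMed: {coverage(dimensions, pubmed):.0%}")   # 50%
# Documents visible to neither of the two selective sources
print(dimensions - scopus - pubmed)                     # {'10.1/d'}
```

In practice the reference set is the most inclusive source (here Dimensions), and matching is done on normalized DOIs plus title matching for records lacking a DOI.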

    Does Microsoft Academic find early citations?

    This is an accepted manuscript of an article published by Springer in Scientometrics on 27/10/2017, available online at https://doi.org/10.1007/s11192-017-2558-9; the accepted version may differ from the final published version. This article investigates whether Microsoft Academic can use its web search component to identify early citations to recently published articles, to help solve the problem of delays in research evaluations caused by the need to wait for citation counts to accrue. The results for 44,398 articles in Nature, Science and seven library and information science journals (1996-2017) show that Microsoft Academic and Scopus citation counts are similar for all years, with no early-citation advantage for either. In contrast, Mendeley reader counts are substantially higher for more recent articles. Thus, Microsoft Academic appears to be broadly like Scopus for citation count data, and is apparently no better able to take advantage of online preprints to find early citations.

    Proposal for a multilevel university cybermetric analysis model

    The final publication is available at Springer via http://dx.doi.org/10.1007/s11192-012-0868-5. Universities' online seats have gradually become complex systems of dynamic information where all their institutions and services are linked and potentially accessible. These online seats now constitute a central node around which universities construct and document their main activities and services. This information can be quantitatively measured with cybermetric techniques in order to design university web rankings, taking the university as a global reference unit. However, previous research into web subunits shows that it is possible to carry out systemic web analyses, which open up the possibility of studies that address university diversity, necessary both for describing the university in greater detail and for establishing comparable ranking units. To address this issue, a multilevel university cybermetric analysis model is proposed, based on parts (core and satellite), levels (institutional and external) and sublevels (contour and internal), providing a deeper analysis of institutions. Finally, the model is integrated into another that is independent of the technique used, and is applied to Harvard University as an example of use. Orduña-Malea, E., & Ontalba-Ruipérez, J.-A. (2013). Proposal for a multilevel university cybermetric analysis model. Scientometrics, 95(3), 863-884. doi:10.1007/s11192-012-0868-5
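The parts/levels/sublevels taxonomy amounts to a small labelled tree over a university's web units. A minimal sketch of such a structure in Python; the node attributes mirror the model's vocabulary, but the example subdomains are hypothetical, not taken from the paper's Harvard analysis:

```python
from dataclasses import dataclass, field

@dataclass
class WebUnit:
    """One web unit in the multilevel model, tagged by part,
    level and sublevel as defined in the abstract above."""
    name: str
    part: str        # "core" or "satellite"
    level: str       # "institutional" or "external"
    sublevel: str    # "contour" or "internal"
    children: list = field(default_factory=list)

# Hypothetical decomposition of a university web presence
university = WebUnit("harvard.edu", "core", "institutional", "internal", [
    WebUnit("library.harvard.edu", "core", "institutional", "contour"),
    WebUnit("alumni-example.org", "satellite", "external", "contour"),
])

# Cybermetric indicators (page counts, inlinks, ...) can then be
# aggregated per part/level instead of only at the whole-university node
satellites = [c for c in university.children if c.part == "satellite"]
print([s.name for s in satellites])
```

Keeping the unit of analysis explicit in the data structure is what lets the same indicator be computed both for the global university node and for each comparable subunit.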

    ResearchGate versus Google Scholar: Which finds more early citations?

    ResearchGate has launched its own citation index by extracting citations from documents uploaded to the site and reporting citation counts on article profile pages. Since authors may upload preprints to ResearchGate, it may use these to provide early impact evidence for new papers. This article assesses whether the number of citations found for recent articles is comparable to that of other citation indexes, using 2,675 recently published library and information science articles. The results show that in March 2017, ResearchGate found fewer citations than Google Scholar but more than both Web of Science and Scopus. This held true for the dataset overall and for the six largest journals in it. ResearchGate citation counts correlated most strongly with Google Scholar citations, suggesting that ResearchGate is not predominantly tapping a fundamentally different source of data than Google Scholar. Nevertheless, preprint sharing on ResearchGate is substantial enough for authors to take it seriously.