Proof over promise: towards a more inclusive ranking of Dutch academics in Economics & Business
The Dutch Economics top-40, based on publications in ISI listed journals, is - to the best of our knowledge - the oldest ranking of individual academics in Economics and is well accepted in the Dutch academic community. However, this ranking is based on publication volume, rather than on the actual impact of the publications in question. This paper therefore uses two relatively new metrics, the citations per author per year (CAY) metric and the individual annual h-index (hIa) to provide two alternative, citation-based, rankings of Dutch academics in Economics & Business. As a data source, we use Google Scholar instead of ISI to provide a more comprehensive measure of impact, including citations to and from publications in non-ISI listed journals, books, working and conference papers.
The resulting rankings are shown to be substantially different from the original ranking based on publications. Just like other research metrics, the CAY or hIa-index should never be used as the sole criterion to evaluate academics. However, we do argue that the hIa-index and the related citations per author per year metric provide an important additional perspective over and above a ranking based on publications in high-impact journals alone. Citation-based rankings are also shown to inject a higher level of diversity in terms of age, gender, discipline and academic affiliation, and thus appear to be more inclusive of a wider range of scholarship.
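The two metrics in this abstract can be made concrete. A minimal sketch, following the published definitions as we understand them: the hIa divides each paper's citations by its number of authors, takes the h-index of those corrected counts, and divides by career length in years; CAY sums the author-corrected citations and divides by career length. Function names and the sample figures below are ours, for illustration only.

```python
# Hedged sketch of the hIa and CAY metrics described above.
# "papers" is a list of (citations, n_authors) tuples; career_years
# is the number of years since the academic's first publication.

def h_index(citation_counts):
    """Largest h such that h items have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
    return h

def hIa(papers, career_years):
    """Individual annual h-index: h-index of author-corrected
    citation counts, divided by career length."""
    corrected = [cites / n_authors for cites, n_authors in papers]
    return h_index(corrected) / career_years

def cay(papers, career_years):
    """Citations per author per year."""
    total = sum(cites / n_authors for cites, n_authors in papers)
    return total / career_years

# Invented example: four papers over a ten-year career.
record = [(40, 2), (30, 3), (20, 1), (4, 4)]
print(hIa(record, 10))  # -> 0.3
print(cay(record, 10))  # -> 5.1
```

Dividing by author count before taking the h-index is what makes the metric comparable across solo-author-heavy and team-heavy disciplines, which is the inclusiveness argument the abstract makes.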
Do we need to distance ourselves from the distance concept? Why home and host country context might matter more than (cultural) distance
We scrutinize the explanatory power of one of the key concepts in International Business: the concept of (cultural) distance. Here we focus on its effect on entry mode choice, one of the most researched fields in international business strategy. Our findings may, however, be equally relevant for the field of International Business as a whole. Our analysis is based on a review of 92 prior studies on entry mode choice, as well as an empirical investigation of over 800 subsidiaries of MNCs, covering nine host and fifteen home countries across the world.
We conclude that the explanatory power of distance is highly limited once home and host country context are accounted for, and that any significant effects of cultural distance on entry mode choice might simply be caused by inadequate sampling. Entry mode studies in particular, and International Business research in general, would do well to reconsider their fascination with distance measures and instead focus first and foremost on differences in home and host country context. We argue that serious engagement with deep contextualization is necessary in International Business research to pose new and relevant questions and develop new and innovative theories that explain empirical phenomena.
Why and how does shared language affect subsidiary knowledge inflows? A social identity perspective
We draw on social identity theory to conceptualize a moderated mediation model that examines the relationship between shared language among subsidiary and HQ managers, and subsidiaries’ knowledge inflows from HQ.
Specifically, we study (1) whether this relationship is mediated by the extent to which subsidiary managers share HQ goals and vision, and the extent to which HR decisions are centralized; and (2) whether subsidiary type moderates these mediated relationships. Building on a sample of 817 subsidiaries in nine countries/regions, we find support for our model. Implications for research on HQ-subsidiary knowledge flows, social identity theory and international HRM are discussed.
Using Google Scholar Institutional Level Data to Evaluate the Quality of University Research
In recent years, the extent of formal research evaluation, at all levels from the individual to the multiversity, has increased dramatically. At the institutional level, there are world university rankings based on an ad hoc combination of different indicators. There are also national exercises, such as those in the UK and Australia, that evaluate research outputs and environment through peer review panels. These are extremely costly and time consuming. This paper evaluates the possibility of using Google Scholar (GS) institutional-level data to evaluate university research in a relatively automatic way. Several citation-based metrics are collected from GS for all 130 UK universities. These are used to evaluate performance and produce university rankings, which are then compared with various rankings based on the 2014 UK Research Excellence Framework (REF). The rankings are shown to be credible and to avoid some of the obvious problems of the REF ranking, as well as being highly efficient and cost-effective. We also investigate the possibility of normalizing the results for the university subject mix, since science subjects generally produce significantly more citations than social science or humanities.
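The subject-mix normalization mentioned at the end of this abstract can be illustrated with a simple field-normalized citation rate: divide a university's observed citations by the citations expected given its mix of fields, so that citation-rich sciences do not dominate. This is a generic sketch of the idea, not the paper's actual method; the field baselines and sample data are invented for illustration.

```python
# Hedged sketch of field normalization for a university's citation
# impact. FIELD_BASELINE holds invented average citations-per-paper
# for each broad field; real baselines would be estimated from data.

FIELD_BASELINE = {"science": 20.0, "social_science": 8.0, "humanities": 3.0}

def normalized_impact(output_by_field):
    """output_by_field maps field -> (papers, citations).
    Returns observed citations divided by field-expected citations;
    1.0 means the university performs exactly at field average."""
    observed = sum(c for _, c in output_by_field.values())
    expected = sum(p * FIELD_BASELINE[f]
                   for f, (p, _) in output_by_field.items())
    return observed / expected if expected else 0.0

# A science-heavy and a humanities-heavy profile can then be compared
# on equal footing:
uni = {"science": (100, 2500), "humanities": (50, 200)}
print(normalized_impact(uni))  # observed 2700 vs expected 2150
```

Without the denominator, the raw citations-per-paper figure would systematically favour universities with large science faculties, which is exactly the distortion the abstract flags.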
Google Scholar Metrics evolution: an analysis according to languages
The final publication is available at Springer via http://dx.doi.org/10.1007/s11192-013-1164-8
In November 2012 the Google Scholar Metrics (GSM) journal rankings were updated, making it possible to compare the bibliometric indicators in the ten languages indexed, and their stability, with the April 2012 version. The h-index and h5-median of 1,000 journals were analysed, comparing their averages, maximum and minimum values and the correlation coefficient within rankings. The bibliometric figures grew significantly. In just seven and a half months the h-index of the journals increased by 15% and the median h-index by 17%. This growth was observed for all the bibliometric indicators analysed and for practically every journal. However, we found significant differences in growth rates depending on the language in which the journal is published. Moreover, the journal rankings seem to be stable between April and November, reinforcing the credibility of the data held by Google Scholar and the reliability of the GSM journal rankings, despite the uncontrolled growth of Google Scholar. Based on the findings of this study we suggest, firstly, that Google should update its rankings at least semi-annually and, secondly, that the results should be displayed in each ranking proportionally to the number of journals indexed by language.
Orduña-Malea, E., & Delgado López-Cózar, E. (2014). Google Scholar Metrics evolution: an analysis according to languages. Scientometrics, 98(3), 2353-2367. doi:10.1007/s11192-013-1164-8
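The two GSM indicators compared in this study can be computed from a journal's per-article citation counts. A minimal sketch, assuming the standard public definitions: the h5-index is the h-index over articles published in the last five years, and the h5-median is the median citation count of the articles in that h core. The function name and sample counts are ours.

```python
# Hedged sketch of the h5-index and h5-median, the journal-level
# indicators tracked by Google Scholar Metrics and analysed above.
import statistics

def h5_index_and_median(citation_counts):
    """citation_counts: citations for each article the journal
    published in the last five years. Returns (h5-index, h5-median)."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
    core = counts[:h]  # the h most-cited articles
    return h, (statistics.median(core) if core else 0)

# Invented example journal: seven articles in the five-year window.
print(h5_index_and_median([25, 17, 12, 9, 8, 4, 2]))  # -> (5, 12)
```

Tracking both figures at two points in time, as the study does, shows how an expanding citation index inflates every journal's numbers even when the relative ranking stays stable.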
COVID-19 publications: Database coverage, citations, readers, tweets, news, Facebook walls, Reddit posts
© 2020 The Authors. Published by MIT Press. This is an open access article available under a Creative Commons licence.
The published version can be accessed at the following link on the publisher's website: https://doi.org/10.1162/qss_a_00066
The COVID-19 pandemic requires a fast response from researchers to help address biological, medical and public health issues to minimize its impact. In this rapidly evolving context, scholars, professionals and the public may need to quickly identify important new studies. In response, this paper assesses the coverage of scholarly databases and impact indicators from 21 March to 18 April 2020. The rapidly increasing volume of research is particularly accessible through Dimensions, and less so through Scopus, the Web of Science, and PubMed. Google Scholar's results included many false matches. A few COVID-19 papers from the 21,395 in Dimensions were already highly cited, with substantial news and social media attention. For this topic, in contrast to previous studies, there seems to be a high degree of convergence between articles shared in the social web and citation counts, at least in the short term. In particular, articles that are extensively tweeted on the day they are first indexed are likely to be highly read and relatively highly cited three weeks later. Researchers needing wide-scope literature searches (rather than health-focused PubMed or medRxiv searches) should start with Dimensions (or Google Scholar) and can use tweet and Mendeley reader counts as indicators of likely importance.
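The convergence claim in this abstract is at bottom a rank-correlation question: do articles tweeted heavily on day one also rank highly on citations three weeks later? A minimal sketch of that check using Spearman's rho (Pearson correlation on ranks, with tied values sharing their average rank); this is a generic illustration, not the paper's actual analysis, and the helper names are ours. Inputs must not be constant, or the denominator is zero.

```python
# Hedged sketch: Spearman rank correlation between day-one tweet
# counts and three-week citation counts for a set of articles.

def _ranks(values):
    """1-based average ranks; ties share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of tied positions, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented data: tweets on the day indexed vs citations 3 weeks on.
tweets = [120, 5, 40, 0, 310]
cites = [30, 2, 11, 1, 55]
print(spearman(tweets, cites))  # perfectly concordant ranks -> 1.0
```

Rank correlation is the natural choice here because both tweet and citation counts are heavily skewed, so a linear (Pearson) correlation would be dominated by a handful of viral papers.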