
    Health warning: might contain multiple personalities - the problem of homonyms in Thomson Reuters Essential Science Indicators

    Author name ambiguity is a crucial problem in any type of bibliometric analysis. It arises when several authors share the same name, but also when one author expresses their name in different ways. This article focuses on the former, also called the “namesake” problem. In particular, we assess the extent to which this compromises the Thomson Reuters Essential Science Indicators (ESI) ranking of the top 1% most cited authors worldwide. We show that three demographic characteristics that should be unrelated to research productivity – name origin, uniqueness of one’s family name, and the number of initials used in publishing – in fact have a very strong influence on it. In contrast to what could be expected from Web of Science publication data, researchers with Asian names – and in particular Chinese and Korean names – appear to be far more productive than researchers with Western names. Furthermore, for any country, academics with common names and fewer initials also appear to be more productive than their more uniquely named counterparts. However, this appearance of high productivity is caused purely by the fact that these “academic superstars” are in fact composites of many individual academics with the same name. We thus argue that it is high time that Thomson Reuters starts taking name disambiguation in general, and non-Anglophone names in particular, more seriously.
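The homonym problem described above can be illustrated with a minimal sketch. This is not Thomson Reuters' actual matching logic, and the names, affiliations, and paper counts are entirely hypothetical; the point is simply that matching abbreviated publication records on "family name + first initial" conflates distinct individuals into one composite profile:

```python
from collections import defaultdict

# Hypothetical records: (full author name with affiliation, number of papers)
records = [
    ("Wei Wang (Peking University)", 40),
    ("Wei Wang (Fudan University)", 55),
    ("Wei Wang (Tsinghua University)", 62),
    ("Jacqueline Featherstone (UCL)", 48),
]

def esi_style_key(full_name: str) -> str:
    """Reduce a name to 'FAMILY, Initial.' -- the coarse granularity at
    which abbreviated publication records are often matched."""
    name = full_name.split(" (")[0]     # drop the affiliation
    parts = name.split()
    family, given = parts[-1], parts[0]
    return f"{family.upper()}, {given[0]}."

# Aggregate paper counts per matching key, as a homonym-blind ranking would
profiles = defaultdict(int)
for name, papers in records:
    profiles[esi_style_key(name)] += papers

for key, papers in sorted(profiles.items(), key=lambda kv: -kv[1]):
    print(key, papers)
```

Here "WANG, W." aggregates three distinct individuals into an apparent superstar with 157 papers, while the uniquely named academic retains only her own 48 – exactly the distortion by name origin and name uniqueness that the article documents.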

    Microsoft Academic (Search): a Phoenix arisen from the ashes?

    In comparison to the many dozens of articles reviewing and comparing (coverage of) the Web of Science, Scopus, and Google Scholar, the bibliometric research community has paid very little attention to Microsoft Academic Search (MAS). An important reason for the bibliometric community’s lack of enthusiasm might have been that MAS coverage was fairly limited, and that almost no new coverage had been added since 2012. Recently, however, Microsoft introduced a new service – Microsoft Academic – built on content that search engine Bing crawls from the web. This article assesses Microsoft Academic coverage through a detailed comparison of the publication and citation record of a single academic for each of the four main citation databases: Google Scholar, Microsoft Academic, the Web of Science, and Scopus. Overall, this first small-scale case study suggests that the new incarnation of Microsoft Academic presents us with an excellent alternative for citation analysis. If our findings can be confirmed by larger-scale studies, Microsoft Academic might well turn out to combine the advantage of broader coverage, as displayed by Google Scholar, with the advantage of a more structured approach to data presentation, typical of Scopus and the Web of Science. If so, the new Microsoft Academic service would truly be a Phoenix arisen from the ashes.

    Why replication studies are essential: learning from failure and success

    Van Witteloostuijn’s (2016) commentary “What happened to Popperian Falsification?” is an excellent summary of the many problems that plague research in the (Social) Sciences in general and (International) Business & Management in particular. As van Witteloostuijn (2016) admits, his “[...] diagnosis is anything but new – quite the contrary”; nor is it applicable only to the Social Sciences. When preparing this note, I was reminded of Cargo Cult Science, a 1974 Caltech commencement address by physicist Richard Feynman (Feynman, 1974), which – more than four decades ago – made many of the same points, including the pervasive problem of a lack of replication studies, which is the topic of this short rejoinder. Conducting replication studies is more difficult in International Business (IB) than it is in many other disciplines. For instance, in Psychology – a discipline that favours experimental research – one might be able to replicate a particular study within weeks or, in some cases, even days. However, in IB, data collection is typically very time-consuming and fraught with many problems not encountered in purely domestic research (for a summary see Harzing, Reiche & Pudelko, 2013). Moreover, most journals in our field only publish articles with novel research findings and a strong theoretical contribution, and are thus not open to replication studies. To date, most studies in IB are therefore unique and are never replicated. This is regrettable because, even though difficult, replication is even more essential in IB than it is in domestic studies: differences in cultural and institutional environments might limit generalization from studies conducted in a single home or host country. Somehow, though, pleas for replication studies – however well articulated and however often repeated – seem to be falling on deaf ears.
Academics are only human, and many humans learn best from personal stories and examples, especially if they evoke vivid emotions or associations. Hence, in this note, instead of providing yet another essayistic plea for replication, I will attempt to argue “by example”. I present two short case studies from my own research: one in which the lack of replication resulted in the creation of myths, and another in which judicious replication strengthened the arguments for a new – less biased – measure of research performance. Finally, I provide a recommendation on how to move forward that can be implemented immediately, without the need for a complete overhaul of our current system of research dissemination.

    What, who or where? Rejoinder to "identifying research topic development in Business and Management education research using legitimation code theory"

    Arbaugh, Fornaciari and Hwang (2016) use citation analysis – with Google Scholar as their source of citation data – to track the development of Business and Management Education research by studying the field’s 100 most highly cited articles. In their article, the authors distinguish several factors that might impact on an article’s level of citations: the topic it addresses, the profile of the author(s) who wrote it, and the prominence of the journal it is published in. Although these three factors might seem rather intuitive, and the authors certainly are not the first to identify them, there is a surprising dearth of studies in the bibliometrics literature that attempt to disentangle the relative impact of these factors on citation outcomes. Yet this question is of considerable relevance in the context of academic evaluation. If citation levels of individual articles are determined more by what is published (topic) and who publishes it (author) than by where it is published (journal), this would provide clear evidence that the frequently used practice of employing the ISI journal impact factor to evaluate individual articles or authors is inappropriate.

    Two new kids on the block: How do Crossref and Dimensions compare with Google Scholar, Microsoft Academic, Scopus and the Web of Science?

    In the last three years, several new (free) sources for academic publication and citation data have joined the now well-established Google Scholar, complementing the two traditional commercial data sources: Scopus and the Web of Science. The most important of these new data sources are Microsoft Academic (2016), Crossref (2017) and Dimensions (2018). Whereas Microsoft Academic has received some attention from the bibliometric community, there are as yet very few studies that have investigated the coverage of Crossref or Dimensions. To address this gap, this brief letter assesses Crossref and Dimensions coverage in comparison to Google Scholar, Microsoft Academic, Scopus and the Web of Science through a detailed investigation of the full publication and citation record of a single academic, as well as six top journals in Business & Economics. Overall, this first small-scale study suggests that, when compared to Scopus and the Web of Science, Crossref and Dimensions have a similar or better coverage for both publications and citations, but a substantially lower coverage than Google Scholar and Microsoft Academic. If our findings can be confirmed by larger-scale studies, Crossref and Dimensions might serve as good alternatives to Scopus and the Web of Science for both literature reviews and citation analysis. However, Google Scholar and Microsoft Academic maintain their position as the most comprehensive free sources for publication and citation data.
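The kind of coverage comparison described above can be sketched in a few lines. The DOIs and per-source contents below are hypothetical placeholders, not the study's data; the sketch only shows the mechanics of comparing each source's holdings against a known full publication record:

```python
# The academic's full publication record, identified by DOI (hypothetical)
master = {"10.1/a", "10.1/b", "10.1/c", "10.1/d", "10.1/e"}

# Which of those DOIs each source indexes (hypothetical)
sources = {
    "Google Scholar":     {"10.1/a", "10.1/b", "10.1/c", "10.1/d", "10.1/e"},
    "Microsoft Academic": {"10.1/a", "10.1/b", "10.1/c", "10.1/d"},
    "Crossref":           {"10.1/a", "10.1/b", "10.1/c"},
    "Dimensions":         {"10.1/a", "10.1/b", "10.1/d"},
    "Scopus":             {"10.1/a", "10.1/b"},
    "Web of Science":     {"10.1/a", "10.1/c"},
}

# Share of the full record covered by each source
coverage = {name: len(dois & master) / len(master) for name, dois in sources.items()}

for name, share in sorted(coverage.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} {share:.0%}")
```

The same set arithmetic extends naturally to pairwise overlap (`dois_a & dois_b`) when asking whether two sources index the same publications or complementary ones.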

    A longitudinal study of Google Scholar coverage between 2012 and 2013

    Harzing (2013) showed that between April 2011 and January 2012, Google Scholar had very significantly expanded its coverage in Chemistry and Physics, with a more modest expansion for Medicine and a natural increase in citations only for Economics. However, we do not yet know whether this expansion of coverage was temporary or permanent, nor whether a further expansion of coverage has occurred. It is these questions that we set out to answer in this research note. We use a sample of 20 Nobelists in Chemistry, Economics, Medicine and Physics and track their h-index, g-index and total citations in Google Scholar on a monthly basis. Our data suggest that – after a period of significant expansion for Chemistry and Physics – Google Scholar coverage is now increasing at a stable rate. Google Scholar also appears to provide comprehensive coverage for the four disciplines we studied. The increased stability and coverage might make Google Scholar much more suitable for research evaluation and bibliometric research purposes than it has been in the past.
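The two metrics tracked in this study are easy to compute from a list of per-paper citation counts. A minimal sketch, using the standard definitions (h-index: the largest h such that h papers each have at least h citations; g-index: the largest g such that the g most-cited papers together have at least g² citations) and hypothetical citation data:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    cites = sorted(citations, reverse=True)
    # With cites non-increasing and rank increasing, the condition
    # c >= rank holds for exactly the first h ranks.
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

def g_index(citations):
    """Largest g such that the g most-cited papers have >= g^2 citations in total."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

papers = [33, 30, 21, 12, 6, 5, 4, 1, 1, 0]   # hypothetical citation counts
print(h_index(papers), g_index(papers))        # → 5 10
```

The g-index rewards a few very highly cited papers more than the h-index does, which is why the study tracks both: coverage expansions that add citations to already-counted papers move g before they move h.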

    The competitive advantage of nations: an application to academia

    Within the field of bibliometrics, there is sustained interest in how nations “compete” in terms of academic disciplines, and in what determinants explain why countries may have a specific advantage in one discipline over another. However, this literature has not, to date, presented a comprehensive structured model that could be used in the interpretation of a country’s research profile and academic output. In this paper, we use frameworks from international business and economics to present such a model. Our study makes four major contributions. First, we include a very wide range of countries and disciplines, explicitly including the Social Sciences, which unfortunately are excluded from most bibliometric studies. Second, we apply theories of revealed comparative advantage and the competitive advantage of nations to academic disciplines. Third, we cluster our 34 countries into five different groups that have distinct combinations of revealed comparative advantage in five major disciplines. Finally, based on our empirical work and prior literature, we present an academic diamond that details the factors likely to explain a country’s research profile and competitiveness in certain disciplines.

    The double-edged sword of ethnic similarity for expatriates

    Identifying employees to represent headquarters (HQ) effectively in overseas units is a management challenge faced by all multinational corporations (MNCs). To date, the management of this type of expatriate employee has accorded a central role to culture, such as understanding cultural differences, facilitating cultural adaptation and adjustment, and cultivating cultural intelligence. Although culture is a critical factor in explaining expatriates’ experiences, identity offers an alternative angle to reveal the challenges that occur when expatriates interact with host country employees. In this article, we introduce ethnically similar expatriates – a sub-category of expatriates who share an ethnicity with host country employees – to showcase the role of identity, especially the interpersonal dynamics associated with ethnic similarity.

    Proof over promise: towards a more inclusive ranking of Dutch academics in Economics & Business

    The Dutch Economics top-40, based on publications in ISI listed journals, is – to the best of our knowledge – the oldest ranking of individual academics in Economics and is well accepted in the Dutch academic community. However, this ranking is based on publication volume, rather than on the actual impact of the publications in question. This paper therefore uses two relatively new metrics, the citations per author per year (CAY) metric and the individual annual h-index (hIa), to provide two alternative, citation-based rankings of Dutch academics in Economics & Business. As a data source, we use Google Scholar instead of ISI to provide a more comprehensive measure of impact, including citations to and from publications in non-ISI listed journals, books, working and conference papers. The resulting rankings are shown to be substantially different from the original ranking based on publications. Just like other research metrics, the CAY or hIa-index should never be used as the sole criterion to evaluate academics. However, we do argue that the hIa-index and the related citations per author per year metric provide an important additional perspective over and above a ranking based on publications in high impact journals alone. Citation-based rankings are also shown to inject a higher level of diversity in terms of age, gender, discipline and academic affiliation, and thus appear to be more inclusive of a wider range of scholarship.