MNC Staffing policies for the managing director position in foreign subsidiaries : the results of an innovative research method
This research note draws attention to a harmful consequence of the serious lack of empirical research in the field of International Human Resource Management: myth-building on the basis of one or two publications. The apparent myth of high expatriate failure rates is briefly discussed. To prevent another myth from appearing, this time in the field of staffing policies, this research note provides an empirical test of the framework proposed by Meredith Downes (1996) for making decisions about staffing foreign subsidiaries. The propositions put forward by Downes are tested using a database of nearly 1,800 subsidiaries located in twenty-two different countries. The headquarters of these subsidiaries are located in nine different countries and operate in eight different industries. Although the variables suggested by Downes have fair explanatory power, some of the specific propositions had to be rejected.
Microsoft Academic is one year old: the Phoenix is ready to leave the nest
We investigate the coverage of Microsoft Academic (MA) just over a year after its re-launch. First, we provide a detailed comparison of the first author’s record across the four major data sources: Google Scholar (GS), MA, Scopus and Web of Science (WoS), and show that, for the most important types of academic publications, journal articles and books, GS and MA display very similar publication and citation coverage, leaving both Scopus and WoS far behind, especially in terms of citation counts.
A second, large scale, comparison for 145 academics across the five main disciplinary areas confirms that citation coverage for GS and MA is quite similar for four of the five disciplines. MA citation coverage in the Humanities is still substantially lower than GS coverage, reflecting MA’s lower coverage of non-journal publications. However, we shouldn’t forget that MA coverage for the Humanities still dwarfs coverage for this discipline in Scopus and WoS.
It would be desirable for other researchers to verify our findings with different samples before drawing a definitive conclusion about MA coverage. However, based on our current findings we suggest that, only one year after its re-launch, MA is rapidly becoming the data source of choice: it appears to combine the comprehensive coverage across disciplines displayed by GS with the more structured approach to data presentation typical of Scopus and WoS. The Phoenix seems ready to leave the nest, all set to start its adult life in research evaluation.
Health warning: might contain multiple personalities - the problem of homonyms in Thomson Reuters Essential Science Indicators
Author name ambiguity is a crucial problem in any type of bibliometric analysis. It arises when several authors share the same name, but also when one author expresses their name in different ways. This article focuses on the former, also called the “namesake” problem. In particular, we assess the extent to which this compromises the Thomson Reuters Essential Science Indicators (ESI) ranking of the top 1% most cited authors worldwide. We show that three demographic characteristics that should be unrelated to research productivity – name origin, uniqueness of one’s family name, and the number of initials used in publishing – in fact have a very strong influence on it.
In contrast to what could be expected from Web of Science publication data, researchers with Asian names – and in particular Chinese and Korean names – appear to be far more productive than researchers with Western names. Furthermore, for any country, academics with common names and fewer initials also appear to be more productive than their more uniquely named counterparts. However, this appearance of high productivity is caused purely by the fact that these “academic superstars” are in fact composites of many individual academics with the same name. We thus argue that it is high time that Thomson Reuters starts taking name disambiguation in general, and non-Anglophone names in particular, more seriously.
Why replication studies are essential: learning from failure and success
Van Witteloostuijn’s (2016) commentary “What happened to Popperian Falsification?” is an excellent summary of the many problems that plague research in the (Social) Sciences in general and (International) Business & Management in particular. As van Witteloostuijn (2016) admits, his “[...] diagnosis is anything but new – quite the contrary”, nor is it applicable only to the Social Sciences. When preparing this note, I was reminded of Cargo Cult Science, a 1974 Caltech commencement address by physicist Richard Feynman (Feynman, 1974), which – more than four decades ago – made many of the same points, including the pervasive problem of a lack of replication studies, which is the topic of this short rejoinder.
Conducting replication studies is more difficult in International Business (IB) than it is in many other disciplines. For instance in Psychology – a discipline that favours experimental research – one might be able to replicate a particular study within weeks or, in some cases, even days. However, in IB data collection is typically very time-consuming and fraught with many problems not encountered in purely domestic research (for a summary see Harzing, Reiche & Pudelko, 2013). Moreover, most journals in our field only publish articles with novel research findings and a strong theoretical contribution, and are thus not open to replication studies. To date, most studies in IB are therefore unique and are never replicated. This is regrettable, because even though difficult, replication is even more essential in IB than it is in domestic studies, because differences in cultural and institutional environments might limit generalization from studies conducted in a single home or host country.
Somehow though, pleas for replication studies – however well articulated and however often repeated – seem to be falling on deaf ears. Academics are only human, and many humans learn best from personal stories and examples, especially if they evoke vivid emotions or associations. Hence, in this note, instead of providing yet another essayistic plea for replication, I will attempt to argue “by example”. I present two short case studies from my own research: one in which the lack of replication resulted in the creation of myths, and another in which judicious replication strengthened arguments for a new – less biased – measure of research performance. Finally, I will provide a recommendation on how to move forward that can be implemented immediately, without the need for a complete overhaul of our current system of research dissemination.
Google Scholar, Scopus and the Web of Science: a longitudinal and cross-disciplinary comparison
This article aims to provide a systematic and comprehensive comparison of the coverage of the three major bibliometric databases: Google Scholar, Scopus and the Web of Science. Based on a sample of 146 senior academics in five broad disciplinary areas, we provide both a longitudinal and a cross-disciplinary comparison of the three databases.
Our longitudinal comparison of eight data points between 2013 and 2015 shows a consistent and reasonably stable quarterly growth for both publications and citations across the three databases. This suggests that all three databases provide sufficient stability of coverage to be used for more detailed cross-disciplinary comparisons.
Our cross-disciplinary comparison of the three databases includes four key research metrics (publications, citations, h-index, and hI,annual, an annualised individual h-index) and five major disciplines (Humanities, Social Sciences, Engineering, Sciences and Life Sciences). We show that both the data source and the specific metrics used change the conclusions that can be drawn from cross-disciplinary comparisons.
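The metrics above can be sketched in a few lines. The following is a minimal illustration, not the authors' actual implementation: it assumes hI,annual is obtained by dividing each paper's citations by its author count, taking the h-index of those normalised counts, and dividing by career length in years, which is how this metric is commonly described.

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def hi_annual(citations, author_counts, years_active):
    """Annualised individual h-index (illustrative definition):
    normalise each paper's citations by its number of authors,
    take the h-index of the result, divide by career length."""
    normalised = [c / a for c, a in zip(citations, author_counts)]
    return h_index(normalised) / years_active
```

For example, a researcher with papers cited [10, 8, 5, 4, 3] times has an h-index of 4; normalising by co-author counts before computing h is what makes the individual variant less sensitive to large author teams.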
What, who or where? Rejoinder to "identifying research topic development in Business and Management education research using legitimation code theory"
Arbaugh, Fornaciari and Hwang (2016) use citation analysis – with Google Scholar as their source of citation data – to track the development of Business and Management Education research by studying the field’s 100 most highly cited articles. In their article, the authors distinguish several factors that might impact on an article’s level of citations: the topic it addresses, the profile of the author(s) who wrote it and the prominence of the journal that the article is published in.
Although these three factors might seem rather intuitive, and the authors certainly are not the first to identify them, there is a surprising dearth of studies in the bibliometrics literature that attempt to disentangle the relative impact of these factors on citation outcomes. Yet, this question is of considerable relevance in the context of academic evaluation. If citation levels of individual articles are determined more by what is published (topic) and who publishes it (author) than by where it is published (journal), this would provide clear evidence that the frequently used practice of employing the ISI journal impact factor to evaluate individual articles or authors is inappropriate.
A longitudinal study of Google Scholar coverage between 2012 and 2013
Harzing (2013) showed that between April 2011 and January 2012, Google Scholar very significantly expanded its coverage in Chemistry and Physics, with a more modest expansion for Medicine and a natural increase in citations only for Economics. However, we do not yet know whether this expansion of coverage was temporary or permanent, nor whether a further expansion of coverage has occurred. It is these questions we set out to answer in this research note.
We use a sample of 20 Nobelists in Chemistry, Economics, Medicine and Physics and track their h-index, g-index and total citations in Google Scholar on a monthly basis. Our data suggest that – after a period of significant expansion for Chemistry and Physics – Google Scholar coverage is now increasing at a stable rate. Google Scholar also appears to provide comprehensive coverage for the four disciplines we studied. The increased stability and coverage might make Google Scholar much more suitable for research evaluation and bibliometric research purposes than it has been in the past.
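The g-index tracked above complements the h-index by rewarding a few very highly cited papers. A minimal sketch of the standard definition (the largest g such that the g most-cited papers together have at least g² citations), capped at the number of papers:

```python
def g_index(citations):
    """Largest g such that the top g papers have >= g^2 citations in total.
    This simple version does not exceed the number of papers; some
    definitions allow padding with zero-cited papers."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g
```

For the same citation list [10, 8, 5, 4, 3], the g-index is 5 while the h-index is 4: the cumulative total (30) exceeds 5², so the top papers' excess citations count, which is exactly what the h-index ignores.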
Microsoft Academic: A multidisciplinary comparison of citation counts with Scopus and Mendeley for 29 journals
Microsoft Academic is a free citation index that allows large-scale data collection. This combination makes it useful for scientometric research. Previous studies have found that its citation counts tend to be slightly larger than those of Scopus but smaller than Google Scholar, with disciplinary variations. This study reports the largest and most systematic analysis so far, of 172,752 articles in 29 large journals chosen from different specialisms. From Scopus citation counts, Microsoft Academic citation counts and Mendeley reader counts for articles published 2007-2017, Microsoft Academic found slightly more (6%) citations than Scopus overall, and especially for the current year (51%). It found fewer citations than Mendeley readers overall (59%), and only 7% as many for the current year. Differences between journals were probably due to field preprint-sharing cultures or journal policies rather than broad disciplinary differences.
Microsoft Academic (Search): a Phoenix arisen from the ashes?
In comparison to the many dozens of articles reviewing and comparing (coverage of) the Web of Science, Scopus, and Google Scholar, the bibliometric research community has paid very little attention to Microsoft Academic Search (MAS). An important reason for the bibliometric community’s lack of enthusiasm might have been that MAS coverage was fairly limited, and that almost no new coverage had been added since 2012. Recently, however, Microsoft introduced a new service – Microsoft Academic – built on content that search engine Bing crawls from the web.
This article assesses Microsoft Academic coverage through a detailed comparison of the publication and citation record of a single academic for each of the four main citation databases: Google Scholar, Microsoft Academic, the Web of Science, and Scopus. Overall, this first small-scale case study suggests that the new incarnation of Microsoft Academic presents us with an excellent alternative for citation analysis. If our findings can be confirmed by larger-scale studies, Microsoft Academic might well turn out to combine the advantage of broader coverage, as displayed by Google Scholar, with the advantage of a more structured approach to data presentation, typical of Scopus and the Web of Science. If so, the new Microsoft Academic service would truly be a Phoenix arisen from the ashes.
Disseminating knowledge: from potential to reality – new open-access journals collide with convention
Scholars beware! For years, researchers have lamented the long lag times endemic in conventional academic publishing, where even the highest quality papers have often taken more than two years from initial submission to publication. Luckily, advances in digital technologies and the advent of online, open-access (OA) journals are rendering such delays obsolete. Society can now directly benefit from published research within months (and sometimes weeks) of a study being completed.
Unfortunately, however, open-access online technologies are interacting with new revenue-generating business models and historic assessment systems, leading to the rise of predatory open-access (POA) journals that prioritize profit over the integrity of academic scholarship. This interaction is producing disruptive distortions that systematically undermine academia’s ability to disseminate the highest-quality scholarship and to benefit from free, timely access.
