Rejoinder to "Reconsidering Counting Articles in Ranked Venues (CARV) as the Appropriate Evaluation Criteria for the Advancement of Democratic Discourse in the IS Field"
In their article, Cuellar, Truex, and Takeda (2019) criticize the "process for evaluating scholarly output, 'counting articles in ranked venues' (CARV)" (p. 188). In their view, CARV limits the open exchange of ideas and, thereby, democratic discourse, which leads to unwanted performative effects and, ultimately, inhibits the growth of the information systems (IS) field. They propose the scholarly capital model (SCM) as a preferable mechanism that evaluators should employ to assess scholarly capital instead of scholarly output. In this rejoinder, we argue that CARV does not claim to measure output quality; it neither limits quality in the IS field nor the IS field's growth, and mingling the effects of CARV with debates on quality or growth could be misleading. Replacing CARV would not change the game, only its rules. We posit that we all entered academia voluntarily knowing its rules and argue that colleagues facing P&T committees should recognize and focus on the specific (CARV-based or not) criteria of their institutions' committees. While we expect that a new method will replace CARV in the not so distant future, we are convinced that, until then, a CARV-based environment offers ample opportunity to advance quality and growth of the IS field.
A Rejoinder to "Reconsidering Counting Articles in Ranked Venues (CARV) as the Appropriate Evaluation Criteria for the Advancement of Democratic Discourse in the IS Field"
We agree with Cuellar et al.'s (2019) main premise that, for a research field to advance, scholars must be able to openly exchange ideas. For such an open exchange to exist, the contexts and methods that evaluate scholarly output must encourage this interchange. Cuellar et al. argue that the current process for evaluating scholarly output (which they call "counting articles in ranked venues" (CARV)) creates pressures that result in a distorted discourse that inhibits the field's growth. In this article, we extend the conversation by adding clarifications and further insights, raising questions, and providing different solutions. Specifically, for the sake of logical clarity of the ensuing debate, we separate individual research contribution (IRC) and field research discourse (FRD). We explain and clarify the pairwise relationships between CARV and IRC and between CARV and FRD in order to discuss the role of CARV, or lack thereof, in assessing research contribution and discourse. We posit that CARV may assess IRC but not FRD and offer insights into how to improve IRC and FRD. We provide anecdotal evidence that a CARV-free world could exist but that it would entail high agency cost. We also offer an alternative solution that could supplement or substitute CARV. We conclude that any attempt to measure IRC without adequately incorporating attributes of the FRD habitat is destined to be flawed.
A Response to "Reconsidering Counting Articles in Ranked Venues (CARV) as the Appropriate Evaluation Criteria for the Advancement of Democratic Discourse in the IS Field"
Cuellar, Truex, and Takeda (2019) take the position that counting the number of articles published in ranked venues is an inappropriate method of evaluating the scholarly performance of faculty. They base their contention on a number of unfounded assertions and unsupported arguments, which the author details and analyzes. They propose an alternative evaluation criterion, which they call the "scholarly capital model". In this rejoinder, I critique this model and find it wanting.
Debate Section Editorial Note: Information Systems Research: Thinking Outside the Basket and Beyond the Journal
This Communications of the Association for Information Systems debate marks the seventh debate since the debate section's inception in 2014. Like four of its predecessors, it deals with how and where we publish and, implicitly, with the relationship between publication outlets and how we evaluate individual scholarly output for hiring, tenure, and promotion purposes.
Looking Beyond the Pointing Finger: Ensuring the Success of the Scholarly Capital Model in the Contemporary Academic Environment
The currently predominant method of counting articles in ranked venues (CARV) to assess one's academic achievements has had a deleterious impact on the state of the IS field, which points to a need for a paradigm shift. In this rejoinder to Cuellar, Truex, and Takeda's (2019) article, I extend the scholarly capital model that they propose and comment on its applicability, adoption, and potential misuse. I propose that the model would benefit if it included a new component, practical capital, which comprises three dimensions: knowledge outreach (a scholar's direct contribution to professional forums), knowledge impact (a scholar's indirect contribution to professional forums), and community engagement (a scholar's connections with the non-academic sector). I strongly recommend that the Association for Information Systems accept a formal stewardship role and facilitate further development, testing, and promotion of the scholarly capital model.
Invited viewpoint: how well does the Information Systems discipline fare in the Financial Times' top 50 Journal list?
This paper investigates the performance of the Information Systems (IS) discipline as reflected in the scholarly impact of the three IS journals that are included in the Financial Times' top 50 journals (FT50), the four IS journals in the top tiers of the Chartered Association of Business Schools' Academic Journal Guide (CABS AJG), and the eight journals that comprise the Association for Information Systems (AIS) Senior Scholars' Basket of Journals (AIS Basket). Journal lists, when framed as a form of "strategic signaling", are used by institutions to communicate values and priorities to scholars. Through strategic signaling, journal lists are performative and have the potential to shape and constrain research activity. Given the strategic and performative role of journal lists, it is important that the journals that constitute those lists have substantial impact. To measure the scholarly impact of journals we propose a new measure, the HMJ index, which comprises an equally-weighted combination of journal H-index, median citations per article, and Journal Impact Factor (JIF). Using the HMJ index, the results show that all eight AIS Basket journals are performing at a level that is commensurate with the other journals that make up the FT50. The results further show substantial differences between the FT50 journals, such as in the number of articles published per annum. Implications for IS scholars, IS groups, and the IS discipline are identified, together with recommendations for action.
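The abstract describes the HMJ index as an equally-weighted combination of journal H-index, median citations per article, and JIF, but does not spell out how the three components are put on a common scale. The sketch below illustrates one plausible reading, min-max normalizing each component across the journal set before averaging; the normalization choice, the journal names, and all metric values are illustrative assumptions, not data from the paper.

```python
# Illustrative sketch of an "equally-weighted combination" of journal
# H-index, median citations per article, and Journal Impact Factor (JIF).
# ASSUMPTION: min-max scaling of each component across the journal set;
# the paper's exact normalization is not given in the abstract.

def min_max(values):
    """Scale a list of numbers to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def hmj_index(h_indices, median_cites, jifs):
    """Equal-weight average of the three normalized components."""
    h_n = min_max(h_indices)
    m_n = min_max(median_cites)
    j_n = min_max(jifs)
    return [(h + m + j) / 3 for h, m, j in zip(h_n, m_n, j_n)]

# Hypothetical journals with made-up placeholder metrics.
journals = ["Journal A", "Journal B", "Journal C"]
scores = hmj_index(h_indices=[120, 80, 200],
                   median_cites=[15, 10, 30],
                   jifs=[5.0, 3.5, 8.0])
for name, score in zip(journals, scores):
    print(f"{name}: {score:.3f}")
```

Under this reading, a journal that leads on all three components scores 1 and one that trails on all three scores 0, which makes scores comparable only within the journal set being ranked.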