6 research outputs found

    Rejoinder to “Reconsidering Counting Articles in Ranked Venues (CARV) as the Appropriate Evaluation Criteria for the Advancement of Democratic Discourse in the IS Field”

    In their article, Cuellar, Truex, and Takeda (2019) criticize the “process for evaluating scholarly output, ‘counting articles in ranked venues’ (CARV)” (p. 188). In their view, CARV limits the open exchange of ideas and, thereby, democratic discourse, which leads to unwanted performative effects and, ultimately, inhibits the growth of the information systems (IS) field. They propose the scholarly capital model (SCM) as a preferable mechanism that evaluators should employ to assess scholarly capital instead of scholarly output. In this rejoinder, we argue that CARV does not claim to measure output quality, that it limits neither quality in the IS field nor the IS field’s growth, and that conflating the effects of CARV with debates on quality or growth could be misleading. Replacing CARV would not change the game, only its rules. We posit that we all entered academia voluntarily knowing its rules and argue that colleagues facing promotion and tenure (P&T) committees should recognize and focus on the specific (CARV-based or not) criteria of their institutions’ committees. While we expect that a new method will replace CARV in the not-so-distant future, we are convinced that, until then, a CARV-based environment offers ample opportunity to advance the quality and growth of the IS field.

    A Rejoinder to “Reconsidering Counting Articles in Ranked Venues (CARV) as the Appropriate Evaluation Criteria for the Advancement of Democratic Discourse in the IS Field”

    We agree with Cuellar et al.’s (2019) main premise that, for a research field to advance, scholars must be able to openly exchange ideas. For such an open exchange to exist, the contexts and methods used to evaluate scholarly output must encourage this interchange. Cuellar et al. argue that the current process for evaluating scholarly output (which they call “counting articles in ranked venues” (CARV)) creates pressures that result in a distorted discourse that inhibits the field’s growth. In this article, we extend the conversation by adding clarifications and further insights, raising questions, and offering different solutions. Specifically, for the sake of logical clarity in the ensuing debate, we separate individual research contribution (IRC) from field research discourse (FRD). We explain and clarify the pairwise relationships between CARV and IRC and between CARV and FRD in order to discuss the role of CARV, or lack thereof, in assessing research contribution and discourse. We posit that CARV may assess IRC but not FRD and offer insights into how to improve both. We provide anecdotal evidence that a CARV-free world could exist but that it would entail high agency costs. We also offer an alternative solution that could supplement or substitute for CARV. We conclude that any attempt to measure IRC without adequately incorporating attributes of the FRD habitat is destined to be flawed.

    A Response to “Reconsidering Counting Articles in Ranked Venues (CARV) as the Appropriate Evaluation Criteria for the Advancement of Democratic Discourse in the IS Field”

    Cuellar, Truex, and Takeda (2019) take the position that counting the number of articles published in ranked venues is an inappropriate method of evaluating the scholarly performance of faculty. They base their contention on a number of unfounded assertions and unsupported arguments, which I detail and analyze. They propose an alternative evaluation criterion, which they call the “scholarly capital model”. In this rejoinder, I critique this model and find it wanting.

    Debate Section Editorial Note: Information Systems Research: Thinking Outside the Basket and Beyond the Journal

    This Communications of the Association for Information Systems debate marks the seventh debate since the debate section’s inception in 2014. Like four of its predecessors, it deals with how and where we publish and, implicitly, with the relationship between publication outlets and how we evaluate individual scholarly output for hiring, tenure, and promotion purposes.

    Looking Beyond the Pointing Finger: Ensuring the Success of the Scholarly Capital Model in the Contemporary Academic Environment

    The currently predominant method of counting articles in ranked venues (CARV) to assess one’s academic achievements has had a deleterious impact on the state of the IS field, which points to a need for a paradigm shift. In this rejoinder to Cuellar, Truex, and Takeda’s (2019) article, I extend the scholarly capital model that they propose and comment on its applicability, adoption, and potential misuse. I propose that the model would benefit from a new component, practical capital, which comprises three dimensions: knowledge outreach (a scholar’s direct contribution to professional forums), knowledge impact (a scholar’s indirect contribution to professional forums), and community engagement (a scholar’s connections with the non-academic sector). I strongly recommend that the Association for Information Systems accept a formal stewardship role and facilitate further development, testing, and promotion of the scholarly capital model.

    Invited viewpoint: how well does the Information Systems discipline fare in the Financial Times’ top 50 Journal list?

    This paper investigates the performance of the Information Systems (IS) discipline as reflected in the scholarly impact of the three IS journals included in the Financial Times’ top 50 journals (FT50), the four IS journals in the top tiers of the Chartered Association of Business Schools’ Academic Journal Guide (CABS AJG), and the eight journals that comprise the Association for Information Systems (AIS) Senior Scholars' Basket of Journals (AIS Basket). Journal lists, when framed as a form of ‘strategic signaling’, are used by institutions to communicate values and priorities to scholars. Through strategic signaling, journal lists are performative and have the potential to shape and constrain research activity. Given this strategic and performative role, it is important that the journals constituting such lists have substantial impact. To measure the scholarly impact of journals, we propose a new measure, the HMJ index, which comprises an equally weighted combination of journal H-index, median citations per article, and Journal Impact Factor (JIF). Using the HMJ index, the results show that all eight AIS Basket journals perform at a level commensurate with the other journals that make up the FT50. The results further show substantial differences among the FT50 journals, such as in the number of articles published per annum. Implications for IS scholars, IS groups, and the IS discipline are identified, together with recommendations for action.
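    The abstract specifies only that the HMJ index equally weights its three components; one plausible formalization, assuming each component is min-max normalized across the set of journals under comparison (the paper's exact normalization is not stated in the abstract), is:

    \[ \mathrm{HMJ}_j \;=\; \tfrac{1}{3}\left(\hat{H}_j + \hat{M}_j + \hat{J}_j\right), \qquad \hat{X}_j \;=\; \frac{X_j - \min_k X_k}{\max_k X_k - \min_k X_k} \]

    where \(H_j\), \(M_j\), and \(J_j\) denote journal \(j\)'s H-index, median citations per article, and JIF, respectively; the min-max scaling \(\hat{X}_j\) is an illustrative assumption that puts the three components on a common 0-1 scale before averaging.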