    Culture change in academia: Making sharing the new norm

    Keynote address by neuroscientist and Open Access advocate Erin McKiernan, followed by a panel discussion featuring Pitt faculty Brian Beaton, Gordon Mitchell, Lara Putnam, and Jackie Smith. The event was sponsored by the University Library System, University of Pittsburgh in celebration of the 7th International Open Access Week, October 20-26, 2014. From the event announcement: "Join a lively discussion with Erin McKiernan, an early-career researcher in experimental and computational neuroscience and a leading advocate for Open Access, Open Data, and Open Science. McKiernan will explore the powerful, positive benefits of openness in scholarly research, the tension between personal success as a researcher and Open Science, and the need for reform in our academic evaluation and incentive systems. A panel discussion follows with University of Pittsburgh faculty members: Brian Beaton (Moderator), Assistant Professor, School of Information Sciences and Interim Director, Sara Fine Institute for Interpersonal Behavior and Technology; Gordon Mitchell, Associate Professor of Communication and Assistant Dean, University Honors College; Lara Putnam, Professor of History and former co-senior editor, Hispanic American Historical Review; Jackie Smith, Professor of Sociology and editor, Journal of World-Systems Research. More about Erin McKiernan: McKiernan is a postdoctoral fellow in the Department of Psychology at Wilfrid Laurier University, Waterloo, Ontario. Previously she served as a researcher affiliated with the National Institute of Public Health of Mexico, where she experienced firsthand the impact of cost barriers to accessing scholarly research. McKiernan has written about open access for international media outlets such as The Conversation and The Guardian, and blogs about her experiences with Open Science. You can also follow her on Twitter at @emckiernan13."

    Why we publish where we do: Faculty publishing values and their relationship to review, promotion and tenure expectations

    Using an online survey of academics at 55 randomly selected institutions across the US and Canada, we explore priorities for publishing decisions and their perceived importance within review, promotion, and tenure (RPT). We find that respondents most value journal readership, while they believe their peers most value prestige and related metrics, such as the impact factor, when submitting their work for publication. Respondents indicated that total number of publications, number of publications per year, and journal name recognition were the most valued factors in RPT. Older and tenured respondents (those most likely to serve on RPT committees) were less likely to value journal prestige and metrics when publishing, while untenured respondents were more likely to value these factors. These results suggest disconnects between what academics value and what they think their peers value, and between how much journal prestige and metrics matter to tenured versus untenured faculty in both publishing decisions and perceptions of RPT.

    Conjoint analysis of researchers' hidden preferences for bibliometrics, altmetrics, and usage metrics

    The number of scholarly articles published annually is growing steadily, as is the number of indicators through which the impact of publications is measured. Little is known about how the increasing variety of available metrics affects researchers' processes of selecting literature to read. We conducted ranking experiments embedded in an online survey with 247 participating researchers, most from the social sciences. Participants completed a series of tasks in which they were asked to rank fictitious publications by their expected relevance, based on their scores on six prototypical metrics. By applying logistic regression, cluster analysis, and manual coding of survey answers, we obtained detailed data on how prominent metrics for research impact influence our participants' decisions about which scientific articles to read. Survey answers revealed a combination of qualitative and quantitative characteristics that researchers consult when selecting literature, while regression analysis showed that, among quantitative metrics, citation counts tend to be of highest concern, followed by Journal Impact Factors. Our results suggest that many researchers hold a comparatively favorable view of bibliometrics and a widespread skepticism toward altmetrics. The findings underline the importance of equipping researchers with solid knowledge about specific metrics' limitations, as these metrics seem to play significant roles in researchers' everyday relevance assessments.
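
    To make the analytical step concrete: one common way to model such ranking decisions is a pairwise (Bradley-Terry-style) logistic regression on differences in metric scores, where the fitted coefficients indicate each metric's influence on which publication gets ranked higher. The sketch below is not the authors' code; the metric labels, synthetic data, and weights are illustrative assumptions only.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Six prototypical metrics; these labels are assumptions, not the
        # study's exact wording.
        METRICS = ["citations", "jif", "downloads", "mendeley_readers",
                   "tweets", "altmetric_score"]

        # Synthetic stand-in for the fictitious publication profiles:
        # each row holds standardized scores on the six metrics.
        profiles = rng.normal(size=(200, len(METRICS)))

        # Hypothetical weights simulating participants who care most about
        # citation counts, then the JIF (mirroring the reported finding).
        true_w = np.array([1.0, 0.6, 0.3, 0.2, 0.05, 0.1])

        # Pairwise comparisons: did a participant rank profile i above j?
        i, j = rng.integers(0, len(profiles), size=(2, 5000))
        mask = i != j
        i, j = i[mask], j[mask]
        X = profiles[i] - profiles[j]              # metric-score differences
        noise = rng.logistic(size=len(X))          # random taste variation
        y = (X @ true_w + noise > 0).astype(int)   # 1 = i ranked above j

        # Logistic regression on the differences recovers each metric's
        # relative influence on the simulated ranking decisions.
        model = LogisticRegression(fit_intercept=False).fit(X, y)
        for name, coef in sorted(zip(METRICS, model.coef_[0]),
                                 key=lambda t: -t[1]):
            print(f"{name:16s} {coef:+.2f}")

    On this simulated data the estimated coefficients approximately recover the assumed ordering, which illustrates how a regression of this kind can support a claim such as "citation counts matter more than the JIF" in ranking behavior.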

    Use of the journal impact factor in academic review, promotion, and tenure evaluations

    We analyzed how often and in what ways the Journal Impact Factor (JIF) is currently used in review, promotion, and tenure (RPT) documents of a representative sample of universities from the United States and Canada. We found that 40% of research-intensive institutions and 18% of master’s institutions mentioned the JIF or closely related terms. Of the institutions that mentioned the JIF, 87% supported its use in at least one of their RPT documents, 13% expressed caution about its use, and none heavily criticized it or prohibited its use. Furthermore, 63% of institutions that mentioned the JIF associated the metric with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status. We conclude that use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and that there is work to be done to avoid the potential misuse of metrics like the JIF.

    How significant are the public dimensions of faculty work in review, promotion and tenure documents?

    Much of the work done by faculty at both public and private universities has significant public dimensions: it is often paid for by public funds; it is often aimed at serving the public good; and it is often subject to public evaluation. To understand how the public dimensions of faculty work are valued, we analyzed review, promotion, and tenure documents from a representative sample of 129 universities in the US and Canada. Terms and concepts related to public and community are mentioned in a large proportion of the documents, but mostly in ways that relate to service, an undervalued aspect of academic careers. Moreover, the documents make significant mention of traditional research outputs and citation-based metrics; however, such outputs and metrics reward faculty work targeted at academics and often disregard the public dimensions of that work. Institutions that seek to embody their public mission could therefore work toward changing how faculty work is assessed and incentivized.

    How Faculty Define Quality, Prestige, and Impact of Academic Journals

    Despite calls for change, there is significant consensus that, when it comes to evaluating publications, review, promotion, and tenure processes should aim to reward research that is of high "quality," is published in "prestigious" journals, and has an "impact." Nevertheless, such terms are highly subjective, and it is challenging to ascertain precisely what such research looks like. Accordingly, this article responds to the question: how do faculty from universities in the United States and Canada define the quality, prestige, and impact of academic journals? We address this question by surveying 338 faculty members from 55 institutions in the US and Canada. Although it relies on self-reported definitions that are not linked to respondents' behavior, this study finds that faculty often describe these distinct terms in overlapping ways. Additionally, results show that the marked variance in definitions across faculty does not correspond to demographic characteristics. These results highlight the subjectivity of common research terms and the importance of implementing evaluation regimes that do not rely on concepts that are ill-defined and possibly context specific.

    The value of data and other non-traditional scholarly outputs in academic review, promotion, and tenure in Canada and the United States

    Academics are regularly involved in a wide range of activities spanning research, teaching, and service, and the breadth of outputs required for review, promotion, and tenure (RPT) in each category only continues to grow. How do faculty manage their academic careers in the face of such growing demands? Although discussions of research assessment across the academy increasingly recognize the need to value outputs beyond research published in peer-reviewed journals, it is not clear whether these discussions have made their way into formal assessment structures. By analyzing the extent to which non-traditional outputs, including data and software, are mentioned in the RPT documents of a representative set of 129 universities from the United States and Canada, this chapter offers empirical evidence from across many disciplines of which types of faculty work are recognized in RPT processes and which are not. We confirm that traditional outputs such as peer-reviewed journal articles, book chapters, and monographs are mentioned almost universally, whereas data-related items such as datasets and databases are mentioned by only a fraction of institutions. We find that research-intensive institutions acknowledge more types of research outputs in general, whereas institutions that focus more on undergraduate and master’s degree programs tend to mention fewer forms of scholarship in their RPT guidelines. Within research-intensive institutions, units in the life sciences present a greater range of outputs in the guidelines offered to faculty, including the 15% that explicitly mention data-related outputs. In contrast, none of the academic units in mathematics or in the physical and social sciences in our sample recognizes data-related outputs, and these units generally acknowledge fewer forms of scholarship. Overall, we conclude that many current structures for faculty assessment do not explicitly recognize the increasing complexity and demands of faculty work.
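
    As a concrete illustration of this kind of document analysis, the sketch below counts which output types a set of RPT documents mention via simple term matching. The term lists, regular expressions, and sample corpus are illustrative assumptions; the chapter's actual coding scheme is not reproduced here.

        import re
        from collections import Counter

        # Illustrative term lists only; the chapter's coding scheme is
        # richer than these patterns.
        OUTPUT_TERMS = {
            "journal article": r"journal articles?|peer[- ]reviewed articles?",
            "book/monograph": r"monographs?|book chapters?|books?",
            "dataset": r"data ?sets?|databases?",
            "software": r"software|source code",
        }

        def outputs_mentioned(document_text):
            """Return the output types a single RPT document mentions."""
            text = document_text.lower()
            return {label for label, pattern in OUTPUT_TERMS.items()
                    if re.search(pattern, text)}

        # Hypothetical snippets standing in for the universities' documents.
        corpus = [
            "Candidates are evaluated on peer-reviewed articles and books.",
            "We value journal articles, datasets, and open source code.",
        ]

        counts = Counter()
        for doc in corpus:
            counts.update(outputs_mentioned(doc))

        for label, n in counts.most_common():
            print(f"{label}: mentioned in {n} of {len(corpus)} documents")

    Aggregating such per-document sets across a corpus yields exactly the kind of statistic the chapter reports, e.g. the fraction of institutions whose guidelines mention data-related outputs.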