29 research outputs found

    Does collaborative research published in top journals remain uncited?

    This paper investigates whether collaborative research published in top journals remains uncited, and to what extent access type (open or closed) affects the citation of collaborative research published in top journals. It examines publications (articles, conference papers, reviews, short surveys, editorials, letters, and notes) published between 2009 and 2016 with an affiliation to Chalmers University of Technology and indexed in Scopus. To give enough time to gather citations, a two-year time frame is considered for publications from 2016. The data is classified by access type (closed and open access) and sub-classified in SciVal as cited closed access, cited open access, non-cited closed access, and non-cited open access. The top 25th percentile, indicating the journals that are among the top 25% of the most cited journals indexed by Scopus, is considered. The results show that a small portion of collaborative research published in top journals remains uncited, irrespective of the type of collaboration. For international collaborative research, publications in closed access are cited more than those in open access. Institutional collaborative research publications are cited more than national collaborative ones. Collaborative research is cited more than single-author publications, and single-authored conference papers published in the top percentile do not remain uncited.

    Signals in Science - On the Importance of Signaling in Gaining Attention in Science

    Which signals are important in gaining attention in science? For a group of 1,371 scientific articles published in 17 demography journals in the years 1990-1992 we track their influence and discern which signals are important in receiving citations. Three types of signals are examined: the author’s reputation (as producer of the idea), the journal (as the broker of the idea), and the state of uncitedness (as an indication of the assessment by the scientific community of an idea). The empirical analysis points out that, first, the reputation of journals plays an overriding role in gaining attention in science. Second, in contrast to common wisdom, the state of uncitedness does not affect the future probability of being cited. And third, the reputation of a journal may help to get late recognition (so-called ‘sleeping beauties’) as well as generate so-called ‘flash-in-the-pans’: articles that are noted immediately but apparently not very influential in the long run.

    Scrutinising uncitedness and few h-type indicators of selected Indian physics and astronomy journals

    The uncitedness of twelve Indian physics and astronomy journals over a twelve-year (2009-2020) time span is analysed here. Besides the Uncitedness Factor (UF), three other indicators are discussed, viz., Time-normalized Citations per paper (CY), H-core Density (HD) and Time-normalized H-index (TH). The journal-wise variational patterns of these four indicators, i.e., UF, CY, HD and TH, and the relationships of UF with the other three indicators are analysed. The calculated numerical values of these indicators are used to formulate seven hypotheses, which are tested by the F-test method. The average annual rate of uncited papers is found to be 67% of the total number of papers. The indicator CY is found to be temporally constant. The indicator HD is found to be nearly constant journal-wise over the entire time span, while the indicator TH is found to be nearly constant for all the journals. The UF varies inversely with CY and TH across the journals and directly with TH over the years. Except for a few Indian journals in physics and astronomy, the majority of the other journals face the situation of uncitedness. The uncitedness of Indian journals in this field is higher by 12% as compared to foreign journals in the same field, which indicates a possibly poor circulation of the journals.
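
    The two simplest indicators above can be sketched in a few lines. This is a hypothetical illustration, assuming UF is the share of uncited papers in a set and CY divides mean citations per paper by the length of the citation window in years; the citation counts are invented for the example.

```python
def uncitedness_factor(citations):
    """Fraction of papers with zero citations (assumed definition of UF)."""
    if not citations:
        return 0.0
    return sum(1 for c in citations if c == 0) / len(citations)

def time_normalized_cpp(citations, window_years):
    """Mean citations per paper divided by the citation window (assumed CY)."""
    if not citations or window_years <= 0:
        return 0.0
    return (sum(citations) / len(citations)) / window_years

# Invented citation counts for eight papers of one journal
cites = [0, 0, 3, 5, 0, 12, 1, 0]
print(uncitedness_factor(cites))      # 0.5 (four of eight papers uncited)
print(time_normalized_cpp(cites, 4))  # 0.65625
```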

    The success-index: an alternative approach to the h-index for evaluating an individual's research output

    Among the most recent bibliometric indicators for normalizing the differences among fields of science in terms of citation behaviour, Kosmulski (J Informetr 5(3):481-485, 2011) proposed the NSP (number of successful papers) index. According to the authors, NSP deserves much attention for its great simplicity and immediate meaning, equivalent to those of the h-index, while it has the disadvantage of being prone to manipulation and not very efficient in terms of statistical significance. In the first part of the paper, we introduce the success-index, aimed at reducing the NSP-index's limitations, although it requires more computing effort. Next, we present a detailed analysis of the success-index from the point of view of its operational properties and a comparison with those of the h-index. Particularly interesting is the examination of the success-index's scale of measurement, which is much richer than that of the h-index. This makes the success-index much more versatile for different types of analysis, e.g., (cross-field) comparisons of the scientific output of (1) individual researchers, (2) researchers with different seniority, (3) research institutions of different sizes, and (4) scientific journals.
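
    The contrast between the h-index and a successful-papers count can be sketched as follows. This is only an illustration: the success criterion used here (citations above a caller-supplied threshold) is an assumption, not the exact definition from Kosmulski (2011) or the success-index paper; the citation counts are invented.

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(ranked, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def successful_papers(citations, threshold):
    """Papers cited more than `threshold` times (assumed success criterion)."""
    return sum(1 for c in citations if c > threshold)

# Invented citation counts for one researcher's seven papers
cites = [25, 8, 5, 3, 3, 1, 0]
print(h_index(cites))                # 3
print(successful_papers(cites, 4))   # 3 (papers with 25, 8, and 5 citations)
```

    Note that the two indicators can diverge: raising the threshold shrinks the successful-papers count without touching the h-index, which is the kind of behavioural difference the paper's comparison examines.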

    What happens to our ideas? A bibliometric analysis of articles in Social Work in Health Care in the 1990s

    Scholars spend a considerable amount of time reflecting upon their professional work. When individuals decide to communicate their professional thoughts beyond informal venues, the penultimate expression of their reflection is the peer-reviewed journal article. The study reported here entailed a bibliometric analysis of articles appearing in the journal Social Work in Health Care during the 1990s, in order to better understand what happens to our ideas after they appear in a peer-reviewed journal article. Final version of manuscript for citation: Rosenberg, G., Holden, G., & Barker, K. (2005). What happens to our ideas? A bibliometric analysis of articles in Social Work in Health Care in the 1990s. Social Work in Health Care, 41, 35-66. © The Haworth Press, Inc.

    An assessment of the predictive validity of impact factor scores: Implications for academic employment decisions in social work

    Citation for final version: Holden, G., Rosenberg, G., Barker, K., & Onghena, P. (2006). An assessment of the predictive validity of impact factor scores: Implications for academic employment decisions in social work. Research on Social Work Practice, 16, 6, 613-624. Objective: Bibliometrics is a method of examining scholarly communications. Concerns regarding the utility of bibliometrics in general, and the impact factor score (IFS) in particular, have been discussed across disciplines including social work. While there are frequent mentions in the literature of the IFS as an indicator of the impact or quality of scholars’ work, little empirical work has been published regarding the validity of such use. Method: A proportionate, stratified, random sample of n = 323 articles was selected from 17 Web of Science-listed social work journals published during the 1992-1994 period. Results: The relationship between journals’ impact factor scores and the actual impact of articles published in those journals (predictive validity) was r = .41 (short term) and r = .42 (long term). Conclusion: The practice of using the IFS as a proxy indicator of article impact merits significant concern as well as further empirical investigation. The final, definitive version of this article has been published in Research on Social Work Practice, 16, 6 © SAGE Publications Ltd at the Research on Social Work Practice page: http://rswp.sagepub.com/ on SAGE Journals Online: http://online.sagepub.com
